text (stringlengths: 8 – 5.74M) | label (stringclasses: 3 values) | educational_prob (listlengths: 3)
---|---|---|
Q: How to use gcc as an assembler? Is it possible? I want to use gcc as an assembler and then compile the result into an executable on Ubuntu. I tried this: gcc a.asm -o out.o, and then tried to turn out.o into a .out executable file, but I get the following error: file format not recognized; treating as linker script. I'm new to the Linux environment. I hope this is clear. Any help is very appreciated. Thanks in advance. A: Change the file name from a.asm to a.s and let gcc auto-detect the assembler from the extension (see the command sketch after this row). A: Read the documentation for the -x option to gcc. It allows you to specify the language of the source file. | High | [
0.6900269541778971,
32,
14.375
]
|
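A minimal sketch of both answers, assuming a.asm contains GNU (AT&T-syntax) assembly with a main entry point; the file names are the asker's:

```sh
# Option 1: rename the file so gcc infers the assembler from the .s extension
mv a.asm a.s
gcc a.s -o a.out        # assembles and links in one step
./a.out

# Option 2: keep the .asm name and state the language explicitly with -x
gcc -x assembler a.asm -o a.out
```

Either way gcc simply drives the GNU assembler and linker; an unrecognized extension is what makes gcc fall back to treating the file as a linker script, which is the error the asker saw.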
Mayor of Tartu Urmas Klaas (Reform) demanded the resignation of two of his deputies, Valvo Semilarski (Reform) and Artjom Suvorov (Center). The two were arrested by the Internal Security Service earlier on Wednesday and are suspected of corruption. Officials of the Internal Security Service arrested the two on Wednesday morning and conducted searches at the Tartu Town Hall. Deputy mayors Valvo Semilarski and Artjom Suvorov are suspects in criminal offenses, spokespeople for the city government confirmed. Mayor of Tartu Urmas Klaas told the media on Wednesday that he didn’t have any details, but that the city’s administration would “cooperate fully” with both the ISS and the South district prosecutor. “Although it is just a suspicion at this point and the investigation is ongoing, I don’t see how it is possible for them to continue as deputy mayors, and I demand that they resign,” the Reform Party mayor said. Asked by ERR’s radio news whether he himself was going to resign—a question prompted by Reform’s own demands earlier this year that Center Party officials step down after it turned out that Tallinn deputy mayor Arvo Sarapuu had interfered with the city’s waste disposal contracts—Klaas said he saw no reason to do so. The city would run its own internal investigation once the circumstances of the case became clear, Klaas added. Semilarski and Suvorov were arrested on Wednesday morning. The prosecutor suspects Semilarski of a large-scale breach of procedural restrictions. He is also suspected of having accepted a bribe; a businessman suspected of having bribed him is part of the same case. Artjom Suvorov is suspected of having received bribes on several occasions in exchange for allocating support money to different organizations. One more individual is suspected of having bribed him and is also subject to this investigation. The criminal investigations of Semilarski and Suvorov are not connected with each other. The investigation is carried out by the ISS and supervised by the South District Prosecutor's Office. | Mid | [
0.562118126272912,
34.5,
26.875
]
|
It has recently come to my attention that there are but two Monitors still afloat in this world today. However, there seems to be a lack of good resource material surrounding them. The older of the two, the Hungarian river monitor Leitha, was launched in 1871 and became the first Danube monitor of the then k.k. Kriegsmarine. After 26 years, in 1894, she was re-armed and re-engined. She saw action in WWI, as well as in the short-lived Communist Hungarian action against Czechoslovakia in 1919. Afterwards, she was disarmed and sold as a hulk, and has survived in various forms until the present day. The Leitha was originally a classic-looking monitor, but later on, in her 1894 modifications, she was given an entirely new superstructure and turrets. Does anyone have any good pics of her post-1894 look? All I can find are images of her original configuration. The second monitor still around is the little Swedish coastal defence monitor Sölve. Built in 1875, she was fitted with a non-moving turret. The 25cm gun was aimed by the traverse of the ship itself. Online information says that the Sölve stayed in service until 1922. And that's where some confusion starts. In my Janes Fighting Ships of WWI, it lists that of the original seven Sölve-style monitors, only the Bjorn and Gerda were not deleted during WWI. (In Janes, the Sölve is miswritten "Silve", but it is clearly the same ship.) Furthermore, the old 25cm gun, fitted inside the fixed superstructure on the ships, seems to have been changed out for a long-barreled 4.7 inch gun, fitted in a traversing sponson-style mount. Armament had been augmented further by the addition of three 6 pounders. So, is Janes wrong? Was the Sölve actually one of the monitors left in service after WWI? Also, does anyone have any good pictures of the Sölve, in either her first configuration, or later refit configuration? I know there are paper models of both Leitha and Sölve, but they only show the original configuration. Of interesting note is that both these ships served for almost exactly the same number of years! Both around 48 years of hardy military service. Obviously from a military standpoint, the Leitha is more significant because she actually saw combat. Hey Vilkata, neat images and story! I'm always fascinated by coastal/river craft like that; i.e. gunboats, monitors, etc. I'm especially interested in the era 1862-1920; perhaps their heyday. Can you recommend any references for such vessels? I can never seem to find any in English that don't also cost a fortune. Any help at all is appreciated. As to any information on the monitors in question, I'm afraid I'm of little help there. Sorry. However, it might be of interest that the US Navy didn't get rid of the last of the Civil War-era monitors until 1907 or so, with one of the last vessels being used as a torpedo test-firing platform. 'There's "Big Gun Monitors" by Ian Buxton, 1978, US Naval Institute, subtitled "the history of the design, construction and operation of the Royal Navy's monitors".' However, that book would seem to deal with the Royal Navy's super huge BIG gun monitors they used for coastal bombardment from pre-WWI all the way up till WWII. They don't even really look like monitor ships. Cool looking! But definitely not a monitor in the classic sense. Here are some pics of what a Big Gun RN monitor would look like. This sucker is the HMS Roberts.
The first Roberts was a WWI BG Monitor; the second was a much cooler-looking WWII BG Monitor that participated in the Normandy Invasion, using its two gigantic 15-inch guns to bombard the coast. 'If you are interested in reading about the Royal Navy's river gunboats I can recommend Armed With Stings by A. Cecil Hampshire, the story of the Insect Class from WW1 till the end of WW2. A very good read.' And there is also 'H.M.S. Saracen' by Douglas Reeman, which is about a BG British Monitor that is used in WWI and, later on when thoroughly obsolete, participates in WWII also. It's a fictitious story. And, again, it deals with one of those massive Big Gun Monitors, which aren't really true monitors anyway. My one and only source book is Janes Fighting Ships Of World War I. I think it's on the pricier side, but it has loads of great information, although it obviously doesn't focus on the monitors. If you have trouble finding any of these, I can suggest www.ebay.com, and www.bibliofind.com. The USA actually used river monitors in Vietnam. They built these suckers with the exact philosophy of the monitor. Low draft, low freeboard, don't give the enemy much to shoot at. However, because they wanted it to be very very simple, they had to give them a fair bit of freeboard. That would allow there to be a sunken-in area of the ship for soldiers to be in, without the worry of it filling up with water and the boat sinking. As far as I know, they were the most recent monitors ever built. I don't know if any navy in the world currently employs designs with that philosophy. Monitor ships... The one and only class of ship where THIS is considered NORMAL, and not -VERY- bad.--- Hey thanks, Vilkata! I actually already have the Osprey books. I also have a book from Squadron/Signal called "Riverine", which covers the monitors and coastal craft used in the Vietnam war. I've also priced Big Gun Monitors several times before, and lord is it expensive! I think I'll pass on that for the time being. Armybook.com has some interesting books on the subject, too, unfortunately only in Russian. There's a two-volume set called "The Soviet Monitors Gunboats and Armour Boats", and a reprint of a book originally published in 1938 called "Austria Hungary Danube Flotilla in World War 1914 - 1917." They're reasonably priced, as well. But as I said they're in Russian, which I can't read (yet), plus I've had bad luck with armybook.com in the past. Which is a real shame, because they have some really neat books on World War 1 armor, too. Anyway, thanks again for the help! Matt p.s.: And I still think those British Monitors are neat, whether they're proper monitors or not! A bit earlier than WWI, but if you're interested in the development of breastwork monitors and suchlike, there's a book entitled "Birth of the British Battleship" by John Beeler covering British (but referring also to other countries') Capital Ship design 1870 - 1881. The British developed two distinctly different kinds of ship - the local one for service in the Atlantic and Med coastal areas of Europe - and a "Cruising" Battleship that still had rigging and was for use on the far side of the Atlantic and other faraway stations. The first British Ironclad Dreadnought was effectively a monitor for operating in large rivers and against defended ports and forts. The book is readily available on abebooks at wildly differing prices.
As a treatise on the rapid changes in technology and strategic thinking affecting ship design for armament, type of power, armour, etc., it is a fascinating book. The Aussies have the wreck of a breastwork monitor called HMVS Cerberus, which they're trying to recover and refurbish to a limited extent; she was bought by the State of Victoria to defend it against a possible Russian (!) invasion. There is a website and 1/250 plans for paper or plastic card. A bit earlier than WWI, but if you're interested in the development of breastwork monitors and suchlike, there's a book entitled "Birth of the British Battleship" by John Beeler covering British (but referring also to other countries') Capital Ship design 1870 - 1881. That is a superb book, Tony - like you, I highly recommend it. David K. Brown, himself a former naval constructor, has also written an excellent book on the development of the British navy at that time, 'Warrior to Dreadnought'. I hope the Aussies can recover and restore the 'Cerberus'. There are another couple of early turret ships still around in good order - the Dutch 'Buffel' and the Chilean 'Huascar', both equipped with Coles turrets. Roger Todd wrote: I hope the Aussies can recover and restore the 'Cerberus'. There are another couple of early turret ships still around in good order - the Dutch 'Buffel' and the Chilean 'Huascar', both equipped with Coles turrets. I think I'm right in saying that the philosopher Ludwig Wittgenstein served on one of the Danube monitors for a while. Don't know how that helps, but I thought I'd mention it. You could make a little scale model of him having a think. __________________ "Sometimes things that are not true are included in Wikipedia. While at first glance that may appear like a very great problem for Wikipedia, in reality is it not. In fact, it's a good thing." - Wikipedia. Roger, I've also got the Warrior to Dreadnought book by David Brown, but it is a bit "dryer" than the Beeler book. I don't think any armed force has been under so much pressure due to technology advances as the navy was during that period - it's a fascinating story. I've got the "Anatomy of a Ship" on the Dreadnought and one of these days will scratchbuild it in 1/350 to go alongside a KGV (kit) and the HMVS Cerberus. There is another surviving monitor - M33, which sits in drydock adjacent to HMS Victory in Portsmouth Dockyard. There are two river gunboats (one in slices, but still saveable), the Melik and the paddle steamer Bordein, both in Sudan. Bordein was one of "Chinese" Gordon's gunboats; Melik was one of "Monkey" Gordon's greyhounds and took part in the battle of Omdurman. Melik appeared in the Alexander Korda version of "The Four Feathers". If I may chime in on this subject..... There are actually two surviving members of the K.u.K. Donauflotille. In addition to the Leitha (now restored to its original 1880s configuration, not WW1), the other monitor is the SMS Bodrog, or SAVA as it became after leaving KuK service. Last I heard the stripped-down hull was residing in present-day Croatia. I do hope the British monitor, M33, is made more visitor-friendly - at the moment you can only view it in drydock, although there was a plan to cut a huge hole in the side for wheelchair access directly into the hull.
There are three WW1 Royal Navy ships surviving - M33 as mentioned, HMS Caroline, a wonderful Jutland veteran cruiser which is now, at long long last, finally going to be restored as a museum exhibit, I believe in Belfast, and HMS President, ex-HMS Saxifrage, a Q ship, which is a floating bar! Nice little clip at www.laht.com/article.asp?ArticleId=19422 Hulk is visible on Google Earth: search Ada Huja, Belgrade and look for two hulls on the south bank a bit less than 1km to the right of the bridge. I have US Navy-issue postcards for most US Monitors built post-1868, and will try to post a link to photos & histories within the next 30 days. | Mid | [
0.5866141732283461,
37.25,
26.25
]
|
University professor speculates ‘foreign gang’ behind rash of subway graffiti TOKYO (TR) – Over the past week, subway operator Tokyo Metro has discovered that three of its trains have been vandalized with graffiti. Based on the style of the tagging — large, bulbous lettering in a variety of colors — a professor at Tokyo City University tells Nikkan Sports (Jan. 19) that the perpetrators are likely from overseas. “I think it is a foreign gang composed of three or four experienced persons,” said Shigeo Kobayashi, a professor of environmental psychology who has examined graffiti. “I have never heard of cases where Japanese people have scrawled such graffiti.” In the latest case, Tokyo Metro personnel preparing to start services for the day early Thursday noticed at Nakano Station that the exterior of one car of a Tozai Line train had been painted with green and purple writing measuring 7 meters wide and 1 meter tall. Similar markings were found on a Chiyoda Line train at Yoyogi Uehara Station in Shibuya Ward on January 13. Two days later, personnel found that two Hibiya Line cars had been vandalized at Nakameguro Station in Meguro Ward. In explaining a possible motive for the perpetrators, Kobayashi said, “It is a simple matter of self-satisfaction and garnering attention.” Tokyo Metro has filed property damage reports with the Tokyo Metropolitan Police in the first two cases. A third report is expected to be filed in due course. “Damage in Japan has been increasing over the past 10 years or so” Kobayashi believes that the group could have stopped in Japan for a short period, possibly 10 days, for the purpose of tagging. “There are several such ‘tagging teams’ centered mainly in Europe, and damage in Japan has been increasing over the past 10 years or so,” the professor said. Kobayashi also speculates that the group received help domestically, either from other foreigners already residing in Japan or from Japanese nationals. “A person on the ground already will be able to convey an escape path based on whether [an area] is being monitored by security cameras,” he said. Lax security The professor also pointed out that lax security is an issue. Due to terrorism risks in Europe, subway operators there have implemented tight security measures in and around their networks, which is not the case in Japan, he said. Tokyo Metro is evaluating the situation. “We will examine these incidents and consider countermeasures,” the operator told Nikkan Sports. | High | [
0.6618004866180041,
34,
17.375
]
|
Q: Trouble deleting dimension in Eagle I want to get rid of the measurement label in the PCB (the double arrow showing the dimension) but I am not able to delete it. It is showing overlap errors with the bottom traces. How do I delete this? I am using Eagle 7.7. Things I have tried: Hit the Delete button and tried to delete the arrows as well as other parts. Hit the Ripup button and tried deleting at multiple points. A: The OP has surely solved this problem by now, but I found this question when faced with the same problem, so the answer will probably be helpful to somebody. At the time of this writing, the current version of Eagle is v8.3.2; the procedure may be different in other versions. To delete a dimension, just choose the trash icon and then click somewhere in the "hot zone" for the dimension marker. Where is the hot zone? Ah, there's the rub: it is unmarked and very tiny. Random clicking in likely locations will generally miss it. Fortunately, though small and invisible, its size and position are predictable. The hot zone is 1 mm tall and 1 mm wide. In the versions of Eagle I have used, it is positioned at the exact midpoint of the dimension line. An additional hot zone is found at the spot where the user clicked to begin creating the dimension, but this hot spot is less easily located. The OP has also encountered a second problem that is very common: if you naively add a measurement (dimension) line, it will by default likely be placed on either the top or bottom layer. This is a bad idea because the lines will be treated as traces that may trigger DRC rules. The OP encountered this when the DRC complained about trace overlap errors. Autodesk recommends placing measurements on layer 47 (Measures). If you do so, you will not trigger DRC rules. There is another common source of confusion when measurements are placed on the Top or Bottom layer: the default settings violate common DRC rules for line width and for font, so the DRC creates a hatched design covering the lines, arrowheads, and the text of the measurement. Even if the actual dimension is then successfully deleted, the hatched coverage remains, making it appear as if the dimension line and arrows still remain. This is the problem that user1155386 has noted in the comments above. It is not possible to delete this phantom dimension line, but it will go away if the DRC is run again or if the file is closed and reopened. I hope this answer will help somebody, because personally I found these problems somewhat frustrating and did not find any answers in the documentation or in other questions on this site or other sites. | Mid | [
0.641752577319587,
31.125,
17.375
]
|
There’s A Recall On Dog Food That Could Kill Your Dog Dog food manufacturer JM Smucker is recalling four brands of dog food because they might be tainted with pentobarbital, the drug used to euthanize dogs. Your first reaction is probably "Why is pentobarbital anywhere near the production of dog food?" That would be a good reaction. I can't help you with that. It makes no sense. JM Smucker is blaming it on a single supplier, but that doesn't really answer any questions. | Low | [
0.49635036496350304,
34,
34.5
]
|
1. Field of the Invention The present invention relates to a hollow golf club head that is capable of increasing the launch angle of a hit ball and increasing the distance of the hit ball. 2. Description of the Related Art Recently, there have been proposed hollow golf club heads constructed such that elastic deformation is generated at a crown part as well as a face part when hitting a ball, whereby the launch angle and distance of the hit ball are increased. Examples of such hollow golf club heads are disclosed in JP-A-2003-52866, JP-A-2003-79768, JP-A-2003-88601 and JP-A-2005-137788. The golf club head disclosed in JP-A-2003-52866 is a metal hollow golf club head comprising a face part, a sole part, a side part, a crown part, and a hosel part, wherein at least the main portions of the crown part and the face part are integrally formed with each other, by casting, to constitute a front part, the other parts of the golf club head excluding the front part are also integrally formed with each other to constitute a back part, and the front part and the back part are joined to each other. The golf club head disclosed in JP-A-2003-79768 is a metal hollow golf club head comprising at least a face part, a sole part, a side part, and a crown part, wherein the metal material forming the crown part has the lowest modulus of longitudinal elasticity. The golf club head disclosed in JP-A-2003-88601 is a metal hollow golf club head comprising a face part, a sole part, a toe-side side part, a heel-side side part, back-side side part, a crown part, and a hosel part, wherein the crown part is provided with a plurality of grooves, which extend from the toe-side side part toward the heel-side side part. The golf club head disclosed in JP-A-2005-137788 is a hollow golf club head comprising a face part having a face surface, by which a ball is hit, and a head body part extending to the rear of the head along the rear surface of the face part, wherein the head body part includes a crown part, a sole part, and a side part, which form an upper head part, a lower head part, and a side head part, respectively, and the crown part includes a front crown part forming a front section extending a distance corresponding to 0.15 of the crown depth-wise length Lc from the rear surface of the face part and a rear crown part forming a rear section extending 0.30 or more, moreover, 1.0 of the crown depth-wise length Lc from the rear surface of the face part, the front crown part having a rigidity smaller than that of the rear crown part. However, it is required that the golf club heads according to JP-A-2003-52866, JP-A-2003-79768, JP-A-2003-88601, and JP-A-2005-137788 be improved to increase launch angle. | Mid | [
0.5472061657032751,
35.5,
29.375
]
|
Lemoore Union High School District Lemoore Union High School District is a public school district based in Kings County, California, United States. External links Category:School districts in Kings County, California Category:Lemoore, California | High | [
0.703703703703703,
38,
16
]
|
Remove the stems from the collards and slice the leaves into half-inch strips. In a large pot, heat 1 Tbs. olive oil over medium-high heat. Add the onions and garlic and sauté until fragrant. Stir in the collards and continue to sauté until they begin to wilt. Pour broth or water over the collards, enough for them to be fully covered. Add the apple cider vinegar, sugar, 1 tsp. salt and a few dashes of hot sauce, if using. Bring to a boil, then cover, reduce the heat to low and simmer for 1 hour. Twenty minutes before the collards are done, stir in the diced tomato. Combine 2 cups water with 1 cup rice grits, a pat of butter (if using) and a tsp. of salt. Bring to a boil. Stir once. Cover and simmer on low for about 20 minutes, or until the water has been absorbed and the rice grits are tender. Keep covered, remove from the heat and let sit for 10 minutes. Divide the grits between two plates. Spoon the cooked collards over the grits and top with minced chives. | Mid | [
0.619834710743801,
37.5,
23
]
|
Moseley School Moseley School (incorporating Spring Hill College) is a large comprehensive school in the Moseley area of Birmingham, England. The school's main entrance is situated on Wake Green Road and it lies in the parish of St Christopher, Springfield. In the early 21st century, the school is non-denominational with around 1,360 students, two-thirds of whom are boys. 80% do not have English as a first language, and over 40% are eligible for free school meals. The March 2016 Ofsted report graded the school as good with good features, and found that students make good progress there. The school comprises three main buildings on a single campus – a Victorian college built in the 1850s, a building completed in 2012, and a newly built sports complex. School history The history of what is now Moseley School is complicated. In 1838 a private house in Spring Hill, Hockley, Birmingham, was opened as a training college for Congregationalist ministers under the patronage of George Storer Mansfield (1764–1837) and his two sisters Sarah (1767–1853) and Elizabeth (1772–1847). Twenty years later, in 1857, after expansion to include a further three private houses, the establishment, still named Spring Hill College, moved to new, much larger, purpose-built premises on Wake Green Road, in what was then rural Worcestershire, some miles south of the city. This striking Gothic revival building was designed by the architect Joseph James, and is particularly noted for its gargoyles. In 1886, the college was closed and a replacement establishment founded in Oxford, known as Mansfield College (which is now part of the University of Oxford). Meanwhile, the Wake Green Road buildings were re-opened as the 'Pine Dell Hydropathic Establishment and Moseley Botanical Gardens', which entailed the construction of a swimming bath (with highly decorative ceiling) and greenhouses. At the outbreak of the First World War in 1914, the building was commandeered by the War Office for use as a military barracks. After a brief period as an orphanage, the site returned to educational use in 1921 as a teacher-training facility, under the new name of Springfield College. Finally, in 1923, the premises were transferred to Birmingham City Council, which opened the Moseley Secondary School, for boys only and with a selective entrance examination. Major Ernest Robinson served as headmaster until 1956. The study bedrooms of the former college were merged in pairs to form classrooms. The former hydropathic swimming bath was boarded over to serve as the school assembly hall. An extension was built to house laboratories and further classrooms. A feature of the school (as with many grammar schools) was that the headmaster would live on the premises, which continued as the practice until 1972. The school changed its name to Moseley Grammar School in 1939. In 1955, the city council opened a separate school, known as Moseley Secondary Modern School, fronting College Road, on what had previously been a playing field adjacent to the grammar school site. This new school, with Miss Eileen Cohen (later Mrs Eileen North) as headmistress until 1967, was both co-educational and non-selective. It specialised in performing arts such as theatre and music. Only a fence separated the two schools, and relations between the two sets of pupils were not always peaceful. It was during the headmastership of Bruce Gaskin from 1956 to 1972 that Moseley Grammar School acquired its reputation for academic excellence.
Previously it had been known more for its sporting achievements, particularly in rugby. In 1968 it acquired a former inn near Abergavenny, Wales, known as Old Grouse Cottage, for outdoor activities and field trips, which the current school still retains. The main school range was designated as a Grade II listed building in the year of Mr Gaskin's retirement. In 1974, after two years of uncertainty, the grammar school and the secondary modern school were amalgamated into a single school. This change was controversial but supported by many (among the latter, Mr Gaskin, who after his retirement remained active on the new school's Board of Governors until the 1980s). The combined establishment, known simply as Moseley School, became one of the largest comprehensives in Birmingham. Initially at least, it inherited the good reputations of its predecessors in their respective fields. Moseley Grammar School had been without a head since 1972, and Donald Wilford, headmaster of Moseley Secondary Modern School since 1967, applied for the appointment as head of the merged school. In the event, the job went to an outsider, Alan Goodfellow, who was on record as being bitterly critical of comprehensive education. He was also plagued by ill-health and died in office in 1981. David Swinfen was appointed as head the following year. His ambitious plans, however, were overwhelmed by events, when the former grammar school building, known since the amalgamation as the West Wing, began falling apart as a result of decades of neglect and under-funding. In 1986 the roof of the library was declared unsafe halfway through an exam, and the entire building was closed and earmarked for demolition – the latter prevented only by Mr Swinfen's speedily organised campaign and the resultant public outcry. By the end of his tenure in 1992, the school had undergone a radical change of character, following the redrawing of its catchment area in 1987/88. Hitherto, Moseley School had taken a majority of its pupils from the (then) largely white area of Hall Green, but now it took them from the mainly Asian area of Sparkhill. The campaign for the restoration of the West Wing continued for many years. As part of it, in 1995 Mrs Mary Miles, head teacher from 1992 to 2001, authorised the formation of the Moseleians Association, for former students and staff of the grammar school, secondary modern school, and comprehensive school. It publishes the twice-yearly Moseleian Gazette, and organises regular reunions and many other events. Continuing the work of the Old Moseleians Association – founded by Major Robinson in 1927, but with which the school had severed links in 1968 – the Moseleians Association has assumed an increasingly important role in school life. It sponsors competitions and prizes for pupils, raises funds for the school cottage, plants trees on the school grounds, and has taken over the administration of the school archives. After more than a decade of being closed and shored up with scaffolding, in 1998 – with financial assistance from the Heritage Lottery Fund and the European Regional Development Fund – the West Wing was completely refurbished, and re-opened under its original name of Spring Hill College (as the sixth form of Moseley School). To coincide with its re-opening, the three daughters of Mr Gaskin published Moseley into the Millennium: The Story of Moseley School, detailing and celebrating the history of the school.
Rebuilding Following the resignation of David Peck, head teacher from 2001 to 2008, Tim Boyes, head of nearby Queensbridge School, was brought in as an interim replacement. He, and the City Council, advocated the creation of a combined Trust to administer both schools, which would share facilities and have a merged sixth form, based at Moseley. This plan, however, was scrapped in 2011 when Mr Boyes failed to secure the job of head teacher on a permanent basis. As part of the government's 'Building Schools for the Future' (BSF) strategy, in 2009 Moseley School received approval for a massive new rebuilding programme, involving the complete demolition of the East Wing (the former Moseley Modern School, now in a bad state of repair), with the exception of its more recently built sporting facilities. The rest of the area would become the school's main car park, and meanwhile a new building would be constructed straddling the boundary between the former grammar and secondary modern sites, despite the steep incline from the latter to the former. The old grammar school building, or West Wing (Spring Hill College), would also have a number of alterations carried out to increase its capacity. These plans survived the Coalition Government's cuts almost completely intact. Work began in summer 2011 and was completed by October 2012. The East Wing was demolished in February 2013 and the new building, which had already been in use for some months, was officially opened by the Lord Mayor of Birmingham, Councillor Mike Leddy, on 30 June 2013. The school has invested in excess of £1.5 million into ICT facilities to transform learning and teaching, with larger than average classrooms to provide students with a flexible learning environment. To coincide with the construction of the new building, Craig Jansen, head teacher from 2011 to 2015, introduced eight new school houses to Moseley, which had been without a house system since 1982. Named after Oxford colleges, these are Mansfield, Nuffield, Keble, Pembroke, Hertford, Worcester, Lincoln and Exeter. Lincoln and Exeter were removed in 2015. List of head teachers The following is a list of all those who have held the office of head teacher (earlier, headmaster or headmistress), or acted as such during vacancies, of Moseley School and its predecessor institutions, since the first secondary school was opened on the site in 1923. Moseley Grammar School Moseley Boys' Secondary School 1923–1939; Moseley Boys' Grammar School 1939–1974. Colours: black, red & white. Moseley Modern School Moseley Mixed Secondary Modern School 1955–1974. Colours: red & green. Moseley School Moseley School 1974–2000; Moseley School / A Language College 2000– . Colours: black, red & white. Former pupils The individuals below are listed by the Moseleians Association as famous Moseleians, former pupils of Moseley Grammar School (MGS), Moseley Modern School (MMS), or Moseley School (MS). Those who were pupils at the time of the merger are identified according to the school they started at. 
Mansoor Nazir - RIBA (MS) Director - DHA Architects References External links Moseley School Moseleians Association EduBase: Moseley School Category:Comprehensive schools in Birmingham, West Midlands Category:Grade II listed buildings in Birmingham Category:Grade II listed educational buildings Category:Educational institutions established in 1923 * Category:1923 establishments in England Category:Foundation schools in Birmingham, West Midlands Category:Secondary schools in Birmingham, West Midlands Category:Moseley | Mid | [
0.6048192771084331,
31.375,
20.5
]
|
UAA to celebrate the Class of 2010 on Sunday, May 2 Dana Fabe, Oliver Leavitt and John Strohmeyer will receive honorary degrees; more than 2,200 students to graduate ANCHORAGE, AK - The University of Alaska Anchorage will celebrate the 2,237 graduates of the Class of 2010 at the annual Commencement Ceremony on Sunday, May 2 at 3 p.m. at the Sullivan Arena. A special Graduate Hooding Ceremony will honor graduate students receiving their hoods on Saturday, May 1 at 10 a.m. in the Wendy Williamson Auditorium on UAA's campus. Commencement is UAA's capstone event celebrating the achievements of its students and the lifetime achievements of extraordinary faculty and community members. UAA will confer degrees on students from the following schools and colleges: Colleges of Arts and Sciences, Business and Public Policy, Health and Social Welfare, Education, the Community and Technical College and the School of Engineering. In all, UAA will confer 560 associate degrees, 1,180 bachelor's degrees, 391 master's degrees, and 160 certificates and licenses. This year's student speaker is Katie Marquette. She grew up in Fairbanks, Alaska and graduated from Austin E. Lathrop High School in 2006. Graduating cum laude with UAA Leadership Honors, Katie will receive a Bachelor of Arts in Sociology with an emphasis in Community and Change. Since 2007, she's worked with UAA's Career Services Center to help coordinate several career fairs and workshops. She's conducted important research on the availability of same-sex domestic partner benefits and sexual orientation protections in Alaska's private sector, and has traveled the world to attend prestigious academic conferences. Upon graduating, Katie will begin working with a team of professionals to help manage a state gubernatorial campaign, and hopes to one day attend law school. Read Katie's full bio at http://greenandgold.uaa.alaska.edu/index.php?option=com_content&view=article&id=4861. Three prominent leaders will be recognized for their many achievements and contributions in service to Alaska, to learning and to humankind: Dana Fabe, Honorary Doctor of Laws; Oliver Leavitt, Honorary Doctor of Laws; and John Strohmeyer, Honorary Doctor of Letters (posthumous award). Many years of outstanding service to the University and its students will be gratefully recognized with the conferring of emeriti status on three faculty members: Stephen Haycox, Professor Emeritus of History, College of Arts and Sciences; Kerry Feldman, Professor Emeritus of Anthropology, College of Arts and Sciences; and Brian Wick, Professor Emeritus of Mathematics, College of Arts and Sciences. | High | [
0.6650485436893201,
34.25,
17.25
]
|
Factors
1) Glass segment - Overall sales increased by 1.6% from the previous year; however, operating profit decreased by JPY 13,900 million, resulting in an operating loss of JPY 4,000 million. Background of the sales increase:
- Europe: despite a drop in vehicle production due to the worsening economic climate, the Company maintained the same level of shipment volume as last year.
- Japan: vehicle production increased from last year, when production slowed in the aftermath of the Great East Japan Earthquake.
- Asia & North America: vehicle production remained stable.
Management plan
- In February 2013, the Company announced its new mid-term business plan covering 2013 through 2015. Based on the three-year program called "Grow Beyond-2015", the Company aims to increase sales to over 1.55 trillion yen, up 30 percent from the 2012 result, and operating profit to approximately 140 billion yen, up 50 percent from 2012. The Group intends to achieve this goal by reducing its excessive dependence on its electronics-related business, which generated more than 70 percent of the Company's operating profit in 2012, while boosting the profitability of its glass and chemical products businesses. By 2015, the Company is going to invest 440 billion yen in plants and equipment, which includes 150 billion yen in the glass segment and 140 billion yen in the electronics segment.
Challenges to be met to achieve the new mid-term management plan:
1. To reinforce its growth areas
- The Company aims to establish a new, highly profitable business, while anticipating a slowdown in sales at its Flat Panel Display (FPD) business.
- In the area of automotive glass, the Company will strengthen sales of functional glass such as UV-cut glass and IR (infrared)-cut glass, which contribute to offering comfortable cabins.
- The Company's chemically strengthened special glass, its strategic product, has been chosen for instrument panel applications on luxury vehicles made by European and U.S. automakers, marking the first adoption of such technology for automobiles.
- The Company's strategic markets include Russia, where it already has a strong presence; Brazil, where it is launching new operations; and Asia.
- Pursuing new business opportunities in Southeast Asia, the Company will establish a regional headquarters in Singapore in 2013.
2. To boost sales and profits
- By accelerating releases of high value-added products such as functional glass for automobiles, the Company aims to boost its overall profitability.
- The Company intends to increase sales in Europe, while accelerating development of coating products by bolstering its partnership with Germany-based Interpane Glas Industrie AG.
Recent Development Outside Japan
- In April 2012, the Company held a cornerstone-laying ceremony in Guaratingueta, Sao Paulo State, Brazil, for a new plant of its subsidiary, AGC Vidros do Brasil Ltda. The Group is investing approx. 40 billion yen to start business operations in Brazil as one of its major projects. The new plant will begin operations in 2013. The Group intends to provide its globally acclaimed technologies, quality and services to customers in Brazil and increase sales of automotive and architectural glass. (From an article in the Nikkan Jidosha Shimbun on April 20, 2012)
Contracts
- The super-UV-blocking glass with an IR-blocking function, the "UV Verre Premium Cool on", has been selected for the first time for the front door windows of special editions of Toyota Motor's "Vitz" (the F Ciel and the F Smart Stop Package Ciel). It blocks approx. 99% of UV rays and 70% of infrared rays. (From an article in the Nikkan Jidosha Shimbun on Dec. 6, 2012)
- The Company announced that Honda has selected the "UV Verre Premium", its tempered glass for automotive front door windows that filters out approximately 99% of the sun's UV rays, for the first time. It has been installed in the "She's", "HYBRID She's" and "HYBRID XH Selection" trim levels of the "Fit" compact cars. (From an article in the Nikkan Jidosha Shimbun on Jun. 9, 2012)
- The Company announced that the "UV Verre Premium" has been selected as a "beauty package" option on the front door windows of the "Aqua", Toyota's new hybrid vehicle. The "Aqua" is the fifth Toyota vehicle model using the UV Verre Premium tempered glass. The UV Verre Premium glass was initially featured on some trim levels of "Vitz" vehicles at the end of 2010. (From an article in the Nikkan Jidosha Shimbun on January 25, 2012)
- The Group plans to spend approximately JPY 150,000 million on R&D activities in FY2013-2015.
R&D Activities
Glass division
- The Company will accelerate its activities to develop a new glass-melting technology, which significantly reduces both environmental footprint and production cost.
- The Company will develop high value-added products (display materials, energy-saving glass) by integrating its expertise in the areas of glass, chemical and ceramics technologies.
Product Development
Chemically strengthened special glass
- The Company will step up its efforts to expand sales of its chemically strengthened glass for automotive applications. The Company's chemically strengthened Dragontrail glass was released in January 2011. In addition to offering a significantly higher level of product strength than conventional glass, Dragontrail is extremely resistant to scratches, while featuring a high texture quality that cannot be achieved with plastics. Dragontrail is already widely used in the cover glass of smartphones, tablet PCs, LCD TVs and other electronic devices. The Company expects that the thin and lightweight material will also be extensively employed in automotive products for its quality surface finish suitable for touch-screen and other technologies, as well as its superior bending formability. The Company is making technology-based proposals of the Dragontrail for the cover glass of car navigation systems and front instrument panels. (From an article in the Nikkan Jidosha Shimbun on Jun. 28, 2012)
Investment Activities
Capital Expenditure (in million JPY) | FY ended Dec. 31, 2012 | FY ended Dec. 31, 2011 | FY ended Dec. 31, 2010
---|---|---|---
Overall | 155,300 | 152,700 | 117,400
Glass business | 58,400 | 50,400 | 34,600
Capital investment at the Glass Division as a % of overall capital expenditures | 37.6 | 33.0 | 29.4
- Major investments at the glass business for the FY ended Dec. 31, 2012 included installing equipment for residential glass and automotive glass at the Brazil factory.
- The Group is planning to invest approximately JPY 440,000 million in plants and equipment in FY2013-2015. | Mid | [
0.5944700460829491,
32.25,
22
]
|
Over the years, a number of ways have developed for the design and construction of control devices using mechanical and electromechanical equipment that have proved to be safe and reliable in operation. These types of devices have been used for many years in the control of equipment that can create unsafe conditions if a failure occurs. An example of this type of equipment is a burner control system that is operated under the supervision of units that generically are referred to as flame safeguard systems. In this burner control art it is essential that upon certain types of failures that the fuel valve to a fuel burner be closed. The failure of a flame safeguard control system to operate properly can lead to a situation in which a fuel valve is left open when no flame exists, and a fuel-burning chamber can be loaded with fuel. This fuel can then accidentally be ignited causing an explosion. This type of failure can generally be guarded against in the existing technology of flame safeguard systems by utilizing safety checking types of circuits that repetitively simulate the absence of flame and then check for the presence of flame. These types of systems then repetitively charge and discharge a capacitive series arrangement to hold in a control relay that in turn energizes the fuel valve. This type of closed loop safety system has been used for a number of years, and is generally considered to be quite reliable. In recent years, the conventional electromechanical and electronic types of control systems, including flame safeguard control systems, have been displaced by electronic control systems of the digital type that utilize microprocessors or microcomputers as the heart of the condition responsive control circuit means. The use of digital logic including microcomputers and microprocessors leads to many benefits in that more sophisticated and fuel efficient types of control systems can be developed. The detriment of the use of digital logic and microcomputers or microprocessors is that circuit failures within the digital equipment can occur and result in an unsafe mode of operation of the overall control system. The normal technique for verifying the operation of a computer-type of microprocessors or microcomputer arrangement is in the use of dual processors. In this case, one computer or processor is programmed to check up on the other processor or computer, and vice versa. This redundancy allows for the detection of a malfunction, and allows the healthy processor or microcomputer to take the necessary corrective action in the event of a failure of the other of the dual elements. The use of dual microcomputers or microprocessors is a very expensive and complex technique for generating a safe operating control system. It is essential for the practical application of safety control systems, such as the flame safeguard control systems, that a reliable and less expensive approach be developed. | Mid | [
0.6353467561521251,
35.5,
20.375
]
|
Food Network is launching a paid subscription service that will offer live, interactive cooking classes with star chefs like Bobby Flay, Rachael Ray and Giada De Laurentis. The new streaming-video app, called Food Network Kitchen, also will enable viewers to order ingredients for recipes and cooking utensils through a partnership with Amazon. Launching in late October, the service will cost $47.99 a year, and will include over 80,000 recipes, 800 on-demand classes, as well as 25 live classes a week, which will be filmed in Food Network’s test kitchen in Chelsea Market in New York. “There’s something special about taking a class live,” said Peter Faricy, the head of direct-to-consumer at Food Network’s parent Discovery. “You can ask questions during the class. Someone may ask Bobby Flay: ‘How much salt do we need in this?’ It’s about as authentic experience as you can get.” It’s the latest offering from Discovery, which is looking to stand out among a slew of media giants that are launching subscription-based streaming services in the coming year to compete against Netflix. “There are eight players fighting over who is going to be the king of the scripted series,” Discovery Chief Executive David Zaslav told The Post, ticking off giants like Disney, Apple, WarnerMedia and NBCUniversal. “The truth is, that pie of entertainment is one pie and they are fighting over who gets that piece.” Food Network’s new app, meanwhile, “is unlike any other media app that exists,” Zaslav said. “It’s about creating content that people can use and transact with.” Zaslav said he and his top lieutenants took inspiration from Peloton, the indoor exercise bike that allows users to take live at-home classes virtually with other Peloton users. Zaslav tapped Faricy, a former Amazon Marketplace vice president, and Faricy, in turn, approached his former colleagues at Amazon to partner on the project. The three-year deal with Amazon includes placements on Amazon Fire Tablets, Amazon Alexa and Echo Show, as well as Fire TV streaming media devices and Fire TV Edition smart TVs. Users will also have the ability to order ingredients from Amazon Fresh and other retailers. Down the road, Food Network Kitchen will partner with retailers including Amazon to sell cooking utensils used in its classes. | High | [
0.6740947075208911,
30.25,
14.625
]
|
With the development of radio networks, data services play a more and more important role in the radio networks, and accordingly require a broader and broader transmission bandwidth. The data services demand a broader bandwidth than voice services do. Especially after the introduction of High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA) and Code Division Multiple Access (CDMA) 1X Do, the transmission rates of the data services that may be provided for the terminals have become higher and higher, and development of the data services has also become faster and faster. Consequently, the traffic load on a base station grows increasingly, and a broader transmission bandwidth is demanded by the base station. In the conventional radio networks, base stations having three sectors are generally used. After the introduction of HSDPA, HSUPA or CDMA 1X Do, the downlink throughput of the base station may be up to 9 megabits per second (Mbps) and the uplink throughput thereof may be up to 1 Mbps, so as to ensure the data service transmission for the terminals. Including the overheads of the lower layers, the resulting rate of the physical layer is approximately 15 Mbps in the downlink direction and 1.5 Mbps in the uplink direction. The fees for the data services are relatively low, and thus the income from the data services is greatly less than that from the voice services. In this case, if operators go on to implement the access and data transmission of the base station by using the E1/T1 with its expensive rent, their profit is reduced severely. The x Digital Subscriber Line (xDSL) is a family of technologies that provide very high-bandwidth digital signal transmission over conventional telephone lines and have advantages in convenient access, abundant transmission resources and low transmission cost. At present the xDSL technologies are generally applied in the radio networks. Compared with the accessing of a base station by use of E1/T1, the accessing of a base station by use of xDSL may reduce the cost of base station access significantly. It is primarily the Very high speed Digital Subscriber Line (VDSL) and the Asymmetric Digital Subscriber Line (ADSL) in the xDSL family that are used to implement the base station access in the prior art. The two application scenarios are described respectively below. FIG. 1 shows a schematic diagram of a networking structure for implementing base station access by use of VDSL in the prior art. It may be seen from FIG. 1 that a base station is connected to a VDSL Modem via an Ethernet line, the VDSL Modem is connected to a Digital Subscriber Line Access Multiplexer (DSLAM) via a twisted pair, and the DSLAM is connected to a Broadband Access Service (BAS) via a fast Ethernet network. The BAS transfers the traffic on the DSLAM to a radio network controller (RNC) via an IP network. The transmission rate of VDSL in the uplink direction may be up to 1.5 Mbps and in the downlink direction may be up to 12 Mbps. Thus, the bandwidth of VDSL may meet the requirements for data transmission in base station access. However, the valid transmission distance of VDSL is 1 kilometer (km), beyond which the rate of VDSL decreases rapidly. Therefore, VDSL may only be used in short-distance base station access and thus cannot be applied widely. FIG. 2 shows a schematic diagram of a networking structure for implementing base station access by use of ADSL in the prior art. This figure is basically the same as FIG. 1, except that the base station is connected to an ADSL Modem via an Ethernet line and ADSL access technologies are employed therein. ADSL is applied widely. ADSL has a valid transmission distance of up to 3 km and thus is applicable to long-distance base station access. However, because the bandwidth of ADSL is too narrow, i.e. approximately 0.5 Mbps in the uplink direction and approximately 6 Mbps in the downlink direction, base station access by use of ADSL is unable to meet the requirements of most base stations for transmission bandwidth. | Mid | [
0.5662100456621001,
31,
23.75
]
|
<!DOCTYPE html> <!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]--> <!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]--> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>claf.data.dataset package — CLaF 0.2.0 documentation</title> <link rel="shortcut icon" href="_static/favicon.ico"/> <script type="text/javascript" src="_static/js/modernizr.min.js"></script> <script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script> <script type="text/javascript" src="_static/jquery.js"></script> <script type="text/javascript" src="_static/underscore.js"></script> <script type="text/javascript" src="_static/doctools.js"></script> <script type="text/javascript" src="_static/language_data.js"></script> <script type="text/javascript" src="_static/js/theme.js"></script> <link rel="stylesheet" href="_static/css/theme.css" type="text/css" /> <link rel="stylesheet" href="_static/pygments.css" type="text/css" /> <link rel="stylesheet" href="_static/theme_overrides.css" type="text/css" /> <link rel="index" title="Index" href="genindex.html" /> <link rel="search" title="Search" href="search.html" /> <link rel="next" title="claf.data.reader package" href="claf.data.reader.html" /> <link rel="prev" title="claf.data package" href="claf.data.html" /> </head> <body class="wy-body-for-nav"> <div class="wy-grid-for-nav"> <nav data-toggle="wy-nav-shift" class="wy-nav-side"> <div class="wy-side-scroll"> <div class="wy-side-nav-search"> <a href="index.html"> <img src="_static/logo.png" class="logo" alt="Logo"/> </a> <div class="version"> 0.2.0 </div> <div role="search"> <form id="rtd-search-form" class="wy-form" action="search.html" method="get"> <input type="text" name="q" placeholder="Search docs" /> <input type="hidden" name="check_keywords" value="yes" /> <input type="hidden" name="area" value="default" /> </form> </div> </div> <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation"> <p class="caption"><span class="caption-text">Contents</span></p> <ul> <li class="toctree-l1"><a class="reference internal" href="contents/dataset_and_model.html">Dataset and Model</a></li> <li class="toctree-l1"><a class="reference internal" href="contents/pretrained_vector.html">Pretrained Vector</a></li> <li class="toctree-l1"><a class="reference internal" href="contents/tokens.html">Tokens</a></li> </ul> <p class="caption"><span class="caption-text">Package Reference</span></p> <ul class="current"> <li class="toctree-l1"><a class="reference internal" href="claf.config.html">config</a></li> <li class="toctree-l1 current"><a class="reference internal" href="claf.data.html">data</a><ul class="current"> <li class="toctree-l2 current"><a class="reference internal" href="claf.data.html#subpackages">Subpackages</a><ul class="current"> <li class="toctree-l3 current"><a class="current reference internal" href="#">claf.data.dataset package</a><ul> <li class="toctree-l4"><a class="reference internal" href="#module-claf.data.dataset.base">Submodules</a></li> <li class="toctree-l4"><a class="reference internal" href="#module-claf.data.dataset">Module contents</a></li> </ul> </li> <li class="toctree-l3"><a class="reference internal" href="claf.data.reader.html">claf.data.reader package</a></li> </ul> </li> <li class="toctree-l2"><a class="reference internal" href="claf.data.html#module-claf.data.collate">Submodules</a></li> <li class="toctree-l2"><a 
class="reference internal" href="claf.data.html#module-claf.data">Module contents</a></li> </ul> </li> <li class="toctree-l1"><a class="reference internal" href="claf.learn.html">learn</a></li> <li class="toctree-l1"><a class="reference internal" href="claf.metric.html">metric</a></li> <li class="toctree-l1"><a class="reference internal" href="claf.model.html">model</a></li> <li class="toctree-l1"><a class="reference internal" href="claf.modules.html">modules</a></li> <li class="toctree-l1"><a class="reference internal" href="claf.tokens.html">tokens</a></li> </ul> <p class="caption"><span class="caption-text">Reports</span></p> <ul> <li class="toctree-l1"><a class="reference internal" href="reports/glue.html">GLUE</a></li> <li class="toctree-l1"><a class="reference internal" href="reports/historyqa.html">HistoryQA</a></li> <li class="toctree-l1"><a class="reference internal" href="reports/korquad.html">KorQuAD</a></li> <li class="toctree-l1"><a class="reference internal" href="reports/squad.html">SQuAD</a></li> <li class="toctree-l1"><a class="reference internal" href="reports/wikisql.html">WikiSQL</a></li> </ul> <p class="caption"><span class="caption-text">Summary</span></p> <ul> <li class="toctree-l1"><a class="reference internal" href="summary/reading_comprehension.html">Reading Comprehension</a></li> </ul> <p class="caption"><span class="caption-text">Appendix</span></p> <ul> <li class="toctree-l1"><a class="reference internal" href="references.html">References</a></li> </ul> </div> </div> </nav> <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap"> <nav class="wy-nav-top" aria-label="top navigation"> <i data-toggle="wy-nav-top" class="fa fa-bars"></i> <a href="index.html">CLaF</a> </nav> <div class="wy-nav-content"> <div class="rst-content"> <div role="navigation" aria-label="breadcrumbs navigation"> <ul class="wy-breadcrumbs"> <li><a href="index.html">Docs</a> »</li> <li><a href="claf.data.html">claf.data package</a> »</li> <li>claf.data.dataset package</li> <li class="wy-breadcrumbs-aside"> <a href="_sources/claf.data.dataset.rst.txt" rel="nofollow"> View page source</a> </li> </ul> <hr/> </div> <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article"> <div itemprop="articleBody"> <div class="section" id="claf-data-dataset-package"> <h1>claf.data.dataset package<a class="headerlink" href="#claf-data-dataset-package" title="Permalink to this headline">¶</a></h1> <div class="section" id="module-claf.data.dataset.base"> <span id="submodules"></span><h2>Submodules<a class="headerlink" href="#module-claf.data.dataset.base" title="Permalink to this headline">¶</a></h2> <dl class="class"> <dt id="claf.data.dataset.base.DatasetBase"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.base.</code><code class="sig-name descname">DatasetBase</code><a class="reference internal" href="_modules/claf/data/dataset/base.html#DatasetBase"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.base.DatasetBase" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">torch.utils.data.dataset.Dataset</span></code></p> <p>Dataset Base Model An abstract class representing a Dataset.</p> <dl class="method"> <dt id="claf.data.dataset.base.DatasetBase.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id</em><span 
class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/base.html#DatasetBase.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.base.DatasetBase.collate_fn" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.base.DatasetBase.get_ground_truth"> <code class="sig-name descname">get_ground_truth</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/base.html#DatasetBase.get_ground_truth"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.base.DatasetBase.get_ground_truth" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.base.DatasetBase.get_ground_truths"> <code class="sig-name descname">get_ground_truths</code><span class="sig-paren">(</span><em class="sig-param">data_idxs</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/base.html#DatasetBase.get_ground_truths"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.base.DatasetBase.get_ground_truths" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.base.DatasetBase.get_predict"> <code class="sig-name descname">get_predict</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/base.html#DatasetBase.get_predict"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.base.DatasetBase.get_predict" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.base.DatasetBase.lazy_evaluation"> <code class="sig-name descname">lazy_evaluation</code><span class="sig-paren">(</span><em class="sig-param">index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/base.html#DatasetBase.lazy_evaluation"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.base.DatasetBase.lazy_evaluation" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <span class="target" id="module-claf.data.dataset.seq_cls"></span><dl class="class"> <dt id="claf.data.dataset.seq_cls.SeqClsDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.seq_cls.</code><code class="sig-name descname">SeqClsDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.seq_cls.SeqClsDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <p>Dataset for Sequence Classification</p> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: 
helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.seq_cls.SeqClsDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.seq_cls.SeqClsDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.seq_cls.SeqClsDataset.get_class_text_with_idx"> <code class="sig-name descname">get_class_text_with_idx</code><span class="sig-paren">(</span><em class="sig-param">class_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset.get_class_text_with_idx"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.seq_cls.SeqClsDataset.get_class_text_with_idx" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.seq_cls.SeqClsDataset.get_ground_truth"> <code class="sig-name descname">get_ground_truth</code><span class="sig-paren">(</span><em class="sig-param">data_id</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset.get_ground_truth"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.seq_cls.SeqClsDataset.get_ground_truth" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.seq_cls.SeqClsDataset.get_id"> <code class="sig-name descname">get_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset.get_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.seq_cls.SeqClsDataset.get_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.seq_cls.SeqClsDataset.num_classes"> <em class="property">property </em><code class="sig-name descname">num_classes</code><a class="headerlink" href="#claf.data.dataset.seq_cls.SeqClsDataset.num_classes" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.seq_cls.SeqClsDataset.sequence_maxlen"> <em class="property">property </em><code class="sig-name descname">sequence_maxlen</code><a class="headerlink" href="#claf.data.dataset.seq_cls.SeqClsDataset.sequence_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <span class="target" id="module-claf.data.dataset.squad"></span><dl class="class"> <dt id="claf.data.dataset.squad.SQuADDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.squad.</code><code class="sig-name descname">SQuADDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" 
href="#claf.data.dataset.squad.SQuADDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <dl class="simple"> <dt>SQuAD Dataset</dt><dd><p>compatible with v1.1 and v2.0</p> </dd> </dl> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.context_maxlen"> <em class="property">property </em><code class="sig-name descname">context_maxlen</code><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.context_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.get_context"> <code class="sig-name descname">get_context</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_context"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.get_context" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.get_ground_truths"> <code class="sig-name descname">get_ground_truths</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_ground_truths"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.get_ground_truths" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.get_predict"> <code class="sig-name descname">get_predict</code><span class="sig-paren">(</span><em class="sig-param">data_index</em>, <em class="sig-param">start</em>, <em class="sig-param">end</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_predict"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.get_predict" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.get_qid"> <code class="sig-name descname">get_qid</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_qid"><span 
class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.get_qid" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.get_text_span"> <code class="sig-name descname">get_text_span</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_text_span"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.get_text_span" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.get_text_with_index"> <code class="sig-name descname">get_text_with_index</code><span class="sig-paren">(</span><em class="sig-param">data_index</em>, <em class="sig-param">start</em>, <em class="sig-param">end</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_text_with_index"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.get_text_with_index" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.squad.SQuADDataset.question_maxlen"> <em class="property">property </em><code class="sig-name descname">question_maxlen</code><a class="headerlink" href="#claf.data.dataset.squad.SQuADDataset.question_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <span class="target" id="module-claf.data.dataset.wikisql"></span><dl class="class"> <dt id="claf.data.dataset.wikisql.WikiSQLDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.wikisql.</code><code class="sig-name descname">WikiSQLDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.wikisql.WikiSQLDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <p>WikiSQL Dataset</p> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.wikisql.WikiSQLDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.wikisql.WikiSQLDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt 
id="claf.data.dataset.wikisql.WikiSQLDataset.get_ground_truth"> <code class="sig-name descname">get_ground_truth</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.get_ground_truth"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.wikisql.WikiSQLDataset.get_ground_truth" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.wikisql.WikiSQLDataset.get_id"> <code class="sig-name descname">get_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.get_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.wikisql.WikiSQLDataset.get_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.wikisql.WikiSQLDataset.get_table_id"> <code class="sig-name descname">get_table_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.get_table_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.wikisql.WikiSQLDataset.get_table_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.wikisql.WikiSQLDataset.get_tokenized_question"> <code class="sig-name descname">get_tokenized_question</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.get_tokenized_question"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.wikisql.WikiSQLDataset.get_tokenized_question" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.wikisql.WikiSQLDataset.question_maxlen"> <em class="property">property </em><code class="sig-name descname">question_maxlen</code><a class="headerlink" href="#claf.data.dataset.wikisql.WikiSQLDataset.question_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> </div> <div class="section" id="module-claf.data.dataset"> <span id="module-contents"></span><h2>Module contents<a class="headerlink" href="#module-claf.data.dataset" title="Permalink to this headline">¶</a></h2> <dl class="class"> <dt id="claf.data.dataset.MultiTaskBertDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.</code><code class="sig-name descname">MultiTaskBertDataset</code><span class="sig-paren">(</span><em class="sig-param">batches</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/multi_task.html#MultiTaskBertDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.MultiTaskBertDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal 
notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <p>Dataset for Multi-Task GLUE using BERT</p> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.MultiTaskBertDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/multi_task.html#MultiTaskBertDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.MultiTaskBertDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.MultiTaskBertDataset.init_iterators"> <code class="sig-name descname">init_iterators</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/multi_task.html#MultiTaskBertDataset.init_iterators"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.MultiTaskBertDataset.init_iterators" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <dl class="class"> <dt id="claf.data.dataset.RegressionBertDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.</code><code class="sig-name descname">RegressionBertDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/regression.html#RegressionBertDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.RegressionBertDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <p>Dataset for Regression using BERT</p> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.RegressionBertDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/regression.html#RegressionBertDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.RegressionBertDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.RegressionBertDataset.get_ground_truth"> <code class="sig-name descname">get_ground_truth</code><span class="sig-paren">(</span><em class="sig-param">data_id</em><span class="sig-paren">)</span><a class="reference internal" 
href="_modules/claf/data/dataset/bert/regression.html#RegressionBertDataset.get_ground_truth"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.RegressionBertDataset.get_ground_truth" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.RegressionBertDataset.get_id"> <code class="sig-name descname">get_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/regression.html#RegressionBertDataset.get_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.RegressionBertDataset.get_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.RegressionBertDataset.sequence_maxlen"> <em class="property">property </em><code class="sig-name descname">sequence_maxlen</code><a class="headerlink" href="#claf.data.dataset.RegressionBertDataset.sequence_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <dl class="class"> <dt id="claf.data.dataset.SeqClsDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.</code><code class="sig-name descname">SeqClsDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <p>Dataset for Sequence Classification</p> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.SeqClsDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsDataset.get_class_text_with_idx"> <code class="sig-name descname">get_class_text_with_idx</code><span class="sig-paren">(</span><em class="sig-param">class_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset.get_class_text_with_idx"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsDataset.get_class_text_with_idx" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsDataset.get_ground_truth"> <code class="sig-name 
descname">get_ground_truth</code><span class="sig-paren">(</span><em class="sig-param">data_id</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset.get_ground_truth"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsDataset.get_ground_truth" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsDataset.get_id"> <code class="sig-name descname">get_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/seq_cls.html#SeqClsDataset.get_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsDataset.get_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsDataset.num_classes"> <em class="property">property </em><code class="sig-name descname">num_classes</code><a class="headerlink" href="#claf.data.dataset.SeqClsDataset.num_classes" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsDataset.sequence_maxlen"> <em class="property">property </em><code class="sig-name descname">sequence_maxlen</code><a class="headerlink" href="#claf.data.dataset.SeqClsDataset.sequence_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <dl class="class"> <dt id="claf.data.dataset.SeqClsBertDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.</code><code class="sig-name descname">SeqClsBertDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/seq_cls.html#SeqClsBertDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsBertDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <p>Dataset for Sequence Classification using BERT</p> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.SeqClsBertDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/seq_cls.html#SeqClsBertDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsBertDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsBertDataset.get_class_text_with_idx"> <code class="sig-name descname">get_class_text_with_idx</code><span class="sig-paren">(</span><em class="sig-param">class_index</em><span 
class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/seq_cls.html#SeqClsBertDataset.get_class_text_with_idx"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsBertDataset.get_class_text_with_idx" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsBertDataset.get_ground_truth"> <code class="sig-name descname">get_ground_truth</code><span class="sig-paren">(</span><em class="sig-param">data_id</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/seq_cls.html#SeqClsBertDataset.get_ground_truth"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsBertDataset.get_ground_truth" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsBertDataset.get_id"> <code class="sig-name descname">get_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/seq_cls.html#SeqClsBertDataset.get_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SeqClsBertDataset.get_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsBertDataset.num_classes"> <em class="property">property </em><code class="sig-name descname">num_classes</code><a class="headerlink" href="#claf.data.dataset.SeqClsBertDataset.num_classes" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SeqClsBertDataset.sequence_maxlen"> <em class="property">property </em><code class="sig-name descname">sequence_maxlen</code><a class="headerlink" href="#claf.data.dataset.SeqClsBertDataset.sequence_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <dl class="class"> <dt id="claf.data.dataset.SQuADDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.</code><code class="sig-name descname">SQuADDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <dl class="simple"> <dt>SQuAD Dataset</dt><dd><p>compatible with v1.1 and v2.0</p> </dd> </dl> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" 
href="_modules/claf/data/dataset/squad.html#SQuADDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.context_maxlen"> <em class="property">property </em><code class="sig-name descname">context_maxlen</code><a class="headerlink" href="#claf.data.dataset.SQuADDataset.context_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.get_context"> <code class="sig-name descname">get_context</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_context"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADDataset.get_context" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.get_ground_truths"> <code class="sig-name descname">get_ground_truths</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_ground_truths"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADDataset.get_ground_truths" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.get_predict"> <code class="sig-name descname">get_predict</code><span class="sig-paren">(</span><em class="sig-param">data_index</em>, <em class="sig-param">start</em>, <em class="sig-param">end</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_predict"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADDataset.get_predict" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.get_qid"> <code class="sig-name descname">get_qid</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_qid"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADDataset.get_qid" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.get_text_span"> <code class="sig-name descname">get_text_span</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_text_span"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADDataset.get_text_span" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.get_text_with_index"> <code class="sig-name descname">get_text_with_index</code><span class="sig-paren">(</span><em class="sig-param">data_index</em>, <em class="sig-param">start</em>, <em class="sig-param">end</em><span class="sig-paren">)</span><a class="reference 
internal" href="_modules/claf/data/dataset/squad.html#SQuADDataset.get_text_with_index"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADDataset.get_text_with_index" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADDataset.question_maxlen"> <em class="property">property </em><code class="sig-name descname">question_maxlen</code><a class="headerlink" href="#claf.data.dataset.SQuADDataset.question_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <dl class="class"> <dt id="claf.data.dataset.SQuADBertDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.</code><code class="sig-name descname">SQuADBertDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <dl class="simple"> <dt>SQuAD Dataset for BERT</dt><dd><p>compatible with v1.1 and v2.0</p> </dd> </dl> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.bert_input_maxlen"> <em class="property">property </em><code class="sig-name descname">bert_input_maxlen</code><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.bert_input_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.get_bert_tokens"> <code class="sig-name descname">get_bert_tokens</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.get_bert_tokens"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.get_bert_tokens" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.get_context"> <code class="sig-name descname">get_context</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" 
href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.get_context"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.get_context" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.get_ground_truths"> <code class="sig-name descname">get_ground_truths</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.get_ground_truths"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.get_ground_truths" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.get_id"> <code class="sig-name descname">get_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.get_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.get_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.get_predict"> <code class="sig-name descname">get_predict</code><span class="sig-paren">(</span><em class="sig-param">data_index</em>, <em class="sig-param">start</em>, <em class="sig-param">end</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.get_predict"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.get_predict" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.get_qid"> <code class="sig-name descname">get_qid</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.get_qid"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.get_qid" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.get_qid_index"> <code class="sig-name descname">get_qid_index</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.get_qid_index"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.get_qid_index" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.SQuADBertDataset.get_text_with_index"> <code class="sig-name descname">get_text_with_index</code><span class="sig-paren">(</span><em class="sig-param">data_index</em>, <em class="sig-param">start</em>, <em class="sig-param">end</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/squad.html#SQuADBertDataset.get_text_with_index"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.SQuADBertDataset.get_text_with_index" title="Permalink to this definition">¶</a></dt> 
<dd></dd></dl> </dd></dl> <dl class="class"> <dt id="claf.data.dataset.TokClsBertDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.</code><code class="sig-name descname">TokClsBertDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/tok_cls.html#TokClsBertDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.TokClsBertDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <p>Dataset for Token Classification</p> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.TokClsBertDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/tok_cls.html#TokClsBertDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.TokClsBertDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.TokClsBertDataset.get_ground_truth"> <code class="sig-name descname">get_ground_truth</code><span class="sig-paren">(</span><em class="sig-param">data_id</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/tok_cls.html#TokClsBertDataset.get_ground_truth"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.TokClsBertDataset.get_ground_truth" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.TokClsBertDataset.get_id"> <code class="sig-name descname">get_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/tok_cls.html#TokClsBertDataset.get_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.TokClsBertDataset.get_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.TokClsBertDataset.get_tag_text_with_idx"> <code class="sig-name descname">get_tag_text_with_idx</code><span class="sig-paren">(</span><em class="sig-param">tag_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/tok_cls.html#TokClsBertDataset.get_tag_text_with_idx"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.TokClsBertDataset.get_tag_text_with_idx" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.TokClsBertDataset.get_tag_texts_with_idxs"> <code class="sig-name 
descname">get_tag_texts_with_idxs</code><span class="sig-paren">(</span><em class="sig-param">tag_idxs</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/bert/tok_cls.html#TokClsBertDataset.get_tag_texts_with_idxs"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.TokClsBertDataset.get_tag_texts_with_idxs" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.TokClsBertDataset.num_tags"> <em class="property">property </em><code class="sig-name descname">num_tags</code><a class="headerlink" href="#claf.data.dataset.TokClsBertDataset.num_tags" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.TokClsBertDataset.sequence_maxlen"> <em class="property">property </em><code class="sig-name descname">sequence_maxlen</code><a class="headerlink" href="#claf.data.dataset.TokClsBertDataset.sequence_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> <dl class="class"> <dt id="claf.data.dataset.WikiSQLDataset"> <em class="property">class </em><code class="sig-prename descclassname">claf.data.dataset.</code><code class="sig-name descname">WikiSQLDataset</code><span class="sig-paren">(</span><em class="sig-param">batch</em>, <em class="sig-param">vocab</em>, <em class="sig-param">helper=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.WikiSQLDataset" title="Permalink to this definition">¶</a></dt> <dd><p>Bases: <a class="reference internal" href="#claf.data.dataset.base.DatasetBase" title="claf.data.dataset.base.DatasetBase"><code class="xref py py-class docutils literal notranslate"><span class="pre">claf.data.dataset.base.DatasetBase</span></code></a></p> <p>WikiSQL Dataset</p> <ul class="simple"> <li><dl class="simple"> <dt>Args:</dt><dd><p>batch: Batch DTO (claf.data.batch)</p> </dd> </dl> </li> <li><dl class="simple"> <dt>Kwargs:</dt><dd><p>helper: helper from data_reader</p> </dd> </dl> </li> </ul> <dl class="method"> <dt id="claf.data.dataset.WikiSQLDataset.collate_fn"> <code class="sig-name descname">collate_fn</code><span class="sig-paren">(</span><em class="sig-param">cuda_device_id=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.collate_fn"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.WikiSQLDataset.collate_fn" title="Permalink to this definition">¶</a></dt> <dd><p>collate: indexed features and labels -> tensor</p> </dd></dl> <dl class="method"> <dt id="claf.data.dataset.WikiSQLDataset.get_ground_truth"> <code class="sig-name descname">get_ground_truth</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.get_ground_truth"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.WikiSQLDataset.get_ground_truth" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.WikiSQLDataset.get_id"> <code class="sig-name descname">get_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span 
class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.get_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.WikiSQLDataset.get_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.WikiSQLDataset.get_table_id"> <code class="sig-name descname">get_table_id</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.get_table_id"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.WikiSQLDataset.get_table_id" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.WikiSQLDataset.get_tokenized_question"> <code class="sig-name descname">get_tokenized_question</code><span class="sig-paren">(</span><em class="sig-param">data_index</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/claf/data/dataset/wikisql.html#WikiSQLDataset.get_tokenized_question"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#claf.data.dataset.WikiSQLDataset.get_tokenized_question" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> <dl class="method"> <dt id="claf.data.dataset.WikiSQLDataset.question_maxlen"> <em class="property">property </em><code class="sig-name descname">question_maxlen</code><a class="headerlink" href="#claf.data.dataset.WikiSQLDataset.question_maxlen" title="Permalink to this definition">¶</a></dt> <dd></dd></dl> </dd></dl> </div> </div> </div> </div> <footer> <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation"> <a href="claf.data.reader.html" class="btn btn-neutral float-right" title="claf.data.reader package" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a> <a href="claf.data.html" class="btn btn-neutral" title="claf.data package" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a> </div> <hr/> <div role="contentinfo"> <p> © Copyright 2019, Dongjun Lee </p> </div> Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>. </footer> </div> </div> </section> </div> <script type="text/javascript"> jQuery(function () { SphinxRtdTheme.Navigation.enable(true); }); </script> </body> </html> | Low | [
| Low | [ 0.46995708154506405, 27.375, 30.875 ] |
VARGAS: The president said after the summit we cannot have another year of debate on this issue. We need decisions now. You said on Friday, "We are determined to pass health care." Do you have the 217 votes necessary to pass it in the House?
PELOSI: Well right now we're working on the -- on the policy. The -- the president put a -- a -- I think a good proposal on the Internet on Sunday. We're examining that very carefully to make sure it has all the affordability we need for the middle class. All the accountability for the insurance industry. And the accessibility that we need to have. I -- from the meeting on Thursday -- the summit meeting, I -- I believe that we're ready for the next step, which is to write legislative language, and then go from there.
VARGAS: So what are the fixes the Senate needs to make in your opinion? Through reconciliation presumably before the House can vote on it...
PELOSI: Well whatever...
(CROSSTALK)
PELOSI: Well I -- I believe listening to the president yesterday -- he's still hopeful that there's a way to have a bipartisan bill. But whatever route the Senate takes, we would like to see again more affordability for the middle class. This is very, very important. This is a bill about the middle class -- their access to health care, and the affordability that makes that access possible. Secondly we want to close the donut hole for seniors. This is really an important mistake that was made when the republicans passed the prescription drug bill. And we want the seniors to have the comfort of knowing that in this bill the donut hole will be patched. And it's a technical -- a slang term for something that means the seniors pay more...
VARGAS: But if you get that...Will you--
PELOSI: The seniors pay more. And we have more. We want to eliminate the Nebraska fix that -- have equity for all of the states. And that in terms of some of the investments. There are more. But those are the three -- three of the main ones. But one of the biggest differences is the -- how the bill would be paid for. We -- will we cut waste, fraud, and abuse. Over half a trillion dollars in the bill. But we still needed more of a pay for. The Senate bill had a tax that we did not like in the House. And I think the president's proposal addresses that concern. So now we will -- it's a question of when you go down to legislative language, you -- you need the clarity. And that's when you find out what everything means.
VARGAS: But you know that the polls show that the American people are deeply divided on health care. Many of them are opposed to it. Even though they are supporting certain...
PELOSI: Pieces of it.
VARGAS: Specific pieces of it. What do you say to your members, when it does come to the House to vote on this, who are in real fear of losing their seats in November if they support you now?
PELOSI: Well first of all our members -- every one of them -- wants health care. I think everybody wants affordable health care for all Americans. They know that this will take courage. It took courage to pass Social Security. It took courage to pass Medicare. And many of the same forces that were at work decades ago are at work again against this bill. But the American people need it, why are we here? We're not here just to self perpetuate our service in Congress. We're here to do the job for the American people. To get them results that gives them not only health security, but economic security, because the health issue is an economic issue for -- for America's families.
VARGAS: Do you wish though that the president had posted his bill before this week? That six months ago it might have been more helpful for you. That maybe six months ago you knew that the public option was something he was going to drop before you fought so hard for it?
PELOSI: Well we -- we still fight for the -- what the public option will do. Whether it's in the bill or not, its purpose must be recognized. And that is to keep the insurance companies honest. To keep them accountable, and to increase competition. And I think in the summit on Thursday it became very clear that what the president was proposing was regulation of the insurance companies. Left to their own devices they have done harm to the American people. They need to be regulated. And that is one of the biggest differences between the Democrats and the Republicans. Another one for example is -- an example of it is ending the denial of -- of coverage to those who have a preexisting condition. The Democrats have that in their bill. The Republicans do not.
VARGAS: But would you...
PELOSI: But that's a major insurance reform that has to take place.
VARGAS: But would we still be debating this if the president had put his plan out six months ago?
PELOSI: Well, I don't know what -- what the value of trying -- the president has tried since one year ago March 5th. We met in Washington D.C. in a bipartisan way with some of the outside stakeholders to talk about working together to have health care accessible for all Americans. I smile because I remember Senator Kennedy coming into the room and saying, "I'm signing up as a foot soldier in the fight for health care reform." And of course he was such a tremendous leader. But that was a year ago. Since then we've had hundreds of hours of meetings, and hearings, and mark ups of bills -- well over a hundred Republican amendments are in this bill -- the -- the House and Senate bills. And what the president put forth -- we'll see some of what was said yesterday. So those who were making constructive contributions can be accommodated. Whether we get Republican votes or not -- the bill definitely has bipartisan provisions in it. But if they have a good idea that works for the American people, whether they're in the vote for the bill or not, we want it in the bill.
VARGAS: How long are you willing to wait for those ideas?
PELOSI: Well we -- but that that happened yesterday. And so ...
VARGAS: I mean -- I made it clear -- the president -- The president made it clear that time is up.
PELOSI: Time is up. Yes. So we really have to go forth, because as I said there -- as we sit around this table, this big table in Blair House -- every night families sit around their kitchen table -- try to figure out their finances. Their -- the security of their jobs, the cost of their children's education, how they're going to pay their medical bills. What is the status of their pensions? And they can't wait any longer. If, you know, if your family has a -- a preexisting condition, or if you ever been denied coverage, or if you have a -- a rescission. If your insurance has been withdrawn just as you're about to need a procedure, you know it's long overdue. And what's the point of talking about it any longer?
VARGAS: If -- but the point is when it does finally come to vote on it in the House, you're certain that you can muster the 217 votes that you need...
PELOSI: We...
VARGAS: ... even with the differences over abortion language? Things...
PELOSI: Yes.
VARGAS: ... that there are members of the House who voted in favor of it before, who are now saying, "We can't vote for this bill, because of the Senate language on abortion?"
PELOSI: Well let me say I have this in three -- just so you know how we sequence this. First we zero in on what the policy will be. And that is what we'll be doing -- following the president's summit yesterday. Secondly, we'll see what the Senate can do. What is the substance? And what is the Senate prepared to do? And then we'll go to the third step as to what my -- my members will vote for. But we have a very diverse party. But we all agree that the present system is unsustainable. It's unsustainable. It's unaffordable for families, for -- and individuals, for businesses -- large, small, and moderate sized businesses. It's unsustainable to our budget. We cannot afford the rising cost of -- of health care. As the president has said, "Health care reform is entitlement reform." And it's unsustainable for our economy. We want to be competitive. These health care costs are a competitiveness issue. They diminish the opportunities for our businesses domestically and internationally to compete without this anvil of health care costs around their necks.
VARGAS: You mentioned jobs. Members of the House have already weighed in on the Senate jobs bill saying it's too small and does too little. The Congressional Black Caucus said it shouldn't even be called a jobs bill. Should you agree to the smaller, incremental approach given that unemployment is the single biggest issue in this country right now?
PELOSI: Well, we wanted to move as quickly as possible on jobs. We passed our bill in December, as you probably know. What the Senate is taking is a segmented approach to it. And I think when everyone sees what the different pieces are, they will know that we're on the path --
VARGAS: But you've said that's OK. Is it OK to do it in that smaller, incremental way, and not the big, dramatic way that the House proposed?
PELOSI: Well, it would have been faster if they would just agree to our bill last year because people are hurting, they need jobs and we need to move quickly. This won't take a long time to do, but every piece of it will not have every provision in it that we want but it will all create jobs and help small businesses grow because that's where major job creation is. It addresses concerns that we have about our veterans coming home who have -- are facing unemployment. It is the biggest issue for our seniors. And believe it or not, jobs in the economy are the biggest issue for our seniors and their opportunities as well. So it is -- it's a four letter word that we use around here all the time, jobs, jobs, jobs, jobs. And by the way, the health care bill is a jobs bill. It will create four million new jobs, several hundred thousand immediately upon enactment. And it will also encourage an entrepreneurial spirit in our country where people can take risks and be entrepreneurial because they know they have health care.
VARGAS: The Ethics Committee on Charles Rangel said that he has violated the House gift rule.
PELOSI: Uh-huh.
VARGAS: How can he remain in such a powerful position as chairman of the House Ways and Means Committee?
PELOSI: Well, I think --
VARGAS: Given the fact that there are further pending ethics investigations and this public admonishment has taken place.
PELOSI: Well, it is a public admonishment. It said he did not knowingly violate House rules. So that gives him some comfort. But the fact is that we have a --
VARGAS: He should have known though, don't you think?
PELOSI: Well, I don't know. You understand that the Ethics Committee is an independent, bipartisan committee in the House. They act independent of us. And that's exactly the way it should be. I, though, when I became speaker, instituted an outside ethics panel which makes recommendations in so that we have a double way to receive information, although the ethics committee can self initiate, as well as take recommendations from the outside panel. So we're going to look forward to seeing what else they have to say about what they have before him regarding Chairman Rangel.
VARGAS: If there are further admonishments, though, should he remain in this position?
PELOSI: Well, let's why don't we just give him a chance to hear what the independent, bipartisan -- they work very hard to reach their conclusions and we obviously there's more to come here.
VARGAS: And -- but you don't -- you understand this is why so many Americans think Congress is corrupt. It just doesn't -- it doesn't look good. It doesn't pass the smell test.
PELOSI: No, it doesn't. No, it doesn't. I served for seven years on the Ethics Committee and the last thing I would have wanted would be for the Speaker of the House to interfere in a political way in what was going on there. That just should never happen. But the fact is, is that what Mr. Rangel has been admonished for is not good. It was a violation of the rules of the House. It was not a -- something that jeopardized our country in any way. So it remains to be seen what the rest of the work of the committee is. And I hope it will be soon. But again, it's independent and they go with their own -- they go at their own pace.
VARGAS: Let's talk a bit about the coming elections in November. You had recently-- and the Tea Party movement, do you think it will be a force to be reckoned with? You had said last summer that it was a faux grassroots movement. You called it the Astroturf movement.
PELOSI: In some respects it is. Uh-huh.
VARGAS: Is the Tea Party movement a force?
PELOSI: No -- No what I said at the time is, that they were -- the Republican Party directs a lot of what the Tea Party does, but not everybody in the Tea Party takes direction from the Republican Party. And so there was a lot of, shall we say, Astroturf, as opposed to grassroots. But, you know, we share some of the views of the Tea Partiers in terms of the role of special interest in Washington, D.C., as -- it just has to stop. And that's why I've fought the special interest, whether it's on energy, whether it's on health insurance, whether it's on pharmaceuticals and the rest.
VARGAS: So, common ground with many people in the Tea Party movement.
PELOSI: Well, no, there are some. There are some because they, again, some of it is orchestrated from the Republican headquarters. Some of it is hijacking the good intentions of lots of people who share some of our concerns that we have about the role of special interests and many Tea Partiers, not that I speak for them, share the view, whether it's -- and Democrats, Republicans and Independents share the view that the recent Supreme Court decision, which greatly empowers the special interests, is something that they oppose.
VARGAS: Finally, President Obama, when asked to rate his year in office, gave himself a B plus. How would you rate yourself in the past year?
PELOSI: Well, I have a -- I think I get an A for effort.
And in the House of Representatives, my mark is the mark of our members. We have passed every piece of legislation that is part of the Obama agenda. Whether it's the creation of jobs, expanding access to health care, creating new green jobs for the future, regulatory reform, we have passed the full agenda. VARGAS: Are you frustrated so many bills have not have been stalled in the Senate? Almost 300 bills passed by the House that are sitting languishing in the Senate? PELOSI: And most of those bills have bipartisan support. Strong bipartisan support in the House that have gone over there. But that you know what that's about? That's about -- and it's very important for you to know, that's about the Republican delay tactics. By requiring 60 votes on some simple legislation that Harry Reid always gets -- has the votes for, but he doesn't have the time to go through the procedural day after day where you have to wait days for the time to go by in order to get the 60 votes. That's how it works in the Senate. So it's about time. Everything's about time. The most finite commodity that we have. We used our time very well in the House to get an agenda passed in time for it to be considered by the Senate. The delaying tactics of the Republicans in the Senate… VARGAS: Dare I ask you to grade the Senate? PELOSI: Well, let's grade this all on a curve. What really matters is, what we do and how it relates to the lives of the American people back to that kitchen table where they have to think about how they make ends meet and how they make the future better for their children and provide for their own retirement. That's really where the grade goes. And the grade is given on election day. We -- we're fully prepared to face the American people with the integrity of what we have put forth, the commitment to jobs and health care and education and a world at peace and safe for our children and with the political armed power to go with it to win those elections. VARGAS: Madam Speaker, thank you for joining us. PELOSI: My pleasure. END VARGAS: And we are joined now by the Republican point man at the health care summit, Senator Lamar Alexander. Senator, welcome to "This Week." ALEXANDER: Thank you, Elizabeth. VARGAS: You just said heard Speaker Pelosi and President Obama say time is up, we're not scrapping the plan, we're not starting from scratch, this is it. Are you going to -- are the Republicans going to offer some amendments (inaudible) ALEXANDER: We -- we already have. I mean, we spent seven hours on Thursday, which I thought was a great opportunity for us to say why we thought the president's bill is not a good bill and what we think we ought to do, which is to establish a goal of reducing costs and go step by step toward that goal. And we offered a number of good ideas, some of which the president agreed with, and he'll put his bill aside and renounce jamming the bill through. We can go to work on this the way we normally do in the United States Senate, which is in a bipartisan way. VARGAS: But he has said he's not going to scrap the bill, he's moving forward with or without you. So why not be part of the process? Why not take what you consider to be an imperfect bill and at least attach some proposals that you support? ALEXANDER: Well, this is a... (CROSSTALK) ALEXANDER: This is a car that can't be recalled and fixed. There are too many things wrong with it. It cuts Medicare a half-trillion dollars. It raises taxes a half-trillion dollars. 
And in the Medicare cuts, the point that didn't get made very much on Thursday, it doesn't cut it to help Medicare. It cuts Medicare to spend on a new program at a time when Medicare is going broke in 2015. It raises insurance premiums. The president and I had a little exchange on that. It shifts big costs to states, which are going to drive up college tuitions and state taxes. As a former governor, I've heard from Democratic and Republican governors on this. It dumps 15 million low-income Americans into a failed government program called Medicaid. Fifty percent of doctors won't even see patients in Medicaid. So you can't fix that unless they take all those things out. And if they did, they wouldn't have a bill. VARGAS: You had said in your opening remarks at the health care summit, you quoted Senator Byrd when you said -- you called on the president to renounce using reconciliation to push the bill through the Senate with a simple majority vote, saying, quote, "It would be an outrage to run the health care bill through the Senate like a freight train with this process." Why -- why are you so opposed to this, given the fact that Republicans have used reconciliation more often than Democrats in the past? ALEXANDER: You're correct. The reconciliation procedure is a -- where you use legislative (ph) procedure is a (ph) -- where you use -- legislative procedure 19 times it's been used. It's for the purpose of taxing and spending and -- and reducing deficits. But the difference here is that there's never been anything of this size and magnitude and complexity run through the Senate in this way. There are a lot of technical problems with it, which we could discuss. It would turn the Senate -- it would really be the end of the United States Senate as a protector of minority rights, as a place where you have to get consensus, instead of just a partisan majority, and it would be a political kamikaze mission for the Democratic Party if they jam this through after the American people have been saying, look, we're trying to tell you in every way we know how, in elections, in surveys, in town hall meetings, we don't want this bill. VARGAS: Why political kamikaze, though? We know that Americans don't support health care in general, but when you start drilling down into the specifics, a lot of people do support some of those specifics. ALEXANDER: Oh, they do support some of the specifics, but you put it all together, they don't like it. They don't want their Medicare cut. They don't want their taxes increased. They don't want their premiums increased. I mean, millions of American will have their premiums increased. The governors are up in arms about the new cost on states, so people have decided -- and -- and there's a sense that Washington is taking over too much. So I was thinking this morning of President George W. Bush, when he tried so hard to have private accounts for Social Security. He thought he was right. He pushed, he pushed, and he pushed. If he'd stopped about halfway through and shifted, he could have probably gotten a bipartisan agreement on Social Security. I think President Obama could learn from that. He has a lot of us who would like to help him write a health care bill, but not this one. VARGAS: When you say political kamikaze, are you saying that if the Democrats push this through, they will lose all their seats in November? I mean, what are we talking about here? ALEXANDER: Well, here's what I think. 
I mean, the people are saying, "We don't want it," and the Democrats are saying, "We don't care. We're going to pass it anyway." And so for the next three months, Washington will be consumed with the Democrats trying to jam this through in a very messy procedure an unpopular health care bill. And then for the rest of the year, we're going to be involved in a campaign to repeal it. And every Democratic candidate in the country is going to be defined by this unpopular health care bill at a time when the real issues are jobs, terror and debt. VARGAS: You also said in your remarks at the summit that Republicans have come to the conclusion that Congress, quote, "doesn't do comprehensive well," that our country is too big and too complicated for Washington. But Congress has passed many historic and sweeping and comprehensive bills in the past, Medicare, the civil rights bill, the Americans with Disabilities Act. Are you saying that this Congress is uniquely incapable of doing something sweeping and massive and dramatic? ALEXANDER: Well, the answer's yes, in that sense. VARGAS: That's not good. ALEXANDER: But no -- but let me go back. You mentioned the civil rights bill. I was a very young aide here when President Johnson, who had more Democratic votes in Congress than President Obama had, had the civil rights bill written in Everett Dirksen's office. He was the Republican leader. He did that not just to pass it. He did it to make sure that, when it was passed, it would be accepted by the people and there wouldn't be a campaign as there will be in health care to repeal it from the day it's passed. Today I've watched the comprehensive immigration bill, I've watched the comprehensive economy-wide cap and trade, I've watched the comprehensive health care bill, they fall of their own weight, because we're biting off more than we can chew in a country this big and complex and complicated. I think we do better as a country when we go step by step toward a goal, and the goal in this case should be reducing health care costs. VARGAS: So the country has changed or Congress has changed? ALEXANDER: Well, I think the size of the effort (ph) has changed. I mean, a 2,700-page bill is going to be unpopular because you're hiding something in it. It's full of surprises. It's -- it's -- policy skeptics believe in the law of unintended consequences. And when you write a bill in the middle of the night in a partisan way and, you know, pass it on Christmas Eve and it's that long, it'll have surprises like the cornhusker kickback, which was probably the death blow to the health care bill. VARGAS: Your colleague, Senator Evan Bayh, recently announced his resignation, basically throwing his hands up in disgust, saying Congress is broken, and I want to be -- I don't want to be part of it any more. He cited you as one of the few Republican senators that he felt that he could find common ground with, work with, agree with. How are we going to fix Congress and empower Congress to be able to pass the sweeping kinds of changes that we need in this country when people like Evan Bayh just take their -- go home, in essence, give up and go home? ALEXANDER: Well, you know, former governors -- and I'm one -- always have a hard time with the Senate. You know, we're -- we're used -- governors are used to saying, "Let's go this way," and a legislator in a reactor to things. So that's part of the problem. The second is, a lot more is going on than one would think. 
I mean, Senator Carper, a Democrat, and I introduced a clean air bill with 11 Democrats and Republicans. We hope we can pass it this year. Senator Webb, a Democrat, and I worked on -- have introduced a nuclear power bill. Senator Graham, Kerry, and Lieberman are working on a climate change bill. So if you take specific steps toward goals, we're more likely to succeed. And my observation is that -- in a country our complex -- we can't do these big comprehensive... (CROSSTALK) VARGAS: But very, very quickly, when somebody like a Senator Scott Brown, for example, breaks ranks with Republicans and votes against a filibuster to get the jobs bill to the floor of the Senate, he gets on his Facebook page, you know, all sorts of angry postings, calling him a double-crosser, a sellout, a Judas. What does that say about the political environment right now? ALEXANDER: It says we live in a very volatile (ph) political environment, and Scott Brown and I and others simply have to do what we think is right. And if we do, which is to get results in a bipartisan way, we'll probably be re-elected or at least we'll have done a good job. VARGAS: Senator Lamar Alexander, thank you so much for joining us here this morning on "This Week." ALEXANDER: Thank you. (BEGIN VIDEO CLIP) VARGAS: Good morning, and welcome to "This Week." The health care summit. Did it make any difference? OBAMA: I hope that this isn't political theater. VARGAS: The parties came together... CANTOR: We just can't afford this. VARGAS: ... but they couldn't bridge the gap. So what's next for health care reform? Questions for our headliners. PELOSI: This will take courage to do, but we will get it done. VARGAS: House Speaker Nancy Pelosi and... ALEXANDER: Mr. President, renounce this idea of jamming through your version of the bill. ANNOUNCER: From the heart of the nation's capital, "This Week" with "20/20" anchor Elizabeth Vargas, live from the Newseum on Pennsylvania Avenue. VARGAS: Good morning, everyone. With so many issues facing Congress, from health care reform to unemployment, and new questions about how Congress does business, I sat down with the speaker of the House, Nancy Pelosi. (BEGIN VIDEOTAPE) VARGAS: Madam Speaker, welcome back again to "This Week." Let's talk health care. PELOSI: Good to be here. VARGAS: The president said after the summit, we cannot have another year of debate on this issue. We need decisions now. You said on Friday, "We are determined to pass health care." Do you have the 217 votes necessary to pass it in the House? PELOSI: Well, right now we're working on the -- on the policy. The -- the president put a -- a -- I think a good proposal on the Internet on Sunday. We're examining that very carefully to make sure it has all the affordability we need for the middle class, all the accountability for the insurance industry, and the accessibility that we need to have. I -- from the meeting on Thursday -- the summit meeting, I -- I believe that we're ready for the next step, which is to write legislative language, and then go from there. VARGAS: So what are the fixes the Senate needs to make in your opinion? Through reconciliation presumably before the House can vote on it... (CROSSTALK) PELOSI: Well, I -- I believe, listening to the president yesterday, he's still hopeful that there's a way to have a bipartisan bill. But whatever route the Senate takes, we would like to see, again, more affordability for the middle class. This is very, very important. 
This is a bill about the middle class -- their access to health care, and the affordability that makes that access possible. Secondly, we want to close the donut hole for seniors. This is really an important mistake that was made when the Republicans passed the prescription drug bill. And we want the seniors to have the comfort of knowing that in this bill the donut hole will be patched. And it's a technical -- a slang term for something that means the seniors pay more... VARGAS: But if you get that, will you... PELOSI: The seniors pay more, and we have more (ph). We want to eliminate the Nebraska fix and have equity for all of the states. And that, in terms of some of the investments, there are more, but those are the three -- three of the main ones. But one of the biggest differences is the -- how the bill would be paid for. We -- we cut waste, fraud, and abuse, over half a trillion dollars in the bill. But we still needed more of a pay-for. The Senate bill had a tax that we did not like in the House. And I think the president's proposal addresses that concern. So now we will -- it's a question of when you go down to legislative language, you -- you need the clarity, and that's when you find out what everything means. VARGAS: But you know that the polls show that the American people are deeply divided on health care. Many of them are opposed to it. Even though they are supporting certain -- specific pieces of it. (CROSSTALK) VARGAS: What do you say to your members, when it does come to the House to vote on this, who are in real fear of losing their seats in November if they support you now? PELOSI: Well, first of all, our members -- every one of them -- wants health care. I think everybody wants affordable health care for all Americans. They know that this will take courage. It took courage to pass Social Security. It took courage to pass Medicare. And many of the same forces that were at work decades ago are at work again against this bill. But the American people need it. Why are we here? We're not here just to self-perpetuate our service in Congress. We're here to do the job for the American people, to get them results that gives them not only health security, but economic security, because the health issue is an economic issue for -- for America's families. VARGAS: Do you wish, though, that the president had posted his bill before this week, that six months ago it might have been more helpful for you, that maybe six months ago you knew that the public option was something he was going to drop before you fought so hard for it? PELOSI: Well, we -- we still fight for the -- what the public option will do. Whether it's in the bill or not, its purpose must be recognized and that is to keep the insurance companies honest, to keep them accountable, and to increase competition. And I think in the summit on Thursday it became very clear that what the president was proposing was regulation of the insurance companies. Left to their own devices, they have done harm to the American people. They need to be regulated. And that is one of the biggest differences between the Democrats and the Republicans. Another one, for example, is -- an example of it is ending the denial of -- of coverage to those who have a pre-existing condition. The Democrats have that in their bill; the Republicans do not. VARGAS: But would you... PELOSI: But that's a major insurance reform that has to take place. VARGAS: But would we still be debating this if the president had put his plan out six months ago? 
PELOSI: Well, I don't know what -- what the value of trying -- the president has tried since one year ago, March 5th. We met in Washington, D.C., in a bipartisan way with some of the outside stakeholders to talk about working together to have health care accessible for all Americans. I smile because I remember Senator Kennedy coming into the room and saying, "I'm signing up as a foot soldier in the fight for health care reform." And, of course, he was such a tremendous leader. But that was a year ago. Since then, we've had hundreds of hours of meetings, and hearings, and markups of bills -- well over a hundred Republican amendments are in this bill -- the -- the House and Senate bills, and what the president put forth. We'll see some of what was said yesterday. So those who were making constructive contributions can be accommodated. Whether we get Republican votes or not, the bill definitely has bipartisan provisions in it. But if they have a good idea that works for the American people, whether they're going to vote for the bill or not, we want it in the bill. VARGAS: How long are you willing to wait for those ideas? PELOSI: Well, we -- but that happened yesterday. And so... VARGAS: I mean... (CROSSTALK) VARGAS: ... the president seemed to make it clear that time's up. PELOSI: Time's up, yes. So we really have to go forth, because as I said there, as we sit around this table, this big table in Blair House -- every night families sit around their kitchen table, try to figure out their finances, their -- the security of their jobs, the cost of their children's education, how they're going to pay their medical bills, what is the status of their pensions? And they can't wait any longer. If -- you know, if your family has a pre-existing condition or if you are denied coverage or if you have a -- a rescission, if your insurance has been withdrawn just as you're about to need a procedure, you know that it's long overdue. And what's the point of talking about it any longer? VARGAS: But the point is, when it does finally come to vote on it in the House, you're certain that you can muster the 217 votes that you need, even with the differences over abortion language, things -- that there are members of the House who voted in favor of it before, who are now saying, "We can't vote for this bill, because of the Senate language on abortion"? PELOSI: Well, let me say I have this in three -- just so you know, how we sequence this. First, we zero in on what the policy will be, and that is what we'll be doing following the president's summit yesterday. Secondly, we'll see what the Senate can do. What is the substance? What is the Senate prepared to do? And then we'll go to the third step as to what my -- my members will vote for. But we have a very diverse party, but we all agree that the present system is unsustainable. It's unsustainable. It's unaffordable for families, for -- and individuals, for businesses, large-, small-, and moderate-sized businesses. It's unsustainable to our budget. We cannot afford the rising cost of -- of health care. As the president has said, "Health care reform is entitlement reform." And it's unsustainable for our -- our economy. We want to be competitive. These health care costs are a competitiveness issue. They diminish the opportunities for our businesses domestically and internationally to compete without this anvil of health care costs around their necks. VARGAS: You mentioned jobs.
Members of the House have already weighed in on the Senate jobs bill saying it's too small and does too little. The Congressional Black Caucus said it shouldn't even be called a jobs bill. Should you agree to the smaller, incremental approach, given that unemployment is the single biggest issue in this country right now? PELOSI: Well, we wanted to move as quickly as possible on jobs. We passed our bill in December, as you probably know. What the Senate is taking is a segmented approach to it, and I think when everyone sees what the different pieces are, they will know that we're on the path... VARGAS: But you've said that's OK. Is it OK to do it in that smaller, incremental way, and not the big, dramatic way that the House proposed? PELOSI: Well, it would have been faster if they would just agree to our bill last year because people are hurting, they need jobs and we need to move quicker. This won't take a long time to do, but every piece of it will not have every provision in it that we want but it will all create jobs and help small businesses grow, because that's where major job creation is. It addresses concerns that we have about our veterans coming home who have -- are facing unemployment. It is the biggest issue for our seniors. And believe it or not, jobs and the economy are the biggest issue for our seniors and their opportunities, as well. So it is -- it's a four-letter word that we use around here all the time, jobs, jobs, jobs, jobs. And by the way, the health care bill is a jobs bill. It will create four million new jobs, several hundred thousand immediately upon enactment. And it will also encourage an entrepreneurial spirit in our country where people can take risks and be entrepreneurial because they know they have health care. VARGAS: The Ethics Committee on Charles Rangel said that he has violated the House gift rule. How can he remain in such a powerful position as chairman of the House Ways and Means Committee... PELOSI: Well, I think... VARGAS: ... given the fact that there are further pending ethics investigations and this public admonishment has taken place? PELOSI: Well, it is a public admonishment. It said he did not knowingly violate House rules, so that gives him some comfort. But the fact is that we have a... VARGAS: He should have known, though, don't you think? PELOSI: Well, I don't know. You understand that the Ethics Committee is an independent, bipartisan committee in the House. They act independent of us, and that's exactly the way it should be. I, though, when I became speaker, instituted an outside ethics panel which makes recommendations in so that we have a double way to receive information, although the Ethics Committee can self-initiate, as well as take recommendations from the outside panel. So we look forward to seeing what else they have to say about what they have before him regarding Chairman Rangel. VARGAS: If there are further admonishments, though, should he remain in this position? PELOSI: Well, why don't we just give him a chance to hear what the independent, bipartisan -- they work very hard to reach their conclusions, and, obviously, there's more to come here. VARGAS: But you don't -- you understand this is why so many Americans think Congress is corrupt. It just doesn't -- it doesn't look good. It doesn't pass the smell test. PELOSI: No, it doesn't. No, it doesn't. And I served for seven years on the Ethics Committee. 
The last thing I would have wanted would be for the speaker of the House to interfere in a political way in what was going on there. That just should never happen. But the fact is, is that what Mr. Rangel has been admonished for is not good. It was a violation of the rules of the House. It was not something that jeopardized our country in any way. So it remains to be seen what the rest of the work of the committee is, and I hope it will be soon. But, again, it's independent, and they go with their own -- they go at their own pace. VARGAS: Let's talk a bit about the coming elections in November. You had recently -- and the Tea Party movement. Do you think it will be a force to be reckoned with? You had said last summer that it was a faux grassroots movement; you called it the Astroturf movement. PELOSI: In some respects. VARGAS: Is the Tea Party movement a force? PELOSI: No -- no, what I said at the time is, that they were -- the Republican Party directs a lot of what the Tea Party does, but not everybody in the Tea Party takes direction from the Republican Party. And so there was a lot of, shall we say, Astroturf, as opposed to grassroots. But, you know, we share some of the views of the Tea Partiers in terms of the role of special interest in Washington, D.C., as -- it just has to stop. And that's why I've fought the special interest, whether it's on energy, whether it's on health insurance, whether it's on pharmaceuticals and the rest. VARGAS: So common ground with many people in the Tea Party movement? (CROSSTALK) PELOSI: There are some because, they -- again, some of it is orchestrated from the Republican headquarters. Some of it is hijacking the good intentions of lots of people who share some of our concerns that we have about -- about the role of special interests. And many Tea Partiers, not that I speak for them, share the view, whether it's -- and Democrats, Republicans and independents share the view that the recent Supreme Court decision, which greatly empowers the special interests, is something that they oppose. VARGAS: Finally, President Obama, when asked to rate his year in office, gave himself a B-plus. How would you rate yourself in the past year? PELOSI: Well, I have a -- I think I get an A for effort. And in the House of Representatives, my mark is the mark of our members. We have passed every piece of legislation that is part of the Obama agenda, whether it's the creation of jobs, expanding access to health care, creating new green jobs for the future, regulatory reform. We have passed the full agenda. VARGAS: Are you frustrated so many bills have not -- have been stalled in the Senate, almost 300 bills passed by the House that are sitting languishing in the Senate? PELOSI: And most of those bills have bipartisan support, strong bipartisan support in the House that have gone over there. But that -- you know what that's about? That's about -- and it's very important for you to know -- that's about the Republican delay tactics. By requiring 60 votes on some simple legislation that Harry Reid always gets -- has the votes for, but he doesn't have the time to go through the procedural day after day where you have to wait days for the time to go by in order to get the 60 votes. That's how it works in the Senate. So it's about time. Everything's about time, the most finite commodity that we have. We used our time very well in the House to get an agenda passed in time for it to be considered by the Senate, the delaying tactics of the Republicans in the Senate. 
VARGAS: Dare I ask you to grade the Senate? PELOSI: Well, let's grade this all on a curve. What really matters is what we do and how it relates to the lives of the American people back to that kitchen table where they have to think about how they make ends meet and how they make the future better for their children and provide for their own retirement. That's really where the grade goes. And the grade is given on Election Day. We're fully prepared to face the American people with the integrity of what we have put forth, the commitment to jobs and health care and education, and a world at peace and safe for our children, and with the political armed power to go with it to win those elections. VARGAS: Madam Speaker, thank you for joining us. PELOSI: My pleasure. (END VIDEOTAPE) VARGAS: And we are joined now by the Republican point man at the health care summit, Senator Lamar Alexander. Senator, welcome to "This Week." ALEXANDER: Thank you, Elizabeth. VARGAS: You just heard Speaker Pelosi and President Obama say time is up, we're not scrapping the plan, we're not starting from scratch, this is it. Are you going to -- are the Republicans going to offer some amendments and play ball? ALEXANDER: We -- we already have. I mean, we spent seven hours on Thursday, which I thought was a great opportunity for us to say why we thought the president's bill is not a good bill and what we think we ought to do, which is to establish a goal of reducing costs and go step by step toward that goal. And we offered a number of good ideas, some of which the president agreed with, and if he'll put his bill aside and renounce jamming the bill through, we can go to work on this the way we normally do in the United States Senate, which is in a bipartisan way. VARGAS: But he has said he's not going to scrap the bill, he's moving forward with or without you. So why not be part of the process? Why not take what you consider to be an imperfect bill and at least attach some proposals that you support? ALEXANDER: Well, this is a-- (CROSSTALK) ALEXANDER: This is a car that can't be recalled and fixed. There are too many things wrong with it. It cuts Medicare a half-trillion dollars. It raises taxes a half-trillion dollars. And in the Medicare cuts, the point that didn't get made very much on Thursday, it doesn't cut it to help Medicare. It cuts Medicare to spend on a new program at a time when Medicare is going broke in 2015. It raises insurance premiums. The president and I had a little exchange on that. It shifts big costs to states, which are going to drive up college tuitions and state taxes. As a former governor, I've heard from Democratic and Republican governors on this. It dumps 15 million low-income Americans into a failed government program called Medicaid. Fifty percent of doctors won't even see patients in Medicaid. So you can't fix that unless they take all those things out. And if they did, they wouldn't have a bill. VARGAS: You had said in your opening remarks at the health care summit, you quoted Senator Byrd when you said -- you called on the president to renounce using reconciliation to push the bill through the Senate with a simple majority vote, saying, quote, "It would be an outrage to run the health care bill through the Senate like a freight train with this process." Why -- why are you so opposed to this, given the fact that Republicans have used reconciliation more often than Democrats in the past? True, as I said, you were quoting Senator Byrd. ALEXANDER: You're correct.
The reconciliation procedure is a little used legislative procedure. Nineteen times it's been used. It's for the purpose of taxing and spending and -- and reducing deficits. But the difference here is that there's never been anything of this size and magnitude and complexity run through the Senate in this way. There are a lot of technical problems with it, which we could discuss. It would turn the Senate -- it would really be the end of the United States Senate as a protector of minority rights, as a place where you have to get consensus, instead of just a partisan majority, and it would be a political kamikaze mission for the Democratic Party if they jam this through after the American people have been saying, look, we're trying to tell you in every way we know how, in elections, in surveys, in town hall meetings, we don't want this bill. VARGAS: Why political kamikaze, though? We know that Americans don't support health care in general, but when you start drilling down into the specifics, a lot of people do support some of those specifics. ALEXANDER: Oh, they do support some of the specifics, but you put it all together, they don't like it. They don't want their Medicare cut. They don't want their taxes increased. They don't want their premiums increased. I mean, millions of Americans will have their premiums increased. The governors are up in arms about the new cost on states, so people have decided -- and -- and there's a sense that Washington is taking over too much. So I was thinking this morning of President George W. Bush, when he tried so hard to have private accounts for Social Security. He thought he was right. He pushed, he pushed, and he pushed. If he'd stopped about halfway through and shifted, he could have probably gotten a bipartisan agreement on Social Security. I think President Obama could learn from that. He has a lot of us who would like to help him write a health care bill, but not this one. VARGAS: When you say political kamikaze, are you saying that if the Democrats push this through, they will lose all their seats in November? I mean, what are we talking about here? ALEXANDER: Well, here's what I think. I mean, the people are saying, "We don't want it," and the Democrats are saying, "We don't care. We're going to pass it anyway." And so for the next three months, Washington will be consumed with the Democrats trying to jam this through in a very messy procedure an unpopular health care bill. And then for the rest of the year, we're going to be involved in a campaign to repeal it. And every Democratic candidate in the country is going to be defined by this unpopular health care bill at a time when the real issues are jobs, terror and debt. VARGAS: You also said in your remarks at the summit that Republicans have come to the conclusion that Congress, quote, "doesn't do comprehensive well," that our country is too big and too complicated for Washington. But Congress has passed many historic and sweeping and comprehensive bills in the past -- Medicare, the civil rights bill, the Americans With Disabilities Act. Are you saying that this Congress is uniquely incapable of doing something sweeping and massive and dramatic? ALEXANDER: Well, the answer's yes, in that sense. VARGAS: That's not good. ALEXANDER: But no -- but let me go back. You mentioned the civil rights bill. I was a very young aide here when President Johnson, who had more Democratic votes in Congress than President Obama had, had the civil rights bill written in Everett Dirksen's office. 
He was the Republican leader. He did that not just to pass it. He did it to make sure that, when it was passed, it would be accepted by the people and there wouldn't be a campaign, as there will be in health care, to repeal it from the day it's passed. Today I've watched the comprehensive immigration bill, I've watched the comprehensive economy-wide cap-and-trade, I've watched the comprehensive health care bill. They fall of their own weight because we're biting off more than we can chew in a country this big and complex and complicated. I think we do better as a country when we go step by step toward a goal, and the goal in this case should be reducing health care costs. VARGAS: So the country has changed or Congress has changed? ALEXANDER: Well, I think the size of the effort (ph) has changed. I mean, a 2,700-page bill is going to be unpopular because you're hiding something in it. It's full of surprises. It's -- it's -- policy skeptics believe in the law of unintended consequences. And when you write a bill in the middle of the night in a partisan way and, you know, pass it on Christmas Eve and it's that long, it'll have surprises like the Cornhusker kickback, which was probably the death blow to the health care bill. VARGAS: Your colleague, Senator Evan Bayh, recently announced his resignation, basically throwing his hands up in disgust, saying Congress is broken, and I want to be -- I don't want to be part of it anymore. He cited you as one of the few Republican senators that he felt that he could find common ground with, work with, agree with. How are we going to fix Congress and empower Congress to be able to pass the sweeping kinds of changes that we need in this country when people like Evan Bayh just take their -- go home, in essence, give up and go home? ALEXANDER: Well, you know, former governors -- and I'm one -- always have a hard time with the Senate. You know, we're -- we're used -- governors are used to saying, "Let's go this way," and a legislator is a reactor to things. So that's part of the problem. The second is, a lot more is going on than one would think. I mean, Senator Carper, a Democrat, and I introduced a clean air bill with 11 Democrats and Republicans. We hope we can pass it this year. Senator Webb, a Democrat, and I worked on -- have introduced a nuclear power bill. Senator Graham, Kerry, and Lieberman are working on a climate change bill. So if you take specific steps toward goals, we're more likely to succeed. And my observation is that in a country as complex as ours -- we can't do these big comprehensive-- VARGAS: But very, very quickly, when somebody like a Senator Scott Brown, for example, breaks ranks with Republicans and votes against a filibuster to get the jobs bill to the floor of the Senate, he gets on his Facebook page, you know, all sorts of angry postings, calling him a double-crosser, a sellout, a Judas. What does that say about the political environment right now? ALEXANDER: It says we live in a very volatile (ph) political environment, and Scott Brown and I and others simply have to do what we think is right. And if we do, which is to get results in a bipartisan way, we'll probably be re-elected, or at least we'll have done a good job. VARGAS: Senator Lamar Alexander, thank you so much for joining us here this morning on "This Week." ALEXANDER: Thank you. VARGAS: Coming up next, the roundtable with George Will, Cokie Roberts, Sam Donaldson, and Paul Krugman. And of course, later, the Sunday funnies. (BEGIN VIDEO CLIP) MEYERS: The U.S.
State Department this week unveiled plans for the new U.S. embassy in London, which will be made of glass and include many advanced security measures, I guess to compensate for the fact that it's made of glass. (END VIDEO CLIP) (COMMERCIAL BREAK) (BEGIN VIDEO CLIP) OBAMA: I'm going to start off by saying, "Here are some things we agree on." (UNKNOWN): I think we can all agree on that. OBAMA: We agree more than we disagree. (UNKNOWN): I think we all agree on that. OBAMA: All parties in both chambers should be able to agree. (UNKNOWN): I agree with that. OBAMA: We agree that there have to be some. We agree... (UNKNOWN): We all agree. OBAMA: We basically agree. (UNKNOWN): We certainly agree with the premise you stated. OBAMA: We agree philosophically. (UNKNOWN): You're right. We agree with that. OBAMA: You agree that we should have some insurance regulation. (UNKNOWN): The main point is, we basically agree. KIMMEL: And I'm happy to announce that no agreement was reached. (END VIDEO CLIP) VARGAS: But we are agreeing to go to our roundtable now, with George Will, Sam Donaldson, Paul Krugman, and Cokie Roberts. Good to have all of you here this morning. And let's... ROBERTS: And we're all going to agree. VARGAS: And we're all going to agree, exactly. DONALDSON: Not a chance. VARGAS: Exactly. Thanks to you, Sam. George, what did you think of the summit? Did it mean anything? WILL: Well, let's put it in context. The country having said we want to concentrate on the economy and jobs, not health care, the president doubles down on health care. And days after he unveils a commission that will propose remedies for our Ponzi entitlement structure, he pushes ahead with a trillion-dollar new entitlement. The country having said it's too expensive, he melds the House and Senate bills and comes up with a bill that's $70 billion more expensive than the original Senate bill. The country having said let's do it piecemeal, he says -- and he may have a point here -- he says, look, this is such a complex system that you can't do piecemeal. It's a Calder mobile. If you touch something here, something jiggles way over here. So, at the end of the day, it turns out we have two parties for a reason, and they have differing views about, A, the purposes and, B, the competence of government. And so we slog ahead. DONALDSON: Well, he comes up with a bill that the Congressional Budget Office says over 20 years will save billions of dollars. You can argue it if you want, but that's what they say. The thing that the summit demonstrated -- if there was any doubt in anyone's mind -- is the Republicans are not going to play on anything. It's not a question of, "Let's meet in the middle," or even, "You're the majority party, so you're going to get most of it, but give us something." They're not going to play. So what the Democrats have to do now is pass the bill, put back the public option, since it's their bill, and pass it. And President Obama... ROBERTS: But you can't pass it with the public option. DONALDSON: Well, oh, wait a moment. If 51 votes in the Senate, they can. ROBERTS: They can't get it. (CROSSTALK) KRUGMAN: Unclear even then. But... (CROSSTALK) DONALDSON: Let me just finish here, because I want to say the final thing. The president has to drop his George B. McClellan mask and become Ulysses Grant. Be ruthless. That's what a Franklin Roosevelt would have done. That's what Harry Truman would have done. 
VARGAS: And, Sam, that's a good point, because, Paul, you've been arguing that the president should be more ruthless, that he should be... KRUGMAN: Well, yes, I mean, I think the summit actually served its purpose, from his point of view, which was to demonstrate that the Republicans are not going to give on anything, that they're not going to -- you know, they're going to make every possible claim, they're going to say things that aren't true, like premiums are going to go up under this bill, which isn't -- isn't going to happen. And, yes, I mean, I prefer -- I mean, and George and I actually have the same view, but I think the better metaphor is it's a three- legged stool. You have to have guaranteed issue. You can get -- you know, pre-existing conditions are covered. To make that work, you have to have universality. You have to have a mandate. And to have that work, you have to have large subsidies. So the bill has to be more or less what it is. It has to be a comprehensive reform. And the Democrats, you know, from their own point of view, they actually have to do this. They have to -- they can't go into November elections... VARGAS: And that's the big question, Cokie. ROBERTS: That's the big question. That is the big question. There's no certainty at this point that there are 217 votes in the House and 51 in the Senate, no matter what procedure they use. So that is still where they are hung up, which is where they've been hung up all along. Now, the White House did a couple of smart things in terms of what people were upset about. You heard Senator Alexander talk about, in the dead of night, 2,700 pages, Christmas Eve. Those are the talking points. And -- and so the White House puts it up on the Web, has a, you know, seven-hour meeting, and takes out the special provisions, particularly for Nebraska. And so that -- they're trying to fix the things that they see are -- that the public has had problems with. And it is true that you can -- you can sing it round or flat, George, about whether the public's for this bill or not. In a recent poll that we came out with, 58 percent -- a Kaiser poll -- 58 percent said they would be angry or disappointed if a bill didn't pass. So I think that that is what the Democrats are going with. VARGAS: They want something. They're just... ROBERTS: They want something, and the Democrats just have to, you know, say their prayers, and vote for a bill, and hope it works for them. DONALDSON: But, Cokie, it's true. I think in the short run they're going to lose seats, because they dropped the ball... (CROSSTALK) ROBERTS: They're going to lose seats anyway. DONALDSON: They dropped the ball last summer. The Republicans brilliantly picked it up. It probably won't be reversed by November. But this is the only chance in how many years to do this? ROBERTS: Right. DONALDSON: And I think history will show that they were right if they get it done. VARGAS: George? WILL: Two things. First of all, Sam, you want the president to be Ulysses Grant, who won the war by his wonderful indifference to his own casualties, and I think some members in the Senate and in the House would not approve of that. DONALDSON: Did I not just say that they may lose some seats? Were you listening? WILL: By the millions. Now -- second, now, Paul says that, in fact, the Republicans have no ideas. They do, cross-selling across state lines, tort reforms, all those. Just a second, Paul. Then you say they're telling whoppers. 
That was your view about Lamar Alexander when he said, for millions of Americans, premiums will go up. You said in the next sentence in your column, I guess you could say he wasn't technically lying, because the Congressional Budget Office says that's true. KRUGMAN: No, it's not what it says. (CROSSTALK) KRUGMAN: Can I explain? This is... (CROSSTALK) WILL: Wait. Let me -- let me set the predicate here, because you then go on and say the Senate does say the average premiums would go up, but people would be getting better premiums. KRUGMAN: Look, let me explain what happens, because you actually have to read the CBO report. And what the CBO report tells you -- in fairly elliptical language -- is that what it will do, what the bill will do is bring a lot of people who are uninsured, who are currently young and therefore relatively low cost, into the risk pool, which will actually bring premiums down a little bit. It will also have, however, let -- lead a lot of people to get better insurance. It will lead a lot of people who are currently underinsured, who have insurance policies that are paper thin and don't actually protect you in a crisis, will actually get those people up to having full coverage. That makes the average payments go up, but it does not mean that people who currently have good coverage under their policies will pay more for their -- for their insurance. In fact, they'll end up paying a little bit less. WILL: One question. If the government came to you and said, "Professor Krugman, you have a car. We're going to compel you to buy a more expensive car," but it's not really more expensive, because it's a better car, wouldn't you tell them to get off your land? KRUGMAN: It's not -- Catherine Rampell did a very good piece in the Times blogs recently which said that the main obstacle to the people who are uninsured is not that they are choosing not to be insured. It is income. It is, in fact, young people who are not buying insurance because they're not being able to afford it, will be brought in through the subsidies. And that will end up being better even for the people who are currently insured. ROBERTS: One of the things that the -- the Congress has failed to do until now is convince people who have insurance, which is most of us, that this bill will work for them, and that's why this argument is important. But the -- the one thing that has been added on, apparently, since we haven't actually seen the bill in the last week, is the decision to have the federal government regulate rates, and that could be extremely popular with people... (CROSSTALK) DONALDSON: ... old guys, they say to us, "We're going to cut your Medicare." They're not going to cut Medicare benefits, not touch them. What they want to cut in the bill, as I understand it, is Medicare Advantage, which was put in with a government subsidy of 15 cents for every dollar, take the 15 cents away. The private insurers now can compete on their own and use that money elsewhere, and you could argue where it should be used, but it's not correct that they're trying to cut Medicare. VARGAS: I do want to get to one other issue related to this health care bill, which is the language on abortion, because it almost died in the House, the health care bill, because of abortion. 
There was the Stupak amendment, which attached highly restrictive language to when abortions could be covered, and there -- Bart Stupak says this is unacceptable, this current bill, as Obama has proposed it, and he says 20 other members of the House will have problems with it, too. Will abortion kill this thing in the end? WILL: Well, Alan Frumin's 15 minutes of fame have arrived. He is the hitherto obscure, but soon to be quite famous parliamentarian of the Senate, and it will be his job to rule on what can and cannot be passed under reconciliation. That is, is it a budgetary-related thing? You can argue about a great many things in the health care bill. Can you say that's budget-related? No one thinks you can change the abortion language under reconciliation. KRUGMAN: Let me just point out... VARGAS: And, Cokie... KRUGMAN: ... that in 2001, the Senate parliamentarian was in doubts about the -- some of the things Republicans were doing through reconciliation, and they dealt with that by firing him and replacing him. VARGAS: And, Cokie, can Speaker Pelosi, given this issue, if they can't get through on reconciliation some sort of changing of the abortion language... (CROSSTALK) VARGAS: ... can she find the votes? ROBERTS: It's going to be very, very tough. That's what I said at the beginning. I mean, this -- this bill is not at the moment passable by Democratic votes. DONALDSON: She'll get the votes. ROBERTS: I think in the end she will, too. DONALDSON: In the end, the Democrats understand the old phrase, "We hang together or we hang separately." ROBERTS: At the moment... VARGAS: Well, and they're on record already taking an unpopular vote. ROBERTS: ... the calculation... VARGAS: It's going to kill them in November. ROBERTS: The calculation that they've made all along -- and I personally think it's a correct calculation -- is that it's worse to do nothing than to do something and that, in the long run, people will like this bill. WILL: Can I say something that Paul and I might actually agree on? VARGAS: Sure. WILL: Twenty years from now, the country is going to be spending a larger portion of its GDP on health care than it is now for three reasons. We're getting older, and as we age, we get more chronic diseases that interact with one another. Second, we're getting richer; we can afford to buy more medicine. And, third, medicine is becoming more competent. Therefore, we're going to spend more on health care. KRUGMAN: But there's a... ROBERTS: The other thing is, you know, the health care industry is the biggest employer in most of our cities now. So when -- when the speaker talks about a job creation bill... VARGAS: A jobs bill, exactly. ROBERTS: ... it's true. VARGAS: Let's shift a little bit to Charlie Rangel, because we heard Speaker Pelosi talk about the fact that what he did didn't endanger national security, but it doesn't look good. We've got a handful of Democrats who have now started to join Republicans and calling for him to step down as chairman of the House Ways and Means Committee, a powerful post in the House of Representatives. Can he hold this post, Cokie? ROBERTS: Yes, he can hold it, as long as people -- you know, his colleagues say he can hold it. But whether it becomes too hot for him to hold is something that, you know, sort of evolves. And you see what happens in the papers in New York and all of that and whether he can withstand it. But, you know, in terms of that Ethics Committee report, there were two sets of issues they were dealing with. 
One was this trip to the Caribbean that was apparently paid for by corporations. The other was donations to members of Congress who then provided things in legislation for the people who gave those donations. I think that's a far, far more serious offense... VARGAS: Very serious. ROBERTS: ... and -- and the Ethics Committee basically said, "No problem." That's the kind of thing that really makes people very uncomfortable about the Congress and feel like the Congress is all on the take. DONALDSON: Now, let's talk about -- talk about the man for a moment. Years ago, he wrote his autobiography, titled, "I Haven't Had a Bad Day Since," referring to the day in Korea when Sergeant Rangel, pressed by the enemy, led his men over a steep, frigid mountain pass to safety and got the bronze star for it. I didn't know him then. But when he came to Congress, having unseated Adam Clayton Powell in Harlem, he came as a reformer. He was on the Impeachment Committee and the Judiciary Committee for Richard Nixon, the real impeachment process. And through the years, we've watched him. Now, if these charges before the Ethics Committee -- and I agree with you, they're much more serious than the one for which he's been admonished... VARGAS: And there are further ones... (CROSSTALK) DONALDSON: If, in fact... VARGAS: ... apartments in Harlem and... DONALDSON: ... that's -- it's all true, he has to give it up. He has to have it be taken away from him. And I think his being in the House has been good for his constituents and good for the country. VARGAS: George? WILL: To know Charlie Rangel is to like him. ROBERTS: Exactly. WILL: He's a wonderful spirit and all that. Still, one has to wonder. Suppose a Republican has revised his disclosure form and suddenly his net worth doubled and he came upon not one, but two checking accounts with $500,000 in them... DONALDSON: They're serious. WILL: I mean, this is -- there comes a point at which the tax writing committee should be headed by someone without these... (CROSSTALK) VARGAS: Well, and Speaker Pelosi and Steny Hoyer were all calling for Tom DeLay to relinquish his post when he was also admonished by the Ethics Committee. KRUGMAN: Yes, this is -- you know, it's -- it is worth pointing out that none of these things actually seem to affect national policy. You know, when Billy Tauzin... (CROSSTALK) KRUGMAN: When Bill Tauzin basically wrote the drug -- the Medicare drug bill then left Congress to become head of the pharmaceutical lobby, that was much more serious, but it didn't actually violate House ethics rules. So, yes, I'm unable with this. I wish Rangel would go away. But it's -- you know, it really has no national significance. VARGAS: And now let's go to the New York governor, because the state of New York has quite a brouhaha playing out this weekend, the end of last weekend, this weekend. Governor David Paterson stepping down amid allegations that he and his state police contingent improperly tried to influence a woman involved in a domestic violence dispute with one of his closest aides. ROBERTS: And to keep her from testifying against a man who had abused her. It's really... VARGAS: And domestic violence was his signature issue coming into office. ROBERTS: It's just unbelievable. The idea that he would use the state police and himself -- he called her himself to basically say -- or is alleged to have -- to say, "Don't show up in court to testify against my friend, who beat you up." 
You know, that is -- that is the worst kind of harassment of women who are already very reluctant to go the court on domestic violence issues. VARGAS: He has said he will not run for election in November... ROBERTS: Yes, because he couldn't win. VARGAS: But this weekend, Democrats in New York are meeting because they're not sure he can govern for 10 more months. DONALDSON: Well, that's a real question. You know, Basil Paterson, one of the great power brokers in New York... ROBERTS: His father. DONALDSON: ... Democratic politics, his father, is a man of great substance. His son has proved not to be. And I think one of the lessons here is, when you run -- because they run as a team in New York, governor and lieutenant governor -- you ought -- just like a president and vice president -- you don't put someone on the ticket because there's a political advantage who is not capable of stepping in, as he has proved not to be capable. And I think it's a real question whether he should serve out the rest of his term. VARGAS: And, George, what a bumpy term for him. He's got terrible approval ratings, a huge budget problem, and he managed to infuriate the Kennedys by his mishandling of Caroline Kennedy's -- you know, when she -- when she tried to take over for Hillary Clinton's Senate seat. WILL: You mentioned the budget problem. I mean, New York state spending has increased almost 70 percent in a decade. It is dead heat with California as to see which is the worst governed state right now. So a lot of New York's problems predate and will follow Mr. Paterson. Whether or not he should resign because he can't govern, who can govern that state? The state legislature governs that state badly. ROBERTS: And locks people out and does all kinds of... (CROSSTALK) KRUGMAN: From my -- from my home state of New Jersey, I think we're in the running there. WILL: You are. ROBERTS: Exactly. DONALDSON: We're going to see whether Andrew Cuomo can govern. He's going to be the Democratic nominee. VARGAS: Well, he's -- he's the attorney general, who is currently investigating Governor Paterson, and has expressed interest... (CROSSTALK) ROBERTS: Currently investigating the governor... (CROSSTALK) VARGAS: ... the White House had tried privately to encourage Governor Paterson to step... (CROSSTALK) ROBERTS: Privately? It wasn't so private. VARGAS: ... wasn't so private, to step aside, so I guess they're probably looking at this as a positive development, that he's not running for election. ROBERTS: Oh, sure. DONALDSON: Oh, yes. (CROSSTALK) ROBERTS: Oh, sure. Yes, but, you know, this business of using the state troopers, which, of course, Eliot Spitzer was also -- I mean, it was all of these -- all these echoes of, you know, the wife standing by as the governor admits to, you know, some perfidy. And the state troopers, really, if I were the state troopers, I would find a way to just not do what the governor says, because it just gets them in trouble over and over again... VARGAS: Yes, exactly. ROBERTS: ... and then there was Arkansas. VARGAS: And then, of course, this weekend, we have a brand-new White House social secretary appointed to replace Desiree Rogers, a close friend of the Obamas who is exiting after a bumpy tenure, I would say. Cokie, you spoke with her. 
She -- she was highly criticized after the Obamas' first state dinner in which she arrived, looking absolutely gorgeous, but in what some people later said was far too fancy a dress, but most importantly, that was the state dinner that was crashed by the Salahis, who walked in without an invitation when the social secretary's office didn't have people manning the security sites. ROBERTS: Well, I talked to -- I did talk to her, Desiree, yesterday at length. She is from my home city of New Orleans and fellow Sacred Heart girl. DONALDSON: What's the name of the city? ROBERTS: New Orleans. DONALDSON: I love to hear her say it. (CROSSTALK) ROBERTS: But -- and she has lots of good explanations about that dinner. And basically, the bottom line is, it's the Secret Service. But she -- but her -- her major point is -- and I -- and I completely take this -- is that she -- she put on 330 events at the White House last year and did open the building to all kinds of people who had not been there before. And they had wonderful music days of all kinds of music, where you had during the day, the musicians would work with kids in Washington and teach them things before coming on at night. DONALDSON: Cokie, that's irrelevant. ROBERTS: Well, I don't think it's irrelevant. DONALDSON: I mean, it's irrelevant. People who work for the president understand or should understand their place, which is to be spear-carriers. There are two stars in anyone's White House, the president and the president's spouse. After that, this passion for anonymity that once was a hallmark of people who worked for a president, has been lost. She wanted to be a star herself... ROBERTS: And it's been lost. Look at all the people who work for presidents and then go out and write books about them. DONALDSON: I think you're right. VARGAS: Do you think she was -- did she quit, or was she asked to leave? DONALDSON: She was asked to. ROBERTS: She says she quit. DONALDSON: Oh, well... (CROSSTALK) ROBERTS: And she certainly has lots... DONALDSON: And to spend more time with your family. ROBERTS: No, no, to go into the corporate sector and make some money, where she'll make a lot of -- she'll do fine. DONALDSON: Good luck to her. I don't wish her ill. (CROSSTALK) DONALDSON: It's just that she didn't understand... ROBERTS: She'll do very well. DONALDSON: ... she was not a star in the sense that she should make herself prominent. VARGAS: George? WILL: It is axiomatic that when there's no penalty for failure, failure proliferates. She failed conspicuously in her one great challenge, which was the first state dinner, and she's gone. If she's gone because she failed, that's a healthy sign. VARGAS: The big question, of course, because she was one of that close contingent of Chicago friends is whether or not she's just the first to leave or if we'll see other... ROBERTS: But you'll see people leave. (CROSSTALK) ROBERTS: I mean, that's what happens. It's a perfectly normal thing that happens in administration, is that people come, and they come in at the beginning, and then it's time to -- to go back to life. KRUGMAN: Can I say that 20 million Americans unemployed, the fact that we're worrying about the status of the White House social secretary... VARGAS: It's our light way to end, Paul. DONALDSON: Paul, welcome to Washington. VARGAS: Thank you. DONALDSON: Nice to see you. VARGAS: All right. You can get the political updates all week long by signing up for our newsletter on abcnews.com. Thank you, everybody. | Mid | [
0.551020408163265,
33.75,
27.5
]
|
Q: ES5 "strict" and arguments.callee Possible Duplicate: Why was the arguments.callee.caller property deprecated in JavaScript? In ES5 strict mode (i.e. "use strict") the arguments.callee variable that refers to the current function is no longer available. For recursive functions it's obviously sensible to use the function's own name. However there are times when I might want to use properties of arguments.callee (i.e. .length, .prototype) without having to use the name of the current function. Can anyone explain what apparent problem was (allegedly) solved by removing it? A: From here. arguments.callee substantially hinders optimizations like inlining functions, because it must be made possible to provide a reference to the un-inlined function if arguments.callee is accessed. | Mid | [
0.617079889807162,
28,
17.375
]
|
The House of Representatives is set to debate and vote Thursday on the resolution formalizing the impeachment inquiry against President Trump. Remember, this is not a vote on articles of impeachment. This is simply a vote to codify the investigation. It is a “resolution,” not a “bill.” It is not “legislation.” It’s a resolution because it’s an internal House matter. The House meets at 9 a.m. ET. Fox News is told the House will get to the measure rather quickly after some initial bookkeeping and short speeches. Debate likely starts around 9:10-9:15 a.m. ET. House Rules Committee Chairman Jim McGovern, D-Mass., is set to manage the measure for the Democrats on the floor, with Rep. Tom Cole of Oklahoma, the top Republican on the Rules Committee, running things for the GOP. The resolution itself – which the House Rules Committee processed into the night Wednesday – will receive one hour of debate on the floor. A “straight” one hour on the House floor is more like one hour and 15 minutes or more. There are constant fits and starts. If things move quickly, without interruptions, the House could complete debate by 10:30 a.m. ET. However... this is Congress. As one lawmaker said years ago when she was called into a meeting in the majority leader’s Office at 1:20 a.m. on a Saturday, “things kind of happen around here when they happen.” There are several possibilities which could elongate the debate: The House is simply crawling and the debate takes longer than expected. Republicans launch dilatory tactics in protest – such as motions to adjourn, et al. A motion to adjourn is “privileged” and the House must consider it right away. Multiple motions to adjourn or other guerilla tactics are in the mix. All these efforts do is stretch out the process and chew up a lot of time. House leaders from both sides of the aisle take advantage of their “Magic Minute.” Most members just get a minute or so to speak on the floor. But, out of respect, the House allocates a “Magic Minute” to top leaders, including House Speaker Nancy Pelosi, D-Calif., House Minority Leader Kevin McCarthy, R-Calif., and others. The top leaders could take five to ten minutes to speak, if they chose. The extra time would not count against the overall “hour” allocated for debate. The following is subject to change, but here is how the next steps may play out following the conclusion of the debate: At the end of the debate on the resolution, the House will begin a vote series. This is a collection of various votes on different topics. The vote on the resolution itself should be the second in the queue. The first vote in the sequence is procedural and in relation to the resolution. It’s known as “Ordering the Previous Question,” or a “PQ” in Congressional parlance. It is a “vote to have a vote.” The first vote usually consumes about 25 minutes. So, the vote on the resolution itself likely would start about 25 minutes after the beginning of the vote sequence. Here’s a hypothetical: If the vote series starts at 10:45 a.m. ET, that means the vote on the resolution would begin around 11:10 a.m. The House likely will schedule the vote on the resolution to take five minutes. In reality, that means it will eat up seven or eight minutes on the clock. So, once the vote begins on the resolution, a result could come about seven or eight minutes later. The resolution is expected to pass. 
Note that the House will probably cross the threshold for adoption while the vote is still open, but nothing is official until the chair raps the gavel and closes the vote. The House “scoreboard” on the TV monitors would not reflect the accurate tally. The result the chair announces from the dais is official. SPARKS FLY AS DEMOCRATS PUSH IMPEACHMENT RULES OVER REPEATED GOP OBJECTIONS How is the vote expected to turn out? The House has 432 members, as Rep. Katie Hill, D-Calif., has not officially resigned yet. She is expected to give her final speech before resigning midday. The 432 tally would make the threshold for adoption 217 yeas. House Minority Whip Steve Scalise, R-La., advised his members to vote no on Tuesday night. If nothing else, a united GOP front would give Republicans the chance to show they're fighting for the president. Fox News’ tally has 228 hard yeas from Democrats. Rep. Justin Amash, I-Mich., is expected to vote yes. Most House Speakers rarely vote on the floor. Pelosi voted as recently as Tuesday on a resolution regarding the Armenian genocide. But, House speakers typically cast ballots a few times a year. When Fox News asked Pelosi if she would vote, she said, “if the spirit moves me.” Pelosi added, “I don’t think I’ll need to.” Five Democrats have remained noncommittal: Reps. Kendra Horn of Oklahoma, Anthony Brindisi of New York, Jeff Van Drew of New Jersey, Jared Golden of Maine and Collin Peterson of Minnesota. Van Drew has said he opposed impeachment – but has not indicated definitively that he opposed an investigation. The first four names are freshmen who flipped districts from red to blue in 2018. Peterson has been in Congress since 1991. He’s won with 52 percent of the vote the past two cycles. But, Trump won Peterson’s district by a staggering 31 points in 2016. When asked how he would vote, Peterson told Fox News, “I’ll figure it out.” Peterson and Rep. Ron Kind of Wisconsin are the two remaining Democrats who voted for the impeachment inquiry against then-President Bill Clinton in 1998 and have continued to serve in the House. Kind said he'd support the resolution. “We are getting close to that ‘Fifth Avenue Challenge,’” Kind added. That’s a reference to Trump’s proclamation that he could shoot someone on Fifth Avenue and nothing would happen to him. “Many of (the president’s) supporters won’t abandon him for anything.” CLICK HERE TO GET THE FOX NEWS APP The vote comes on Halloween. “It is very spooky for people in the White House,” said Rep. Alexandria Ocasio-Cortez, D-N.Y., with a wry smile. “It helps people understand the next steps. It is a clarifying vote.” | Mid | [
0.6221198156682021,
33.75,
20.5
]
|
// -*- MPC -*- project : taoclient, anytypecode, codecfactory, valuetype { Source_Files { bug_2543_regression.cpp } } | Low | [
0.503685503685503,
25.625,
25.25
]
|
--- abstract: | I present a new method of deriving the shape of the dark matter (DM) halos of spiral galaxies. The method relies on the comparison of model predictions with high spectral and spatial resolution HI observations of the gas layer. So far, determinations of the flaring of the gas layer (i.e. the increase of the thickness with galactocentric radius) have been used to determine the mass-to-light ratio, , of the stellar disk of several edge-on galaxies. In this paper I describe a method which can be used to determine the shape of DM-halos. This technique will be applied in a forthcoming paper. I show that the model predictions of the gas layer width are best calculated using a global approach, in which the potential arising from the [*total*]{} mass distribution of the galaxy is used in the calculation of the vertical distribution of the gas. I developed a new algorithm to calculate the force field of an arbitrary, azimuthally symmetric, density distribution. This algorithm is used to calculate the forces due to the radially truncated stellar disk as well as of the flaring gas layer. I use a simple two-parameter family of disk-halo models which have essentially the same observed equatorial rotation curve but different vertical forces. This mass model is composed of a stellar disk with constant , and a DM-halo with a given axial ratio (Sackett & Sparke \[ApJ, 1990, 361, 408\]). I approximate the radial force due to the gaseous disk, and iteratively determine the vertical force due to the global distribution of the gas. In agreement with Maloney \[ApJ, 414, 41, 1993\] I find that beyond the Holmberg radius, the thickness of the gaseous disk is sensitive to both the flattening of the DM-halo and the self-gravity of the gas. I also show that the inferred DM-halo flattening is not sensitive to the particular choice of disk-halo decomposition. I show that the determination of the thickness of the gas layer is not restricted to edge-on galaxies, but can be measured for moderately inclined systems as well. Thus, in combination with detailed modeling, high resolution HI imaging of nearby galaxies with extended HI envelopes will enable us to determine the shape of the DM halo of these galaxies. author: - 'Rob P. Olling' title: | On the usage of Flaring Gas Layers to determine the\ Shape of Dark Matter Halos --- =-0.5truecm To appear in the August 1995 issue of [*The Astronomical Journal*]{} Introduction ============ Until recently, little was known about the shape of dark halos : measurements of the equatorial rotation curve provide a one dimensional probe to the potential only. It is possible to construct both spherical and flat mass models which generate the same rotation curve. Therefore, the shape of the dark halo can not be inferred from galactic rotation curves alone. Existing methods to estimate the shape of DM-halos have given mixed results. It has been suggested (Dekel & Shlosman, 1983; Toomre, 1983; Sparke & Casertano, 1988) that galactic warps, frequently observed in the outer parts of gaseous disks, result if the galactic disk is tilted with respect to the plane of a flattened dark halo. Hoffner & Sparke (1994) investigated the time evolution of such warps and concluded that only one out of the five systems they studied requires a halo as flattened as E6. Polar ring galaxies, early type spirals with material rings in a plane perpendicular to the galaxy plane, probe the dark halo potential in two perpendicular directions. 
Careful analysis of the stellar and gas dynamics of the polar ring system NGC 4650A, led Sackett & Sparke (1990, hereafter SS90) and Sackett (1994, SRJF94 hereafter) to conclude that the dark halo in this system could be as flattened as E6-E7 ($q$=0.4 to 0.3). Recently, Sackett (1994b) reported the discovery of a rather flattened ($q \approx 0.5$) [*luminous*]{} halo around the edge-on spiral NGC 5907, which can account for the observed rotation curve provided that this luminous material has a large mass-to-light ratio ($\approx$ 450). Like the polar rotation curve, the thickness of the gas layer depends on the vertical force, , and hence on the shape of the DM-halo (Maloney 1992; Olling & van Gorkom 1992; Kundić 1993, KHG93; Maloney 1993, M93). That the shape of the dark halo influences the vertical distribution of the gas can be easily understood. Consider a round halo, with a certain density distribution, then decrease the vertical scale-height while maintaining the total mass. Consequently, the densities as well as the exerted gravitational forces will increase, resulting in a thinner HI disk and higher rotation speeds. In order to keep the same equatorial rotation curve, one has to deform the DM-halo in a very specific way (Appendix A, Figure \[fig:Rc.rho0.versus.q\]) as a result of which the DM-halo densities (at large distances) will be roughly inversely proportional to the flattening $q \; (=c/a)$. In accordance with Maloney (1993), I find that (at large distances) the thickness of the gas layer is roughly proportional to the square root of the halo flattening, $q$. Several authors have employed measurements of the gas layer thickness to infer the total mass density in the plane of spiral galaxies. Van der Kruit (1981) showed that the width of a galaxy’s gaseous disk should increase radially in an exponential fashion. From the thickness measurements of the gaseous disk of the edge-on galaxy NGC 891 by Sancisi & Allen (1979) van der Kruit concluded that the ratio of the stellar disk does not change significantly with radius and that the dark halo can not be as flat as the stellar disk. Rupen (1991), using more sensitive and much higher resolution data, found that the width of the gaseous disks of NGC 891 and NGC 4565 increase exponentially, but for NGC 891 with large scatter. Our galaxy has been studied in much greater detail by Knapp (1987) and Merrifield (1992) in HI, and by Malhotra (1994) in CO. These authors find that the radial scale-length of the midplane mass density compares well with the radial scale-length of the stellar disk, implying a constancy of the mass-to-light ratio for the stellar disk. Van der Kruit (1988) concludes that in the outer parts of stellar disks the local mass density is no longer dominated by the stellar disk so that the self-gravity of the gas will become important. Here I extend van der Kruit’s pioneering work by incorporating the stellar and gaseous disks as well as the DM-halo in a self consistent way : rather than making a [*local*]{} approximation to the vertical force, is calculated from the [*global*]{} mass distribution of the galaxy. In §\[sec-Gaseous-self-gravity\] I show that the self-gravity of the gas can play an important role and must be included in a self-consistent manner. Furthermore I will concentrate on the flaring behaviour in the region beyond the optical disk where the vertical force is dominated by the DM-halo, and is hence sensitive to its [*shape*]{}. 
In combination with the newly achievable large sensitivities and the high resolution of HI synthesis observations (Rupen, 1991), these improved modelling methods allow for the determination of the shape of dark halos of spiral galaxies. In §\[sec-dark-halo-props\] I review some of the properties of dark halos and determine (in §\[sec-flattened-dark-halos\]) how, for a given rotation curve, the core radius and central density depend on the flattening of such an isothermal halo (see also Appendix A). The disk-halo conspiracy is investigated in section §\[sec-disk-halo-conspiracy\] (and Appendix B) where I derive general formulas for the three unknowns (the disk’s ratio and the halo’s core radius and central density) which determine the overall mass distribution of the galaxy. In §\[sec-the-works\] I list the assumptions made to determine the z-distribution of the gas from the galaxian potential. I compare the local and global approaches in §\[sec-The-Local-Approach\], where I also present the vertical force arising from the total mass distribution of the galaxy, $K_{z,tot}$, for several radii (an analytic solution to the multi-component local approximation, as proposed by Bahcall (1984), is given in Appendix C). In §\[sec-halo-shape-gas-layer-thickness\] I calculate how the thickness of the gas layer depends on the two free parameters (and $q$) and discuss how the thickness of the gas layer can be used to determine the flattening of the DM-halo. In the discussion, §\[sec-discussion\], I will indicate which systems might be suitable for an analysis as proposed in this paper. Properties of dark halos {#sec-dark-halo-props} ======================== From the observational fact that rotation curves of spiral galaxies are “flat”, it has been concluded (e.g. Bosma 1978; Rubin 1980; Bahcall & Casertano 1985; Begeman 1987) that there is an unseen mass component present in these galaxies. The standard assumption has been that this dark mass distribution is spherical and isothermal, characterized by the core radius, $R_c$, and central density, $\rho_{h,0}$ (equation \[\[eq:rho.halo.Rz\]\], with $q=c/a$=1). The luminous matter consists of a stellar disk, for some systems a bulge, and a gaseous disk. The ratios of the bulge and the disk are free parameters but are normally taken to be constant with radius. An upper limit to the ratio of the disk, and hence a lower limit to the dark halo mass, can be obtained by assuming that the observed peak rotation velocity is due to the stellar disk only. A more common approach is to scale the mass-to-light ratios down in such a way as to avoid halos with hollow cores. This is commonly known as the “maximum-disk” hypothesis (van Albada & Sancisi, 1986). Other means of constraining the ratio exist : van der Kruit (1981) calculates from the thickness of the gas layer, Efstathiou (1982) invoke disk stability arguments, Athanassoula (1987, ABP87) apply spiral instability criteria, and Bottema (1993) uses stellar velocity dispersion measurements. The mass-to-light ratios found by these authors range from 50 to 100 % of the maximum-disk value. The values for the DM-halo core radius and the ratio of the stellar disk are highly correlated : low mass-to-light ratios require small core radii, and large ratios correspond to large values for the DM-halo core radius. 
In fact, many different disk-halo decompositions produce acceptable fits to a given observed rotation curve (van Albada [*et al.*]{}, 1985 hereafter ABBS85 ; Persic & Salucci, 1988 ; Lake & Feinswog, 1989 hereafter LF89). The dark to luminous mass ratio may decrease with increasing mass (e.g. ABP87; Casertano & van Gorkom 1992; Broeils 1992, Chapter 10). This could be inferred from galactic rotation curves : dwarf galaxies have rising rotation curves (e.g. Carignan & Freeman 1988) while the rotation curves of massive galaxies fall in the outer parts ( Casertano & van Gorkom 1991; Broeils 1992). For those galaxies where the rotation curves in the outer parts are flat, the luminous and dark matter “conspire” : the increase of $V_{halo}^2$ and the decline in $V_{disk}^2$ are such that their sum remains approximately constant with galactocentric radius. In view of the uncertainties in the disk-halo decomposition, it is clear that more and a different kind of data are needed for a unique determination of the halo flattening. In section §\[sec-halo-shape-gas-layer-thickness\] I show that beyond the optical disk the thickness of the gas layer is rather sensitive to the flattening of the halo. In order to investigate this sensitivity quantitatively I construct a two parameter galaxy model, where these parameters are $\gamma$ (= the fraction of the peak rotation curve due to the stellar disk, see §\[sec-disk-halo-conspiracy\] and Appendix B) and $q$, the flattening of the DM-halo (§\[sec-flattened-dark-halos\] and Appendix A). The explicit dependence of the thickness of the gas layer upon these two parameters is discussed in §\[sec-halo-shape-gas-layer-thickness\]. Flattened dark halos {#sec-flattened-dark-halos} -------------------- As mentioned above, the equatorial rotation curve (i.e. the rotation curve in the plane of the stellar disk) does not constrain the actual shape of the halo. In Appendix A I determine a family of flattened DM-halo models which have the same (to within $1.4$ %) equatorial but different polar rotation curves. Thus, for this family of halo models, the radial force does not depend significantly on the shape of the DM-halo, while the vertical force (and hence the thickness of the gas layer) does. In Figure \[fig:Vrot.q\], I present the rotation curves for this DM-halo family graphically. The lower panel shows the rotation curves for these DM-halo models, while in the top panel I present the ratio of the flattened to the round DM-halo rotation curve. Although the residuals show systematic behavior, the amplitudes ($\leq 1.4 \%$) are smaller than the routinely obtained observational errors (e.g. Begeman 1989, BE89 hereafter; Broeils 1992). In conclusion : rotation curves of flattened DM-halos are indistinguishable from their round equivalents. This family of flattened DM-halo models is fully specified by the equations (\[eq:Rc.versus.q\]) and (\[eq:rho0.versus.q\]) which relate the core radius and the central density to the flattening of the DM-halo and is graphically presented in Figure \[fig:Rc.rho0.versus.q\]. For a given rotation curve, flattened DM-halo models have larger core radii and central densities than their round equivalents. Notice that both the central density and the core radius have an almost linear dependence on the halo flattening for moderately flattened DM-halos. Of course the DM-halo model I have chosen to work with might very well be different from the true DM halo mass distribution. 
However all mass distributions, with similar rotation curves, share the general feature that flatter distributions have larger vertical forces. Therefore, a DM halo flattening determined using the formalism outlined in this paper serves as an indicator of the true flattening of the (unknown) dark matter density distribution. On the disk-halo conspiracy {#sec-disk-halo-conspiracy} ---------------------------- As we can measure the light distribution of the stellar disk only, and have no a priori knowledge of the mass-to-light ratio of stellar disks, the relative contributions of luminous and dark matter are not known (ABBS85 and LF89). However, as I will show in Appendix B, the galaxy mass model described above is fully determined by [*one*]{} parameter regulating the relative importance of stars and dark matter. Following Bottema (1993) I choose this parameter, $\gamma$, to be the fraction of the peak observed rotation curve which is due to the stellar disk [(defined by eqn. \[\[eq:gamma.beta.definition\]\])]{}. Different choices of $\gamma$ result in quite different values for the stellar mass-to-light ratio, the halo’s central density and core radius. As an example, I present the dependence of these three parameters on $\gamma$ for the galaxy NGC 3198 (rotation curve and optical parameters were taken from BE89) in Figure \[fig:disk.halo.conspiracy\], and algebraically by the equations (\[eq:MoverL.versus.gamma\]), (\[eq:Rc.versus.gamma\]) and (\[eq:result.for.rho0\]). In Figure \[fig:disk.halo.conspiracy\], I have also indicated how a 5% uncertainty in the slope of the rotation curve affects the results. To obtain the core radius and central density of a flattened dark halo, one uses the equations (\[eq:Rc.versus.gamma\]) and (\[eq:result.for.rho0\]) and multiplies these values by ${\cal C}(q)$ (equation \[\[eq:Rc.versus.q\]\]) and ${\cal H}(q)$ (equation \[\[eq:rho0.versus.q\]\]) respectively. In agreement with LF89, I find that observed equatorial rotation curves do not constrain the core radius of the DM-halo (or $\gamma$ in our terminology) very well. This is illustrated in Figure \[fig:rotcur.gamma\], where I present the rotation curve of a model galaxy which resembles the Sc galaxy, NGC 3198[^1]. Several acceptable fits, made with different $\gamma$’s, are shown. In conclusion : the parameterization of the disk-halo conspiracy, as presented in Appendix B, provides a useful way to perform the disk-halo decomposition to acceptable accuracy, and allows for a straightforward way to investigate the dependence of the thickness of the gas layer on the $\gamma$ value (mass-to-light ratio) of the stellar disk. The method {#sec-the-works} ========== In this section I derive the vertical distributions of the HI layer from the potential of the whole galaxy, $\Phi_{galaxy}(R,z)$, and the equation of hydrostatic equilibrium. This derivation is subject to several simplifying assumptions which are listed below : - The system is in steady state, - and is azimuthally symmetric. - The velocity dispersion tensor of the gas is symmetric and round, so that - The equation of hydrostatic equilibrium is a good approximation to the vertical Jeans equation. - The vertical velocity dispersion of the HI gas is constant (isothermal) with z-height. - Magnetic and cosmic ray pressures are neglected Obviously this is just an approximate description of reality. Spiral density waves violate the first two assumptions, while non-thermal pressure terms (magnetic, cosmic ray heating, ...) 
are not included in the equation of hydrostatic equilibrium. Keeping these assumptions in mind, I take as the starting point the vertical Jeans equation in cylindrical coordinates (as usual, $R$, $\theta$, and $z$ denote the radial, the tangential and the vertical direction respectively) : $$\begin{aligned} \frac{d \; \left( \rho(z) {\mbox{$\sigma(R,z)_{zz}^2$}} \right)}{d \; z} &=& \rho(z) \; {\mbox{$K_z{(z)}$}} - \nonumber \\*[3mm] && \hspace*{-30mm}\; {\mbox{$\frac{1}{R }\;\frac{\partial }{\partial R}$}}\left( R \; \rho {\mbox{$\sigma_{Rz}^2$}} \right) - {\mbox{$\frac{1}{R}\;\frac{\partial}{\partial \theta}$}}\left( \rho {\mbox{$\sigma_{\theta z}^2$}} \right), \label{eq:Vertical.Jeans}\end{aligned}$$ to determine the vertical distribution of the gas ($\rho(z)$). Here is the gravitational force (per unit mass) in the $+z$-direction and ${\mbox{$\sigma(R,z)_{}$}}$ the gaseous velocity dispersion. Due to the assumption of azimuthal symmetry, the last term in equation (\[eq:Vertical.Jeans\]) vanishes. The $\sigma_{Rz}$-term essentially measures the tilt of the velocity dispersion ellipsoid, for cylindrical rotation it is identically zero. In the most extreme case the velocity dispersion ellipsoid points towards the galactic center[^2]. At small $z/R$, $\sigma_{Rz}$ is approximately $(\sigma^2_{RR} - \sigma^2_{zz})z/R$ so that its contribution is expected to be small[^3]. Thus, the non-diagonal terms of the velocity dispersion tensor (i.e. ) will vanish. Then, the vertical Jeans equation reduces to the equation of hydrostatic equilibrium : $$\begin{aligned} \hspace*{-8mm} \frac{d \; \left( \rho_{gas}(z) {\mbox{$\sigma(R,z)_{z,gas}^2$}} \right)}{d \; z} &=& \rho_{gas}(z) \; {\mbox{$K_z{(z)}$}}. \label{eq:Hydro.equil}\end{aligned}$$ With the assumption that the gas is isothermal in the z-direction it follows that : $$\begin{aligned} \hspace*{-8mm} {\mbox{$\sigma_{z,gas}^2$}} \; \frac{d \; \ln{\rho_{gas}(z)} }{d \; z} &=& -\frac{d \; \Phi_{galaxy}(R,z)}{d \; z}, \label{eq:isothermal.Hydro.equil}\end{aligned}$$ so that $$\begin{aligned} && \hspace*{-14mm} \rho_{gas}(R,z) = \rho_{gas}(R,0) e^{-\Phi_{galaxy}(R,z)/{\mbox{$\sigma(R)_{z,gas}^2$}}} \label{eq:rho.gas.z} \\*[3mm] && \hspace*{2mm} \approx \rho_0 \exp{(-z^2/2W_{gas}^2)} . \label{eq:gas.width.approximate}\end{aligned}$$ We see that the gaseous density distribution can be calculated once the gaseous velocity dispersion and the potential of the galaxy,$\Phi_{galaxy}(R,z)$, are known. Generally speaking, the vertical distribution of the gas will not be a “simple” function of z. Since the observational data does generally not allow for a more sophisticated analysis than the fitting of Gaussian functions to the measurements, I choose to fit Gaussians to the model density distributions (eqn. \[\[eq:rho.gas.z\]\]) as well[^4]. In the remainder of this paper I will use the dispersion ($W_{gas}$) of this Gaussian, as defined by eqn \[\[eq:gas.width.approximate\]\], as a measure of the width of the gas layer. I incorporate three components in the present study : 1) a stellar disk with constant scale-height, 2) a flattened isothermal DM-halo, and 3) a gaseous disk. 
The stellar density distribution for the stars must be close to the one derived from surface brightness measurements by van der Kruit & Searle (\ vdKS81a&b and vdKS82a&b), namely : $$\begin{aligned} && \hspace*{-1.5cm} \rho_s(R,z) \nonumber \\*[3mm] && \hspace*{-1.5cm} = {\mbox{${\cal M/L}\;$}}\; \rho_s(0,0) \; e^{-R/h_R} \; {\rm sech}^2(z/2z_e) \hspace{5mm} R \leq R_{max} \hspace{2mm} \nonumber \\*[3mm] && \hspace*{-1.5cm} = 0.0 \hspace{2mm} \hspace*{4cm} R > R_{max}, \label{eq:stars.rho.lum}\end{aligned}$$ with $R_{max}=(4.7 \pm 0.7)h_R$, $2z_e \approx h_R/(4.7\pm1.8)$, and the average mass per unit luminosity. Note that Barteldrees & Dettmar 1993 (BD93) find that the optical disks are truncated at significantly smaller radii : $R_{max}=(3.0 \pm 0.2)h_R$, $2z_e \approx h_R/(4.0\pm0.3)$. In external galaxies it is very hard to observationally distinguish a sech-squared distribution from an exponential distribution (i.e. Wainscoat 1989). For the Galaxy on the other hand, there is a large body of evidence suggesting that the vertical distribution can be better represented by an exponential distribution[^5] : $\rho_s(z) \; = \;\rho_s(0) \; e^{-z/z_e}$ (Gilmore & Reid 1983; Pritchet 1983; van der Kruit 1986; Yoshi 1987). Therefore, I calculate model potentials for the exponential distribution. I chose the flattened isothermal distribution as proposed by SS90 (equation \[\[eq:rho.halo.Rz\]\]), discussed above, to represent the DM-halo mass distribution. The vertical force of a truncated stellar disk as well as of a flaring disk is calculated from the potential : $$\begin{aligned} \Psi(R,z) \hspace*{-2mm} &=& \hspace*{-2mm} -2 G \int_0^{\infty} r dr \rho(r,0) \int_{-\infty}^{\infty} dz' \rho(r,z') \int_0^{\pi} \frac{d\theta}{|\overline{r} -\overline{R}|} \; \; , \label{eq:equation.for.Psi} \\ K_z(R,z) \hspace*{-2mm} &=& \hspace*{-2mm} -\frac{d}{dz} \Phi(R,z) \label{eq:definition.of.Kz} \\ &=& -2 G \int_0^{\infty} \hspace*{-3mm} r dr \rho(r,0) \int_{-\infty}^{\infty} \hspace*{-3mm} dz' \frac{ \rho(r,z') (z-z') f(R,z,r,z') } { \sqrt{(R+r)^2 + (z-z')^2} \left( [R-r]^2 + [z-z']^2 \right) } \; \; , \label{eq:equation.for.Kz}\end{aligned}$$ where $\rho(r,z')$ is the density distribution for which to calculate the forces. At the center of the galaxy, $f(0,z,r,z')$ equals $\pi$, for all other galactocentric radii $f(R,z,r,z')$ is the complete elliptic integral of the second kind : $f = E(k) = \int_0^{\pi} d\phi \sqrt{1-k^2 \sin^2{\phi}}$, with $k^2 = 4 r / [(1+r)^2 +(z-z')^2]$. With special care for the region around $(R,z)$ this integral can be solved numerically and takes about 8 seconds per point on a SPARC 2 processor. I use an exponential for the vertical density distribution, $\rho(0,z')$, of the stars, while I approximate the gaseous distribution by a Gaussian. As neither the vertical scale-height nor the midplane density, $\rho(r,0)$, are generally analytic functions, I store their values in tabular form and determine the interpolated value whenever the integrating routine (QROMB or QROMO, Press 1990) requires so. I used the double exponential stellar disk (Kuijken & Gilmore, 1989, hereafter KG89, their eqn. \[27\]) as a test case and found that the two methods of calculation agree to within 1 part in 1,000[^6]. ### The global approach {#sec-The.global.approach} As a first step I use the radial part of the potential to determine the structural parameters of the stellar disk and DM-halo by requiring that they reproduce the observed equatorial rotation curve. 
The radial force due to the gas is calculated (using the ROTMOD program in GIPSY, van der Hulst 1992) assuming that it is infinitely thin. I use the observed photometry and a constant ${\cal M/L}$ ratio to calculate the rotation curve due to the stars (also by using ROTMOD). I then perform the disk-halo decomposition outlined in Appendix B. I use the values for the mass-to-light ratio of the stellar disk and the core radius and central density of the DM-halo, as found in the disk-halo decomposition, to calculate the vertical force due to the stellar disk (equation \[\[eq:equation.for.Kz\]\]), and the DM-halo (SRJF94’s equation \[6\]). These forces are integrated numerically to yield the potential, from which the gaseous volume density distribution, $\rho_{gas}(R,z)$, is calculated (using eqn. \[\[eq:rho.gas.z\]\], an assumed gaseous velocity dispersion, and an observed surface density distribution). I follow an iterative procedure, where the gaseous density distribution, calculated from an approximation to the true total potential, is used to estimate the gaseous contribution to the potential, which must then be added to the contributions from the stellar disk and DM-halo to yield a better approximation to the true total potential, from which a better approximation to the true gaseous density distribution can be calculated, ... etc. Thus, the vertical distribution of the gas is not fully consistent with the Poisson equation $$\begin{aligned} 4 \pi G \; \rho_{tot}(R,z) &=& -\frac{1}{R } \; \frac{\partial (R\;F_R)}{\partial R } \; + \; \frac{1}{R^2} \; \frac{\partial^2 \Phi}{\partial \phi^2} \; - \; \frac{\partial {\mbox{$K_z{}$}}}{\partial z}, \label{eq:Poisson}\end{aligned}$$ since I take the vertical distribution of the gas layer to be Gaussian, and because the radial force ($F_R$) due to a thick gas layer differs from the radial force arising from an infinitely thin disk. These inconsistencies are expected to be small because of the small scale-height and low mass of the gaseous disk. This procedure to calculate the thickness of the gas layer is what I term the [*global approach*]{}. In order for the global model to be self-consistent, the stellar velocity dispersion tensor ($\sigma_{ij}$, with $ij$ all the possible coordinate combinations) has to be such that for the assumed stellar density distribution and the calculated total potential, the vertical Jeans equation (\[eq:Vertical.Jeans\]) will be satisfied. Thus, every mass model described above makes two predictions, firstly the radial variation of the width of the gaseous layer, and secondly the stellar velocity dispersion. In this paper I concentrate on the radial variation of the thickness of the gas layer. The Local Approach {#sec-The-Local-Approach} ------------------ To date, most authors have determined the distribution of the gas above the plane from the [*local*]{} mass densities only. In this section I compare this local approach with the global approach described in the previous section. First I describe how in the local approach the vertical force is calculated from the local mass densities and gradients in the rotation curve.
Assuming that, in the region where the gas is found, the rotation speed does not depend on $z$-height and that the vertical force does not depend on radius, the Poisson equation (\[eq:Poisson\]) reduces to the local Poisson equation: $$\begin{aligned} 4 \; \pi \; G \; \rho(z) &=& - \frac{1}{R } \; \frac{d (R\;F_R)}{d R } - \frac{ d {\mbox{$K_z{}$}}}{dz} . \label{eq:Poisson.local}\end{aligned}$$ Combining the appropriate equation of hydrostatic equilibrium (\[eq:isothermal.Hydro.equil\]) with the equation above (\[eq:Poisson.local\]) I find : $$\begin{aligned} {\mbox{$\sigma_{z,gas}^2$}} \; \frac{d \; \ln{\rho_{gas}(z)} }{d \; z} &=& -4 \; \pi \; G \; \int_0^z dz' \left\{ \rho_{matter}(z') + \left( \frac{-2}{4\;\pi \;G} \right) \left( \frac{V_{rot}}{R} \; \frac{d V_{rot}}{d R} \right) \right\} . \nonumber\end{aligned}$$ Setting the rotation curve term to $\rho_{rot}$, it follows that : $$\begin{aligned} {\mbox{$\sigma_{z,gas}^2$}} \; \frac{d \; \ln{\rho_{gas}(z)} }{d \; z} &=& {\mbox{$K_z{(z)}$}} \hspace{5mm} = \hspace{5mm} -4 \; \pi \; G \; \int_0^z dz' \left\{ \rho_{matter}(z') + \rho_{rot}(z') \right\} . \label{eq:rho.from.local.approx}\end{aligned}$$ Negative and positive rotation densities ($\rho_{rot}$) correspond to rising and falling rotation curves respectively. The effect of a rising rotation curve is to increase the thickness of the gaseous and stellar disks, while in a falling rotation curve a disk will be thinner relative to the case where $\rho_{rot}=0\;$. Bottema (1987) find that the rotation term can contribute significantly in the rising parts of galactic rotation curves. For the case of a flat rotation curve, equation (9) leads to the plane-parallel-sheet case. Equation (\[eq:rho.from.local.approx\]) can be solved either iteratively or analytically (if more simplifications are introduced) : the multi-component and Gaussian-equivalent methods described below. ### The Gaussian equivalent method {#sec-The-Gaussian-equivalent-method} In order to arrive at the (analytic) Gaussian-equivalent method, we simplify matters further by assuming that the self-gravity of the gas is negligible and that the total mass density does not vary significantly with height above the plane. I then arrive at : $$\begin{aligned} {\mbox{$\sigma_{z,gas}^2$}} \; \frac{d \; \ln{\rho_{gas}(z)} }{d \; z} &=& -4 \; \pi \; G \; \rho_{tot}(0) \; z , \nonumber \end{aligned}$$ so that the vertical distribution is Gaussian with dispersion $W_{gas,G}$ : $$\begin{aligned} \rho_{gas}(z) &=& \rho_{gas}(0) \exp{(-z^2/2W_{gas,G}^2)} . \nonumber\end{aligned}$$ For which the following relation holds : $$\begin{aligned} W_{gas,G} &=& \frac{{\mbox{$\sigma_{z,gas}$}}}{\sqrt{4 \; \pi \; G \; \rho_{tot}(0)}} \hspace{1cm} . \label{eq:gas.width.Gaussian-equivalent}\end{aligned}$$ In the Gaussian-equivalent method one then assumes that the Gaussian width ($W_{gas,G}$) can be equated with the observed width [$W_{gas}$($\approx FWHM/2.35$, as defined by eqn. \[\[eq:gas.width.approximate\]\])]{}, so that the inferred midplane mass density ($\rho_{inf}$) is given by : $$\begin{aligned} \rho_{inf} &=& \frac{1}{4 \pi G} \; \left( \frac{{\mbox{$\sigma_{z,gas}^2$}}}{W_{gas}^2} \right) . \label{eq:gas.width.MOST.local}\end{aligned}$$ Inside the stellar disk, the Gaussian width ($W_{gas,G}$) will always be smaller then the true width because, for any density distribution, the volume density decreases with z-height. 
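To fix the scale of these expressions, a representative number may help (the values below are illustrative round numbers, not fits to any particular galaxy): for a gaseous velocity dispersion of 7 km/s and a total midplane density of 0.01 ${\rm M_\odot \, pc^{-3}}$, equation (\[eq:gas.width.Gaussian-equivalent\]) gives $$W_{gas,G} \; = \; \frac{\sigma_{z,gas}}{\sqrt{4 \pi G \, \rho_{tot}(0)}} \; \approx \; 300 \; {\rm pc} \; \left( \frac{\sigma_{z,gas}}{7 \; {\rm km \, s^{-1}}} \right) \left( \frac{\rho_{tot}(0)}{0.01 \; {\rm M_\odot \, pc^{-3}}} \right)^{-1/2} ,$$ so that, conversely, a measured width of a few hundred pc translates through equation (\[eq:gas.width.MOST.local\]) into midplane densities of order $10^{-2} \; {\rm M_\odot \, pc^{-3}}$.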
The inferred midplane mass density, as calculated from the true width $W_{gas}$ (which is larger than $W_{gas,G}$) and equation (\[eq:gas.width.MOST.local\]), will therefore underestimate the true midplane mass density. This effects is illustrated in Figure \[fig:rho.local.approx\], where I plot the ratio of the inferred midplane mass density ($\rho_{inf}$) and the actual midplane density ($\rho_{tot}$). For several mass models resembling NGC 3198, this method recovers $50$ to $110$ % of the true midplane density. Especially for the round halo case ($q=1.0$) we clearly see the effects of the truncation of the stellar disk at $R=6h_R$. The truncation of the stellar disk of NGC 3198 occurs at a relatively large distance ($6h_R$ versus $4.7h_R$ on average, vdKS81a&b and vdKS82a&b, and BD93) so that it’s effect may be relatively small for NGC 3198. On the other hand, the effects of the truncation of the stellar disk are smaller if the vertical distribution of the stars is sech$^2$ rather than exponential (i.e. less concentrated towards the midplane). It should be noted that these results depend somewhat on the width estimator being used (see footnote \[footnote.width.estimator\]), for example the radial variation of $\rho_{inf}/\rho_{tot}$ arrises partly because the vertical distribution of the gas gradually changes from a distribution which is strongly centrally peaked (in the inner two scale-lengths) to a Gaussian (beyond the optical disk). Including the rotation density will improve the Gaussian-equivalent method beyond the truncation of the optical disk (right-hand side panel). The radial variation in the density ratio beyond the optical disk results from small scale structure in the model rotation curve. However, this is possible only if the sum of the rotation and DM densities is larger than zero (see eqn. \[\[eq:gas.width.Gaussian-equivalent\]\] and Appendix \[sec-Appendix.local.approach\]). For example, this is [*not*]{} the case for a maximum-disk model of NGC 3198 in the inner two radial scale-lengths. Although its accuracy varies systematically with radius, the Gaussian-equivalent method works reasonably well, especially beyond the stellar disk. ### The plane parallel sheet case {#sec-The-plane-parallel-sheet-case} The difference between the plane-parallel-sheet case and the global approach is most easily illustrated by plotting the vertical forces for the two cases. In Figure \[fig:Kz.all\] $\;$ I plot the forces, as calculated for a NGC 3198-model, for a selection of radii. The dashed line is the vertical force due to a plane parallel sheet of stars with an exponential vertical distribution and a column density appropriate for the indicated radius. The open squares represent the force generated by a radially truncated exponential disk. The full line represents the generated by the combined density distribution of the stellar disk and the DM-halo. For the combination of parameters, maximum-disk and round DM-halo (top-left panel), the plane-parallel-sheet case overestimates the vertical force in the inner two scale-lengths of the stellar disk, because the plane parallel sheet is a bad approximation to the fast radially declining surface density (there is very little surface area at the central surface density). Furthermore, the large negative $\rho_{rot}$ associated with the steeply rising rotation curve (Figure \[fig:rotcur.gamma\]) decreases the effective volume density. 
Beyond about $2 \; h_R$ the potential due to the interior stellar disk becomes more important and will eventually dominate the potential generated by the local column of stars. Additional model calculations (top-right panel : $\gamma=0.9, q=0.1$ ; bottom-left panel : $\gamma=0.6, q=1.0$ ; bottom-right panel : $\gamma=0.6, q=0.1$) show that the stellar disk has a dominating contribution to the vertical force (over the extent of the stellar disk) if and only if the stellar disk is close to “maximum” and if the DM-halo is close to round (see also §B.4 and Figure \[fig:rhoSTARS.rhoDM\]). ### Inferences {#sec-Inferences} From the figures \[fig:rho.local.approx\], \[fig:Kz.all\] and \[fig:Kz.halo\] we learn the following : —At large galactocentric radii ($R \geq 6 h_R$), $K_z$ is almost linear for the range of models considered here, resulting in an almost Gaussian vertical distribution for the gas. Notwithstanding the low stellar surface densities at these large distances, the contribution of the stellar disk to the total vertical force can be substantial : even at eleven radial scale-lengths the stellar disk contributes about 20 % of the total vertical force (for the round DM-halo cases). The self-gravity of the gas (see Figure \[fig:Kz.halo\]), which is discussed in more detail in §\[sec-Gaseous-self-gravity\], can be important (up to 50% of the total force) in this region as well. —In the transition region ($4 h_R \leq R \leq 6 h_R$) the contribution of the stellar disk, the gaseous disks, and the dark halo to the total vertical force can be of comparable magnitude (for the mass model with maximum disk, round halo, and typical gaseous surface densities. See Figure \[fig:Kz.halo\]). Stellar disks are truncated (vdKS81a&b; vdKS82a&b; BD93) causing a sharp discontinuity in the stellar densities. However, the potential as generated by the total stellar mass distribution will change much less abruptly. Thus the local approaches will strongly overpredict the thickness of the gas layer in this region (see Figure \[fig:rho.local.approx\]). Moving away from this corner ($\gamma=0.9, q=1.0$) in parameter space increases the halo’s contribution to the total potential significantly. —Inside the optical disk, the potential is dominated by the stellar disk only in case the disk is close to “maximum” and the halo is almost round. For the other extreme case (minimum-disk+flat-halo), the potential is completely dominated by the DM-halo. In the two remaining corners of parameter space, stellar disk and DM-halo contribute about equally to $K_z$. In all of these cases, the density distribution of a tracer population will be almost Gaussian as long as the velocity dispersion of the tracer is so low that most of the tracer resides at z-heights below the “knee” in $K_z$ (at two to three stellar scale-heights). For a tracer population like the atomic gas with a velocity dispersion of roughly 7 km/s, the entire extent of the optical disk falls in this regime (see Figure \[fig:Wgas.beyond.Rmax\]). Isothermal tracers with a velocity dispersion larger than two or three times that of the gas can move beyond the knee in the $K_z$ curve so that their density distribution can no longer be adequately described by a single “simple” function (cf. the thick disk of the Milky Way \[e.g. Gilmore & Reid 1983\]). —As expected, both the amplitude and the shape of the vertical force are almost independent of the $\gamma$ value of the stellar disk if the DM-halo is highly flattened.
In case of a round DM-halo, different $\gamma$ values for the stellar disk do result in different shapes for the $K_z$ curves. As the amplitude of the knee in $K_z$ depends linearly on central surface density (quadratically on $V_{rot,max}$, eqn. \[\[eq:V2.stars.only.2p3hR\]\]), the thickness of the gas layer will be smaller in high rotation curve galaxies and larger in dwarfs (see also Appendix \[sec-Four-Special-Cases\]). Two radial regimes are poorly modeled by the Gaussian-equivalent as well as the plane-parallel-sheet local approaches : the region where the rotation curve rises so sharply that $(\rho_{rot}+\rho_{DM}) < 0$, and the region where the stellar disk truncates. Special treatment of these regions can be avoided by using the global approach. For those regions where the rotation curve is almost flat, which are far away from the truncation of the stellar disk, the Gaussian-equivalent method works reasonably well. The self-gravity of the gas {#sec-Gaseous-self-gravity} --------------------------- Rupen (1991) finds, in NGC 891 as well as in NGC 4565, that the HI volume densities become comparable to the stellar volume densities beyond about 2 radial scale-lengths. For the Milky Way this is true at the solar circle. The self-consistent solution for the multi-component method as found in Appendix \[sec-Appendix.The.Edge.of.the.Optical.Disk\] (eqn. \[\[eq:solution.of.zRHO\]\]) indicates that the importance of the self-gravity of the gas scales with the ratio of gaseous to stellar kinetic energy. At the solar circle this ratio is about 0.1-0.2, so that the self-gravity of the gas is expected to become increasingly important at larger radii. The importance of the self-gravity of the gas is illustrated in Figure \[fig:Kz.halo\], where I plot the $K_z$ due to the total (stars+DM+gas) potential (the full line), and the forces arising from the global DM, stellar and gaseous density distributions (dotted, short dashed, and long dashed lines respectively). The self-gravity of the gas has been calculated using the procedure outlined in §\[sec-The.global.approach\]. ### The multi-component method {#sec-The-multi-component-method} In case $\rho_{rot}$ and the self-gravity of the gas are neglected it is rather straightforward to show (van der Kruit, 1988) that different isothermal components subject to the same cylindrically symmetric gravitational field have vertical distributions which are powers of each other. Including $\rho_{rot}$, the self-gravity of the gas, and the dark matter halo leads to the multi-component method, first used by Bahcall (1984) to determine the amount of disk-like dark matter in the solar neighborhood. In Appendix \[sec-Appendix.local.approach\], I present an analytic solution (eqn. \[\[eq:solution.of.zRHO\]\], which needs to be solved iteratively) to equation (\[eq:rho.from.local.approx\]). In the same Appendix I also show that in the multi-component method the distribution of the gas is a power of the distribution of the stars as well. In Figure \[fig:Wgas.Global.versus.Local\] I compare the model widths calculated in the multi-component method (where the vertical distribution of the stellar disk is fixed to a sech-squared distribution, see Appendix \[sec-Appendix.local.approach\]) with those calculated using the global approach. The top plot shows the percentage difference between the global and local approaches. The structure in these difference curves mostly arises due to small scale structure in the observed rotation curve. Thus, application of the multi-component method could be misleading.
For example, if the flaring measurements extend to a radius at which the local approach predicts a local maximum in the observable width, too round a DM-halo would be inferred. I used the multi-component method to compare models of NGC 3198 with a varying amount of gaseous self-gravity. The results are presented in Figure \[fig:self-gravity-multi-cmp\]. In the left-hand set of panels I show the width, [$W_{gas}$($\approx FWHM/2.35$, as defined by eqn. \[\[eq:gas.width.approximate\]\])]{}, of the gas layer as a function of galactocentric radius for three different halo flattenings (bottom panel, $q=1.0$; middle panel, $q=0.5$; upper panel, $q=0.1$). Four different curves are presented; the different lines correspond to differently scaled gaseous surface densities : 1/100, 1, 5, and 10 times the measured surface density for the full line, the dotted line, the short dashed line and the long dashed line respectively. The models are calculated[^7] for a gaseous velocity dispersion of 7.5 km/s and a stellar disk with a $\gamma$ value of 0.93 (i.e. about maximum-disk). For the scaled gaseous disks I plot (right-hand set of panels) the ratio of the thickness of the scaled disk to that of the no-self-gravity disk. From this plot we see that not including the gaseous self-gravity will result in an [*overestimate*]{} of the width of the gas layer (by approximately 10% to 20%). For the predicted gas-layer width to equal the observed widths a more massive DM-halo is required (eqn. \[\[eq:rho0.versus.q\]\]), so that neglecting the self-gravity of the gas will result (using eqn. \[\[eq:gas.width.MOST.local\]\]) in a [*smaller*]{} value (by approximately 20% to 40%) for the halo flattening, $q$. Thus, in the local approach as well as in the global approach (see Figure \[fig:Wgas.beyond.Rmax\]), the self-gravity of the gas is found to have a comparable effect on the inferred DM-halo flattening. From these plots I also conclude that for galaxies with relatively large gaseous surface densities the thickness of the gas-layer is not sensitive to the flattening of the dark matter at all. Thus, the self-gravity of the gas may be an unimportant or a significant contributor to the vertical force (for early and late type systems respectively).

Halo shape and gas layer thickness {#sec-halo-shape-gas-layer-thickness}
----------------------------------

In order to investigate the dependence of the thickness of the gas layer upon the flattening of the DM-halo and the mass-to-light ratio of the stellar disk, I determined $\rho_{gas}(R,z)$ (and $W_{gas}(R)$ therefrom) for a range of galaxy models. In Figure \[fig:Wgas.beyond.Rmax\] I plot the width,\
[$W_{gas}$($\approx FWHM/2.35$, as defined by eqn. \[\[eq:gas.width.approximate\]\])]{}, of the gas layer in a $\gamma=0.9$ (i.e. $\sim$ maximum-disk) disk-halo combination for several halo flattenings. The left-hand panel was calculated for a double exponential disk. In the middle panel I incorporated the truncation of the stellar disk (at $R/h_R=6$), while the self-gravity of the gas is included in the right-hand panel. The effect of the self-gravity of the gas is different for the different disk-halo models, and is strongest between 4 and 8 radial scale-lengths : between 20% and 40% for the NGC 3198 models. Beyond several radial scale-lengths the width of the gas layer depends strongly (roughly proportional to $\sqrt{q}$) on the flattening of the DM-halo.
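The roughly $\sqrt{q}$ behaviour can be made plausible with the local (Gaussian-equivalent) estimate, $W_{gas} \approx \sigma_{gas}/\sqrt{4 \pi G \rho}$: for a fixed rotation curve a flatter halo has a midplane density that is larger by roughly $1/q$ (eqn. \[\[eq:rho0.versus.q\]\]), so the predicted width shrinks as roughly $\sqrt{q}$. The short sketch below makes this explicit; the velocity dispersion and the round-halo midplane density used here are illustrative placeholder numbers, not one of the models shown in the figures.

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def gaussian_width(sigma_kms, rho_msun_pc3):
    """Local (Gaussian-equivalent) width sigma / sqrt(4 pi G rho), valid for a
    massless isothermal gas layer in a locally harmonic vertical potential."""
    return sigma_kms / np.sqrt(4.0 * np.pi * G * rho_msun_pc3)

# Illustrative numbers only:
sigma_gas = 7.5      # gaseous velocity dispersion [km/s]
rho_round = 2.0e-3   # round-halo midplane density well beyond the optical disk [Msun/pc^3]

for q in (1.0, 0.5, 0.2, 0.1):
    # For a fixed rotation curve the midplane halo density scales roughly as 1/q
    # (eqn. [eq:rho0.versus.q]), so the local width scales roughly as sqrt(q).
    w = gaussian_width(sigma_gas, rho_round / q)
    print(f"q = {q:4.1f}:  W_gas ~ {w:6.0f} pc   (W/W_round = {np.sqrt(q):.2f})")
```

This local estimate ignores the self-gravity of the gas and the detailed shape of the potential; for real models the global approach of §\[sec-The.global.approach\] should be used.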
Inside the stellar disk, the width of the gas layer is largely independent of the DM-halo flattening because the potential there is dominated by the stellar disk. However, for models with flatter halos and/or lower-$\gamma$ stellar disks, the halo can be of importance inside the stellar disk (see §\[sec-The-multi-component-method\], and Figure \[fig:Kz.all\]). It is important to keep in mind that the self-gravity of the gas is more important for the rounder DM-halo models (§\[sec-Gaseous-self-gravity\] and compare the Figures \[fig:Kz.halo\]-a and \[fig:Kz.halo\]-b). So the “signature” of the halo flattening is somewhat reduced if the self-gravity of the gas is important (see also Figure \[fig:self-gravity-multi-cmp\]). Is the thickness of the gaseous layer dependent on the mass-to-light ratio of the disk? In order to answer that question, I performed the calculations described above for several $\gamma$’s. The results (for $q=1.0$) are presented in Figure \[fig:Wgas.and.gamma\]. We see that, as expected, inside the stellar disk a gas-layer immersed in a more massive stellar disk will be comparatively thinner. At large galactocentric radii the thickness of the gas layer becomes virtually independent of the ratio of the stellar disk. The same conclusion can be reached from Figure \[fig:rho.local.approx\], where we see that, beyond the optical disk, the derived midplane mass densities for different -disks are nearly equal. Effects of incomplete modelling {#sec-Effects-of-incomplete-modelling} ------------------------------- It is instructive to see how uncertainties in gaseous velocity dispersion and uncertainties in the derivation of the vertical force affect our estimates of the DM-halo flattening. Another point of interest is to investigate what differences may arise when using the local instead of the global approach. To summarize the following sections : the errors introduced by using the Gaussian-equivalent method (typically a 10-30% too round halo) tend to cancel out those introduced by neglecting the self-gravity of the gas (20-60% too flat a halo). ### Uncertainties in the velocity dispersion In many practical situations it may be that measurements of the gaseous velocity dispersion are not available. This severely limits our ability to determine the flattening of the DM-halo. In some face-on galaxies the azimuthally averaged gaseous velocity dispersion is observed to decline radially from around 12 km/s in the center to remain constant at 7 km/s beyond $R_{25}$ (van der Kruit & Shostak 1982; Dickey 1990). In other systems, the region beyond $R_{25}$, as well as the inter-arm regions appear to have a constant velocity dispersion of 7 km/s, while on the arms values around 13 km/s are reported, ( Shostak & van der Kruit 1984; see Kamphuis 1993 for a review). In summary, the measurements to date indicate that the gaseous velocity dispersion beyond the optical disk has a value of around 7 km/s. Overestimating the gaseous velocity dispersion by a factor of $(1+x)$ causes an overestimate of the midplane mass density by a factor of $(1+x)^2$. Beyond the optical disk, too flat a DM-halo distribution would be inferred (by the same factor $(1+x)^2 \;$). ### Uncertainties in the vertical force Here I estimate the effects of neglecting certain components to the vertical force (e.g. the self-gravity of the atomic gas, magnetic forces, or the gravitational force due to some remote point source). 
A mass-model which uses an underestimate of the true vertical force requires a larger DM density and thus a smaller value of the halo flattening to reproduce the observed gas layer widths. Thus, underestimating the vertical force by a factor $(1+y)$ results in too small an estimate for the halo flattening (by a factor $[1+y]$). The inverse is true when working with an overestimate of the vertical force. Neglecting the self-gravity of the neutral atomic gas, which can typically contribute 30% to the vertical force (§\[sec-Gaseous-self-gravity\] and Figure \[fig:Kz.halo\]), will result in a 30% too small inferred halo flattening. Since the known molecular gas content of galaxies rarely exceeds 10% of the stellar mass (which contributes roughly 20% to $K_z$ at large distances, see §\[sec-The-Local-Approach\] and Figure \[fig:Kz.all\]), and is confined to the inner parts of the galaxy like the stars, I expect that neglecting the molecular gas will result in an underestimate of $q$ by at most 2%.

### Using the Gaussian-equivalent method

Application of the Gaussian-equivalent method to an observed set of gas layer widths underestimates the inferred midplane density by a radially varying amount (see §\[sec-The-Local-Approach\], and Figure \[fig:rho.local.approx\]). As a result the inferred halos will be too round and have core radii that are too large.

Discussion and Conclusions {#sec-discussion}
==========================

In the previous sections we have seen that a flattened dark halo imprints a very specific signature on the flaring of the gas-layer (Figure \[fig:Wgas.beyond.Rmax\]). Measurement of this flaring allows for the determination of the DM-halo flattening. Although the global approach is limited by some of the same assumptions as the local approach (isothermality of the gas, zero non-thermal pressure terms, constant mass-to-light ratio of the stellar disk, alignment between DM-halo and stellar disk, ...), it treats some other aspects much better (radial gradients in stellar surface density, the effects of the rising rotation curve, the non-cylindrical symmetry of the potential, ...). In particular, the region where the rotation curve rises sharply and the region where the stellar disk truncates are better analyzed using the global approach. In those parts where the rotation curve is ‘flat’ and which are not too close to the truncation of the stellar disk the Gaussian-equivalent and multi-component methods can be used to reasonable accuracy. However, the global approach is less sensitive to small scale irregularities (Figure \[fig:Wgas.Global.versus.Local\]). To obtain reliable results, any applied method [*must*]{} include the self-gravity of the gas. Therefore, I propose that the more robust global approach (§\[sec-The.global.approach\]) should be used in programs aimed at determining the flattening of the dark matter halo. For a set of global mass distributions with varying halo flattening one can calculate the flaring behaviour of the gaseous disk. The DM-halo model exhibiting the best agreement between calculated and observed gas-layer widths then yields the halo flattening. The accuracy of the determined flattening can be gauged by varying the structural parameters of the individual mass components. Since the velocity dispersion of the gas is a crucial parameter in the analysis, care must be taken to determine its radial dependence. This will be done in a forthcoming paper where I will analyze the flaring gas layer of the almost edge-on galaxy NGC 4244.
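In code, the fitting procedure proposed above amounts to a simple one-parameter search. The sketch below is only an outline: `model_width` stands for whatever routine computes the model flaring curve (in practice the global-approach calculation, with the disk-halo decomposition, gas surface density and velocity dispersion fixed beforehand), and the chi-square grid search and the sampling of $q$ are choices made here purely for illustration.

```python
import numpy as np

def fit_halo_flattening(R_obs, W_obs, W_err, model_width, q_grid=None):
    """Return the halo flattening q that best reproduces the observed flaring curve.

    `model_width(R, q)` must return the model gas-layer widths at radii R for a
    halo of flattening q (e.g. the global-approach calculation, with all other
    structural parameters of the mass model held fixed).  This is a plain
    chi-square grid search, meant as an illustration of the procedure only.
    """
    if q_grid is None:
        q_grid = np.linspace(0.1, 1.0, 19)
    chi2 = np.array([np.sum(((W_obs - model_width(R_obs, q)) / W_err) ** 2)
                     for q in q_grid])
    best = int(np.argmin(chi2))
    return q_grid[best], chi2

# The uncertainty of the derived flattening can then be gauged by repeating the
# grid search while varying the structural parameters (gamma, sigma_gas, ...)
# of the mass model within their allowed ranges.
```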
Suitable systems {#sec-suitable-systems} ---------------- Nearby edge-on spiral galaxies with large (extending beyond the optical disk) neutral hydrogen envelopes are possible targets for studies of this kind. Symmetric systems are preferred since it is more likely that such systems are in equilibrium. Furthermore suitable systems should not exhibit strong warping. For systems with moderate warps, the kinematical information can be used to disentangle flaring from warping. Unfortunately these kind of systems might be rare. In order to investigate the possibility of determining the thickness of the gas-layer in less inclined systems, I simulated HI observations of two models similar to NGC 3198 (inclined by 72$^o$ w.r.t. the line of sight). In the thin disk model, I constrained the gas-layer to have a FWHM of 100 pc everywhere, while in the flaring disk model the gas-layer has a vertical distribution calculated using the multi-component method (including the self-gravity of the gas, eqn. \[\[eq:solution.of.zRHO\]\]). Both models were calculated with a gaseous velocity dispersion of 7.5 km/s. For a given set of observational parameters (15$\;$ beam \[$\rightarrow$ 700 pc\] and 2.56 km/s velocity resolution) I calculate the intensities originating from all points in the model galaxy for all velocities. Taking the normalized second moment with respect to radial velocity yields the apparent velocity dispersion for all points in the model galaxy. In Figure \[fig:second.moment\] I present the two model velocity dispersion maps. The systematic differences, of the order of 5 km/s, arise due to the fact that a line of sight through a flaring disk samples a wider range in position angle and galactocentric radius than the line of sight through a thin disk. In principle, the gaseous velocity dispersion can be determined after the layer-thickness-effects are corrected for. The thickness of the gas layer can be derived by comparing individual channel maps[^8]. Contour maps of a particular channel are presented in Figure \[fig:channel.map\] : in the top panel the thin disk map is shown, the middle panel shows the thick disk equivalent while the difference between the two is contoured in the lower panel. Due to its larger vertical scale-height, the projected width of the thick-disk channel map should be larger than that of the thin-disk, while the average surface brightness will be reduced. This signature of the thick disk is easily recognized in the displayed contour maps. Furthermore, contrary to edge-on galaxies, the flaring can be measured in many different channel maps so that local irregularities (e.g. spiral streaming motions, bubbles, ...) are less likely to hamper the determination of the radial behaviour of the flaring (cf. Olling & van Gorkom, 1995). Therefore I expect that it must be possible to experimentally determine both the gaseous velocity dispersion and the thickness of the gas-layer for moderately inclined systems as well. A control sample ---------------- It might be possible to test the importance of neglected physics in galaxies where the rotation curve falls in Keplerian manner. In such systems the contribution of the dark matter to the global potential is presumably small, thereby eliminating the gas layer thickness dependence on the shape of the DM-halo. As a result the thickness of the gas layer should depend on the total visible mass of the galaxy and the gaseous velocity dispersion only. 
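To put a rough number on such a control measurement: close to the midplane of a potential dominated by a single central mass $M$ the vertical force is $-GMz/R^3$, so a massless isothermal layer is Gaussian with width $W \approx \sigma_{gas} R^{3/2}/\sqrt{GM}$. The snippet below evaluates this estimate; the mass, radius and dispersion used are placeholder values.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def keplerian_width_kpc(sigma_kms, R_kpc, M_sun):
    """Gas-layer width for an isothermal, massless layer in a point-mass
    potential: the vertical frequency is nu = sqrt(G M / R^3), so W = sigma/nu."""
    return sigma_kms * R_kpc**1.5 / np.sqrt(G * M_sun)

# Example numbers (placeholders): a 10^11 Msun galaxy observed at R = 10 kpc
print(keplerian_width_kpc(sigma_kms=10.0, R_kpc=10.0, M_sun=1.0e11))  # ~0.48 kpc
```

A gas layer that is systematically thicker or thinner than this estimate would then point at the neglected terms discussed next.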
Some galaxies exhibit falling rotation curves, albeit not in a Keplerian manner (Casertano & van Gorkom 1991; Broeils 1992). The measured width of the gas layer can then be compared with model predictions. If the gas layer is observed to be thicker than predicted, one might conclude that magnetic and cosmic ray pressures are important. On the other hand, a gas layer thinner than predicted indicates that either the ionizing extra-galactic radiation field is important (e.g. Maloney 1993), or that an unseen disk-like DM component is present (i.e. the “clumpuscules” proposed by Pfenniger 1994a, and Pfenniger & Combes 1994b).

Summary
-------

I have investigated the use of accurate determinations of the thickness of the HI-layer in external galaxies. In combination with constraints set by the gaseous velocity dispersion and the rotation curve, I find that in principle it is possible to determine the shape of DM-halos. The gaseous self-gravity contributes significantly to the vertical force field (up to 40%) but can be modeled accurately using the global approach (§\[sec-The.global.approach\]) outlined in this paper. The halo flattening determined in this way is independent of the particular choice of disk-halo decomposition. In practice (not necessarily edge-on) systems with extended HI-disks and regular velocity fields should be used, provided that they are observed with high angular and velocity resolution.

I would like to thank Jacqueline van Gorkom for the support, encouragement and stimulating discussions during my years in graduate school. Michael Rupen’s work initiated this line of research : the many discussions we had over the years and his comments on an earlier version of this paper were of great help to me. Furthermore I would like to thank HongSheng Zhao who reminded me of quite a few analytic tricks applied in Appendix C. Penny Sackett helped me a lot by providing me with the relations for $K_z$ and $F_R$ of flattened DM-halos. I want to thank Kevin Prendergast for his suggestion to investigate the global approach and his careful reading of an earlier version of the manuscript. David Schiminowich, as an unprejudiced reader, provided useful tips for improvement. I also thank Phil Maloney, the referee, for some useful suggestions to improve this paper. And finally I would like to thank NRAO and the Kapteyn Astronomical Institute for developing and maintaining the AIPS and GIPSY software packages respectively. This work was supported in part through an NSF grant (AST-90-23254 to J. van Gorkom) to Columbia University.

Appendices {#appendices .unnumbered}
==========

Rotation curves of flattened DM-halos
=====================================

As mentioned before, it is not possible to constrain the DM-halo flattening by measurements of the equatorial rotation curve since both round and flattened mass distributions can generate the same rotation curve.
In this Appendix I show that the DM-halo model used by Sackett & Sparke (1990) : $$\begin{aligned}
\rho_h(R,z;q) &=& \rho_{h,0}(q) \; \left( \frac{R_c(q)^2}{R_c(q)^2 + R^2 + (z/q)^2} \right) \; , \label{eq:rho.halo.Rz} {\newcounter{rho.halo.Rz} \setcounter{rho.halo.Rz}{\value{equation}} }\end{aligned}$$ (where $R_c, \; \rho_{h,0}$ and $q \;(=c/a)$ are the DM-halo core radius, central density and flattening respectively) defines a one-parameter family of halo models with essentially the same equatorial rotation curve if core radius and central density depend in a specific way on this one parameter, $q$ (equations \[\[eq:Rc.versus.q\]\] and \[\[eq:rho0.versus.q\]\] respectively). SRJF94 give a general relation for both the vertical and the radial force due to such a dark halo (their equation \[6\]). Evaluating this relation in the midplane, I find that the equatorial rotation curve arising from a flattened DM-halo is given by : $$\begin{aligned}
V_{halo}^2(R;\rho_{h,0},R_c,q) &=& V_{halo}^2(\infty;\rho_{h,0},R_c,q) \times Q(R;R_c,q) \; , \label{eq:V2.R.q}\end{aligned}$$ with $$\begin{aligned}
&& Q(R;R_c,q) \; = \; \nonumber \\*[3mm]
&& \left\{ 1 - \left( \frac{g(q)}{\arctan{g(q)}} \right) \sqrt{ \frac{ q^2 R_c^2(q)}{R^2+(1-q^2)R_c^2(q) } } \; \arctan{ \sqrt{ \frac{R^2+(1-q^2)R_c^2(q)}{q^2 R_c^2(q)} } } \right\} \; , \label{eq:V2.radvar} \end{aligned}$$ and $$\begin{aligned}
V_{halo}^2(\infty;\rho_{h,0},R_c,q) &=& 4 \pi G \; \rho_{h,0}(q) \; R_c^2(q) \; f(q) \; , \label{eq:V2.halo.inf}\end{aligned}$$ while $f(q)$ and $g(q)$ are given by : $$\begin{aligned}
&& f(q) = \left( \frac{ q \; \arccos{q}}{\sqrt{1-q^2}} \right) \hspace{3cm} g(q) = \left( \frac{\sqrt{1-q^2}}{q} \right) \; . \label{eq:f.q}\end{aligned}$$ It is easily verified that equation (\[eq:V2.R.q\]) indeed yields the familiar\
$V_h^2(R) = V_h^2(\infty) \left\{1-(R_c/R)\arctan{(R/R_c)} \right\}$, the rotation curve due to a round halo. Changing the flattening of the halo should not change the halo rotation curve significantly, that is to say : $V^2_{halo}(R;q) \approx V^2_{halo}(R;1)$ for all radii. Noting that $f(1)=1.0$, I find that at infinity the following relation holds : $$\begin{aligned}
\rho_{h,0}(1) \; R_c^2(1) &=& \rho_{h,0}(q) \; R_c^2(q) \times f(q) \; . \label{eq:rho.Rc.q.versus.1}\end{aligned}$$ Requiring that the round-halo rotation curve equals the flattened-halo rotation curve at $R=R_c(q)$ leads to the following relation : $$\begin{aligned}
\hspace{-0.75cm} \left( 1 \; - \; \frac{R_c(1)}{R_c(q)} \; \arctan{\frac{R_c(q)}{R_c(1)}} \right) &=& \left\{ 1 - \left( \frac{g(q)}{\arctan{g(q)}} \right) \sqrt{ \frac{q^2}{2-q^2} } \; \arctan{ \sqrt{ \frac{2-q^2}{q^2} } } \right\} \; . \label{eq:V2.q.v.V2.1}\end{aligned}$$ Noting that the RHS of this equation varies between 1.28 and 1.00 for $q \in [0.1,1.0]$, so that $R_c(q)/R_c(1)$ is constrained to the interval $[1.00,1.242]$, I expand (to second order) the left-hand and right-hand sides of this equation with respect to $R_c(q)/R_c(1)$ and $q$ respectively. The resulting quadratic equation is solved to obtain the core radius of the DM-halo as a function of flattening. The central density is then found from equation (\[eq:rho.Rc.q.versus.1\]). After substituting the solutions for $R_c(q)$ and $\rho_{h,0}(q)$ in the equation for the rotation curve (equation \[\[eq:V2.R.q\]\]) it is found that the rotation curves of flattened DM-halos reproduce the rotation curve of a round halo to within 7 % (for $q=0.2$).
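For convenience, the closed-form rotation curve above is easily transcribed into code. The sketch below implements equations (\[eq:V2.R.q\])--(\[eq:f.q\]); the function names, units and example parameters are arbitrary choices made here, and the $q=1$ branch uses the familiar round-halo expression, which doubles as a consistency check of the flattened-halo formula in the limit $q \rightarrow 1$.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def f_q(q):
    """f(q) = q arccos(q) / sqrt(1 - q^2), with f(1) = 1 by continuity."""
    return 1.0 if q == 1.0 else q * np.arccos(q) / np.sqrt(1.0 - q * q)

def v2_halo(R, rho0, Rc, q):
    """Equatorial rotation curve, in (km/s)^2, of the flattened pseudo-isothermal
    halo of eqn. [eq:rho.halo.Rz]; R and Rc in kpc, rho0 in Msun/kpc^3."""
    R = np.asarray(R, dtype=float)
    v2_inf = 4.0 * np.pi * G * rho0 * Rc**2 * f_q(q)
    if q == 1.0:                      # round halo: the familiar closed form
        Q = 1.0 - (Rc / R) * np.arctan(R / Rc)
    else:                             # flattened halo: eqn. [eq:V2.radvar]
        g = np.sqrt(1.0 - q * q) / q
        s = np.sqrt(R**2 + (1.0 - q * q) * Rc**2)
        Q = 1.0 - (g / np.arctan(g)) * (q * Rc / s) * np.arctan(s / (q * Rc))
    return v2_inf * Q

# Consistency check: a barely flattened halo should match the round expression.
R = np.linspace(1.0, 30.0, 4)
print(np.sqrt(v2_halo(R, rho0=1.0e7, Rc=5.0, q=1.0)))
print(np.sqrt(v2_halo(R, rho0=1.0e7, Rc=5.0, q=0.999)))
```

Solving equations (\[eq:rho.Rc.q.versus.1\]) and (\[eq:V2.q.v.V2.1\]) for $R_c(q)$ and $\rho_{h,0}(q)$ and substituting the results into this routine reproduces the round-halo curve at the few per cent level quoted above.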
This error is maximal for $R=0$, changes sign at $R=R_c(q)$ and slowly drops from -2% at two core radii to 0% at infinity. Since these deviations are large in the region where rotation curves are determined, such accuracy is not acceptable for our purpose. A little experimentation leads to a solution which is more acceptable : our best approximation ( errors $\leq$ 1.4%) to the dependence of halo core radius and central density on its flattening $q$ is graphically presented in Figure \[fig:Rc.rho0.versus.q\] and algebraically below : $$\begin{aligned} R_c(q) &=& R_c(1) \times \left( 1.209 - 0.332 q + 0.123 q^2 \right) \nonumber \\*[3mm] &=& R_c(1) \times {\cal C}(q) \; , \label{eq:Rc.versus.q}\end{aligned}$$ and $$\begin{aligned} \rho_{h,0}(q) &=& \rho_{h,0}(1) \times \left( \frac{1}{f(q) {\cal C}(q)^2} \right) \; \left( 0.954 + 0.0533 q - 0.0073 q^2 \right) \nonumber \\*[3mm] &=& \rho_{h,0}(1) \times {\cal H}(q) \hspace{1cm} \approx \hspace{1cm} \left( \frac{\rho_{h,0}(1)}{q} \right) \; , \label{eq:rho0.versus.q}\end{aligned}$$ where $\lim_{q \rightarrow 1} R_c(q) = R_c(1)$ and $\lim_{q \rightarrow 1} \rho_{h,0}(q) = \rho_{h,0}(1)$. Substituting (\[eq:rho0.versus.q\]), (\[eq:Rc.versus.q\]) and (\[eq:f.q\]) into equation (\[eq:rho.halo.Rz\]), I find that at large distances the local DM-halo densities are roughly proportional to $1/f(q) \approx 1/q$. On the Disk-Halo conspiracy {#sec-Appendix.disk.halo.conspiracy} ============================ In order to calculate the potential of the galaxy, the mass distributions of both the stellar disk and the DM-halo need to be parameterized in a convenient way. In this section I describe how the DM-halo structural parameters are related to those of the stellar disk. Given the photometry of the stellar disk and the observed rotation curve, a disk-halo mass model is fully determined by [*one*]{} parameter which regulates the relative importance of stars and dark matter. Following Bottema (1993) I choose the parameter, $\gamma$, to be the fraction of the peak observed rotation curve which is due to the stellar disk [(defined by eqn. \[\[eq:gamma.beta.definition\]\])]{}. With such a parameterization of the disk-halo decomposition available, it is possible to investigate the effects of the stellar mass-to-light ratio upon the flaring of the gas layer. In accordance with van ABBS85 and LF89, I find that a wide range of stellar mass-to-light ratios ($\gamma$’s) produce acceptable “fits” to observed rotation curves. Since the light distribution of the stellar disk is relatively well known, the first step in tackling the disk-halo conspiracy is the description of the rotation curve due to the stellar disk. When assuming a constant mass-to-light ratio, the stellar density distribution can be described, to first order, as radially exponential and vertically thin. The amplitude of the rotation curve due to such a thin exponential stellar disk depends only on its central surface density and radial scale length. The [*shape*]{} is fully determined by the radial density distribution. Casertano (1983) has shown that both the amplitude and shape of stellar rotation curves are sensitive to the thickness of the stellar disk as well as to the truncation radius, $R_{max}$ (vdKS81a&b; vdKS82a&b; BD93). Therefore, the disk-halo conspiracy too will vary with the details of the stellar mass distribution. 
While the amplitude of the inner stellar rotation curve depends rather strongly on the thickness of the stellar disk, the outer rotation curve is virtually independent thereof. I will now investigate the dependence of the stellar rotation curve on the structural parameters of the stellar disk ($h_R$, $R_{max}$, and $z_e$). I calculated stellar rotation curves, using the program ROTMOD in GIPSY (van der Hulst 1992), for several truncation radii and disk thicknesses, and vertical distributions. As it turned out, the rotation curves of such stellar disks can be parameterized by a rotation speed at an inner ($R_{inn}$) and an outer ($R_{out}= \; x \; R_{inn}$) radius : $$\begin{aligned} V_{rot,stars}^2(R_{inn}) &=& f_m \; \pi \; G \; h_R \; L(0) \; {\mbox{${\cal M/L}\;$}}\label{eq:V2.stars.only.2p3hR} \\*[3mm] V_{rot,stars}(R_{out}) &=& f_x \; V_{rot,stars}(R_{inn}) \; , \label{eq:V2.stars.only.xpyhR}\end{aligned}$$ where $L(0)$ is the central surface brightness, the mass-to-light ratio, and $h_R$ the radial scale length of the stellar disk. I take $R_{inn} = 2.3h_R$, because a typical stellar disk (truncated at $4.5h_R$ and $h_R/z_e=0.1$) has its maximum rotation speed at this distance. The constants $f_m$ and $f_x$ depend upon the the vertical distribution and the radial truncation of the stellar disk. For the outer radius I used three different values, $6.0h_R$, $8.0h_R$ and $10.0h_R$. The results are presented in Table 1 below for a range in parameters covering the observed range in thickness and truncation radius ( vdKS81a&b; vdKS82a&b; Bottema 1987; BD93). The listed values are the average of the rotation curves for a vertically exponential and a vertically sech$^2$ stellar disk. With this choice for the two vertical distributions, sech-squared and exponential, I bracket the suggested range of suggested distributions (van der Kruit, 1988). The uncertainty concerning the true vertical distribution translates directly into uncertainties,$\Delta f$, in $f_m$ and $f_x$. Experimentally I find, $\Delta f_m \approx (2\Delta f_6) \approx (2\Delta f_8) \approx (2\Delta f_{10}) = $ 0.6%, 1.2%, 1.7% and 2.3% for $z_e/h_R=$ 0.05, 0.10, 0.15 and 0.20 respectively), so that $f^{exp} = f + \Delta f$ and $f^{sech2} = f - \Delta f$. Thus, if the structural parameters of the stellar disk are well known the errors on the $f$-values are small. In situations where the thickness of the stellar disk are not known (as for all non edge-on systems) one must use the average values, $\overline{f_m}$ and $\overline{f_x}$, which are listed in the columns labeled “$z_e/h_R=averaged$”. The uncertainties on these averages are typically 4.5, 2.0, 2.0 and 2.0 %. If also no information is available on the location of the truncation radius, then $f_m$, $f_6$, $f_8$, and $f_{10}$ should be representative of the range in allowed values : I suggest $f_m = 0.73 \pm 6\%$, $f_6 = 0.68 \pm 10\%$, $f_8 = 0.57 \pm 10\%$ and $f_{10} = 0.50 \pm 10\%$. The mass-to-light ratio of the stellar disk ------------------------------------------- Now I turn to the dark halo contribution to the observed rotation curve. As usual, the (square of the) observed rotation curve is generated by the quadratic sum of the constituting mass components : $$\begin{aligned} V_{rot,obs}^2(R) &=& V_{rot,stars}^2(R) \; + \; V_{rot,bulge}^2(R) \; + \; V_{rot,gas}^2(R) \; + \; V_{rot,halo}^2(R) \; .\end{aligned}$$ Now I define the gas-bulge-corrected observed rotation speed as the square root of (\[eq:Cal.V2\]). 
$$\begin{aligned} {\cal V}_{rot,obs}^2(R)&=& V_{rot,obs}^2(R) - V_{rot,bulge}^2(R) - V_{rot,gas}^2(R) \; , \label{eq:Cal.V2} \\*[3mm] && \hspace{-3cm}{\rm so \; that \; :} \nonumber \\[3mm] {\cal V}_{rot,obs}^2(R)&=& V_{rot,stars}^2(R) \; + \; V_{rot,halo}^2(R) \; . \label{eq:V2.obs} \\*[3mm] && \hspace{-3cm}{\rm Furthermore \; I \; define \;} \gamma {\rm \; and \; } \beta_x {\rm (=the \; slope \; of \; the \; rotation \; curve) \; : \; } \nonumber \\*[3mm] \gamma &=& \frac{ V_{rot,stars}(2.3h_R)}{{\cal V}_{rot,obs}(2.3h_R)} \hspace*{2cm} \beta_x \; = \; \frac{{\cal V}_{obs}(x \; h_R)}{{\cal V}_{obs}(2.3h_R)} \; . \label{eq:gamma.beta.definition}\end{aligned}$$ Given the observed radial scale length and central surface brightness of the stellar disk, the mass-to-light-ratio is a function of the stellar contribution to the peak rotation curve ($\gamma$) only and is given by (from the equations \[\[eq:V2.stars.only.2p3hR\]\] and \[\[eq:gamma.beta.definition\]\] ) : $$\begin{aligned} {\mbox{${\cal M/L}\;$}}(\gamma) &=& \left( \frac{{\cal V}_{obs}^2(2.3h_R)}{f_m \; \pi G \; h_R \; L(0)} \right) \times \gamma^2 \hspace{2mm} = \hspace{2mm} 10.25 \left( \frac{{\cal V}_{obs,100}^2(2.3h_R)}{f_{m,0.722}h_{R,1} L_{100}(0)} \right) \times \gamma^2 \; , \label{eq:MoverL.versus.gamma}\end{aligned}$$ where ${\cal V}_{obs,100}(2.3h_R)$ is the gas-bulge-corrected observed rotation speed at 2.3 radial scale-lengths in units of 100 km/s, $L_{100}(0)$ is the central surface brightness of the stellar disk in units of 100 $L_{\odot}/$pc$^2$, $h_{R,1}$ is the radial scale-length in units of 1 kpc and $f_{m,0.722}$ in units of 0.722 (the $f_m$-value appropriate for a typical stellar disk with $h_R/z_e = 10.0$ and $R_{max}/h_R = 4.5$). The core radius of the DM-halo ------------------------------ For a given observed rotation curve and a calculated stellar rotation curve [*shape*]{}, the core radius is determined uniquely by $\gamma$. Applying equation (\[eq:V2.obs\]) at 2.3 and $x \; h_R$, eliminating the terms depending on the stellar rotation curve, and using equation (\[eq:V2.R.q\]) with $q=1$, I find that the following relations hold : $$\begin{aligned} \left( \frac{ (\beta_x^2-(f_x \; \gamma)^2) } {1-\gamma^2} \right) &=& \left( \frac{V_{rot,halo}^2(x \;h_R)}{V_{rot,halo}^2(2.3h_R)} \right) \label{eq:Rc.solve} \nonumber \\*[3mm] &\approx& a_{x,0} + a_{x,1} \left( \frac{R_c}{h_R} \right) + a_{x,2} \left( \frac{R_c}{h_R} \right)^2 \; . \label{eq:RcOhR.gamma.beta}\end{aligned}$$ The approximation (\[eq:Rc.solve\]) is valid for small ranges in $R_c / h_R$ only. I list these ranges, together with the ranges for the left-hand side of (\[eq:Rc.solve\]) and the appropriate $a_x$-values in Table 2. These piecewise approximations recover the right-hand side of equation (\[eq:Rc.solve\]) to within 0.2 %. No effort has been made to match the pieces smoothly. For any assumed $\gamma$ and measured $\beta_x$ and $f_x$ one can calculate the left-hand side of equation (\[eq:Rc.solve\]) and find the appropriate $a_x$-values in Table 2 from which ${\cal R}_c=R_c/h_R$ follows : $$\begin{aligned} \left( \frac{R_c}{h_R} \right)(\gamma;\beta_x,f_x) &\approx& \frac{\left( -a_{x,1} + \sqrt{(a_{x,2})^2 - 4 a_{x,2} ] (a_{x,0} - lhs_x) } \right) } {2 a_{x,2}} \label{eq:Rc.versus.gamma} \\*[3mm] &=& {\cal R}_c(\gamma) \; . 
\nonumber\end{aligned}$$ The central density of the DM-halo ---------------------------------- The DM-halo central density is calculated from (\[eq:V2.obs\]), (\[eq:gamma.beta.definition\]) and (\[eq:Rc.versus.gamma\]). With $$\begin{aligned} {\cal V}_{obs}^2(2.3h_R) &=& \gamma^2 {\cal V}_{obs}^2(2.3h_R) + V_{halo}^2(2.3h_R) \hspace{0.5cm} = \hspace{0.5cm} \left( \frac{1}{1-\gamma^2}\right) V_{halo}^2(2.3h_R) \; , \nonumber\end{aligned}$$ it follows that $$\begin{aligned} \rho_{h,0}(\gamma;\beta_x,f_x) &=& \left( 0.185 \; {\mbox{$M_{\odot}/{\rm pc}^3\;$}}\right) \; \left( \frac{{\cal V}_{obs,100}(2.3h_R)}{ h_{R,1} } \right)^2 \times \nonumber \\*[3mm] && \left\{ \frac{(1-\gamma^2)}{{\cal R}_c(\gamma)^2} \right\} \; \left\{ 1 - \left( \frac{{\cal R}_c(\gamma)}{2.3} \right) \arctan{\left( \frac{2.3}{{\cal R}_c(\gamma)} \right)} \right\}^{-1} \; , \label{eq:result.for.rho0}\end{aligned}$$ where ${\cal R}_c$, the normalized core radius of the DM-halo, depends strongly upon the choice for $\gamma$, and only weekly upon the shape of the rotation curve (via $\beta_x$) and the shape of the stellar mass distribution (via $f_x$). As an example I present the dependence of the mass-to-light ratio of the stellar disk (lower right panel of Figure \[fig:disk.halo.conspiracy\]), the central density (upper left panel) and core radius (lower right panel) of the DM-halo on $\gamma$ for NGC 3198 (photometry and rotation curve taken from BE89). The error-bars were calculated assuming the most common observational status : the truncation radius of the stellar disk is known but the thickness of the stellar disk is not. In this case, the accuracy with which one can determine the mass-to-light ratio of the stellar disk and the central density and core radius of the DM-halo are typically 3, 0.5 and 4 % respectively. If the thickness of the stellar disk is also known (as would be the case for edge-on galaxies), the errors decrease substantially, while they blow up if neither the truncation radius nor thickness of the stellar disk are known. For galaxies which are about “maximum-disk”, and have more or less flat rotation curves, Figure \[fig:disk.halo.conspiracy\] shows that the core radius of the DM-halo will be 7 $\pm$ (a factor of two) times the radial scale-length of the stellar disk, not too different from the maximum-disk-fits performed on real galaxies (Broeils 1992). Note that while the core radius of the dark halo approaches infinity as $\gamma$ nears unity , the dark to luminous mass (upper right panel) is not singular. I tested the procedure outlined above on simulated data, where I added (in quadrature) a stellar and a DM-halo rotation curve to represent a simulated total rotation curve. For falling, flat, as well as for rising simulated total rotation curves I was able to accurately recover the input parameters, provided that the slope of the total rotation curve is determined over as large a range in radius as possible. As already pointed out by ABBS85 and LF89, mass models with a range (a factor of ten in some cases) in stellar mass-to-light ratios are consistent with observed rotation curves. I find, attempting to fit model rotation curves with “faulty” $\gamma$ values (or -ratios) that a wide range of stellar mass-to-light ratios is possible if small ($\approx$ 2 %) differences between input and recovered rotation curves are allowed to occur. 
The differences between input and recovered rotation curves increase significantly inside the inner point and beyond the outer point which were used for to determine the slope of the rotation curve. This is illustrated in Figure \[fig:rotcur.gamma\] where I show the resultant fits to the observed rotation curve of NGC3198 (BE89) for several $\gamma$’s. I took the outer radius at 10 radial scale lengths. Although all $\gamma$ values between 0.93 and 0.53 produce similarly acceptable fits, the low-$\gamma$-disks proposed by Bottema (1993) “fit” somewhat better. Since the velocity field of the inner parts of NGC 3198 deviates from circular rotation (Corradi , 1991), the discrepancies between observed and model curves (in the inner parts) can not be used as an indication of goodness of fit. Thus the method presented above presents us with an easy way to analyse many possible disk-halo decompositions. To obtain the core radius and central density of a flattened dark halo, one uses the equations (\[eq:Rc.versus.gamma\]) and (\[eq:result.for.rho0\]) and multiplies these values by ${\cal C}(q)$ (equation \[\[eq:Rc.versus.q\]\]) and ${\cal H}(q)$ (equation \[\[eq:rho0.versus.q\]\]) respectively. The Edge of the Optical Disk {#sec-Appendix.The.Edge.of.the.Optical.Disk} ---------------------------- Combining the relations for the -ratio, the DM-halo core radius and central density it follows that : $$\begin{aligned} \frac{\rho_{stars}(r)}{\rho_{h}(r)} &=& 27.7 \left( \frac{h_R/z_e}{10} \right) \left( \frac{0.722}{f_m} \right) \left( \frac{\gamma^2}{1-\gamma^2} \right) \; \times \nonumber \\*[3mm] && \hspace*{-2.5cm} e^{-r} \times \left\{ \left( {\cal R}_c(\gamma)^2 + r^2 \right) \left[ 1 - \left( \frac{{\cal R}_c(\gamma)}{2.3} \right) \arctan{\left( \frac{2.3}{{\cal R}_c(\gamma)} \right)} \right] \right\} \; , \label{eq:rho.stars.over.rho.DM}\end{aligned}$$ where $r=R/h_R$. Thus the ratio of stellar to DM midplane densities is independent of the [*amplitude*]{} of the observed rotation curve and depends only weakly on the [*shape*]{} of the rotation curve (via $\beta_x$), the truncation radius and thickness of the stellar disk (via $f_m$ and $f_x$ and $h_R/z_e$). The dependence of $\rho_{stars}(r)/\rho_{h}(r)$ upon galactocentric radius is graphically presented in Figure \[fig:rhoSTARS.rhoDM\] for several $\gamma$ values (full line $\gamma$=0.9, dotted line $\gamma=0.8$, short dashed line $\gamma=0.7$, long dashed line $\gamma=0.6$, dash-dotted line $\gamma=0.5$). I term the distance at which the disk densities no longer dominates[^9] the total midplane densities, $R_{dde}$. The maximum-disk like models have $R_{dde}$ values ranging from four to seven, corresponding to the range where some stellar disks are observed to be truncated (vdKS81a&b; vdKS82a&b). Furthermore we notice that the luminous-to-dark matter ratio decreases rapidly with decreasing $\gamma$ and $q$ values. Recently Barteldrees & Dettmar (1993) have found that some spirals have edges in their light distribution as close in as two radial scale-lengths. Only for substantially rising rotation curves (say $\beta \geq 1.3$) and/or small $\gamma$ values does $R_{dde}$ become as small as two, suggesting that these systems have rising rotation curves and/or are DM dominated. 
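Equation (\[eq:rho.stars.over.rho.DM\]) is straightforward to evaluate for any adopted decomposition. The helper below does so and locates $R_{dde}$; the threshold that defines “no longer dominates” (here a factor of 3) and the example values of $\gamma$ and ${\cal R}_c$ are choices made for illustration only.

```python
import numpy as np

def stars_over_dm(r, gamma, Rc_norm, hR_over_ze=10.0, f_m=0.722):
    """Midplane stellar-to-DM density ratio of eqn. [eq:rho.stars.over.rho.DM];
    r = R/h_R and Rc_norm = R_c/h_R (the normalized halo core radius R_c(gamma))."""
    pref = 27.7 * (hR_over_ze / 10.0) * (0.722 / f_m) * gamma**2 / (1.0 - gamma**2)
    bracket = (Rc_norm**2 + r**2) * (1.0 - (Rc_norm / 2.3) * np.arctan(2.3 / Rc_norm))
    return pref * np.exp(-r) * bracket

def r_dde(gamma, Rc_norm, threshold=3.0):
    """Radius (in units of h_R) beyond which the stellar disk no longer dominates,
    taken here as the point where rho_stars/rho_DM first drops below `threshold`
    (the 'few times the DM density' of the footnote; the exact value is a choice)."""
    r = np.linspace(1.0, 15.0, 1401)
    ratio = stars_over_dm(r, gamma, Rc_norm)
    below = np.where(ratio < threshold)[0]
    return r[below[0]] if below.size else np.inf

# Example: a near-maximum-disk decomposition with a halo core radius of ~7 h_R
print(r_dde(gamma=0.9, Rc_norm=7.0))
```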
A self-consistent solution for the multi-component method {#sec-Appendix.local.approach} ========================================================= In this Appendix, from the equation of hydrostatic equilibrium (\[eq:rho.from.local.approx\]), I derive a self-consistent analytical solution for the vertical distribution of the gas for the multi-component method proposed by Bahcall (1984). The solution (equation \[\[eq:solution.of.zRHO\]\]) is in the form of an integral equation for $z(\rho_{gas})$, which may be inverted numerically to yield $\rho_{gas}(z)$. This equation can be used, in an iterative manner, to determine the local dark matter density. The model galaxies presented in the Figures \[fig:self-gravity-multi-cmp\], \[fig:second.moment\] and \[fig:channel.map\] were calculated using this analytic method. As it turns out, an analytic solution can be found if one makes the approximation that the DM-halo density is constant with height above the plane in the region where the gas is co-spatial with the DM-halo. For DM-halos which are not too flattened this is not a bad approximation for most galactocentric radii. For the gas as well as for the stars the following equation holds : $$\begin{aligned} {\mbox{$\sigma_{z,i}^2$}} \; \frac{d \; \ln{ \rho_i(z)} }{d \; z} &=& -4 \pi G \; \int_{0}^{z} \left\{ \rho_{gas}(z') + \rho_{stars}(z') + (\rho_{halo} + \rho_{rot}) \right\} \; dz' \; . \label{eq:rho.from.local.approx.II}\end{aligned}$$ The density and dispersion ($\rho_i$ and ) can represent any isothermal component $i$, stars or gas. Since the right hand side of (\[eq:rho.from.local.approx.II\]) is the same for stars and gas, it follows that the vertical distributions are powers of each other. The density distribution has to be normalized such that the integral equals the surface density. So I write : $$\begin{aligned} \rho_{gas}(z) &=& C_g \; \left( \rho_{stars} \right)^{p_{sg}} = \; \rho_{gas}(0) \; \left( \frac{\rho_{stars}(z)}{\rho_{stars}(0)} \right)^{p_{sg}} \; , \label{eq:equation.for.rho.gas} \\*[3mm] {\rm with } && \nonumber \\*[3mm] p_{sg} &=& \frac{{\mbox{$\sigma_{stars}^2$}}}{{\mbox{$\sigma_{gas}^2$}}} \hspace*{9mm} {\rm and \;} C_g {\rm \; such \; that \;} \hspace*{9mm} \Sigma_{gas} = \int_{-\infty}^{+\infty} \rho_{gas}(z') \; dz' \; . \nonumber\end{aligned}$$ Similar relations can be defined for $\rho_{stars}$. Differentiating (\[eq:rho.from.local.approx.II\]) with respect to z yields : $$\begin{aligned} A_{gas}\; \frac{d^2 \; \ln{ \rho_{gas}(z)} }{d \; z^2} &=& \rho_{gas}(z) + \rho_{stars}(z) + ( \rho_{halo} + \rho_{rot} ) \label{eq:second.deri} \; , \\*[3mm] {\rm with} && \nonumber \\*[3mm] A_{gas} &=& \; - \frac{{\mbox{$\sigma_{z,gas}^2$}}}{4 \pi G} \; . \label{eq:equation.for.A}\end{aligned}$$ Solving for the gaseous component after defining $\alpha(z) = \ln{\rho_{gas}(z)}$ I rewrite (\[eq:second.deri\]) : $$\begin{aligned} A_{gas}\; \frac{d^2 \; \alpha(z) }{d \; z^2} &=& e^{\alpha} + C_s \left( e^{\alpha} \right)^{p_{gs}} + ( \rho_{halo} + \rho_{rot} ) \; . \label{eq:equation.for.A.II}\end{aligned}$$ Multiplying (\[eq:equation.for.A.II\]) by $2\frac{d\alpha}{dz}$ and using the equality : $ \frac{d}{dz} \; \left( \frac{d\alpha}{dz} \right)^2 \; = \; 2 \; \frac{d\alpha}{dz} \; \frac{d^2\alpha}{dz^2} $, this equation reduces to : $$\begin{aligned} A_{gas} \; \frac{d}{dz} \; \left( \frac{d\alpha}{dz} \right)^2 &=& 2 \; \frac{d\alpha}{dz} \; \left\{ e^{\alpha} + C_s \left( e^{\alpha} \right)^{p_{gs}} + (\rho_{halo} + \rho_{rot} ) \right\} \; . 
\nonumber\end{aligned}$$ Integrating the RHS with respect to $\alpha$ yields : $$\begin{aligned} A_{gas} \; \frac{d}{dz} \; \left( \frac{d\alpha}{dz} \right)^2 &=& 2 \; \frac{d}{dz} \; \left\{ e^{\alpha} + \frac{C_s}{p_{gs}} \left( e^{\alpha} \right)^{p_{gs}} + (\rho_{halo} + \rho_{rot}) \; \alpha + C \right\} \; . \nonumber\end{aligned}$$ For any physical system, the restoring force vanishes at the midplane (i.e. the RHS of \[\[eq:rho.from.local.approx.II\]\] equals zero), so that $\left. \frac{d\alpha}{dz} \right|_{z=0} = 0$. Thus the integration constant $C$ is given by : $$\begin{aligned} C &=& -\rho_{gas}(0) \; \left\{ \; \left( 1 + \rho'_{s,g} \; p_{sg} \right) \; + \; (\rho'_{h,g} + \rho'_{r,g} ) \; \ln{\rho_{gas}(0)} \; \right\} \; , \label{eq:integration.constant.C}\end{aligned}$$ where the primed variables are defined as follows : $$\begin{aligned} \rho'_{s,g} &=& \frac{\rho_{stars}(0)}{\rho_{gas}(0)} \; ; \; \hspace{0.2cm} \rho'_{h,g} = \frac{\rho_{halo }(0)}{\rho_{gas}(0)} \; ; \; \hspace{0.2cm} \rho'_{r,g} = \frac{\rho_{rot }(0)}{\rho_{gas}(0)} \; .\end{aligned}$$ Thus : $$\begin{aligned} \sqrt{ \frac{-2}{A_{gas}} } \; \left\{ - \left( \rho_{gas} + \frac{C_s}{p_{gs}} \left( \rho_{gas} \right)^{p_{gs}} + (\rho_{halo} + \rho_{rot}) \; \ln{\rho_{gas}} + C \right) \right\}^{+0.5} \nonumber\end{aligned}$$ $$\begin{aligned} \hspace*{2cm} &=&\frac{d\alpha}{dz} \; . \nonumber\end{aligned}$$ Rearranging terms and integrating yields : $$\begin{aligned} \hspace*{-1cm} \sqrt{ \frac{A_{gas}}{-2} } \; \int_{\rho_{gas}(0)}^{\rho_{gas}(z)} \frac{d\rho_{gas}}{\rho_{gas}} \; \left\{ - \left( \rho_{gas} + \frac{C_s}{p_{gs}} \left( \rho_{gas} \right)^{p_{gs}} + (\rho_{halo} + \rho_{rot}) \; \ln{\rho_{gas}} + C \right) \right\}^{-0.5} \nonumber\end{aligned}$$ $$\begin{aligned} \hspace*{2cm} &=& z(\rho_{gas}(z)) \; . \nonumber\end{aligned}$$ Normalizing the gas distribution, i.e. defining $y=\frac{\rho_{gas}(z)}{\rho_{gas}(0)}$, inserting the integration constant $C$ and using (\[eq:equation.for.rho.gas\]) and (\[eq:equation.for.A\]) I find : $$\begin{aligned} z_{gas}(Y) &=& \sqrt{ \frac{{\mbox{$\sigma_{z,gas}^2$}}}{8 \pi G \; \rho_{gas}(0)} } \; \; \times \nonumber \\*[5mm] && \hspace{-2cm} \int_Y^{1} \frac{dy}{y} \sqrt{ \left( \frac{1} { (1 + \rho'_{s,g} \; p_{sg} ) - (y + \rho'_{sg} \; p_{sg} \; y^{p_{gs}} ) - (\rho'_{h,g} + \rho'_{r,g}) \ln(y) } \right) } \; . \label{eq:solution.of.zRHO}\end{aligned}$$ Notice that the term ($\rho'_{s,g} \; p_{sg}$) equals the ratio of stellar and gaseous kinetic energy in the midplane. For vanishing stellar, halo, and rotational densities (i.e. $\rho'_{s,g}=\rho'_{h,g}=\rho'_{r,g}=0.0$ so that the gas is fully self-gravitating), it can be shown that (\[eq:solution.of.zRHO\]) then reduces to the familiar self-gravitating solution, $\rho_{gas}(z) \propto $ sech$^2(z/z_0)$, indeed. Investigating equation (\[eq:solution.of.zRHO\]), we see that there is no solution possible when the sum of the rotation and DM-halo density is negative (i.e. for a steeply rising rotation curve) because at small fractional densities, $Y$, the logarithmic term goes to $-\infty$ and hence dominates, leading to a negative term inside the square root. Bottema (1987), applying the local approach, find that the rotation term can contribute significantly in the steeply rising parts of galactic rotation curves. Obviously, since galaxies do not have holes in those regions where the rotation curve rises, at least one of the assumptions must break down. 
Most likely, deviations from cylindrical symmetry become important and the equation of hydrostatic equilibrium is no longer a good approximation to the vertical Jeans equation. An iterative procedure is followed to find the vertical density distribution of the gas. Starting from initial guesses for stars and gas, a solution of (\[eq:solution.of.zRHO\]) is calculated, which provides a new (normalized) z-distributions for the first component. Note that this equation can be inverted to calculate the vertical distribution of the stars from the z-distribution of the gas merely by interchanging the $g$ and $s$ indices. In order to eliminate numerical instabilities, I chose the calculated component (i.e. the LHS of \[\[eq:solution.of.zRHO\]\]) to be the component with the smallest velocity dispersion. The normalized distribution, $z_{gas}(Y)$ is then inverted to yield $\rho_{gas}(z)$ which is then scaled so that its z-integral yields the appropriate surface density. The stellar density distribution is then calculated by raising the gaseous distribution to the power $p_{gs}$ (i.e. applying the equivalent of equation \[\[eq:equation.for.rho.gas\]\]) followed by the surface density normalization. From the new midplane values the next solution of (\[eq:solution.of.zRHO\]) is calculated, ... until consecutive changes become small. For a given stellar velocity dispersion, the stellar vertical density distribution changes in each iteration step. In order to keep the thickness of the stellar disk fixed, as suggested by observation ( vdKS81a&b; vdKS82a&b; BD93), I change the stellar velocity dispersion after each iteration step accordingly. This procedure can be easily expanded to include “N” isothermal components. The local dark matter density can be determined in an iterative manner as well. An upper limit to the local dark matter density can be found by assuming that all other components are massless. Using the Gaussian-equivalent method (equation \[\[eq:gas.width.MOST.local\]\]), the measured width and velocity dispersion of the gas layer the upper limit of the dark matter density is calculated. A lower limit to the local dark matter density is of course zero. For a values of the dark matter density halfway between the lower and upper limit, the model gas-layer thickness is calculated and compared with the observed value. If the calculated thickness corresponding to this trial DM-density value is larger than the observed value, then the lower bound is increased to the trial value. This procedure is repeated until convergence is reached. Model calculations show that beyond the optical disk, it is possible to accurately recover the input (into eqn. \[\[eq:solution.of.zRHO\]\]) DM densities from simulated thickness measurements. Four Special Cases {#sec-Four-Special-Cases} ================== Here I describe the gaseous distribution in the vertical direction for four interesting limiting cases with very different flaring behavior. In the first three of those, the rotation curve is assumed to be constant, while the fourth case represents gas in a region where the rotation curve is falling. In all but the self-gravitating case, the gas is treated as a massless testparticle component. The results are listed below, where all widths, [$W_{gas}$($\approx FWHM/2.35$, as defined by eqn. \[\[eq:gas.width.approximate\]\])]{}, are expressed in kpc. 
$$\begin{aligned} && \hspace*{-3.5cm} {\bf Fully \; self-gravitating \; gas :} \nonumber \\*[3mm] W_{gas}^{self}(R) &\approx& 6.74 \left( \frac{{\mbox{$\sigma_{10}^2$}}}{\Sigma_1(R)} \right) \label{eq:self.gravitating} \\*[5mm] && \hspace*{-3.5cm} {\bf Gas \; in \; a \; stellar \; disk : } \nonumber \\*[3mm] W_{gas}^{stars}(R) &\approx& 0.08 \sqrt{ \left(\frac{z_{0,400}}{\mu_{100}(0) \; {\cal M/L}_2 } \right) } \times {\mbox{$\sigma_{10}$}} \; \; \exp{\frac{R}{2h_R} } \label{eq:in.stellar.disk} \\*[5mm] && \hspace*{-3.5cm} {\bf Gas \; in \; a \; (flattened) \; dark \; halo \; mass \; distribution :} \nonumber \\*[3mm] W_{gas}^{halo}(R) &\approx& \sqrt{ q \; \left( \frac{2.436}{1.436\; +\; q} \right) } \times \left( \frac{{\mbox{$\sigma_{10}$}}}{V(\infty;1)_{halo,100}} \right) \sqrt{ R_{c,10}^2 + ( R_{10}/{\cal C}(q))^2 } \label{eq:in.dark.halo} \\*[5mm] && \hspace*{-3.5cm} {\bf Gas \; in \; Keplerian \; falloff \; region :} \nonumber \\*[3mm] W_{gas}^{rot}(R) &=& 0.484 \; \frac{{\mbox{$\sigma_{10}$}}}{\sqrt{M_{tot,11}}} \; R_{10}^{\frac{3}{2}} \label{eq:gas.width.rho.ROT}\end{aligned}$$ where is the gaseous velocity dispersion in units of 10 km/s, $\Sigma_1(R)$ the gaseous surface density in units of solar masses per pc$^2$, $z_{0,400}$ the scale height of the stars in units of 400 pc, $\mu_{100}(0)$ the central surface brightness in units of 100 $L_{\odot}/{\rm pc}^2$, ${\cal M/L}_2$ is the number of solar masses per solar luminosity, $h_R$ is the radial scale length of the (exponential) stellar disk, $V(\infty;1)_{halo,100}$ the asymptotic rotation velocity of the round dark halo in units of 100 km/s, $R_{c,10}$ the core radius of the dark halo in units of 10 kpc, $R_{10}$, the galactocentric radius in 10 kpc, ${\cal C}(q)$, is the core radius correction factor arising from the flatness,$q$, of the DM halo (is of order unity, see equation \[\[eq:Rc.versus.q\]\]), and $M_{tot,11}$ the total mass of the galaxy in units of $10^{11} \; M_{\odot}$. Notice that the thickness of the gas layer in a potential which is completely dominated by the rotation density is of the same order of magnitude as a gas layer in a DM-halo potential only. Such situations may arise for galaxies where the rotation curve falls in a close to Keplerian manner. If one really wants to use the local approach (e.g. for quick estimation purposes) instead of the global approach, one might as well use the approximation to equation (\[eq:solution.of.zRHO\]) I found to be accurate to about 10% over a wide range of parameters. 
$W_{gas}^{approx}$ can be found from : $$\begin{aligned} \left( \frac{1} {W_{gas}^{approx}} \right)^2 &=& \left( \frac{w_g}{W_{gas}^{self}} \right)^2 + \left( \frac{1} {W_{gas}^{no-self}} \right)^2 \; , \label{eq:our.approx} \\*[3mm] && \hspace{-2cm} {\rm where } \nonumber \\*[3mm] \left( \frac{1}{W_{gas}^{no-self}} \right)^2 &=& \left( \frac{1}{W_{gas}^{stars}} \right)^2 + \left( \frac{w_{hr}}{W_{gas}^{halo}} \right)^2 + \left( \frac{w_{hr}}{W_{gas}^{rot}} \right)^2 \; , \label{eq:gas.no.self}\end{aligned}$$ $$\begin{aligned} && \hspace{-2cm} { \rm with } \nonumber \\*[3mm] w_g &=& \frac{ \left( 1.15 \; \Sigma_{gas} + 1.55 \; (\Sigma_{stars}+\Sigma_{halo}+\Sigma_{rot} ) \right) } { \left( \Sigma_{gas}+\Sigma_{stars}+\Sigma_{halo}+\Sigma_{rot} \right) } \; , \nonumber \\*[3mm] w_{hr} &=& \frac{ \left( 1.20 \; \Sigma_{stars} + 1.05 \; (\Sigma_{halo}+\Sigma_{rot} ) \right) } { \left( \Sigma_{stars} + \Sigma_{halo} + \Sigma_{rot} \right) } \; ,\nonumber \\*[3mm] && \hspace{-2cm} {\rm and } \nonumber \\*[3mm] \Sigma_{halo} &=& \rho_{halo} \times W_{gas}^{no-self} \hspace{0.5cm} {\rm and } \hspace{0.5cm} \Sigma_{rot} = \rho_{rot} \times W_{gas}^{no-self} \; , \nonumber\end{aligned}$$ where the relations for the halo and rotation surface densities are evaluated (eqn. \[\[eq:gas.no.self\]\]) using $w_{hr}=1.0$. The widths, $W_{gas}^{i}$, are the widths as found for the special cases ( according to the equations \[\[eq:self.gravitating\]\], \[\[eq:in.stellar.disk\]\], \[\[eq:in.dark.halo\]\] and \[\[eq:gas.width.rho.ROT\]\]). Inside the stellar disk, the width of the gas layer is mostly determined by the gas-inside-stellar-disk term, $W_{gas}^{stars}$. Beyond, the halo-term, $W_{gas}^{halo}$, dominates. The self-gravity of the gas perturbs the dominant term slightly, but with a different numerical coefficient for the two regimes because of the different form of the un-perturbed z-distribution functions (sech$^{2p_{sg}}(z/z_0)$ versus Gaussian respectively). I have tested this approximation for several cases. The first case corresponds to BE89’s mass distribution for NGC 3198. The second case corresponds to a $\gamma$=0.6-disk in a flattened ($q=0.1$) dark halo, thereby minimizing the stellar and maximizing the dark halo contribution. The third case consists of BE89’s halo and stellar mass model but with the gas surface density multiplied by a factor of ten. In the last test case I also multiply the halo densities by a factor of ten. In all these cases equation (\[eq:our.approx\]) recovers the exact solution to equation (\[eq:solution.of.zRHO\]) to within 10 %. Albada, T.S., van, Bahcall, J.N., Begeman, K., Sancisi, R., 1985, , 295, 305 (ABBS85) Albada, T.S. van, Sancisi, R., 1986, Phil. Trans. R. Soc. Lond. A., 320 447 Athanassoula, E, Bosma A., Papaioannou, P., 1987,, 179, 23 (ABP87) Bahcall, J.N., 1984 , , 276, 156 Bahcall, J.N., Casertano, S., 1985, , 293, L7 Barteldrees, A., Dettmar, R.J., 1993, , 1994, 103, 475 Begeman, K.G., 1987, Ph. D. Thesis, Rijksuniversiteit te Groningen Begeman, K.G., 1989, , 223, 47 (BE89) Blitz, L., Magnani, K., Mundy, L., 1984, , 282, L9 Bosma, A., 1978, Ph.D. Thesis, University of Groningen Bottema, R, 1993, , 275, 16 Bottema, R, van der Kruit, P.C., Freeman, K.C., 1987, , 178, 77 Broeils, A.H., 1992, Ph. D. 
Thesis, University of Groningen Carignan, C., Freeman, K.C., 1988, , 323, L33 Casertano, S., 1983, , 203, 735 Casertano, S., van Gorkom, J.H., 1991, , 101, 1231 Corradi,R.L., Boulesteix,J., Bosma,A., Amram,P., Capaccioli,L., 1991, 244, 27 Dekel, A, Shlosman, I, 1983, in “Internal Kinematics and Dynamics of Galaxies”, IAU Symposium 100, ed. E. Athanassoula (Dordrecht, Reidel,197) Dickey, J.M., Hanson, M.M, Helou, G., 1990, , 352, 522 Efstathiou, G, Lake, G., Negroponte, J, 1982, , 199, 1069 Gilmore, G., Reid, N., 1983, , 202, 1022 Hoffner, P., Sparke, L., 1994, , 428, 466 Van der Hulst, J.M., Terlouw, J.P., Begeman, K.G., Zwitser, W. and Roelfsema, P.R., 1992, in “Astronomical Data Analysis Software and Systems I”, ed. D.M. Worall, C. Biemesderfer and J. Barnes, PASP Conf Series No. 25, p. 131 Kamphuis, J., 1993, Chapter 12, Ph.D. Thesis Rijksuniversiteit te Groningen Kent, S, 1987, , 93, 816 Kruit, P.C. van der, 1981, , 99, 298 Kruit, P.C. van der, 1986, , 157, 230 Kruit, P.C. van der, 1988, , 192, 117 Kruit, P.C. van der, Searle, L., 1981a, , 95,105 (vdKS81a) Kruit, P.C. van der, Searle, L., 1981b, , 95,116 (vdKS81b) Kruit, P.C. van der, Searle, L., 1982a, , 110, 61 (vdKS82a) Kruit, P.C. van der, Searle, L., 1982b, , 110, 79 (vdKS82b) Kruit, P.C. van der, Shostak, G.S., 1982, , 105, 351 Kuijken, K., Gilmore, G., 1989, , 239, 571 (KG89) Kundić T., Hernquist, H., Gunn, J.E., 1993, p 592 in “Back to the Galaxy”, AIP Conference proceedings \# 278, Editors S.S. Holt, F. Verter Lake, G., Feinswog, L., 1989, , 98, 166 (LF89) Malhotra, S., 1994, , 433, 687 Maloney, P., 1992, p. 117, in “The Evolution of Galaxies and their Environment”, 1992, NASA conference Publication 3190, Editors D. Hollenbach, H. Thronson, J.M. Shull Maloney, P., 1993, , 414, 41 (M93) Merrifield, M.R., 1992, , 103, 1552 Olling, R., Gorkom, J.H. van, 1992, p. 374 in “The Evolution of Galaxies and their Environment”, 1992, NASA conference Publication 3190, Editors D. Hollenbach, H. Thronson, J.M. Shull Olling, R., Gorkom, J.H. van, 1995, in “Dark Matter”, 5$^{th}$ Astrophysics Conference in Maryland, AIP Conference proceedings (in press) Persic, M., Salucci, P., 1988, ,243, 131 Pfenniger, D., Combes, F., Martinet, L., 1994a, , 285, 79 Pfenniger, D., Combes, F., 1994b, , 285, 94 Press, W.H., Flannery, B.P, Teukolsky, S.A., Vetterling, W.T., 1990, in “Numerical Recipes”, Cambridge University Press Pritchet, C., 1983, , 88, 1476 Rubin,V.C., Ford, W.K., Thonnard, N., 1980, , 238, 471 Rupen, M.P., 1991, Ph.D. thesis, Princeton University Sackett, P.D, Sparke, L.S., 1990, , 361, 408 (SS90) Sackett, P.D., Rix, H.W., Jarvis, B.J., Freeman, K,C., 1994, , 436, 629 (SRJF94) Sackett, P.D., Morrison, H.L., Harding, P., Boroson, T.A., 1994b , Nature, 370, 11 August p. 441 Sancisi, R., Allen, R.J., 1979, , 74, 73 Sparke, L, Casertano, S, 1988, , 234, 873 Shostak, G.S., Kruit, P.C. van der, 1984, , 132, 20 Toomre, A, 1983, in “Internal Kinematics and Dynamics of Galaxies” , IAU Symposium 100, ed. E. Athanassoula (Dordrecht, Reidel,177) Wainscoat, R.J., Freeman, K.C., Hyland, A.R., 1989, , 337, 163 Yoshi, Y.,Ishida, K., Stobie, R.S, 1987, , 93, 232 Table 1. 
$\frac{R_{max}}{h_R}$ ----------------------- ------- ------- ------- ---------- ------- ------- ------- ---------- ------- ------- ------- ---------- $f_m$ $f_6$ $f_8$ $f_{10}$ $f_m$ $f_6$ $f_8$ $f_{10}$ $f_m$ $f_6$ $f_8$ $f_{10}$ 2.0 0.807 0.516 0.443 0.395 0.798 0.518 0.445 0.397 0.767 0.528 0.454 0.405 2.5 0.841 0.553 0.473 0.421 0.809 0.563 0.482 0.429 0.778 0.574 0.492 0.438 3.0 0.774 0.612 0.522 0.464 0.748 0.623 0.531 0.472 0.722 0.633 0.540 0.480 3.5 0.756 0.647 0.549 0.487 0.731 0.658 0.559 0.496 0.706 0.668 0.568 0.504 4.0 0.750 0.670 0.567 0.502 0.724 0.682 0.576 0.511 0.700 0.693 0.586 0.519 4.5 0.747 0.688 0.578 0.511 0.722 0.700 0.588 0.520 0.697 0.711 0.598 0.529 5.0 0.746 0.703 0.587 0.518 0.720 0.714 0.597 0.527 0.696 0.726 0.607 0.536 5.5 0.746 0.718 0.592 0.522 0.720 0.729 0.603 0.532 0.695 0.740 0.613 0.541 6.0 0.745 0.734 0.597 0.525 0.720 0.743 0.607 0.535 0.695 0.753 0.617 0.544 6.5 0.745 0.724 0.600 0.528 0.719 0.735 0.611 0.537 0.695 0.745 0.621 0.546 7.0 0.745 0.722 0.603 0.529 0.719 0.733 0.614 0.538 0.695 0.743 0.624 0.548 $\infty$ 0.745 0.721 0.607 0.534 0.719 0.732 0.617 0.543 0.695 0.742 0.627 0.552 Table 1. (continued) $\frac{R_{max}}{h_R}$ ----------------------- ------- ------- ------- ---------- ------------------ ------------------ ------------------ --------------------- $f_m$ $f_6$ $f_8$ $f_{10}$ $\overline{f_m}$ $\overline{f_6}$ $\overline{f_8}$ $\overline{f_{10}}$ 2.0 0.748 0.534 0.460 0.410 0.780 0.524 0.451 0.402 2.5 0.749 0.584 0.501 0.446 0.794 0.569 0.487 0.434 3.0 0.698 0.643 0.549 0.488 0.736 0.628 0.536 0.476 3.5 0.683 0.679 0.577 0.513 0.719 0.663 0.563 0.450 4.0 0.677 0.703 0.596 0.528 0.713 0.687 0.581 0.515 4.5 0.674 0.722 0.608 0.538 0.710 0.705 0.593 0.525 5.0 0.673 0.737 0.617 0.545 0.709 0.720 0.602 0.532 5.5 0.672 0.751 0.623 0.550 0.708 0.735 0.608 0.536 6.0 0.672 0.762 0.628 0.553 0.708 0.748 0.612 0.539 6.5 0.672 0.755 0.631 0.555 0.708 0.740 0.616 0.542 7.0 0.672 0.754 0.634 0.557 0.708 0.736 0.619 0.543 $\infty$ 0.672 0.752 0.637 0.561 0.708 0.737 0.622 0.548 Table 2. lhs-range of (\[eq:Rc.solve\]) $a_{6,0}$ $a_{6,1}$ $a_{6,2}$ $R_c/h_R$-range -------------------------------- ------------ ------------ ------------ ------------------- $\in$ (1.00,1.90 \] 1.000 0.4300 0.1140 $\in$ ( 0 , 1.5\] $\in$ (1.90,3.04 \] 0.7363 0.7734 -0.001442 $\in$ ( 1.5, 3 \] $\in$ (3.04,4.76 \] 0.2709 1.097 -0.05814 $\in$ ( 3 , 6 \] $\in$ (4.76,5.81 \] 1.400 0.7439 -0.03038 $\in$ ( 6 ,10 \] lhs-range of (\[eq:Rc.solve\]) $a_{8,0}$ $a_{8,1}$ $a_{8,2}$ $R_c/h_R$-range $\in$ ( 1.00, 2.10\] 1.000 0.4884 0.1645 $\in$ ( 0 , 1.5\] $\in$ ( 2.10, 3.71\] 0.7252 0.8293 0.05622 $\in$ ( 1.5, 3 \] $\in$ ( 3.71, 6.76\] -0.3281 1.506 -0.05400 $\in$ ( 3 , 6 \] $\in$ ( 6.76, 9.16\] -0.01879 1.4511 -0.05341 $\in$ ( 6 ,10 \] lhs-range of (\[eq:Rc.solve\]) $a_{10,0}$ $a_{10,1}$ $a_{10,2}$ $R_c/h_R$-range $\in$ ( 1.00, 2.23\] 1.000 0.5209 0.2007 $\in$ ( 0 , 1.5\] $\in$ ( 2.23, 4.20\] 0.7508 0.8190 0.1103 $\in$ ( 1.5, 3 \] $\in$ ( 4.20, 8.47\] -0.6623 1.702 -0.02960 $\in$ ( 3 , 6 \] $\in$ ( 8.47,12.55\] -1.696 2.100 -0.06758 $\in$ ( 6 ,10 \] [^1]: The optical parameters ($L(0)$=207.1 L$_{\odot}$/pc$^2$, $h_R=2.3$ kpc, $z_e=0.23$ kpc), gaseous surface density distribution and rotation curve (V$_{obs}(2.3h_R)=146.4$, V$_{obs}(10.0h_R)=148.0$, V$_{gas}(2.3h_R)=13.4$ and V$_{gas}(10.0h_R)=38.0 \;$ km/s) were taken from BE89 who used Kent’s (1987) photometry to calculate the rotation curve due to the stellar disk. The stellar disk is truncated at 6$h_R$ (van der Kruit, 1988). 
[^2]: I thank the referee, Phil Maloney, for pointing this out. [^3]: In the Galaxy, $\sigma_{RR}$ and $\sigma_{zz}$, as measured by Malhotra’s (1994) and Blitz (1984) are almost equal : 7.8 $\pm$ 3 versus 5.7 $\pm$ 1.2 km/s. [^4]: \[footnote.width.estimator\] Since the [*shape*]{} of the potential changes with galactocentric radius, the gaseous density distribution will also change shape. Thus, the accuracy of the approximate density distribution (eqn. \[\[eq:gas.width.approximate\]\]) will also change with galactocentric radius. Other indicators for the width of the gas layer could be used as well. For example the normalized second moment of the density distribution, $ W_{{\rm MOM2}} = \sqrt{ \int_{-\infty}^{+\infty}{z^2 \; \rho(z) \; dz } \; / \; \int_{-\infty}^{+\infty}{ \rho(z) \; dz } }, $ is more sensitive to the high-z parts of the density distribution, yielding [*larger*]{} values for the “width” of the gas layer. On the other hand, the width could be calculated from the Full Width at Half Maximum ($W_{FWHM} = FWHM/2.35$), a calculation which is more sensitive to the low-z part of the density distribution, so that [*smaller*]{} values for the “width” are found. Significant differences between these three width measures indicate that the true density distribution is not Gaussian. For the toy model discussed below the $W_{FWHM}$ and $W_{MOM2}$ width estimators can be as much as 20% larger and smaller than $W_{gas}$ respectively (inside the optical disk). Beyond the optical disk, the gaseous distribution is close to Gaussian. [^5]: In this paper I do not include any thick disk component. Note that these isothermal and exponential vertical distributions have the same slope at large z-heights. Thus for a given surface density and high-$z$ slope, the exponential distribution has a twice larger value of the midplane density than the isothermal distribution (see also van der Kruit, 1988). [^6]: This accuracy equals ten times the accuracy obtained in the integrating routines. [^7]: The effect of the scaling of the gaseous surface density has been taken into account while performing the disk-halo decomposition. [^8]: A channel map is an image (of a galaxy) taken in a “narrow band filter” : 2.56 km/s wide in this case. [^9]: i.e. become smaller than a few times the DM density. | Mid | [
0.56269113149847, 23, 17.875 ] |
This paper describes an environment for supporting very large ontologies. The system can be used on single PCs, workstations, a cluster of workstations, and high-end parallel supercomputers. The architecture of the system uses the secondary storage of a relational database system, efficient memory management, and (optionally) parallelism. This allows us to answer complex queries over very large ontologies in a few seconds on a single-processor machine and in fractions of a second on parallel supercomputers. The main contribution of our approach is the open architecture of the system on both the hardware and the software levels, allowing us to easily translate existing ontologies for our system's use and to port the system to a wide range of platforms.
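The abstract gives no implementation details, so the following is only an illustrative sketch of the general idea it describes: answering a structural query over a very large ontology from relational secondary storage rather than from main memory. The table name (subclass_of), its columns, and the use of a recursive SQL query are assumptions made for this example, not details taken from the paper.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public final class OntologyStore {
        /** All direct and indirect subclasses of a concept, computed inside the database. */
        public static List<String> subclassesOf(Connection db, String concept) throws SQLException {
            // Hypothetical schema: subclass_of(child, parent); the recursion runs in the database.
            String sql =
                "WITH RECURSIVE sub(id) AS ("
                + "  SELECT child FROM subclass_of WHERE parent = ?"
                + "  UNION"
                + "  SELECT s.child FROM subclass_of s JOIN sub ON s.parent = sub.id)"
                + " SELECT id FROM sub";
            List<String> result = new ArrayList<>();
            try (PreparedStatement st = db.prepareStatement(sql)) {
                st.setString(1, concept);
                try (ResultSet rs = st.executeQuery()) {
                    while (rs.next()) {
                        result.add(rs.getString(1));
                    }
                }
            }
            return result;
        }
    }

Pushing the transitive closure into the database keeps only the answer set in application memory, which is the kind of division of labour the abstract attributes to its use of a relational database for secondary storage.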
| High | [ 0.67156862745098, 34.25, 16.75 ] |
H. Li, Y. Shi, G. Han, J. Liu, J. Zhang, C. Li, J. Liu, Y. Yi, T. Li, X. Gao, C. Di, J. Huang, Y. Che, D. Wang, W. Hu, Y. Liu, L. Jiang, *Angew. Chem. Int. Ed.* **2020**, *59*, 4380.

Organic two-dimensional (2D) semiconductor materials have had an unprecedented impact on materials science, physics, chemistry, and industry[1] since the emergence of monolayer crystalline active layers in organic field-effect transistors (OFETs).[1a,2] The charge density and charge transport in the conducting channel of OFETs are not only modulated by the gate bias but are also affected by external stimuli, such as the presence of specific chemicals, which enables their use in chemical sensing.[2d,3] Moreover, OFET-based sensors, in which the OFET acts as both signal transducer and signal amplifier, substantially simplify the structure of traditional sensing devices. However, the sensing performance of thin-film OFET-based sensors is limited by the diffusion of analyte molecules to the conducting channel.[3e,4] To improve the sensing performance, many efforts have been devoted to alleviating this diffusion barrier, either by introducing a porous structure into organic semiconductor (OSC) films[3e,5] or by decreasing the film thickness down to a few layers[6] or even monolayers.[3b,3f,7] Recently, exciting progress has been achieved with OFET-based sensors for ammonia detection,[8] not only because NH~3~ detection is of crucial importance for environmental protection, but also because ammonia is a biochemical clue for cirrhotic disease and chronic kidney disease.[9] Huang et al. reported OFET-based NH~3~ sensors exploiting p-type OSCs, since the electrostatic interaction between NH~3~ and the OSC leads to a decrease in current, with a detection limit of about 350 ppb (summarized in the Supporting Information, Table S1).[8b] Further improvement of the detection limit (to ca. 10 ppb) was achieved by decreasing the thickness of the semiconductor layer to a few molecular layers, for example monolayer films[3f,8g] and monolayer molecular crystals (MMCs).[8g] Introducing a porous structure has also been shown to be an alternative strategy to tailor the sensitivity, enabling NH~3~ detection limits of 1–10 ppb.[3e,5c] However, the direct bottom-up assembly of porous two-dimensional molecular crystals for ultrasensitive NH~3~ sensing remains very scarce. Herein, we report n-type OFET-based MMCs for NH~3~ detection with an in situ formed porous structure.
Since NH~3~ molecules exhibit a five times higher binding affinity to the core of NDI3HU-DTYM2[10] (NDI, Figure 1a) than to its side groups (Supporting Information, Figure S1), the porous structure is expected to significantly improve the sensitivity and detection limit. The calculated results indicate that ammonia absorbed between distant NDI molecules can give rise to an important long-range super-exchange coupling and thus improve the charge-transport performance (Figure 1b; Supporting Information, Figure S2). The as-prepared porous OFET-based sensors demonstrated a record NH~3~ detection limit (0.1 ppb). Moreover, the OFET-based MMCs were further extended to the sensing of solid amine derivatives, namely dopamine, with a detection limit of 500 ppb, corroborating the versatility of these OFET-based chemical sensors. These results open up the possibility of constructing next-generation high-performance sensing devices with high resolution for industrial gas detection or biological diagnosis.

[Figure 1]

A simple drop-casting strategy was employed to controllably tune the structural parameters, that is, monolayer or multilayer, porous or nonporous structure, and pore size. As illustrated in Figure 1c, 5–30 μL of NDI in chlorobenzene at different concentrations was drop-cast onto the surface of plasma-treated SiO~2~/Si^++^ or divinyltetramethylsiloxane-bis(benzocyclobutene) derivative (BCB)-treated SiO~2~/Si^++^ substrates. After solvent evaporation, films with regular shapes were obtained under ambient conditions (Figure 2a,b). Crystalline films were confirmed by their homogeneous brightness under polarized illumination (Supporting Information, Figure S3).

[Figure 2]

The thickness of the crystals was assessed by atomic force microscopy (AFM); the thicknesses of these samples are 1.8–2.1 nm, as shown in Figure 2c,d, which corresponds to the monolayer height of the lamellar films reported previously,[8c] indicating the formation of monolayer crystals. The NDI multilayer crystals were also evidenced by the *h*00 peaks in X-ray diffraction patterns (Supporting Information, Figure S4), which correspond to a layer-by-layer growth manner. The d-spacing is 2.08 nm, approximately equal to or slightly larger than the thickness of the ultrathin crystals, further confirming their monolayer thickness. In contrast to the smooth morphology obtained on BCB/SiO~2~/Si^++^, nanopores were clearly observed for crystals grown on SiO~2~/Si^++^. However, the packing motifs of these MMCs were similar, as evidenced by high-resolution atomic force microscopy (HR-AFM), where similar packing parameters of *a*=9.34 Å, *b*=7.39 Å, *θ*=74.67° and *a*=9.71 Å, *b*=7.39 Å, *θ*=73.49° were extracted for the porous and nonporous MMCs (Figure 2e,f). The thickness and the pore size of the MMCs can also be tuned by changing *C*~NDI~ (Supporting Information, Figures S5–S8). To reveal the dependence of the NDI MMC morphology on the substrate, the surface free energies of the NDI crystals and the substrates were investigated.
The wetting envelopes are shown in Figure 1d.[11] Chlorobenzene exhibited similar wettability on the NDI crystal and on the BCB substrate, resulting in a steady solvent-receding effect during crystal growth at the BCB/NDI nucleus interface; therefore, homogeneous and smooth MMCs were obtained (Figure 2b). By contrast, the large difference between the wettability of chlorobenzene on the NDI crystal and on SiO~2~/Si^++^ results in an unsteady solvent-receding line at the SiO~2~/crystal nucleus interface, which can lead to the formation of a porous structure (Figure 2a). As anticipated, the porous MMCs formed on SiO~2~/Si^++^ provide a great opportunity for fabricating ultra-high-performance gas sensors based on OFET devices (Supporting Information, Figure S9). To probe their NH~3~ sensing performance, the changes in the output current were recorded in sampling mode. In our work, the porous MMC-OFETs showed a 120 % current increase upon exposure to 1 ppb NH~3~. Plots of (*I*−*I*~0~)/*I*~0~ against NH~3~ concentration (from 1 ppb to 1 % (*v*/*v*), diluted by N~2~) are summarized in Figure 3a and the Supporting Information, Figure S10, for both porous and nonporous MMC-OFETs. As shown in Figure 3a and Figure S10, the current gradually increased with increasing *C*~NH3~ and then leveled off when the concentration reached 1 ppm. When the detection limit of the MMC-OFETs was probed at 0.1 ppb, the porous device underwent a current increase of 72 %, while no current variation was observed for the nonporous MMC-OFETs (Figure 3b). To the best of our knowledge, this is the first report of sub-ppb-level NH~3~ detection by OFET-based sensors (Supporting Information, Table S1).

[Figure 3]

The nonporous MMC-OFETs exhibited only a 73 % response after 1 ppb NH~3~ exposure, and their *I*~D~ kept increasing at higher *C*~NH3~; no upper sensing limit was detected over the whole tested *C*~NH3~ range, which may complement the porous MMC-OFETs for sensing at higher *C*~NH3~. The porous device exhibited superior sensitivity to the nonporous device over the whole tested *C*~NH3~ range (Figure 3a). Moreover, in addition to the substantially improved sensitivity, the response of the porous MMCs was faster than that of the nonporous MMCs (Supporting Information, Figure S11). For comparison, the sensing performance of multilayer-crystal OFETs was also studied; these showed only about 1.8 % and 3.2 % *I*~D~ increases at *C*~NH3~ of 10 ppm and 100 ppm (Supporting Information, Figure S12). Apart from the outstanding sensing properties of the porous MMCs, the effect of the pore size on the sensitivity was also investigated (Supporting Information, Figure S13): the sensitivity was enhanced when the pore diameter increased from 20 to 200 nm. Selectivity studies of the porous MMC-OFET sensors were carried out by exposing the devices to chemical vapors including isopropanol, acetone, alcohol, THF, and chloroform (Supporting Information, Figure S14).
Only a slight increase in current was detected for these volatile organic compounds, even at their saturated concentrations at 25 °C. The outstanding sensitivity and high selectivity of the sensors toward NH~3~ demonstrate the great potential of porous MMC-OFETs in sensing applications. Besides the electrical study based on OFETs, the interaction between NH~3~ molecules and the NDI MMCs was investigated by fluorescence (FL) spectroscopy. Figure 3d shows the variation of the FL spectra of the porous MMCs after exposure to NH~3~ vapors at concentrations ranging from 10 ppb to 1 % (normalized to the pristine intensity at 601 nm). Control experiments were carried out on the nonporous MMCs (Figure 3e) and multilayer crystals (Figure 3f). The FL intensity of the NDI MMCs decreased markedly after NH~3~ exposure. Figure 3g–i shows the relative FL intensity at 601 nm versus log *C*~NH3~; the decrease of the FL intensity is linearly related to the increase of ammonia concentration on semilog coordinates. This relationship is consistent with the electrical results of the OFET-based sensors, but the current change is more pronounced than the FL change in sensing NH~3~, with a lower detection limit and higher sensitivity. For example, at 1 ppm NH~3~ the sensitivity of the electrical signal of the porous MMC sensors is about 34 times higher than that of the FL signal. All the slopes of the linear fits are extracted and summarized in the Supporting Information, Table S2. In the case of the multilayer crystals, the FL intensity remained almost unchanged until the ammonia concentration exceeded 1 %, owing to the bulk thickness and the low fraction of interfacial molecules relative to the bulk crystal. The FL spectral study further confirmed the superiority of the porous MMCs over nonporous MMCs and bulk crystals for gas-sensing applications. Notably, the slope of sensitivity versus log *C*~NH3~ for the electrical signal of the porous MMCs was about 100 times higher than the spectroscopic slope, and that of the nonporous MMCs was 8 times higher than the spectroscopic slope, demonstrating the high resolution of the electrical sensors at low NH~3~ concentrations. In previous studies of OFET-based sensors, the detection of solid chemicals was limited by the diffusion of the solid analytes.[12] However, the directly exposed charge-accumulation layer in MMC-OFETs opens the possibility of sensing solid chemicals. In this work, dopamine powder was used as the solid chemical to evaluate the sensing behavior of the MMC-OFETs toward solid analytes. As shown in Figure 4a, an obvious current increase was observed for the nonporous MMC devices upon exposure to dopamine powder diluted with silicon powder; this device showed a 758 % current increase when 500 ppb (*m*/*m*) dopamine was added. The porous NDI MMC device also showed an obvious response under the same conditions, as shown in Figure 4b. We also compared the sensing performance of the multilayer-crystal OFET-based sensors; as shown in the Supporting Information, Figure S15, they exhibited a negligible response under the same conditions. These results indicate that the monolayer thickness of the conduction channel plays an important role in sensing solid chemicals.
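As a concrete illustration of the two quantities used above, the short program below computes the relative current response (*I*−*I*~0~)/*I*~0~ and the slope of a least-squares line of response versus log *C*, i.e. the semilog relationship described in the text. The calibration numbers are invented placeholders; the actual measurements are only available in the paper's figures.

    public final class SensorResponseFit {

        // Relative response used throughout the paper: (I - I0) / I0.
        static double response(double current, double baseline) {
            return (current - baseline) / baseline;
        }

        public static void main(String[] args) {
            // Hypothetical calibration points: NH3 concentration in ppb and drain current (arbitrary units).
            double[] concentrationPpb = {0.1, 1.0, 10.0, 100.0, 1000.0};
            double[] current          = {1.7, 2.2, 2.9, 3.6, 4.4};
            double baseline = 1.0;

            // Least-squares fit of response against log10(concentration): response ~ a * log10(C) + b.
            int n = concentrationPpb.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                double x = Math.log10(concentrationPpb[i]);
                double y = response(current[i], baseline);
                sx += x; sy += y; sxx += x * x; sxy += x * y;
            }
            double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double b = (sy - a * sx) / n;
            System.out.printf("response = %.3f * log10(C_ppb) + %.3f%n", a, b);
        }
    }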
[Figure 4]

In summary, MMCs with different morphologies were controllably synthesized and their application in sensors was investigated by fabricating MMC-OFETs. Sensors based on porous MMCs exhibited high sensitivity toward NH~3~, with a 72 % response at 0.1 ppb NH~3~, which is the record among OFET-based NH~3~ sensors reported so far. In addition, both electrical and fluorescence changes were detected upon exposure to NH~3~, with the sensitivity of the electrical signal of the porous MMC sensor being about 34 times higher than that of the FL signal. Moreover, the direct detection of solid chemicals was demonstrated for the first time with the MMC-OFETs, and the sensor showed an obvious response to sub-ppm levels of dopamine powder. The easy preparation of MMCs and the ultrasensitivity of the MMC-OFET-based sensors may enable further applications in real-time detection of gaseous and solid chemicals.

Conflict of interest

The authors declare no conflict of interest.

Supporting information

As a service to our authors and readers, this journal provides supporting information supplied by the authors. Such materials are peer reviewed and may be re-organized for online delivery, but are not copy-edited or typeset. Technical support issues arising from supporting information (other than missing files) should be addressed to the authors.

We thank Prof. Ji Liu for the discussion. This work was supported by the Ministry of Science and Technology of China (2016YFB0401100, 2017YFA0204704), the National Natural Science Foundation of China (21873108, 21805284, 51673114, 51973111), the Chinese Academy of Sciences (Hundred Talents Plan, Youth Innovation Promotion Association), and the Strategic Priority Research Program (Grant No. XDB30000000). | High | [
0.658227848101265, 32.5, 16.875 ] |
No team is immune from NBA trade rumors, not even the powerful Golden State Warriors. A fresh rumor has surfaced that has the Warriors involved in a three-team, six-player deal with the Philadelphia 76ers and Toronto Raptors. CBS Sports has reported that the Sixers plan to trade either Nerlens Noel or Jahlil Okafor due to the team’s logjam at the center position. According to Yibada, Golden State is now one of the teams showing interest in Nerlens Noel. The current trade scenario making the rounds online goes as follows: the Golden State Warriors would receive center Nerlens Noel, while the Philadelphia 76ers would emerge from this trade with a package that includes rookie guard Patrick McCaw, swingman Terrence Ross, center JaVale McGee, a 2017 first-round pick from Toronto and a 2019 first-round pick from Golden State. The Toronto Raptors would reportedly bring in power forward Kevon Looney and shooting guard Nik Stauskas. null This complex transaction does qualify as a legal trade under NBA rules, as verified via the ESPN NBA Trade Machine. Is this rumored trade proposal balanced enough so that all three teams would benefit from the deal? Let’s take a look at how this would play out if this six-player, two-draft-choice swap took place. The Golden State Warriors don’t appear to need any additional help, but that surely won’t stop them from trying to make a deal that would make their team even stronger. If this trade goes through, the Warriors will come away with a young center in Nerlens Noel who is an athletic, defensive-minded player who would be a very good fit in Golden State. One thing the team could use is a rim protector and a big man who is athletic enough to keep pace with the run-and-gun Warriors — Nerlens Noel is an ideal player to fill that role. To get Noel, this trade scenario has Golden State sending out three low-level player assets (McCaw, Looney, and McGee) and a draft pick, which in all likelihood will be at or near the end of the first round. This would be a great move for the Warriors to make, much to the chagrin of their competitors who are already struggling to match the firepower the Warriors have at their disposal. The Philadelphia 76ers would be trading Noel and Nik Stauskas for the aforementioned package of three players and two draft choices. Noel is a talented player, but it is clear that the Sixers need to move one of their centers. Nik Stauskas hasn’t lived up to his draft position (No. 8 overall in the 2014 NBA Draft) during his first two years in the league, so Philadelphia would have no problem parting with the six-foot-six shooting guard. Terrence Ross is proven scorer off the bench and is a particularly good three-point marksman. Patrick McCaw is a rookie guard who many observers feel has a chance to be a good player in the NBA, and going to a team like the 76ers would give him a chance to develop since they are seemingly years away from being a playoff contender. The Sixers don’t really have a need for another center, so if this trade were agreed upon, JaVale McGee would probably be released by Philadelphia. The trio of Ross, McCaw and McGee may not be an attractive enough offer to get 76ers general manager Bryan Colangelo excited, but the rumored proposal also includes two future first-round draft picks. 
Either way, this isn’t a home run deal for Philadelphia, but there might be enough value here for Colangelo to bite on this rumored trade offer — especially if he is committed to shipping out Nerlens Noel or Jahlil Okafor, and this is the best deal out there just as the season is about to begin. null For the Toronto Raptors, this trade boils down to a swap of Terrence Ross for Kevon Looney and Nik Stauskas. Toronto is a team on the cusp of being a true contender in the Eastern Conference race, so they are more interested in the present than building for the future. Ross has been a contributor to the Raptors’ recent success, and if the team agrees to this deal, they would be getting two very young players in return who probably would not be factors for Toronto any time soon. Both Looney and Stauskas are players who have potential, but even if the Raptors like them as young prospects, they won’t help the team now. It’s apparent that for the Toronto Raptors, this rumored proposal makes little sense. The trade offer is a decent one, but it doesn’t fit at all with what Toronto is trying to do. If the Raptors were a bad team and primarily looking for assets to help them down the road, Looney and Stauskas would be of more interest to them, but this still may not have been a good enough offer to convince Toronto to pull the trigger. The latest NBA trade rumors are floating the idea of a three-way deal between Golden State, Philadelphia, and Toronto. The rumored proposition would be very advantageous for the Warriors, but it looks to be a solid maybe for the Sixers. The Raptors end of this potential swap is reasonable, but it is doubtful the team would actually go through with a trade that would send out a key rotation player for two long-term projects who may or may not become eventual contributors in the NBA. It appears that Golden State will have to be content going into 2016-17 with their current roster, and that is a problem 29 other teams would love to have. | Mid | [
0.607538802660753, 34.25, 22.125 ] |
Q: RuntimeException on Camera.setParameters() on nexus one

I copied the code from the answer here and I still am getting a "RuntimeException: setParameters failed" error on my Nexus One. My manifest file has camera and wake_lock permissions. This works on the emulator, and on the Droid I don't get the error, but it does have a rotation problem.

A: You're most likely requesting an invalid preview size. If you check the results of adb logcat you'll probably see something like this:

    E/QualcommCameraHardware(22732): Invalid preview size requested: 480x724

The solution is to request the closest available preview size to the one you'd like; you can get a list of available preview sizes by calling getSupportedPreviewSizes in the Camera.Parameters object returned by Camera.getParameters.

A: I corrected this by doing what Roman said, with the code:

    Camera.Parameters parameters = camera.getParameters();
    List<Camera.Size> sizes = parameters.getSupportedPreviewSizes();
    Camera.Size cs = sizes.get(0);
    parameters.setPreviewSize(cs.width, cs.height);
    camera.setParameters(parameters);

A: For what it's worth, the source of my issue ended up being that I was trying to call parameters.setFlashMode(Camera.Parameters.FLASH_MODE_OFF); without first verifying that flash modes were supported by checking that parameters.getFlashMode() != null. There's more than one cause for this poorly documented exception, so check all of your parameters and not just that you're using a supportedPreviewSize.
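Following up on the first answer, a small helper that picks the supported preview size closest to a requested size, instead of hard-coding one, can avoid the exception. This is only a sketch (the helper name and the closest-by-area criterion are my own choices, not from the answers above); the calls it uses (getSupportedPreviewSizes, setPreviewSize, setParameters) are the standard android.hardware.Camera API discussed in this thread.

    import android.hardware.Camera;
    import java.util.List;

    public final class PreviewSizeHelper {
        /** Returns the supported preview size whose area is closest to wantedWidth x wantedHeight. */
        public static Camera.Size closestPreviewSize(Camera.Parameters parameters, int wantedWidth, int wantedHeight) {
            List<Camera.Size> supported = parameters.getSupportedPreviewSizes();
            long wantedArea = (long) wantedWidth * wantedHeight;
            Camera.Size best = supported.get(0);
            long bestDiff = Math.abs((long) best.width * best.height - wantedArea);
            for (Camera.Size s : supported) {
                long diff = Math.abs((long) s.width * s.height - wantedArea);
                if (diff < bestDiff) {
                    best = s;
                    bestDiff = diff;
                }
            }
            return best;
        }
    }

It would replace the sizes.get(0) line from the second answer, for example: Camera.Size cs = PreviewSizeHelper.closestPreviewSize(parameters, 480, 724); parameters.setPreviewSize(cs.width, cs.height); camera.setParameters(parameters);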
| High | [ 0.691472868217054, 27.875, 12.4375 ] |
“Revenge and political motives” drove a 54-year-old Tunisian immigrant to plot and execute the murder of an elderly Austrian couple over their alleged sympathy towards a far-right party, local police said. The chilling double homicide took place in the Austrian town of Linz, and involved a 54-year-old man of Tunisian origin who had lived in the country since 1989, according to police. In 2011, the man – who has not been named – was falsely sentenced for animal abuse following a complaint by a local activist of Austria’s far-right Freedom Party (FPO), an incident which reportedly led to him being stigmatized by the local community. The murder victims, identified as Hildegard Sch., 85, and her husband Siegfried, 87, lived in the same town and regularly ordered food from a grocery store run by the man’s wife. Read more The victims had a “friendly relationship” with the suspect and even supported his family financially, according to Kurier newspaper. The couple’s son, however, was an active member of the FPO, which created false prejudice about the elderly couple’s political affiliations, said Andreas Pilsl, police chief for Upper Austria. "There were classical motives such as revenge and retaliation, coupled by political motives," Pilsl said, adding that the man felt discriminated against by Austria’s employment agency, politicians and especially the FPO party. On the day of a scheduled delivery to the Linz family, the suspect allegedly hid a belt, a wooden club, a knife and a canister of gasoline under his apron, broadcaster ORF reported. Investigators allege that he first strangled the 85-year-old woman, before attacking her husband with the knife and the club, killing him in the assault. The suspect then reportedly set their kitchen on fire, with the victims’ bodies later being discovered by firefighters who were responding to an emergency alert. Afterwards, the 54-year-old suspect turned himself in to the authorities. During questioning he reportedly admitted he had murdered the couple out of revenge, adding that he had even considered drowning himself in the Danube river before surrendering to police. Pilsl said the victims had never been members of the FPO. “They had no links to the party, that is not true,” he said. | Low | [
0.48541666666666605, 29.125, 30.875 ] |
The structure of subtilisin ALP I from alkalophilic Bacillus sp. NKS-21. The gene for an alkaline serine protease from alkalophilic Bacillus sp. NKS-21 (subtilisin ALP I) was cloned, and its nucleotide sequence was determined. The gene (aprQ) contained an open reading frame of 1125 bp, encoding a primary product of 374 amino acids. The mature protease, composed of 272 amino acids, was preceded by a putative signal sequence of 37 amino acids and a pro-sequence of 65 amino acids. The mature protease conserved the catalytic triad, Asp, His, and Ser, as subtilisin BPN' or other subtilisins, and the subtilisin ALP I might belong to the subtilisin super family. The primary structure of subtilisin ALP I was compared and discussed with those of 13 subtilisins, 5 subtilisins from alkalophilic Bacillus, and 8 from neutrophiles. Low homology was shown between subtilisin ALP I and subtilisins from alkalophiles or subtilisins from neutrophiles. Forty-five amino acid residues of the mature protein of subtilisin ALP I were entirely independent of other subtilisins. According to the homology of ALP I with other subtilisins, subtilisin ALP I might be in the middle point between alkaline subtilisins and neutral ones. | High | [
0.6843501326259941, 32.25, 14.875 ] |
client dev tun proto udp remote nl.free.zoogvpn.com 1194 redirect-gateway cipher AES-128-CBC auth SHA1 resolv-retry infinite nobind persist-key persist-tun persist-remote-ip <ca> -----BEGIN CERTIFICATE----- MIIDjTCCAvagAwIBAgIJALBMSxQKBBi6MA0GCSqGSIb3DQEBBQUAMIGMMQswCQYD VQQGEwJVUzENMAsGA1UECBMEVVRBSDESMBAGA1UEBxMJU2FsdCBMYWtlMQ8wDQYD VQQKEwZab29nVFYxEjAQBgNVBAsTCUFNRVIxIFZQTjESMBAGA1UEAxMJWm9vZ1RW IENBMSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QHpvb2d0di5jb20wHhcNMTQwNjA5 MjEyNzU2WhcNMjQwNjA2MjEyNzU2WjCBjDELMAkGA1UEBhMCVVMxDTALBgNVBAgT BFVUQUgxEjAQBgNVBAcTCVNhbHQgTGFrZTEPMA0GA1UEChMGWm9vZ1RWMRIwEAYD VQQLEwlBTUVSMSBWUE4xEjAQBgNVBAMTCVpvb2dUViBDQTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEB6b29ndHYuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB gQC7kIiPpph0xTYtmdEASddEHxgQVWAeg8gv+AzLEvvmcIHjsO4Rl0x765r7PViE FDlrE6ShXUoOLxmS7mLj1fOlMPTnTGS5xr37mQmAjWJllLzoncYSuWOhY0tIOAsE 0R2juwyddyZCnu+iqC/MFLKG3ldgA4dOfVwnbMVcdtj+QQIDAQABo4H0MIHxMB0G A1UdDgQWBBSkH6efW4hbVd5TIBJYrBpMs6/lPjCBwQYDVR0jBIG5MIG2gBSkH6ef W4hbVd5TIBJYrBpMs6/lPqGBkqSBjzCBjDELMAkGA1UEBhMCVVMxDTALBgNVBAgT BFVUQUgxEjAQBgNVBAcTCVNhbHQgTGFrZTEPMA0GA1UEChMGWm9vZ1RWMRIwEAYD VQQLEwlBTUVSMSBWUE4xEjAQBgNVBAMTCVpvb2dUViBDQTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEB6b29ndHYuY29tggkAsExLFAoEGLowDAYDVR0TBAUwAwEB/zAN BgkqhkiG9w0BAQUFAAOBgQAZ/cLbuSDEGZXf+aA5OAQnFmsO4fT8/Gq3B4FMa3mr Ddi2VQ01tGzCalK4KDHRNAkqcuf5ao4suj8XEqi5fyXgvZwn2Cs+We+epDG7WXUn sbNy1hhTFbi+3jl3He84KUoulPO6CFYgN44juuN3qjVklioxg+IhGsoHHe7AHPKw FA== -----END CERTIFICATE----- </ca> fast-io auth-user-pass /config/openvpn-credentials.txt reneg-bytes 0 reneg-sec 0 verb 3 | Low | [
0.506696428571428, 28.375, 27.625 ] |
Comments on: Avalanche’s Giguere predicted ‘unacceptable’ effort versus Dallashttp://nhl.nbcsports.com/2013/03/24/avalanches-giguere-predicted-unacceptable-effort-versus-dallas/ ProHockeyTalk on NBCSports.comThu, 14 Dec 2017 02:17:13 +0000hourly1http://wordpress.com/By: sacharlielima1http://nhl.nbcsports.com/2013/03/24/avalanches-giguere-predicted-unacceptable-effort-versus-dallas/comment-page-1/#comment-153892 Mon, 25 Mar 2013 02:03:24 +0000http://prohockeytalk.nbcsports.com/?p=1358135#comment-153892Well coach Sacco Sucks cant coach Pee Wee if you ask me but while we are sucking i guess we can enjoy being the last place in the NHL Hopefully we will fire Sacco and Take Seth Jones 1st Overall in the 2014 NHL Draft Both Of Those Things will Make Me Very Happy As A Hard Core Avalanche Fan!! ]]>By: nananatmanhttp://nhl.nbcsports.com/2013/03/24/avalanches-giguere-predicted-unacceptable-effort-versus-dallas/comment-page-1/#comment-153699 Sun, 24 Mar 2013 17:19:15 +0000http://prohockeytalk.nbcsports.com/?p=1358135#comment-153699Don’t worry Avs fans the only reason the Flames didn’t lose yesterday is that they didn’t play. When they’re done losing today you will still be tied for last. God I miss the 80’s when my Flames had pride. ]]>By: ak5040http://nhl.nbcsports.com/2013/03/24/avalanches-giguere-predicted-unacceptable-effort-versus-dallas/comment-page-1/#comment-153681 Sun, 24 Mar 2013 15:57:16 +0000http://prohockeytalk.nbcsports.com/?p=1358135#comment-153681True. These games have been awful to watch. Sacco needs to go. Something needs to happen. ]]>By: polegojimhttp://nhl.nbcsports.com/2013/03/24/avalanches-giguere-predicted-unacceptable-effort-versus-dallas/comment-page-1/#comment-153597 Sun, 24 Mar 2013 05:46:29 +0000http://prohockeytalk.nbcsports.com/?p=1358135#comment-153597Very true Dolph….Varl was also bad tonight… a couple VERY weak goals. The Avs have become a ONE period team… the 3rd. And its NOT a talent issue… they can play with the best and snapped the Hawks streak. They just waste 40 minutes every game waiting for their balls to drop, skating soft and confused. Just watch them ‘try’ to move into the offensive zone… a little pressure and it’s a dump. No grit. ]]>By: dolphinculthttp://nhl.nbcsports.com/2013/03/24/avalanches-giguere-predicted-unacceptable-effort-versus-dallas/comment-page-1/#comment-153596 Sun, 24 Mar 2013 05:26:48 +0000http://prohockeytalk.nbcsports.com/?p=1358135#comment-153596another game and sacco has his team ill prepared to compete. I feel awful for av fans to have to endure watching massacre after massacre ]]> | Low | [
0.503787878787878, 33.25, 32.75 ] |
In respect of the rise of popularity of youtube gaming personalities, and the decline of print gaming media and the more contemporary decline of internet gaming web sites: I'd like to ask how much did you see this coming? In starting a video/personality based website it seems like you've surfed a wave and secured the future of the site, and the jobs of your team. It seems remarkably prescient. Where you lucky? Or did you just have your finger on the pulse? Thanks Jeff - all the best. Print’s been on a downward slope since the 90s, it wasn’t hard to see that the internet was going to replace it eventually. Watching the big sites get bigger and more bloated as publishers started to wriggle around and threaten to commandeer standard preview coverage by posting it themselves on YouTube instead of giving exclusives to web sites made it easy to see that what had become the standard model for a video game web site wasn’t going to be viable forever, either. Those sites are largely based on access–they get access to pre-release information and share it, you go there because no one else has it. But, really, we weren’t in a position to even try to compete with the aging huge sites when we launched. We didn’t have the size to play the increasingly dead exclusives game. And the way Google SEO works makes them too entrenched to take on directly. They still are. So we found different areas that weren’t being taken seriously at that level (launching a podcast at GameSpot was like pulling teeth and was never taken especially seriously) and focused on those. Those were usually the things we had the most fun doing, anyway, so it worked out. In some ways, YouTube personalities and Twitch streams feel like they’re poised to do to traditional game sites what traditional game sites did to print. At the same time, struggling publishers could swoop in and start a big Fair Use fight to wrangle ad money out of the YouTube people, which would disrupt the whole thing and probably drive a lot of people away, regardless of the outcome. Also, a lot of people coming up these days seem to not really care about the idea of an independent voice for game coverage. They’re happy to get it directly from the publisher or don’t especially care if a personality is only talking about a product because they’re getting paid to do so. That’s a little disheartening, I suppose, but not especially surprising. I like to think that we have enough of a foot in each world to handle whatever happens from here. We’ll see how it all shakes out. | Mid | [
0.639269406392694, 35, 19.75 ] |
With Montreal being designated an orange zone by the Government of Quebec, Fabrice Labeau, Deputy Provost (Student Life and Learning), updates the McGill community on how this... | Mid | [
0.581196581196581, 34, 24.5 ] |
Q: Windows Phone - LongListSelector grouped items by date and ordered by date

I have my BaseArticle class which has PublishedDate, and I would like to group items by day and then order them by date and time. For now I can group them by day and it works (I created a get property PublishedDateString which returns the date as a string):

    public List<Group<BaseArticle>> GetArticlesGroups()
    {
        return GetItemGroups<BaseArticle>(Articles, a => a.PublishedDateString);
    }

    private static List<Group<T>> GetItemGroups<T>(IEnumerable<T> itemList, Func<T, string> getKeyFunc)
    {
        IEnumerable<Group<T>> groupList = from item in itemList
                                          group item by getKeyFunc(item) into g
                                          orderby g.Key
                                          select new Group<T>(g.Key, g);
        return groupList.ToList();
    }

I am a newbie in LINQ and I don't know how I can edit my GetItemGroups method to order items by date after they're grouped. Thanks

A: You're just missing another orderby statement. I don't have the complete BaseArticle class, so I'm just going to make this very simple by creating a Song/Artist class like so:

    public class sample_model
    {
        public sample_model(string artist, string song, string extra = "")
        {
            this.Artist = artist;
            this.Song = song;
            this.Extra = extra;
        }
        public string Artist { get; set; }
        public string Song { get; set; }
        public string Extra { get; set; }
    }

Then we're going to make a list of the sample_model by doing this:

    private ObservableCollection<sample_model> CreateData()
    {
        ObservableCollection<sample_model> my_list = new ObservableCollection<sample_model>();
        my_list.Add(new sample_model("Faith + 1", "Body of Christ", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Christ Again", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "A Night With the Lord", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Touch Me Jesus", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "I Found Jesus (With Someone Else)", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Savior Self", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Christ What a Day", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Three Times My Savior", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Jesus Touched Me", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Lord is my Savior", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "I Wasn't Born Again Yesterday", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Pleasing Jesus", "A Track"));
        my_list.Add(new sample_model("Faith + 1", "Jesus (Looks Kinda Hot)", "A Track"));
        my_list.Add(new sample_model("Butters", "What What", "B Track"));
        // add a duplicate song for our example
        my_list.Add(new sample_model("Faith + 1", "Body of Christ", "A Track"));
        return my_list;
    }

    // create our list
    ObservableCollection<sample_model> DataSource = CreateData();

At this point we have 2 Artists (Faith + 1 and Butters). We can query and group this list of data by Artist name by doing this:

    var query = from item in DataSource
                group item by item.Artist into ArtistGroup
                orderby ArtistGroup.Key
                select ArtistGroup;

    foreach (var ArtistGroup in query)
    {
        // this will loop twice. Once for Butters and once for Faith + 1
        // it will order by Key which is the Artist name
        // so Butters will be first, then Faith + 1
        // but the Songs will not be in ABC order; they will be ordered the same way we inserted them into the list
    }

Now back to your original question! You want the list to be sorted by Artist and each Artist's Songs to be sorted as well.
Then you must have another orderby statement:

    // before we group by item.Artist, we sort by item.Song using another `orderby`
    var query = from item in svm.DataSource
                orderby item.Song
                group item by item.Artist into ArtistGroup
                orderby ArtistGroup.Key
                select ArtistGroup;

    foreach (var ArtistGroup in query)
    {
        // this will loop twice. Once for Butters and once for Faith + 1
        // it will order by Key which is the Artist name
        // so Butters will be first, then Faith + 1
        // this time the sub list of each Artist will have their Songs sorted as well
        // For example, under Faith + 1 you will see
        // "A Night With the Lord" followed by
        // "Body of Christ" (TWICE) - remember we put 2 in
    }
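For readers coming from outside C#/LINQ, the same two-level ordering (sort the items first, then group them, then walk the groups in key order) looks like this in Java. It is only a cross-language illustration of the answer above; the Article class and its fields are invented for the example and are not the asker's BaseArticle.

    import java.time.LocalDateTime;
    import java.util.*;
    import java.util.stream.Collectors;

    class Article {
        final String title;
        final LocalDateTime published;
        Article(String title, LocalDateTime published) { this.title = title; this.published = published; }
    }

    public class GroupByDay {
        public static void main(String[] args) {
            List<Article> articles = new ArrayList<>(List.of(
                new Article("B", LocalDateTime.of(2013, 3, 1, 18, 0)),
                new Article("A", LocalDateTime.of(2013, 3, 1, 9, 30)),
                new Article("C", LocalDateTime.of(2013, 3, 2, 8, 15))));

            // Sort by the full timestamp first (the "extra orderby"), then group by day;
            // TreeMap keeps the day keys themselves in order.
            articles.sort(Comparator.comparing(a -> a.published));
            Map<String, List<Article>> byDay = articles.stream()
                .collect(Collectors.groupingBy(
                    a -> a.published.toLocalDate().toString(),
                    TreeMap::new,
                    Collectors.toList()));

            byDay.forEach((day, items) ->
                System.out.println(day + " -> "
                    + items.stream().map(a -> a.title).collect(Collectors.joining(", "))));
        }
    }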
| Mid | [ 0.6009693053311791, 23.25, 15.4375 ] |
Thursday, February 16, 2012

Giambattista Valli Haute Couture '12 Collection 2

Giambattista Valli is one of the most beautiful names in the world of fashion. The creations of the brand are always artistically feminine. A piece by Giambattista Valli is a true investment. The latest haute couture collection is no different: a beautiful collection made of chiffons, laces, pinks, blacks, and whites. This is how we remember haute couture to be. | Mid | [
0.6260869565217391, 36, 21.5 ] |
Significant Tuesday, June 12, 2018 AEP CEO on Money-Losing Coal Units: 'We'll Shut 'Em Down' Any federal plan to forestall the closure of struggling coal and nuclear power plants to stabilize the electric grid should be reviewed by utilities so customers are protected from rising costs, said Nick Akins, CEO of American Electric Power Co.Akins was referring to a controversial proposal under development by the Trump administration to protect and pay electric generating units that play a role in national security, such as supplying a military base."Everyone wants resiliency and reliability of the grid," Akins, 57, said in an interview last week in San Diego during the annual meeting of the Edison Electric Institute."The question becomes how do you do it? How do you do it within the market framework or outside the market framework? And who does it apply to?" he said of the draft plan sketched out with the help of the Department of Energy ahead of a National Security Council meeting on June 1.Increasing cyberthreats against U.S. energy infrastructure is the central justification laid out in the broad policy proposal. It cites authority under the Federal Power Act and the Defense Production Act, a Korean War-era law designed to enable swift emergency action in response to national security crises. The plan to guarantee business for money-losing coal and nuclear power plants both challenges the authority of the independent Federal Energy Regulatory Commission, responsible for safeguarding the nation's power supply, and does an end-run around privately run regional grid operators.Akins noted that under the previous White House administration, the EPA's Clean Power Plan designed to ratchet down carbon emissions was attacked by Republicans for favoring carbon-free generation over fossil fuels. The major talking point: The federal government shouldn't be picking winners and losers."When you're retiring that much coal and that much nuclear at one time, that is a cause for concern," Akins said, expressing some sympathy for Energy Secretary Rick Perry's desire to design a policy in response to baseload power plant retirements. "[But], once again, you get into winners and losers as a result. It's going to be a challenging process to get through to find something that's completely fair to everyone."This year is on track to be another big year for coal plant retirements. Roughly 13.5 gigawatts of coal capacity is slated for closure, according to U.S. Energy Information Administration figures. That equals the 13.5 GW of coal retired in 2016 and runs second to the 19.5 GW of coal-fired power retired in 2015. Nuclear is on a less-dramatic but steady decline. EIA expect the nuclear share of the U.S. electricity mix to decline from 20 percent in 2016 to 11 percent in 2050 as nuclear plants age out and are replaced by natural gas and renewable power, according to a recent analysis.Read more at AEP CEO on Money-Losing Coal Units: 'We'll Shut 'Em Down' | Mid | [
0.602015113350125, 29.875, 19.75 ] |
It seems as though Amazon has beaten Nintendo to the punch, as it's revealed that Super Smash Bros Wii U will feature a Game Board mode and Stage Builder mode. The information was listed on the product description and was presumably supposed to be announced by the Kyoto-based company first. Here's the description:

"The multiplayer showdown you know and love is now on the Wii U console! Take on all comers as Mario, Mega Man, Sonic, and more gaming greats. Or tap an amiibo to the Wii U GamePad controller to train it up by battling with or against it. You can even pit your amiibo against a friends' to see how your training methods stack up. Whether you're creating stages on the GamePad, competing in challenges crafted by Master Hand and Crazy Hand, or outwitting your opponents in a brand new board game mode, there's no doubt that the ultimate Smash Bros. game has arrived." | Low | [
0.515274949083503, 31.625, 29.75 ] |
For those that seek natural relief from insomnia, Melatonin often comes to mind. Melatonin has been shown to be very effective in regulating sleep issues, but Melatonin also has many other applications. Melatonin is secreted by a little gland deep in the brain, (the Pineal Gland), and in the gastro-intestinal system. Higher concentrations of Melatonin are secreted at night and when there is total darkness. Unfortunately, the exposure of EMF’s from cell phones and other wireless and electronic devices tends to suppress the production of Melatonin from the Pineal Gland. How does the suppression of Melatonin production impact the body? Since Melatonin is also classified as a cytotoxin, or a substance that has a toxic effect on certain cells, Melatonin acts a tumor suppressor. Here are a few ways that Melatonin may prove to be a powerful ally for you in protecting your health: Melatonin puts Breast Cancer cells to sleep. David E. Blask, MD, PhD, a widely acclaimed expert in cancer biology demonstrated that night time melatonin blood levels directly suppressed human Breast Cancer cell growth. He found that Melatonin put Breast Cancer cells to sleep and slowed Breast Cancer growth by 70%! According to Dr. Blask, “Nighttime Melatonin is a relevant anticancer signal to human Breast Cancers. Ninety percent of human Breast Cancers have specific receptors for this signal.” When Blask’s team exposed laboratory mice with human Breast Cancers cells to constant light, the breast tumor growth dramatically increased. 2. Melatonin protects from Estrogen overdose There is a new buzz word in the environment called “Xeno-estrogens”. Xeno-estrogens are chemicals that actually mimic estrogen in the body. Persistently high levels of estrogen have been associated with an increased risk of Breast Cancer. Examples of Xeno-estrogens are insecticides, pesticides, food preservatives such as BHA, hormones injected into meats and dairy cows, plastics, parabens in skin care products, 4 MBC in sunscreens, and the list goes on….. 3. Melatonin counteracts the effect of estrogen on Breast Cancer cell growth 4. Melatonin has an anti-carcinogenic role in inhibiting cancer cell growth and in reducing oxidative stress. 5. For women who choose conventional medical therapies, Melatonin is a suitable treatment for reducing the side effects associated with chemotherapy and radiation. 6. Melatonin causes cancer cells to die. 7. Melatonin is a powerful anti-oxidant with immune enhancing properties. Melatonin also reduced the risk of death in cancer patients in a study conducted in Ontario Canada. Since Melatonin production peaks in the darkness, make sure that your bedroom is completely light free. Even a small light from a small dental appliance can disturb melatonin production. Also, clear out the electronic gadgets from your bedroom since EMF’s have a strong impact on reducing the effectiveness of Melatonin. For women that are trying to be proactive with prevention or are on a Breast Cancer healing journey, you may want to consider getting your Melatonin levels checked. This simple test can determine how deficient you may be and what you can do to increase the production of this vital cytotoxic hormone. Since Melatonin is such a powerful Breast Cancer inhibitor, make sure it is part of your protocol so that you can not only enjoy a restful sleep, but you can also have the peace of mind that Melatonin is hard at work for you. Comments comments | High | [
0.67156862745098, 34.25, 16.75 ] |
A comparison between masticatory muscle pain patients and intracapsular pain patients on behavioral and psychosocial domains. To identify differences between 2 groups of patients with temporomandibular disorders (TMD), those with masticatory muscle pain (MMP) versus intracapsular pain (ICP), and to compare these differences on behavioral and psychosocial domains. There were 435 patients in the MMP group and 139 patients in the ICP group. The overall sample was 88.2% female and had an average age of 36.1 years (SD = 11.7). Patients completed measures of psychological symptoms (SCL-90), pain severity (MPI), sleep (PSQI), activity (MBI), and life stressors (PCL). Heart rate and blood pressure were also measured, and a complete medical/dental history was taken for each patients. Results indicated no significant difference in pain severity or duration between the 2 groups (P > 05). The ICP group, however, reported fewer affective symptoms of pain than the MMP group (t = 6.8, P = .01). The ICP group had twice as many adaptive copers as dysfunctional patients (chi 2 = 7.84, P < .01), while there was no significant difference between these 2 categories for the MMP group (P > .05). Finally, the ICP group reported fewer psychological symptoms (P < .05), better sleep quality (F = 7.54, P = .01), and fewer life stressors (F = 7.00, P = .01) than the MMP group. In contrast to many previous studies, the data set in this study showed no differences in pain severity and duration between the MMP and the ICP groups. Even though pain severity levels were equivalent, the MMP diagnostic group of chronic TMD patients demonstrated more dysfunctional behavioral profiles and significantly higher psychological distress than the ICP subgroup. | High | [
0.677792041078305, 33, 15.6875 ] |
The Sinking Dollar 'The Strong Euro Is Destroying Jobs' The strong euro -- and weak dollar -- is making it increasingly difficult for European companies to do business overseas. SPIEGEL ONLINE spoke with German government economic advisor Peter Bofinger about the dangers of an unfettered euro and what the European Central Bank should do. | Mid | [
0.538461538461538, 36.75, 31.5 ] |
Manchester’s Love design Nando Chips

Manchester’s Love has designed a range of crisps for Nando’s, Nando’s Chips, to be sold in grocery outlets. The consultancy was appointed to create a look and feel for the products by The Grocery Company, which has the licence to sell and market products in the UK for the fast-food chain. The thick, matt-finished, waxy packaging features the Nando’s logo, with colour-coding for the varying hotness levels of the peri-peri flavours, and have a premium positioning, targeting the market share of competitors such as Kettle Chips. | Low | [
0.39540816326530603, 19.375, 29.625 ] |
What’s Trending co-founders on the move Damon Berger departs for Fullscreen, while Shira Lazar will moonlight as a contributor to HLN's The Daily Share. What’s Trending generated a pair of trending stories of its own on Thursday. Multi-platform media company Fullscreen announced it has hired What’s Trending co-founder and CEO Damon Berger as VP of business development. He will remain on the board of What’s Trending, but relinquish day-to-day activities. Simultaneously, co-founder and host Shira Lazar was named a contributor to HLN‘s flagship afternoon show The Daily Share, which airs Monday to Friday from noon to 5 p.m. She will begin reporting for the network this week, while continuing to lead What’s Trending and host much of its programming. Launched in 2011, What’s Trending produces content ranging from short-form Trending Now videos and What’s Trending variety shows to long-form live specials from CES, SXSW and award shows. It currently receives more than 20 million views per month across a variety of digital outlets, including YouTube, Dailymotion, Yahoo Screen, iHeartRadio, Clear TV, Complex and The Huffington Post. In his new role at Fullscreen, Berger will focus on building strategic partnerships and expanding revenue opportunities for Fullscreen across all its divisions, from its talent network to its brand services and content units. Prior to co-founding What’s Trending, Berger was director of digital marketing at Twentieth Century Fox Film Corporation, where he created and implemented strategies for the domestic releases of such films as Avatar, Diary of a Wimpy Kid, The A-Team, Predators, Vampires Suck, Machete and Wall Street: Money Never Sleeps. In 2007, Berger helped launch the The Walt Disney Co.’s digital studio Stage 9. As head of creative development, he assembled the studio’s development slate, supervising the process from pitch to delivery of the finished show, while working with brands to monetize the content. A native of Montreal, Lazar has worked as a host and producer for such outlets as Music Plus TV, Yahoo, Hollywood.com, MSN.com, CBSNews.com and KNBC Channel 4 in Los Angeles. Currently, she is also the host of the 12-episode VH1 digital series Huge on the Tube, which she executive produces with Berger. TAGS: About The Author Todd is StreamDaily's U.S. West Coast Correspondent. He has written for a wide range of publications, including The Hollywood Reporter, Variety, the Los Angeles Times, the New York Post, NylonGuys and, yes, even the Weekly World News. Earlier in his career, he served as senior editor for the pioneering alternative movie magazine Film Threat. You can reach him at toddrlongwell[at]gmail.com or on twitter @toddlongwell1 | Mid | [
0.639130434782608, 36.75, 20.75 ] |
<link rel="import" href="../../../html/assert.html"> <link rel="import" href="../../../html/cr.html"> <link rel="import" href="../../../html/event_tracker.html"> <script src="../../../js/cr/ui/focus_row.js"></script> | Low | [
0.454106280193236, 23.5, 28.25 ] |
/* * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ /* * JythonPanel.java * Copyright (C) 2009 University of Waikato, Hamilton, New Zealand */ package weka.gui.scripting; import weka.core.Utils; import weka.gui.BrowserHelper; import weka.gui.ComponentHelper; import weka.gui.visualize.VisualizeUtils; import java.awt.BorderLayout; import java.awt.Color; import java.awt.Font; import java.awt.GridLayout; import java.util.Properties; import javax.swing.BorderFactory; import javax.swing.ImageIcon; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JTextPane; import javax.swing.text.Document; /** * A scripting panel for <a href="http://www.jython.org/" target="_blank">Jython</a>. * * @author fracpete (fracpete at waikato dot ac dot nz) * @version $Revision: 5837 $ */ public class JythonPanel extends FileScriptingPanel { /** for serialization. */ private static final long serialVersionUID = -827358576217085413L; /** the Groovy setup. */ public final static String PROPERTIES_FILE = "weka/gui/scripting/Jython.props"; /** * Creates a new JTextPane for the code. * * @return the text pane */ protected JTextPane newCodePane() { JTextPane result; SyntaxDocument doc; Properties props; try { props = Utils.readProperties(PROPERTIES_FILE); } catch (Exception e) { e.printStackTrace(); props = new Properties(); } result = new JTextPane(); if (props.getProperty("Syntax", "false").equals("true")) { doc = new SyntaxDocument(props); result.setDocument(doc); result.setBackground(doc.getBackgroundColor()); } else { result.setForeground(VisualizeUtils.processColour(props.getProperty("ForegroundColor", "black"), Color.BLACK)); result.setBackground(VisualizeUtils.processColour(props.getProperty("BackgroundColor", "white"), Color.WHITE)); result.setFont(new Font(props.getProperty("FontName", "monospaced"), Font.PLAIN, Integer.parseInt(props.getProperty("FontSize", "12")))); } return result; } /** * Returns an icon to be used in a frame. * * @return the icon */ public ImageIcon getIcon() { return ComponentHelper.getImageIcon(IMAGES_DIR + "/jython_small.png"); } /** * Returns a panel to be displayed with the AboutAction. 
* * @return the panel with some information on the scripting panel */ protected JPanel getAboutPanel() { JPanel result; JPanel panel; result = new JPanel(new BorderLayout()); result.setBorder(BorderFactory.createEmptyBorder(10, 10, 10, 10)); // image result.add(new JLabel(ComponentHelper.getImageIcon(IMAGES_DIR + "/jython_medium.png")), BorderLayout.CENTER); // links panel = new JPanel(new GridLayout(5, 1)); panel.setBorder(BorderFactory.createEmptyBorder(10, 10, 10, 10)); result.add(panel, BorderLayout.SOUTH); panel.add(new JLabel("Jython homepage")); panel.add(BrowserHelper.createLink("http://www.jython.org/", null)); panel.add(new JLabel(" ")); panel.add(new JLabel("Weka and Jython")); panel.add(BrowserHelper.createLink("http://weka.wikispaces.com/Using+Weka+from+Jython", null)); return result; } /** * Returns the title (without the filename). * * @return the plain title */ public String getPlainTitle() { return "Jython Console"; } /** * Returns an initialized script object. * * @param doc the document to use as basis * @return the initialized script */ protected Script newScript(Document doc) { return new JythonScript(doc); } /** * Displays the panel in a frame. * * @param args can take a file as first argument */ public static void main(String[] args) { showPanel(new JythonPanel(), args); } } | Mid | [
0.615,
30.75,
19.25
]
|
Hi Everyone! :) Happy Valentine’s Day! <3 I have a couple of looks to show you today: I found this gorgeous light pink nail polish from Catrice called Apropos Coco and thought it was perfect for Valentine’s Day. Of course, I had to add a little sparkle to it. So I added the little heart using Barry M Rose Quartz and the rhinestones because they matched the glitter. I thought it was a very sweet, girly mani. Next, I have a purple stripe heart mani because it was requested by Margaret after I posted this Sinful Colors mani. I wanted to make it a bit different so that I wasn’t repeating myself so I did ombre hearts. I used, from pinkie to index, Nails Inc Duchess Street, China Glaze Creative Fantasy, Bella Pierre Lavender and Sinful Colors Beverly Hills. The white is Deborah Lippman Amazing Grace. I did a dotted heart on my thumb with the same purple polishes and really liked how it turned out. (I sometimes experiment on my thumb because you never see it :) So, instead of repeating the striped hearts on my other hand, I decided to try the dotted heart again, only in pink. I liked it again! By now, I was confident that I could do dotted hearts and did them on my whole hand..but I think I was overly confident because they don’t look like hearts! I love the different sized dots though – it’s like a cute colour blind test. Next time I’ll just dot the whole nail and I think that will look better. What do you guys think of my Valentine’s nails? I hope you’re all having a great day! Thanks for looking :) – Eleobel | Low | [
0.509513742071881,
30.125,
29
]
|
This week, Game Informer guest editor Chris "Warcraft" Kluwe confessed that Xenoblade Chronicles wanted to make him punch a kitten. However, it wasn't due to the game being bad, it was due to him loving it, and the resulting frustration that it was on the graphically inferior Wii. "The graphics. Dear god, the graphics," he wrote. "I can’t decide whether the technical capabilities of the Wii make me want to projectile vomit or take a 12-gauge to my television, and it makes me angry enough to mail a severed unicorn head to Nintendo’s main office because this game deserves better. "It deserves better than gasping fish mouths bobbing up and down through beautifully crafted dialogue. It deserves better than jagged edged fuzzy textures comprising a breathtaking landscape set within the body of a fallen god. IT DESERVES BETTER THAN WHAT YOU’VE FORCED THIS GAME TO BE, NINTENDO." Here's the thing, though. Had the game been on Xbox 360, PS3, or the inevitable Wii U, I don't think it would have been better. In fact, I sometimes fear that the Wii represents the swan song of the Japanese RPG as we know it. Let's face it -- games are ridiculously expensive to make. Expensive to the point where the entire business model looks pretty damn broken. We have developers decimating their workforces or even closing down before or after releasing a major "AAA" title. We have games costing millions of dollars to make, and publishers expecting success on par with Call of Duty in exchange for their investments. Games are big business, throwing big money around, and graphics are a huge part of that system. As games get prettier, they tend to get more expensive. Building new engines to take advantage of graphically insane consoles and computers takes time, effort, and lots of cash. It also tends to require some restrictions on what you can do with your game. For example, Gears of War still looks pretty damn lovely, but its action takes place within very tight and linear corridors. Had the game opened up, it would have had to have taken a graphical hit. The only game that has managed to look amazing and retain large environments has been Crysis, but it is still an anomaly in this industry. There are few studios capable of what Crytek is capable of. Certainly, there are few makers of traditional RPGs with the cash and the resources for that kind of craziness. Huge games like The Elder Scrolls V: Skyrim can look pretty thanks to good art direction, but they're also damn glitchy, and have to cut corners by reusing textures and environments. They're almost pieced together like LEGO constructs, with pre-made building blocks pieced together, and you can clearly see the proverbial puppet strings if you look at it long enough. It gets the job done, but it's a very Western thing. It's not the long, huge, open, varied, handcrafted kind of chicanery we're used to from Japanese role-playing games. For an example of what the high definition generation has done for the genre, one need look no further than Final Fantasy XIII. The game took over half a decade to make, and whether you like it or not, there's no denying that it still lacks the scale of past Final Fantasy games. I got a lot more out of the comparatively ugly Final Fantasy VII than I'll ever get from XIII. A greater sense of freedom, a longer time spent playing, and a far deeper sense that I was part of a large, fully realized world. 
By its own admission, Square Enix has struggled to get everything it wants in a Final Fantasy while also providing the kind of visuals we expect this generation. No less than an entire game's worth of content was cut from Final Fantasy XIII, because the size had to be kept down. Square has also said in the past that HD technology is too demanding to make the kind of big JRPGs we used to enjoy, and this demand is also the reason why we haven't had any confirmation of an HD remake for Final Fantasy VII. Final Fantasy VII took up to four years to produce, but Yoshinori Kitase suggested that it would take over a decade to get VII looking as good as XIII. It makes sense -- VII is simply a far bigger game, far more ambitious than XIII in every way (outside of graphics). There's a reason why so many good JRPGs have found homes on portable systems like the DS and PSP, rather than home consoles. You can actually make traditional experiences there, without the crippling graphical expectations holding them back. This is why I am saddened when I see someone complain about Xenoblade Chronicles being on the Wii. I feel that if we'd had it on any other system, it wouldn't be Xenoblade Chronicles anymore. Yes, the graphics are muddy and jaggy (I started playing it without glasses to make it look smoother!) but I don't think I'd have had it any other way. To get those sprawling open fields full of monsters, to get that wonderful level of variety and intricate world design. To get that huge experience and the sense of a world that truly was alive, I think Xenoblade needed to be on a system where there was no pressure to produce visuals on par with Crysis or Final Fantasy XIII. You can keep your prettier graphics -- I want a better game! The Wii was a great place for mid-sized developers, and while the system never quite realized its potential as an oasis of creativity, I nonetheless appreciate the titles we've seen on it. I think games like Xenoblade Chronicles and The Last Story have only been possible because the Wii "holds them back" in the visual department. The precious visual department holds games back in every other way. As much criticism as the Wii has had (and I've shared mine over the years), I will be grateful for it standing as the last bastion of the term, "gameplay over graphics." The Wii lacking HD output has, in my opinion, been a good thing in the long run. Without that expectation for high definition visuals, it's allowed developers without Square-levels of money to focus on creating good games first, and worrying about the juicy eye candy later. It's the kind of focus that few games on the Xbox 360 and PS3 could dream of getting away with. Yes, when you upscale a Wii game to HD it tends to look much better, but the fact that the upscaled version isn't the expected version eliminates the consumer's demand for ridiculously pretty games and allows the developer to focus on what really matters. When we play a PS3 game, we expect it to look very good, unless it's a budget game (which carries its own stigma). When we play a Wii game, we're expecting something far less flashy. I can't imagine the relief such reduced expectations must be for some studios. I am a little worried about the Wii going away, replaced as it inevitably shall be by the high definition Wii U. I'm worried that the makers of Japanese RPGs with modest budgets will no longer have anywhere to go if they want to make an ambitious game on a home console without getting snubbed. 
The handheld market will truly become their only sanctuary. At least until game development gets significantly cheaper, and I don't see that happening anytime soon. Not with modern technology consistently pushing the goal posts back. Games like Xenoblade Chronicles have to look like shit. They have to make Game Informer editors want to punch kittens. If they didn't, they wouldn't be the same games anymore. Yes, they'd look nice -- and I love a gorgeous game as much as the next person -- but they wouldn't be all they could have been. To think that a game's potential is only unlocked when it reaches a certain graphical quality is a little blinkered, if you ask me. As far as I am concerned, Xenoblade Chronicles reached its potential, and it did so because it was focused on being a game, as opposed to an art department's masturbation session. Soon, those who have spent years complaining about Wii games not being in HD will get their wish, and we'll have HD games forevermore. I hope they like the imprisoned, neutered, but oh-so pretty games they were asking for. | Low | [
0.5,
26.125,
26.125
]
|
Adani Group Adani Group is an Indian multinational conglomerate headquartered in Ahmedabad, Gujarat. It was founded by Gautam Adani in 1988 as a commodity trading business with the flagship company Adani Enterprises Limited (previously Adani Exports Limited). Gautam Adani is the chairman. The Group's diverse businesses include energy, resources, logistics, agribusiness, real estate, financial services, defence and aerospace. The group has annual revenue of over $13 billion with operations at 70 locations in 50 countries. It is India's largest port developer and operator with ten ports and terminals including Mundra Port, its largest. Through a joint venture with Wilmar International in Singapore, the Group co-owns India's largest edible oil brand, Fortune. In April 2014, it added the fourth unit of 660 MW at its Tiroda Thermal Power Station, making Adani Power India's largest private power producer. In 2015, Adani was ranked India's most trusted infrastructure brand by The Brand Trust Report 2015. The Group operates mines in India, Indonesia and Australia and supplies coal to Bangladesh, China, and countries in Southeast Asia. The Group handled a total cargo of 200 megatonnes (Mt) in 2018-19. The company has contributed to the economy of Bunyu, North Kalimantan, Indonesia by producing 3.9 Mt of coal in 2016–17. The Group has made the largest investment by an Indian company in Australia at the controversial Carmichael coal mine in the Galilee Basin, Queensland. It is estimated to produce coal at a peak capacity of 60 Mt per year. The Group is the first in India to build a high-voltage direct current (HVDC) system. In January 2018, the logistics and SEZ arm of the Group, Adani Ports & SEZ Limited, added equipment and machinery to become the largest dredger fleet in India. History First phase The Adani Group commenced as a commodity trading firm in 1988 and diversified into the import and export of multi-basket commodities. With a capital of 5 lakhs, the company was established as a partnership firm with the flagship company, Adani Enterprises Limited, previously Adani Exports Limited. In 1990 the Adani Group developed its own port in Mundra to provide a base for its trading operations. It began construction at Mundra in 1995. In 1998, it became the top net foreign exchange earner for India Inc. The company began coal trading in 1999 followed by a joint venture in edible oil refining in 2000 with the formation of Adani Willmar. Second phase The group's second phase started with the creation of large infrastructure assets. The company established a portfolio of ports, power plants, mines, ships and railway lines inside and outside India. Adani handled 4 megatonnes (Mt) of cargo at Mundra in 2002, becoming the largest private port in India. Later in 2006, the company became the largest coal importer in India with 11 Mt of coal handling. The company expanded its business in 2008 purchasing Bunyu Mine in Indonesia which has 180 Mt of coal reserves. In 2009 the firm began generating 330 MW of thermal power. It also built edible oil refining capacity in India of 2.2 Mt per annum. Adani Enterprises became the largest trading house in India importing coal with a market share of 60%. It also supplies coal to NTPC Limited, India. The Adani group became India's largest private coal mining company after Adani Enterprises won the Orissa mine rights in 2010. Operations at the Port of Dahej commenced in 2011 and its capacity subsequently grew to 20 Mt.
The company also bought Galilee Basin mine in Australia with 10.4 billion tonnes (Gt) of coal reserves. It also commissioned 60 Mt of handling capacity for the coal import terminal in Mundra, making it the world's largest. In addition, in the same year, the Adani group also bought Abbot Point port in Australia with 50 Mt of handling capacity. It commissioned India's largest solar power plant with a capacity of 40 MW. As the firm achieved 3960 MW capacity, it became the largest private sector thermal power producer in India. In 2012, the company shifted its focus to three business clusters – resources, logistics and energy. Adani Power emerged as India's largest private power producer in 2014. Adani Power's total installed capacity then stood at 9,280 MW. The Mundra Port, Adani Ports and SEZ Ltd. (APSEZ), handled 100 Mt in fiscal 2013–14. On 16 May of the same year, Adani Ports acquired Dhamra Port on the east coast of India for Rs 5,500 crore. Dhamra Port was a 50:50 joint venture between Tata Steel and L&T Infrastructure Development Projects, which has now been acquired by Adani Ports. The port began operations in May 2011 and handled a total cargo of 14.3 Mt in 2013–14. With the acquisition of Dhamra Port, the Group is planning to increase its capacity to over 200 Mt by 2020. In 2015 the Adani Group's Adani Renewable Energy Park signed a pact with the Rajasthan Government for a 50:50 joint venture to set up India's largest solar park with a capacity of 10,000 MW. In November 2015, the Adani group began construction at the port in Vizhinjam, Kerala. Adani Aero Defence signed a pact with Elbit-ISTAR and Alpha Design Technologies to work in the field of Unmanned Aircraft Systems (UAS) in India in 2016. In April, Adani Enterprises Limited secured approval from the Government of Gujarat to begin work on building a solar power equipment plant. In September, Adani Green Energy (Tamil Nadu), the renewable wing of the Adani Group, began operations in Kamuthi in Ramanathapuram, Tamil Nadu with a capacity of 648 megawatts (MW) at an estimated cost of Rs. 4,550 crore. In the same month, the Adani Group inaugurated a 648 MW single-location solar power plant. It was the world's largest solar power plant at the time it was set up. In December, the Adani Group inaugurated a 100 MW solar power plant in Bhatinda, the largest in Punjab. The plant was built at a cost of Rs. 640 crore. On 22 December 2017 the Adani Group acquired the power arm of Reliance Infrastructure for Rs 18,800 crore. Listed companies Adani Enterprises Limited The company began as Adani Exports Limited in 1988 trading commodities. In March 1993, the partnership company M/s. Adani Exports was converted into a limited company – Adani Enterprises Limited. Run by Gautam Adani, the company originally exported dyes and intermediates, plastic products, agricultural products and frozen food to about 28 countries around the world. Adani Management Consultancy Services was amalgamated with the company in 1994. With major interests in logistics and energy, the enterprise handles the mining, trading, gas distribution, solar and agribusiness divisions of the Group. Adani Gas, a wholly owned subsidiary, executes the gas distribution business. Its real estate activities are managed by Adani Infrastructure & Developers Private Limited. The Adani Enterprises-Mining is the only mining company in the list of top 50 organisations that has bagged India's Great Place to Work certification two years in a row.
Adani Ports & SEZ Limited Adani Ports and Special Economic Zone Limited (APSEZ) is the largest private port company and special economic zone in India. The company is headed by Karan Adani, CEO of APSEZ. The company's operations include port management, logistics and the special economic zone. The company operates at the following ports: Mundra, Dahej, and Hazira, Gujarat; Dhamra, Odisha; Kattupalli, Tamil Nadu; and Vizhinjam, Kerala. In addition, the Adani Group manages terminals at the ports of Mormugao, Ennore, Vishakhapatnam and Kandla (Tuna Takra). The logistics arm was initially promoted by the Mundra Port Infrastructure Development Company Limited, an enterprise of the Government of Gujarat and Adani Port Limited. The company began operations at the Mundra Port in October 1998. With a Concession Agreement with the Government of Gujarat and the Gujarat Maritime Board in February 2001, the group was granted the right to operate and develop the Mundra Port situated at the Navinal Island in the Kutch region for 30 years. Adani Power Limited Established in August 1996 as Adani Power Limited, the company gained a certificate to commence business in September of the same year. The company is run by Gautam Adani, Rajesh S. Adani and Adani Enterprises Limited. The company develops and maintains power projects in India. The firm has a combined installed capacity of 10,440 MW with four thermal power projects across India. The company runs the following subsidiaries: Adani Power Maharashtra Limited, Adani Power Rajasthan Limited, Adani Power Dahej Limited, Mundra Power SEZ Limited and Adani Power (Overseas) Limited. In 2014, Adani Power overtook Tata Power to become India's largest power producer. The third phase of Adani Power Ltd's (APL) thermal power plant at Mundra in Gujarat is the world's first coal-fired plant to receive carbon credits from the United Nations Framework Convention on Climate Change (UNFCCC). Adani Power's Udupi Power Plant has been conferred with the Power Award by the Government of Karnataka. Adani Transmission Limited Integrated in 2013, Adani Transmission Limited handles the commissioning, operations and maintenance of electric power transmission systems. The holding company holds, operates and maintains 8511 circuit kilometers of transmission lines that range from 400 to 765 kilovolts. The total transmission capacity of the company is 16,200 megavolt amperes. The company has the following subsidiaries: Maharashtra Eastern Grid Power Transmission Company Limited, Maru Transmission Service Company Limited, Adani Transmission (India) Limited, Hadoti Power Transmission Service Limited, Raipur-Rajnandgaon-Warora Transmission Limited, Sipat Transmission Limited, and Chhattisgarh-WR Transmission Limited. Sports The Adani Group has multiple initiatives in sports. Launched in 2016 to prepare athletes for the Rio Olympics, Garv Hai is a nationwide programme to promote sports and support athletes in India. It has been re-launched for a second time to groom athletes for the 2020 Tokyo Olympics, 2022 Asian Games and Commonwealth Games. The programme focuses on archery, shooting, athletics, boxing, and wrestling. Beneficiaries of the Garv Hai pilot project in 2016 include Ankita Raina (tennis), Pinki Raina (boxing), Shiva Thapa (boxing), Khushbir Kaur (athletics), Inderjeet Singh (athletics), Mandeep Jangra (boxing), Malaika Goel (shooting), and Sanjeevani Jadhav (athletics). Another initiative is the Surguja Football Academy in Chhattisgarh.
So far 11 players from Surguja have been selected to play for the national Indian football team. Tax corruption On 27 February 2010, the Central Bureau of Investigation arrested Rajesh Adani, Managing Director of Adani Enterprises Ltd on charges of custom duty evasion to the tune of Rs. 80 lakh. In August 2017, Indian customs alleged the Adani Group was diverting millions of funds from the company's books to Adani family tax havens overseas. Adani was accused of using a Dubai shell company to divert the funds. The details of the US$235m diversion were obtained and published by The Guardian. In 2014, the directorate of revenue intelligence mapped out a complex money trail from India through South Korea and Dubai, and eventually to an offshore company in Mauritius allegedly owned by Vinod Shantilal Adani, the older brother of Gautam Adani. Controversy over mining projects in Australia Carmichael Coal Mine Project The Adani Group launched in 2014, with the support of a part of the Australian Government and Queensland, a mining and rail project (Carmichael coal mine) in Carmichael in Queensland’s Galilee Basin for 21.5 billion euros (over the life of the project, i.e. 60 years). This controversial mine, if implemented, would be the largest coal mine in Australia and the world. Its annual capacity will be 60 Mt of coal corresponding to the emission of 127 Mt of CO2, the equivalent of the total emissions of Belgium. This project will occupy an area of 35,000 hectares and will consume 297 million m3 of water taken from the aquifers of the region, which will have consequences for the environment and the local population. In view of the suggested environmental consequences of this project, most international banks refused to finance it, taking into account risks of flooding and/or induced earthquakes that may occur when working at such scale. In 2018 several international banks declined approaches to finance the project, and the conclusions of the environmental report for the authorization of the project were postponed until 2019, following promises of local job creation by the Adani group. Great Barrier Reef Project In 2014, the Adani group also launched a project to dig a channel through the Great Barrier Reef of Australia to facilitate the export of coal with dredging of 38 million m3 of reef sand. This controversial project with dubious financing has aroused international opposition. Awards and recognition APSEZ Adani Ports & SEZ Limited (APSEZ) received ‘India's Container Port of the Year 2016’ in Mumbai. The port developer and logistics arm of the Group was awarded the same at the 7th edition of the All India Maritime and Logistics Awards (MALA). Adani Ports and Special Economic Zone (APSEZ) Limited won the ‘Non-Major Port of the Year 2015’ by the All Time Maritime and Logistics Award (MALA). Adani Ports and Special Economic Zone (APSEZ) Limited won the Emerging Company of the year 2014 at the Economic Times awards. Mundra Port was recognized by The International Association of Ports and Harbors (IAPH), among 55 top ports of the world, to commit to jointly reduce the threat of global climate change. Adani Green Energy Limited In January 2018, Adani Green Energy Limited, a division of Adani Group, entered into the global top 15 list of solar power developers by GTM Research, the market analysis and consulting arm of Greentech Media. AEL-Mining The Adani Enterprises-Mining achieved "India's Great Place to Work 2019" certification on 19 July in Mumbai.
The Great Place to Work Institute awarded the certification to the company in the category of Mid-size Organisations. Philanthropy Adani Foundation has established cost-free schools, the Adani Vidya Mandir, at 3 different locations in Ahmedabad, Bhadreshwar and Surguja for underprivileged children. Other schools funded by the foundation include Adani DAV Public School, Adani Vidyalayas and Navchetan Vidyalaya. The foundation provides education to 100,000 children through 600 schools and balwadis. Set up by Gujarat Adani Institute of Medical Sciences, the GK General Hospital with 750 beds in Bhuj provides treatment to many people. The foundation has inaugurated village development works in the villages of Bada gram panchayat. The Adani Foundation has implemented the System of Rice Intensification method on 4,000 acres of farmland spread across 42 villages and empowered 2,050 farmers. References External links Official website Category:Companies based in Ahmedabad Category:Conglomerate companies of India Category:Indian companies established in 1988 Category:1988 establishments in India Category:Adani Group Category:Adani family | Mid | [
0.6523809523809521,
34.25,
18.25
]
|
U.S. Pat. No. 3,976,577, assigned to the assignee of the present invention, shows a prior high pressure liquid filter system for industrial use in which plural side-by-side filter units are spaced along an array of vertically spaced horizontal conduits, for example backwash, filtered liquid outlet, process liquid inlet, and drain (blow off) conduits in descending order. A set of four valves, namely respective backwash, filtered liquid outlet, process liquid inlet, and drain valves, connect the respective backwash, filter liquid outlet, process liquid inlet and drain conduits to a filter unit or pair of filter units. Each of the four valves is equipped with its own individually controlled pneumatic actuator. The backwash and filtered liquid outlet valves are mechanically linked and connected to the top of the filter unit and the process liquid inlet and drain valves are mechanically linked and connected to the bottom of the filter unit. A given mechanical link assures that both of its valves are not simultaneously in the full open position and such links are mere synchronizers, actuation of the valves being by the individual pneumatic actuators. A link interconnects the pneumatic actuators for the process liquid inlet valve and drain valve and a second link interconnects the pneumatic actuators for the filter liquid outlet valve and backwash liquid inlet valve. The connections of such links to such pneumatic actuators are exposed. In normal operation, process liquid from the process liquid inlet conduit passes up through all of the filter units simultaneously and thence appears as filtered liquid in the filtered liquid outlet. Eventually back pressure in the filter units rises due to build up on the filter of solids filtered from the process liquid. This is cured by shifting the valves connecting one or a pair of the filter units to terminate process liquid flow (filtering flow) therethrough and instead permit backwash liquid flow from the backwash conduit through the filter unit to the drain conduit while the rest of the filter units continue filtering flow. In this way the filtering operation of the system as a whole is continuous and yet the filter units are backwashed one or a pair at a time. Filter systems of this kind are usable for filtering of various process liquids including hot and/or abrasive process liquids. For example, systems of this kind are usable in chemical plants, pulp mills, refineries etc. In one particular example, process liquid temperatures in the range of 400°-550° F. are frequently encountered in refinery installations. To avoid loss of process liquid heat and to reduce the need to reheat process liquid, it is known to locate a heat retaining (insulative) box around the major portion of the filter system, the four pneumatic actuators per filter unit, or filter unit pair, being outside the heat retaining box to avoid heat damage to their internal seals. In the prior patent, each of the actuators was controlled by a solenoid valve, the solenoid valves being located outside the heat retaining box along a bottom frame member of the apparatus. The system disclosed in above mentioned U.S. Pat. No. 3,976,577 has proven satisfactory in service, but a continuing effort to improve systems of this general kind has resulted in the present invention.
The objects of the present invention include provision of: (1) a filter assembly in which there is a substantial reduction in the number of pneumatic actuators as well as a change in the type of pneumatic actuator, in which there is only one actuator per filter unit (or filter unit pair) rather than one per valve, and wherein all four valves (backwash, filtered liquid output, process liquid input and drain valves) are positively mechanically interlinked for both (1) positive mechanical actuation of each and (2) simultaneous synchronization of opening and closing of all four valves, wherein the number of valve actuating parts susceptible to wear or damage is reduced by a factor of four wherein the likelihood of down time due to failed actuators is correspondingly reduced, and wherein the number of pressurizable air lines leading to actuators is correspondingly reduced so as to reduce clutter and complexity and cost in the air circuitry; (2) an assembly as aforesaid, in which the valve actuator levers and their pivot connections to links are protected in which the drive motor is supported on actuator housings fixed to ones of the valves and not on the filter units, conduits or frame supporting the latter, wherein the motor and actuator housings are installable as a unit and are substitutable on existing field installations for actuators of earlier type, and wherein tolerance and alignment problems that may result from variations in location of the filter units, valves, conduits and frame members with respect to each other are minimized; (3) an assembly as aforesaid, in which the number of pneumatic lines susceptible to damage is substantially reduced, and in which pilot valves previously located on the lower portion of the frame are now located to be less susceptible to damage; (4) an assembly as aforesaid, wherein pivotable valve actuating levers are enclosed to more readily avoid contact with persons or objects moving therepast; (5) an assembly as aforesaid, in which any tendency for the motor driven actuator to twist its housing, upon actuation by said motor, is minimized; (6) an assembly as aforesaid, which is usable with liquids containing abrasive particles in which a tendency for abrasive particles to collect on and prematurely wear the drain valve seat is minimized. (7) an assembly as aforesaid, which is usable with hot liquids wherein a temperature drop across the drain valve is minimized, so as to minimize temperature differential induced dimensional changes and distortions and thereby avoid valve malfunction or failure resulting therefrom; and (8) an assembly as aforesaid, in which the major portions of the apparatus are enclosable in a heat insulating box with the drive motor and a control therefore located outside the box so as to be free of elevated temperatures within the box, the insulated box helping to reduce heat loss from preheated process liquid. Other objects and purposes of the invention will be apparent to persons acquainted with apparatus of this type upon reading the following specification and inspecting the accompanying drawings. | Mid | [
0.59860788863109,
32.25,
21.625
]
|
Disruption of protein neddylation with MLN4924 attenuates paclitaxel-induced apoptosis and microtubule polymerization in ovarian cancer cells. Surgery and chemotherapy are the gold-standard treatments for ovarian cancer. The major cause of treatment failure in patients with ovarian cancer is tumoral heterogeneity and drug resistance. Paclitaxel (PTX) is one of the most commonly used first-line drugs for ovarian cancer chemotherapy. Unfortunately, the mechanisms of PTX chemoresistance remain unclear. Here, we examined the effects of post-translational neddylation on the sensitivity of ovarian cancer cells (OCCs) to PTX-induced apoptosis. Disruption of protein neddylation with the first-in-class inhibitor MLN4924 dramatically neutralized PTX-mediated antiproliferative, antimigration, and apoptotic effects in human OCCs. Moreover, MLN4924 treatment interrupted PTX-induced microtubule polymerization. Importantly, two neddylation conjugating E2 enzymes, UBE2M and UBE2F, were found to play essential roles in PTX-induced cytotoxicity and tubulin polymerization in OCCs. In summary, our findings demonstrated that disruption of protein neddylation by MLN4924 conferred resistance to PTX and provided insights into the potential mechanisms of PTX chemoresistance in ovarian cancer. | High | [
0.671087533156498,
31.625,
15.5
]
|
<form> <input type="color" aria-label="Select a color" /> </form> | Low | [
0.279034690799396,
11.5625,
29.875
]
|
Tarnished Penn State football legend Joe Paterno and other top school officials covered up Jerry Sandusky's child sex abuse for 14 years — enabling the perv to continue preying on kids, a blistering report charged yesterday. "Our most saddening and sobering finding is the total disregard for the safety and welfare of Sandusky's child victims by the most senior leaders at Penn State," said former FBI Director Louis Freeh, whose team conducted an eight-month probe. "The most powerful men at Penn State failed to take any steps for 14 years to protect the children who Sandusky victimized." Freeh — hired by Penn State in November to get to the bottom of one of the most sordid scandals in the history of college sports — released the damning 267-page report yesterday. It blasted Paterno — the Hall of Fame coach who died in January at age 85 of lung cancer after being fired over his handling of the scandal — along with then-university President Graham Spanier, VP Gary Schultz and athletic director Timothy Curley, all of whom were fired last fall. "These men concealed Sandusky's activities from the Board of Trustees, the university community and authorities," the report said. "They exhibited a striking lack of empathy for Sandusky's victims by failing to inquire as to their safety and well-being, especially by not attempting to determine the identity of the child who Sandusky assaulted in the Lasch Building in 2001." In that incident, graduate assistant football coach Mike McQueary saw a naked Sandusky apparently sodomizing a boy in a locker-room shower and told Paterno. The report said Paterno and the others had known about the allegations surrounding Sandusky since 1998, when the mother of a boy complained the assistant coach had showered with him. But the once-fabled coach did nothing in either case — except telling Sandusky about the 2001 allegation, which the report said further endangered the victim. For example, Paterno told a grand jury in 2011 that he was unaware of any complaints about Sandusky before McQueary saw him molesting the boy in the shower in 2001. But Freeh's team found that Curley and Schultz had exchanged e-mails shortly after the 1998 incident showing Paterno knew about the charge. One, with the subject line "Joe Paterno," read: "I have touched base with the coach. Keep us posted." Another read: "Anything new in this department? Coach is anxious to know where it stands." Paterno was asked by the grand jury if he discussed the McQueary report with officials beyond Curley. "No, because I figured that Tim would handle it appropriately . . . I thought he would look into it and handle it appropriately," Paterno said. But Curley and Schultz, who had decided to take the 2001 case to child-welfare authorities, exchanged e-mails that said Paterno put the kibosh on their plan. "After giving it more thought and talking it over with Joe yesterday — I am uncomfortable with what we agreed were the next steps," Curley wrote Schultz. The report charged, "These individuals . . . empowered Sandusky to attract potential victims to the campus and football events by allowing him to have continued, unrestricted and unsupervised access to the university's facilities. That continued access provided Sandusky with the very currency that enabled him to attract his victims.'' Paterno's son Jay defended his dad. "Very often the people closest to someone like this [Sandusky] are the ones that miss it. We aren't the only ones who missed it . . .
Every one of us wishes that we would have seen something or caught something that would have done something about it," he told ESPN. But the foster mother of a boy identified at trial only as Victim 10 lashed out at the late coach. "It's just sick. This all could have been avoided. I thought he would be more like the honest Joe, the good guy," she told Philly.com. The report could lead to NCAA sanctions up to and including imposing the "death penalty" — banning football for a year or more. The US Department of Education also is examining whether the school violated the Clery Act, which requires reporting of certain crimes on campus. Reaction was mixed on campus — where a student-center television was quickly switched to another channel when Freeh's press conference began. Malcolm Moran, director of Penn State's John Curley Center for Sports Journalism, called the Freeh report "the latest shock to the system" at the still-polarized campus, where many continue to revere Paterno. Courtney Lennartz, president of the student body, said, "It is sad to think we put so much trust into these men, it is tough to deal with." Asked if the statue of Paterno on campus should be taken down, Vincenzo Lizza, president of Penn State's InterFraternity Council, said, "It's tough to know where I stand on whether to take it down." The scandal has not hurt donations. Penn State said 190,000 donors — 6,000 more than last year — contributed more than $200 million. Sandusky is awaiting sentencing after being convicted of 45 criminal counts for abusing 10 boys. Curley and Schultz are awaiting trial on charges of lying to a grand jury and failing to report abuse. | Mid | [
0.6320987654320981,
32,
18.625
]
|
I made two phone calls on the morning of November 9th. The first call was to my boyfriend. The second call was to my gynecologist's office to make an appointment to discuss sterilization. Recently, I underwent a laparoscopic salpingectomy, the removal of both of my Fallopian tubes. Having children has officially been taken off the table for me. I was never the type of person who dreamed of having children. Back in high school, I explicitly said I didn't want kids. Looking back now, I can say that the only time I wavered on this was when someone else insisted that I should. I could use the excuse of never meeting the right man, but I would be lying–not lying about meeting more than my fair share of losers, but lying about that being the reason I didn't have kids. So in fairness, Donald Trump is not the sole reason I pursued sterilization. But his election was the straw that broke the camel's back. It circled back to the world I see around me not being the type of world I could bring a child into in good conscience. It circled back to the day I had to explain to my nephew, only about 10 at the time, that douchebag isn't an appropriate word for him to use, after he saw it on a bumper sticker on my conservative cousin's car. It circled back to the idea that people like my own father had no qualms voting for a man who bragged about sexual assault (and explicitly tried to convince others to do the same). If even the so-called Christians think that Trump's behavior on the campaign trail and since the election is worthy of their vote, then what does that say about us as a nation? And how am I supposed to explain that to an innocent child? And of course, that doesn't even touch on the most important issue, and the ultimate reason that I called my gynecologist on November 9th: the environment. In 2008, John McCain believed that we needed to tackle climate change–it was even in the Republican platform. But in the years since, it has become a litmus test of sorts for Republicans to denounce climate change, regardless of scientific consensus or increasingly strange weather patterns that even an idiot can't help but notice. Republican control of D.C. was a guarantee that even the minimal progress we've made would be undone. The fact is that I will be dead by the time the worst of the impact is felt. But there's a good chance that a child or a grandchild of mine would still be alive and forced to deal with the consequences. Once again, how do you explain to an innocent child that all of the extreme weather, the heat waves, the floods, the resulting famines, were because voters wanted to make sure the oil companies were happy? Or that climate change became a partisan issue because of the unprecedented hatred of our 44th President? Someday, we will all have to account for the mess we've created. But my conscience is clean. | Mid | [
0.6443914081145581,
33.75,
18.625
]
|
672 F.2d 897 217 U.S.App.D.C. 363 Shepherd v. Merit Systems Protection Board 81-1175 UNITED STATES COURT OF APPEALS District of Columbia Circuit 11/3/81 1 M.S.P.B. AFFIRMED | Low | [
0.464435146443514,
27.75,
32
]
|
Q: How to continuously import data from external service into SQL Server I have nearly a hundred services sending data to our message queue. This data is processed by a Java service and loaded into import tables in our SQL Server. After data is loaded, a few procedures are executed that load this data into proper tables. Recently we had to add new instances of the service reading and loading messages. It was suggested that we should change the database isolation model to snapshot (I'm not very familiar with databases so I simply did what was proposed). Unfortunately we had a lot of problems with it, so we had to duplicate import tables and aforementioned procedures - this of course resulted in a huge mess that I'm currently trying to clean up. My current understanding is such that snapshot isolation was suggested so that services could work using the same table without problems, and that the errors we encountered stem from some misunderstanding or improper implementation on our (developers) side. My question is: is it possible, and if yes then how, to bulk load data into a single table, transform it and load it into a target table (everything in parallel, so let's say that there are 3 or 4 services doing it) in a way that causes no deadlocks or data loss. Our SQL Server is: Microsoft SQL Server 2014 (SP2-GDR) (KB4019093) - 12.0.5207.0 (X64) I don't know much more, but I know that for example we don't have support for partitioning or online index creation - maybe this will help somehow. A: I ended up modifying the services loading the data and the import tables in such a way that each record loaded has its own identifier. Also, the services no longer execute the import procedures; they are scheduled using SQL Agent and run once per minute. The solution is really simple and, while on average data is stored in the destination tables 30 seconds after it is received by the services, this is something that we can live with - we can load much, much, MUCH more data that way. | Mid | [
0.644444444444444,
36.25,
20
]
|
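The accepted answer in the Q&A record above is terse, so here is a minimal sketch of what the loading side of that approach could look like in the Java service the question mentions: every load is stamped with its own batch identifier, so the scheduled import job can pick up whole batches without touching rows that other service instances are still writing. The connection string, the dbo.ImportTable staging table and its BatchId/Payload columns are invented for illustration; the import procedures and the SQL Agent schedule themselves are not shown.

```java
// Hypothetical sketch only: connection string, table and column names are made up.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.UUID;

public class ImportLoader {

    private static final String URL =
            "jdbc:sqlserver://localhost;databaseName=Staging;integratedSecurity=true";

    /** Inserts one batch of raw messages into the staging table, tagged with a fresh batch id. */
    public static String loadBatch(List<String> payloads) throws SQLException {
        String batchId = UUID.randomUUID().toString();       // identifier owned by this load only
        try (Connection con = DriverManager.getConnection(URL)) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO dbo.ImportTable (BatchId, Payload) VALUES (?, ?)")) {
                for (String payload : payloads) {
                    ps.setString(1, batchId);
                    ps.setString(2, payload);
                    ps.addBatch();
                }
                ps.executeBatch();                            // one round trip for the whole batch
            }
            con.commit();                                     // batch becomes visible atomically
        }
        return batchId;                                       // the scheduled job filters on BatchId
    }
}
```

Keying staged rows by batch like this is one plausible reading of "each record loaded has its own identifier": the once-per-minute job can then move only completed batches into the destination tables, which sidesteps the reader/writer contention that prompted the snapshot-isolation suggestion in the first place.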
/**************************************************************************************** Copyright (C) 2015 Autodesk, Inc. All rights reserved. Use of this software is subject to the terms of the Autodesk license agreement provided at the time of installation or download, or which otherwise accompanies this software in either electronic or hard copy form. ****************************************************************************************/ //! \file fbxpropertypage.h #ifndef _FBXSDK_CORE_PROPERTY_PAGE_H_ #define _FBXSDK_CORE_PROPERTY_PAGE_H_ #include <fbxsdk/fbxsdk_def.h> #include <fbxsdk/core/base/fbxstringlist.h> #include <fbxsdk/core/fbxobject.h> #include <fbxsdk/core/fbxsymbol.h> #include <fbxsdk/core/fbxpropertydef.h> #include <fbxsdk/fbxsdk_nsbegin.h> typedef FbxPair<FbxInt, const char*> FbxNameMapKey; struct FbxNameMapCompare { inline int operator()(const FbxNameMapKey& pKeyA, const FbxNameMapKey& pKeyB) const { if( pKeyA.mFirst < pKeyB.mFirst ) return -1; else if( pKeyA.mFirst > pKeyB.mFirst ) return 1; return strcmp(pKeyA.mSecond, pKeyB.mSecond); } }; class FBXSDK_DLL FbxPropertyInfo { public: FBXSDK_FRIEND_NEW(); static FbxPropertyInfo* Create(const char* pName, FbxPropertyPage* pTypeInfo) { return FbxNew< FbxPropertyInfo >(pName,pTypeInfo); } static FbxPropertyInfo* Create(const char* pName, EFbxType pType=eFbxUndefined) { return FbxNew< FbxPropertyInfo >(pName,pType); } void Destroy() { FbxDelete(this); } FbxPropertyInfo* Clone(FbxPropertyPage* /*pPage*/) { // @@@@@ Filter is missing // @@@@@ Clone is incomplete if (mTypeInfo) { return FbxNew< FbxPropertyInfo >(mName,mTypeInfo); } else { return FbxNew< FbxPropertyInfo >(mName,mType); } } inline void IncRef() { mRef++; } inline void DecRef() { mRef--; if (mRef==0) FbxDelete(this); } inline int GetRef() { return mRef; } // Labels and Types inline FbxStringSymbol GetName() const { return mName; } EFbxType GetType() const; FbxPropertyPage* GetTypeInfo() const { return mTypeInfo; } inline void SetLabel(const char* pLabel) { mLabel=pLabel; } inline const char* GetLabel() const { return mLabel.IsEmpty() ? "" : ((const char*)mLabel); } inline void SetUserTag(int pUserTag) { mUserTag=pUserTag; } inline int GetUserTag() const { return mUserTag; } inline void SetUserData(const void* pUserData) { mUserData=(void*)pUserData; } inline void* GetUserData() const { return mUserData; } // Enum list int AddEnumValue(const char* pStringValue) { EFbxType lType = GetType(); if (lType == eFbxEnum || lType == eFbxEnumM) { if (!mEnumList) mEnumList.Reset(FbxNew< FbxStringList >()); bool lCanAdd = (lType == eFbxEnumM || mEnumList->FindIndex( pStringValue ) == -1); if( lCanAdd ) return mEnumList->Add((char*)pStringValue); } return -1; } void InsertEnumValue(int pIndex, const char* pStringValue) { EFbxType lType = GetType(); if (lType == eFbxEnum || lType == eFbxEnumM) { if (!mEnumList) mEnumList.Reset(FbxNew< FbxStringList >()); bool lCanAdd = (lType == eFbxEnumM || mEnumList->FindIndex( pStringValue ) == -1); if( lCanAdd ) mEnumList->InsertAt(pIndex,(char*)pStringValue); } } int GetEnumCount() { return mEnumList ? 
mEnumList->GetCount() : 0; } void SetEnumValue(int pIndex, const char* pStringValue) { EFbxType lType = GetType(); if (lType == eFbxEnum || lType == eFbxEnumM) { if (!mEnumList) mEnumList.Reset(FbxNew< FbxStringList >()); bool lCanAdd = (lType == eFbxEnumM || mEnumList->FindIndex( pStringValue ) == -1); if (lCanAdd) mEnumList->SetStringAt(pIndex,(char*)pStringValue); } } void RemoveEnumValue(int pIndex) { EFbxType lType = GetType(); if (lType == eFbxEnum || lType == eFbxEnumM) { if (!mEnumList) mEnumList.Reset(FbxNew< FbxStringList >()); mEnumList->RemoveAt(pIndex); } } char* GetEnumValue(int pIndex) { char* lValue = NULL; EFbxType lType = GetType(); if (lType == eFbxEnum || lType == eFbxEnumM) { lValue = mEnumList ? mEnumList->GetStringAt(pIndex) : 0; } return lValue; } // Min and Max values enum EValueIndex {eValueMin, eValueSoftMin, eValueMax, eValueSoftMax, eValueCount}; bool HasMinMax(EValueIndex pId) const { return mMinMaxValue[pId] != NULL; } bool GetMinMax(EValueIndex pId, void* pValue, EFbxType pValueType) const { if (mMinMaxValue[pId]) { return FbxTypeCopy(pValue, pValueType, mMinMaxValue[pId], GetType()); } return false; } bool SetMinMax(EValueIndex pId, const void* pValue, EFbxType pValueType) { if (!mMinMaxValue[pId]) { size_t lSize = FbxTypeSizeOf(GetType()); if (lSize) { mMinMaxValue[pId] = FbxMalloc(lSize); } } if (mMinMaxValue[pId]) { return FbxTypeCopy(mMinMaxValue[pId], GetType(), pValue, pValueType); } return false; } private: FbxPropertyInfo(const char* pName, FbxPropertyPage* pTypeInfo) : mRef(0) , mName(pName) , mType(eFbxUndefined) , mTypeInfo(pTypeInfo) , mUserTag(0) , mUserData(0) , mFilter(0) { for (int i=0; i<eValueCount; i++) { mMinMaxValue[i] = 0; } } FbxPropertyInfo(FbxStringSymbol pName,FbxPropertyPage *pTypeInfo) : mRef(0) , mName(pName) , mType(eFbxUndefined) , mTypeInfo(pTypeInfo) , mUserTag(0) , mUserData(0) , mFilter(0) { for (int i=0; i<eValueCount; i++) { mMinMaxValue[i] = 0; } } FbxPropertyInfo(const char* pName, EFbxType pType) : mRef(0) , mName(pName) , mType(pType) , mTypeInfo(0) , mUserTag(0) , mUserData(0) , mFilter(0) { for (int i=0; i<eValueCount; i++) { mMinMaxValue[i] = 0; } } ~FbxPropertyInfo() { for (int i=eValueMin; i<eValueCount; i++) { FbxFree(mMinMaxValue[i]); } } int mRef; FbxStringSymbol mName; FbxStringSymbol mLabel; EFbxType mType; FbxPropertyPage* mTypeInfo; int mUserTag; void* mMinMaxValue[eValueCount]; void* mUserData; FbxConnectionPointFilter* mFilter; FbxAutoDeletePtr<FbxStringList> mEnumList; }; #if defined(FBXSDK_COMPILER_MSC) #pragma warning (push) #pragma warning (disable: 4355) #endif class FBXSDK_DLL FbxPropertyConnect { public: FBXSDK_FRIEND_NEW(); static FbxPropertyConnect* Create(FbxPropertyPage* pPage,FbxInt pId) { return FbxNew< FbxPropertyConnect >(pPage,pId); } void Destroy() { FbxDelete(this); } FbxPropertyConnect* Clone(FbxPropertyPage* pPage) { return FbxNew< FbxPropertyConnect >(pPage,mId); } inline void IncRef() { mRef++; } inline void DecRef() { mRef--; if (mRef==0) FbxDelete(this); } inline int GetRef() { return mRef; } // Properties FbxPropertyPage* GetPage() { return mPage; } FbxInt GetPropertyId() { return mId; } // ClearConnectCache() // ------------------------------------------------------ inline void ClearConnectCache() { mConnectionPoint.SubConnectRemoveAll(); } //! 
Clear all connect without sending any notification (Internal use ONLY) inline void WipeAllConnections() { mConnectionPoint.WipeConnectionList(); } // Properties inline bool ConnectSrc(FbxPropertyConnect* pSrc, FbxConnection::EType pType) { return mConnectionPoint.ConnectSrc(&pSrc->mConnectionPoint,pType); } inline bool DisconnectSrc(FbxPropertyConnect* pSrc) { return mConnectionPoint.DisconnectSrc(&pSrc->mConnectionPoint); } inline bool IsConnectedSrc(FbxPropertyConnect* pSrc) { return mConnectionPoint.IsConnectedSrc(&pSrc->mConnectionPoint); } inline int GetSrcCount(FbxConnectionPointFilter* pFilter) { return mConnectionPoint.GetSrcCount(pFilter); } inline FbxPropertyConnect* GetSrc(FbxConnectionPointFilter* pFilter, int pIndex) { FbxConnectionPoint *lCP = mConnectionPoint.GetSrc(pIndex,pFilter); return lCP ? (FbxPropertyConnect * )lCP->GetData() : 0; } inline bool ConnectDst(FbxPropertyConnect* pDst, FbxConnection::EType pType) { return mConnectionPoint.ConnectDst(&pDst->mConnectionPoint,pType); } inline bool IsConnectedDst(FbxPropertyConnect* pSrc) { return mConnectionPoint.IsConnectedSrc(&pSrc->mConnectionPoint); } inline bool DisconnectDst(FbxPropertyConnect* pDst) { return mConnectionPoint.DisconnectDst(&pDst->mConnectionPoint); } inline int GetDstCount(FbxConnectionPointFilter* pFilter) { return mConnectionPoint.GetDstCount(pFilter); } inline FbxPropertyConnect* GetDst(FbxConnectionPointFilter* pFilter, int pIndex) { FbxConnectionPoint *lCP = mConnectionPoint.GetDst(pIndex,pFilter); return lCP ? (FbxPropertyConnect * )lCP->GetData() : 0; } int mRef; FbxConnectionPoint mConnectionPoint; FbxPropertyPage* mPage; FbxInt mId; private: FbxPropertyConnect(FbxPropertyPage* pPage,FbxInt pId) : mRef(0), mConnectionPoint(this), mPage(pPage), mId(pId) { } ~FbxPropertyConnect(){ if( FbxObject::GetWipeMode() ) mConnectionPoint.WipeConnectionList(); } }; #if defined(FBXSDK_COMPILER_MSC) #pragma warning (pop) #endif class FBXSDK_DLL FbxPropertyEntry { public: static FbxPropertyEntry* Create(FbxInt pParentId, FbxPropertyInfo* pInfo, FbxPropertyValue* pValue, FbxPropertyConnect* pConnect){ return FbxNew<FbxPropertyEntry>(pParentId, pInfo, pValue, pConnect); } void Destroy() { FbxDelete(this); } inline FbxInt GetParentId(){ return mParentId; } inline bool IsEmpty(){ return (mInfo || mValue || mConnect || mFlags.GetMask() != 0) ? false : true; } inline FbxPropertyInfo* Get(const FbxPropertyInfo* /*pType*/){ return mInfo; } void Set(FbxPropertyInfo* pInfo) { FbxPropertyInfo* lInfo = mInfo; if( pInfo ) pInfo->IncRef(); mInfo = pInfo; if( lInfo ) lInfo->DecRef(); } inline FbxPropertyValue* Get(const FbxPropertyValue* /*pType*/){ return mValue; } void Set(FbxPropertyValue* pValue) { FbxPropertyValue* lValue = mValue; if( pValue ) pValue->IncRef(); mValue = pValue; if( lValue ) lValue->DecRef(); } inline FbxPropertyConnect* Get(const FbxPropertyConnect* /*pType*/){ return mConnect; } void Set(FbxPropertyConnect* pConnect) { FbxPropertyConnect* lConnect = mConnect; if( pConnect ) pConnect->IncRef(); mConnect = pConnect; if( lConnect ) lConnect->DecRef(); } inline FbxPropertyFlags* Get(const FbxPropertyFlags* /*pType*/){ return &mFlags; } inline void Set(FbxPropertyFlags pType){ mFlags = pType; } inline void Set(FbxPropertyFlags* pType){ mFlags = pType ? 
*pType : FbxPropertyFlags(FbxPropertyFlags::eNone); } private: FbxPropertyEntry(FbxInt pParentId,FbxPropertyInfo *pInfo,FbxPropertyValue *pValue,FbxPropertyConnect *pConnect) : mInfo(pInfo), mValue(pValue), mConnect(pConnect), mParentId(pParentId), mFlags(FbxPropertyFlags::eNone) { if( mInfo ) mInfo->IncRef(); if( mValue ) mValue->IncRef(); if( mConnect ) mConnect->IncRef(); } ~FbxPropertyEntry() { if( mInfo ) mInfo->DecRef(); if( mValue ) mValue->DecRef(); if( mConnect ) mConnect->DecRef(); } FbxPropertyInfo* mInfo; FbxPropertyValue* mValue; FbxPropertyConnect* mConnect; FbxInt mParentId; FbxPropertyFlags mFlags; FBXSDK_FRIEND_NEW(); friend class FbxPropertyPage; }; class FBXSDK_DLL FbxPropertyIdGenerator { public: FbxPropertyIdGenerator() : mRef(0), mNextId(0) {} inline FbxInt GetNextId() const { return mNextId; } inline FbxInt GetNextIdAndInc() { return mNextId++; } inline void IncRef() { mRef++; } inline void DecRef() { mRef--; if( mRef == 0 ) FbxDelete(this); } private: FbxInt mRef, mNextId; }; class FBXSDK_DLL FbxPropertyPage { public: FBXSDK_FRIEND_NEW(); static FbxPropertyPage* Create (FbxPropertyPage* pInstanceOf=0) { return FbxNew< FbxPropertyPage >(pInstanceOf); } static FbxPropertyPage* Create (const char* pName, FbxPropertyPage* pTypeInfo) { return FbxNew< FbxPropertyPage >(pName,pTypeInfo); } static FbxPropertyPage* Create (const char* pName, EFbxType pType=eFbxUndefined) { return FbxNew< FbxPropertyPage >(pName,pType); } void Destroy() { FbxDelete(this); } template<class T> inline T* GetPropertyItem(const T* pItemType,FbxInt pIndex,FbxPropertyPage **pFoundIn=0) const { FbxPropertyPage* lReferencePage = 0; FbxPropertyEntry* lReferenceEntry = GetPropertyEntry(pIndex,&lReferencePage); if (pFoundIn) *pFoundIn = 0; if (lReferenceEntry) { T* lItem = lReferenceEntry->Get( FBX_TYPE(T) ); if (lItem) { if (pFoundIn) *pFoundIn = lReferencePage; return lItem; } else { return lReferencePage->mInstanceOf ? lReferencePage->mInstanceOf->GetPropertyItem(pItemType,pIndex,pFoundIn) : 0 ; } } return 0; } template<class T> inline T* ChangePropertyItemState(const T* pItemType, FbxInt pIndex, FbxPropertyFlags::EInheritType pInheritType) { FbxPropertyPage* lReferencePage = NULL; T* lItem = GetPropertyItem(pItemType, pIndex, &lReferencePage); if( pInheritType == FbxPropertyFlags::eOverride ) { if( lReferencePage == this ) { return lItem; } else if( lItem ) { FbxPropertyEntry* lEntry = ChangePropertyEntryState(pIndex, FbxPropertyFlags::eOverride); lEntry->Set(lItem->Clone(this)); return lEntry->Get(FBX_TYPE(T)); } } else { // can't inherit entries that were created on our page. bool lOwnEntry = !mInstanceOf || (mInstanceOf->GetPropertyItem(pItemType, pIndex) == NULL); if( lOwnEntry && FbxPropertyFlags::eInherit == pInheritType) return 0; if( lItem && (lReferencePage == this) ) { FbxPropertyEntry* lEntry = GetPropertyEntry(pIndex); lEntry->Set((T*)0); if( lEntry->IsEmpty() ) { ChangePropertyEntryState(pIndex, FbxPropertyFlags::eInherit); } } return 0; } return 0; } template<class T> FbxPropertyPage* GetFirstPropertyItem(FbxInt pId, const T* pItem) const { FbxPropertyPage* lReferencePage = NULL; GetPropertyItem(FBX_TYPE(T), pId, &lReferencePage); if( lReferencePage && lReferencePage->mInstanceOf ) { FbxPropertyPage* lReferencePage2 = lReferencePage->mInstanceOf->GetFirstPropertyItem(pId, pItem); return lReferencePage2 ? 
lReferencePage2 : lReferencePage; } return lReferencePage; } const char* GetName(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); return lPropertyInfo ? ((const char*)lPropertyInfo->GetName()) : ""; } const char* GetLabel(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); return lPropertyInfo ? ((const char*)lPropertyInfo->GetLabel()) : ""; } bool SetLabel(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT, const char* pLabel="") { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) if (lPropertyInfo) { lPropertyInfo->SetLabel(pLabel); return true; } else { return false; } } void* GetUserData(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); return lPropertyInfo ? lPropertyInfo->GetUserData() : 0; } bool SetUserData(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT, const void* pUserData=0) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) if (lPropertyInfo) { lPropertyInfo->SetUserData(pUserData); return true; } else { return false; } } int GetUserTag(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); return lPropertyInfo ? lPropertyInfo->GetUserTag() : 0; } bool SetUserTag(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT,int pUserTag=0) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) if (lPropertyInfo) { lPropertyInfo->SetUserTag(pUserTag); return true; } else { return false; } } EFbxType GetType(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) const { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); return lPropertyInfo ? lPropertyInfo->GetType() : eFbxUndefined; } FbxInt GetParent(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) const { FbxPropertyEntry* lPropertyEntry = GetPropertyEntry( pId ); return lPropertyEntry ? lPropertyEntry->GetParentId() : FBXSDK_PROPERTY_ID_NULL; } FbxPropertyPage* GetTypeInfo(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); return lPropertyInfo ? lPropertyInfo->GetTypeInfo() : 0; } FbxInt Add(FbxInt pParentId, const char* pName, EFbxType pType) { return Add(pParentId,FbxPropertyInfo::Create(pName,pType),FbxPropertyValue::Create(0,pType),0); } FbxInt Add(FbxInt pParentId, const char* pName, FbxPropertyPage* pTypeInfo) { return Add(pParentId,FbxPropertyInfo::Create(pName,pTypeInfo),FbxPropertyValue::Create(0,pTypeInfo->GetType()),0); } inline bool Reparent( FbxInt /*pChildId*/, FbxInt /*pNewParentId*/ ) { // Not implemented. /* if( GetParent(pChildId) != pNewParentId && pChildId < mEntries.GetCount() ) { FbxPropertyEntry* lChildEntry = mEntries[pChildId]; lChildEntry->mParentId = pNewParentId; //@@@@@ TODO: propagate to instances return true; } */ return false; } inline bool IsChildOf(FbxInt pId,FbxInt pParentId) const { return GetParent(pId)==pParentId; } inline bool IsDescendentOf(FbxInt pId,FbxInt pAncestorId) const { if (pAncestorId>0) { FbxInt lParentId = GetParent(pId); while (lParentId != FBXSDK_PROPERTY_ID_NULL ) { if (lParentId==pAncestorId) { return true; } lParentId = GetParent(lParentId); } return false; } else { return true; } } //#define PROPERTY_PAGE_SANITY_CHECK // Debug purpose only. 
Never enable it in a production release. /** Retrieves the first child property id of a specified property id. * \param pParentId The specified property id * \return the first child property id */ FbxInt GetChild(FbxInt pParentId=FBXSDK_PROPERTY_ID_ROOT) const { #ifdef PROPERTY_PAGE_SANITY_CHECK FbxInt ret0 = FBXSDK_PROPERTY_ID_NULL; if (pParentId!=FBXSDK_PROPERTY_ID_NULL) { FbxInt lId = GetMinimumPropertyId(pParentId); FbxInt lParentId = GetParent(lId); const FbxInt lLastId = GetPropertyEntryCount(); while (lId<lLastId && lParentId!=pParentId) lParentId=GetParent(++lId); ret0 = lId<lLastId ? lId : FBXSDK_PROPERTY_ID_NULL; } else { ret0 = FBXSDK_PROPERTY_ID_NULL; } #endif FbxInt ret1 = FBXSDK_PROPERTY_ID_NULL; if (pParentId != FBXSDK_PROPERTY_ID_NULL) { FbxPropertyEntry* lEntry; FbxInt lId = pParentId; do { lId = GetMinimumPropertyIdAndEntry(lId, &lEntry); } while (lId != FBXSDK_PROPERTY_ID_NULL && lEntry->GetParentId() != pParentId); ret1 = lId; } #ifdef PROPERTY_PAGE_SANITY_CHECK FBX_ASSERT(ret0==ret1); #endif return ret1; } /** Retrieves the next sibling property id of a specified property id. * \param pId The specified property id * \return the next sibling property id */ FbxInt GetSibling(FbxInt pId) const { #ifdef PROPERTY_PAGE_SANITY_CHECK FbxInt pIdBackup = pId; FbxInt ret0 = FBXSDK_PROPERTY_ID_NULL; if (pId!=FBXSDK_PROPERTY_ID_NULL) { FbxInt lReferenceParentId = GetParent(pId); FbxInt lParentId = GetParent(++pId); const FbxInt lLastId = GetPropertyEntryCount(); while (pId<lLastId && lReferenceParentId!=FBXSDK_PROPERTY_ID_NULL && lParentId!=lReferenceParentId) lParentId=GetParent(++pId); ret0 = pId<lLastId ? pId : FBXSDK_PROPERTY_ID_NULL; } else { ret0 = FBXSDK_PROPERTY_ID_NULL; } pId = pIdBackup; #endif FbxInt ret1 = FBXSDK_PROPERTY_ID_NULL; if (pId != FBXSDK_PROPERTY_ID_NULL) { FbxInt lReferenceParentId = GetParent(pId); if (lReferenceParentId != FBXSDK_PROPERTY_ID_NULL) { FbxPropertyEntry *lEntry; do { pId = GetMinimumPropertyIdAndEntry(pId, &lEntry); } while (pId != FBXSDK_PROPERTY_ID_NULL && lEntry->GetParentId() != lReferenceParentId); ret1 = pId; } } #ifdef PROPERTY_PAGE_SANITY_CHECK FBX_ASSERT(ret0==ret1); #endif return ret1; } /** Retrieves the first descendent property id of a specified property id. * \param pAnscestorId The specified property id * \return the first descendent property id */ FbxInt GetFirstDescendent(FbxInt pAnscestorId=FBXSDK_PROPERTY_ID_ROOT) const { #ifdef PROPERTY_PAGE_SANITY_CHECK FbxInt ret0 = FBXSDK_PROPERTY_ID_NULL; if (pAnscestorId!=FBXSDK_PROPERTY_ID_NULL) { FbxInt lId = GetMinimumPropertyId(pAnscestorId); FbxInt lParentId = GetParent(lId); const FbxInt lLastId = GetPropertyEntryCount(); while (lId<lLastId) { if( lParentId!=FBXSDK_PROPERTY_ID_NULL && IsDescendentOf(lId,pAnscestorId) ) { ret0 = lId; break; } lParentId = GetParent(++lId); } } #endif FbxInt ret1 = FBXSDK_PROPERTY_ID_NULL; FbxInt lId = pAnscestorId; FbxPropertyEntry* lEntry; if (pAnscestorId != FBXSDK_PROPERTY_ID_NULL) { for(;;) { lId = GetMinimumPropertyIdAndEntry(lId, &lEntry); if (lId == FBXSDK_PROPERTY_ID_NULL) break; if(lEntry->GetParentId() != FBXSDK_PROPERTY_ID_NULL && IsDescendentOf(lId, pAnscestorId)) { ret1 = lId; break; } } } #ifdef PROPERTY_PAGE_SANITY_CHECK FBX_ASSERT(ret0==ret1); #endif return ret1; } /** Retrieves the next descendent property id of a specified property id, with given a descendent property id. 
* \param pAnscestorId The specified property id * \param pId The descendent property id * \return the next descendent property id */ FbxInt GetNextDescendent(FbxInt pAnscestorId, FbxInt pId) const { #ifdef PROPERTY_PAGE_SANITY_CHECK FbxInt pIdBackup = pId; FbxInt ret0 = FBXSDK_PROPERTY_ID_NULL; if (pId!=FBXSDK_PROPERTY_ID_NULL) { FbxInt lParentId = GetParent(++pId); const FbxInt lLastId = GetPropertyEntryCount(); while (pId<lLastId) { // GetParent returns null when the given id isn't in our page, // or our ancestor's page. if( lParentId != FBXSDK_PROPERTY_ID_NULL && IsDescendentOf(pId, pAnscestorId) ) { ret0 = pId; break; } lParentId = GetParent(++pId); } } pId = pIdBackup; #endif FbxInt ret1 = FBXSDK_PROPERTY_ID_NULL; if (pId != FBXSDK_PROPERTY_ID_NULL) { FbxPropertyEntry* lEntry; for(;;) { pId = GetMinimumPropertyIdAndEntry(pId, &lEntry); if (pId == FBXSDK_PROPERTY_ID_NULL) break; if(lEntry->GetParentId() != FBXSDK_PROPERTY_ID_NULL && IsDescendentOf(pId, pAnscestorId) ) { ret1 = pId; break; } } } #ifdef PROPERTY_PAGE_SANITY_CHECK FBX_ASSERT(ret0==ret1); #endif return ret1; } FbxInt FastFind (FbxInt pId, const char* pName, FbxPropertyPage* pTypeInfo, bool pCaseSensitive) { FbxInt lId = FBXSDK_PROPERTY_ID_NULL; bool lSlowQuery = true; if( mNameMap.mSecond.GetSize() > 0 ) { lSlowQuery = false; // try to use the map if we've got it NameMap::RecordType* lIterator = mNameMap.mSecond.Find( FbxNameMapKey( pId, pName ) ); if( !lIterator ) { lId = FBXSDK_PROPERTY_ID_NULL; } else { lId = lIterator->GetValue(); if (lId != FBXSDK_PROPERTY_ID_NULL && pTypeInfo) { lSlowQuery = true; // Try to match types. // If they are mismatched, fall back to the slow query, // since we might have multiple property with the same name but different types FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo), lId ); if (lPropertyInfo) { FbxPropertyPage* lTypeInfo2 = lPropertyInfo->GetTypeInfo(); if ( lTypeInfo2 && lTypeInfo2->Is(pTypeInfo) ) { lSlowQuery = false; } } } } } if (!lSlowQuery) return lId; // fall back if there's no map or we got one with a different type lId = GetChild(pId); FbxStringSymbol lSearchSymbol( pName ); while( lId != FBXSDK_PROPERTY_ID_NULL ) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo), lId ); if ( (!pTypeInfo || lPropertyInfo->GetTypeInfo()->Is(pTypeInfo)) && ((!pCaseSensitive && FBXSDK_stricmp(lPropertyInfo->GetName(),pName)==0) || (pCaseSensitive && lPropertyInfo->GetName() == lSearchSymbol)) ) { return lId; } lId = GetSibling(lId); } return FBXSDK_PROPERTY_ID_NULL; } FbxInt Find (FbxInt pId, const char* pName, FbxPropertyPage* pTypeInfo, bool pCaseSensitive, const char* pChildrenSeparators ) { if (pChildrenSeparators) { FbxInt lId; size_t lFoundIndex = strcspn(pName,pChildrenSeparators); // Strip the first part of the name and search if (lFoundIndex<strlen(pName)) { FbxString pRootName; pRootName.Append(pName,lFoundIndex); lId = FastFind(pId,pRootName.Buffer(),NULL,pCaseSensitive); return lId != FBXSDK_PROPERTY_ID_NULL ? Find(lId,pName+lFoundIndex+1,pTypeInfo,pCaseSensitive,pChildrenSeparators) : lId; } else { return FastFind(pId,pName,pTypeInfo,pCaseSensitive); } } else { return FastFind(pId,pName,pTypeInfo,pCaseSensitive); } } // Enum list int AddEnumValue(FbxInt pId, const char* pStringValue) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) return lPropertyInfo ? 
lPropertyInfo->AddEnumValue(pStringValue) : - 1; } void InsertEnumValue(FbxInt pId, int pIndex, const char* pStringValue) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) if (lPropertyInfo) lPropertyInfo->InsertEnumValue(pIndex,pStringValue); } int GetEnumCount(FbxInt pId) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) return lPropertyInfo ? lPropertyInfo->GetEnumCount() : 0; } void SetEnumValue(FbxInt pId, int pIndex, const char* pStringValue) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) if (lPropertyInfo) lPropertyInfo->SetEnumValue(pIndex,pStringValue); } void RemoveEnumValue(FbxInt pId, int pIndex) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) if (lPropertyInfo) lPropertyInfo->RemoveEnumValue(pIndex); } char* GetEnumValue(FbxInt pId,int pIndex) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); return lPropertyInfo ? lPropertyInfo->GetEnumValue(pIndex) : (char*)""; } // Connection // --------------------------------- void ClearConnectCache(FbxInt pId) { FbxPropertyPage* lReferencePage = 0; FbxPropertyConnect* lPropertyConnect = GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pId,&lReferencePage ); // Connections are not considered propagated so // make sure that we own the FbxPropertyConnect objects if (lPropertyConnect) { lPropertyConnect->ClearConnectCache(); } } void WipeAllConnections(FbxInt pId) { FbxPropertyPage* lReferencePage = 0; FbxPropertyConnect* lPropertyConnect = GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pId,&lReferencePage ); if (lPropertyConnect) { lPropertyConnect->WipeAllConnections(); } } bool ConnectSrc(FbxInt pDstId, FbxPropertyPage* pSrcPage, FbxInt pSrcId, FbxConnection::EType pType) { FbxPropertyEntry* lDstEntry = ChangePropertyEntryState(pDstId,FbxPropertyFlags::eOverride); FbxPropertyEntry* lSrcEntry = pSrcPage->ChangePropertyEntryState(pSrcId,FbxPropertyFlags::eOverride); FbxPropertyConnect* lDstConnect= lDstEntry->Get( FBX_TYPE(FbxPropertyConnect) ); FbxPropertyConnect* lSrcConnect= lSrcEntry->Get( FBX_TYPE(FbxPropertyConnect) ); // Make sure we have a connection point on both sides of the connection if (!lDstConnect) { lDstConnect = FbxPropertyConnect::Create( this,pDstId ); lDstEntry->Set( lDstConnect ); } if (!lSrcConnect) { lSrcConnect = FbxPropertyConnect::Create( pSrcPage,pSrcId ); lSrcEntry->Set( lSrcConnect ); } // Must @@@@@@@ Propagate to inherited children return lDstConnect->ConnectSrc(lSrcConnect,pType); } bool DisconnectSrc(FbxInt pDstId,FbxPropertyPage* pSrcPage,FbxInt pSrcId) { FbxPropertyPage* lDstReferencePage = 0; FbxPropertyConnect* lDstConnect = GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pDstId,&lDstReferencePage ); FbxPropertyPage* lSrcReferencePage = 0; FbxPropertyConnect* lSrcConnect = pSrcPage->GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pSrcId,&lSrcReferencePage ); // Make sure we have a connection point on both sides of the connection if (lDstConnect && lSrcConnect && lDstReferencePage==this && lSrcReferencePage==pSrcPage) { // Must @@@@@@@ Remove unused connections return lDstConnect->DisconnectSrc(lSrcConnect); } return false; } bool IsConnectedSrc(FbxInt pDstId, FbxPropertyPage* pSrcPage, FbxInt pSrcId) { FbxPropertyPage* lDstReferencePage = 0; FbxPropertyConnect* 
lDstConnect = GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pDstId,&lDstReferencePage ); FbxPropertyPage* lSrcReferencePage = 0; FbxPropertyConnect* lSrcConnect = pSrcPage->GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pSrcId,&lSrcReferencePage ); // Make sure we have a connection point on both sides of the connection if (lDstConnect && lSrcConnect && lDstReferencePage==this && lSrcReferencePage==pSrcPage) { // Must @@@@@@@ Remove unused connections return lDstConnect->IsConnectedSrc(lSrcConnect); } return false; } int GetSrcCount(FbxInt pId, FbxConnectionPointFilter* pFilter) { FbxPropertyPage* lReferencePage = 0; FbxPropertyConnect* lPropertyConnect = GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pId,&lReferencePage ); // Connections are not considered propagated so // make sure that we own the FbxPropertyConnect objects return (lPropertyConnect && lReferencePage==this) ? lPropertyConnect->GetSrcCount(pFilter) : 0; } bool GetSrc(FbxInt pId, int pIndex, FbxConnectionPointFilter* pFilter, FbxPropertyPage** pSrcPage, FbxInt* pSrcId) { FbxPropertyPage* lReferencePage = 0; FbxPropertyConnect* lPropertyConnect = GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pId,&lReferencePage ); // Connections are always overridden // make sure that we own the FbxPropertyConnect Item if (lPropertyConnect && lReferencePage==this) { FbxPropertyConnect* lSrc = lPropertyConnect->GetSrc(pFilter,pIndex); if (lSrc) { if (pSrcPage) *pSrcPage = lSrc->GetPage(); if (pSrcId) *pSrcId = lSrc->GetPropertyId(); return true; } } return false; } bool ConnectDst(FbxInt pSrcId, FbxPropertyPage* pDstPage, FbxInt pDstId, FbxConnection::EType pType) { return pDstPage->ConnectSrc(pDstId,this,pSrcId,pType); } bool DisconnectDst(FbxInt pSrcId, FbxPropertyPage* pDstPage, FbxInt pDstId) { return pDstPage->DisconnectSrc(pDstId,this,pSrcId); } bool IsConnectedDst(FbxInt pSrcId, FbxPropertyPage* pDstPage, FbxInt pDstId) { return pDstPage->IsConnectedSrc(pDstId,this,pSrcId); } int GetDstCount(FbxInt pId, FbxConnectionPointFilter* pFilter) { FbxPropertyPage* lReferencePage = 0; FbxPropertyConnect* lPropertyConnect = GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pId,&lReferencePage ); // Connections are not considered propagated so // make sure that we own the FbxPropertyConnect objects return (lPropertyConnect && lReferencePage==this) ? lPropertyConnect->GetDstCount(pFilter) : 0; } bool GetDst(FbxInt pId, int pIndex, FbxConnectionPointFilter* pFilter, FbxPropertyPage** pDstPage, FbxInt* pDstId) { FbxPropertyPage* lReferencePage = 0; FbxPropertyConnect* lPropertyConnect = GetPropertyItem( FBX_TYPE(FbxPropertyConnect),pId,&lReferencePage ); // Connections are always overridden // make sure that we own the FbxPropertyConnect Item if (lPropertyConnect && lReferencePage==this) { FbxPropertyConnect* lDst = lPropertyConnect->GetDst(pFilter,pIndex); if (lDst) { if (pDstPage) *pDstPage = lDst->GetPage(); if (pDstId) *pDstId = lDst->GetPropertyId(); return true; } } return false; } // Min and Max // --------------------------------- enum EValueIndex { eValueMin,eValueSoftMin,eValueMax,eValueSoftMax,eValueCount }; bool HasMinMax(FbxInt pId, FbxPropertyInfo::EValueIndex pValueId) const { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); return lPropertyInfo ? 
lPropertyInfo->HasMinMax(pValueId) : false; } bool GetMinMax(FbxInt pId, FbxPropertyInfo::EValueIndex pValueId, void* pValue, EFbxType pValueType) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) return lPropertyInfo ? lPropertyInfo->GetMinMax(pValueId,pValue,pValueType) : false; } bool SetMinMax(FbxInt pId, FbxPropertyInfo::EValueIndex pValueId, const void* pValue, EFbxType pValueType) { FbxPropertyInfo* lPropertyInfo = GetPropertyItem( FBX_TYPE(FbxPropertyInfo),pId ); // Don't make it writeable (Keep it shared) return lPropertyInfo ? lPropertyInfo->SetMinMax(pValueId,pValue,pValueType) : false; } // Value // --------------------------------- bool Get(FbxInt pId, void* pValue, EFbxType pValueType) { FbxPropertyValue* lPropertyValue = GetPropertyItem( FBX_TYPE(FbxPropertyValue),pId ); return lPropertyValue ? lPropertyValue->Get(pValue,pValueType) : 0; } bool Set(FbxInt pId, const void* pValue, EFbxType pValueType, bool pCheckValueEquality) { if( pCheckValueEquality ) { FbxPropertyPage* lReferencePage = NULL; FbxPropertyValue* lPropertyValue = GetPropertyItem( FBX_TYPE(FbxPropertyValue),pId,&lReferencePage ); void* lCurrentValue = FbxTypeAllocate( pValueType ); bool lValuesEqual = false; bool lValueChanged = false; if( lReferencePage && lReferencePage != this ) { // this page inherits, so check if we have to override the value. if( lPropertyValue ) { lPropertyValue->Get( lCurrentValue, pValueType ); lValuesEqual = FbxTypeCompare( pValue, lCurrentValue, pValueType ); } } else { FbxPropertyPage* lReferencePage2 = NULL; FbxPropertyValue* lPropertyValue2 = mInstanceOf ? mInstanceOf->GetPropertyItem( FBX_TYPE(FbxPropertyValue),pId,&lReferencePage2 ) : NULL; if( lReferencePage2 && lPropertyValue2 ) { // this page is an override, but there is another page before us that overrides the value lPropertyValue2->Get( lCurrentValue, pValueType ); lValuesEqual = FbxTypeCompare( pValue, lCurrentValue, pValueType ); if( lValuesEqual ) { ChangePropertyItemState( FBX_TYPE(FbxPropertyValue), pId, FbxPropertyFlags::eInherit ); lValueChanged = true; } } // else this page is the originator of the property, so no need to check, } FbxTypeDeallocate(pValueType, lCurrentValue); lCurrentValue = NULL; if( lValuesEqual ) return lValueChanged; } FbxPropertyValue* lPropertyValue = ChangePropertyItemState( FBX_TYPE(FbxPropertyValue),pId,FbxPropertyFlags::eOverride ); return lPropertyValue ? lPropertyValue->Set(pValue,pValueType) : false; } inline FbxPropertyFlags::EInheritType GetValueInherit(FbxInt pId, bool pCheckInstanceOf) const { FbxPropertyPage* lReferencePage = NULL; GetPropertyItem(FBX_TYPE(FbxPropertyValue), pId, &lReferencePage); // check one level if( !pCheckInstanceOf ) { return lReferencePage == this ? FbxPropertyFlags::eOverride : FbxPropertyFlags::eInherit; } else { if( lReferencePage == this ) return FbxPropertyFlags::eOverride; // this page is either an override, or the originator else if( !lReferencePage->mInstanceOf ) return FbxPropertyFlags::eInherit; // the reference is the class root, so we must be inheriting // The reference page is not the class root, might be another override, or the originator. FbxPropertyValue* lPropertyValue = lReferencePage->mInstanceOf->GetPropertyItem( FBX_TYPE(FbxPropertyValue), pId ); // if lReferencePage->mInstanceOf has the property value, // lReferencePage is an override // else // its the originator, so this page inherits from it. return lPropertyValue ? 
FbxPropertyFlags::eOverride : FbxPropertyFlags::eInherit; } } inline bool SetValueInherit(FbxInt pId, FbxPropertyFlags::EInheritType pType) { // no support for this mode yet if( FbxPropertyFlags::eDeleted == pType ) return false; ChangePropertyItemState( FBX_TYPE(FbxPropertyValue), pId, pType ); // Above call doesn't return error codes, so just check that we match types. return GetValueInherit(pId, false) == pType; } inline bool GetDefaultValue(FbxInt pId, void* pValue, EFbxType pValueType) const { FbxPropertyPage* lReferencePage = GetFirstPropertyItem( pId, FBX_TYPE(FbxPropertyValue) ); FbxPropertyValue* lPropertyValue = lReferencePage ? lReferencePage->GetPropertyItem( FBX_TYPE(FbxPropertyValue), pId ) : NULL; return lPropertyValue ? lPropertyValue->Get( pValue, pValueType ) : false; } // useful set and get functions template <class T> inline bool Set( FbxInt pId, const T& pValue ) { return Set( pId,&pValue,FbxTypeOf(pValue),true ); } template <class T> inline T Get( FbxInt pId, const T* pFBX_TYPE) { T lValue; Get( pId,&lValue,FbxTypeOf(lValue) ); return lValue; } void SetDataPtr(void* pDataPtr) { mDataPtr = pDataPtr; } void* GetDataPtr() const { return mDataPtr; } // Instance and override management // ------------------------------------------ void PushPropertiesToParentInstance() { if (mInstanceOf) { const int lCount = GetPropertyEntryCount(); // push the existing properties into the parent // ---------------------------------------------- for( int i = 0; i < lCount; ++i ) { FbxPropertyEntry* lParentEntry = mInstanceOf->ChangePropertyEntryState( (FbxInt)i,FbxPropertyFlags::eOverride ); FbxPropertyEntry* lEntry = GetPropertyEntry( (FbxInt)i ); if( !lParentEntry ) { lParentEntry = FbxPropertyEntry::Create( lEntry->GetParentId(), 0, 0, 0 ); mInstanceOf->mEntryMap.Insert( i, lParentEntry ); //mInstanceOf->AddChild(i); } FBX_ASSERT( lParentEntry ); // Add it to the parent // Don't touch the connections // ----------------------------------------- if (lParentEntry) { lParentEntry->Set( lEntry->Get(FBX_TYPE(FbxPropertyInfo)) ); lParentEntry->Set( lEntry->Get(FBX_TYPE(FbxPropertyValue)) ); lParentEntry->Set( lEntry->Get(FBX_TYPE(FbxPropertyFlags)) ); } /* else { mInstanceOf->Add( lEntry->GetParentId(), lEntry->Get(FBX_TYPE(FbxPropertyInfo)), // The info lEntry->Get(FBX_TYPE(FbxPropertyValue)), // The Value 0, // The connections false, false ); } */ // Empty the current entry // Don't touch the connections // ----------------------------------------- ChangePropertyItemState(FBX_TYPE(FbxPropertyInfo), i,FbxPropertyFlags::eInherit); ChangePropertyItemState(FBX_TYPE(FbxPropertyValue), i,FbxPropertyFlags::eInherit); ChangePropertyItemState(FBX_TYPE(FbxPropertyFlags), i,FbxPropertyFlags::eInherit); } } } inline const FbxPropertyPage* GetInstanceOf() const { return mInstanceOf; } inline FbxPropertyPage* GetInstanceOf() { return mInstanceOf; } inline const FbxArray<FbxPropertyPage*>& GetInstances() const { return mInstances; } inline FbxArray<FbxPropertyPage*>& GetInstances() { return mInstances; } // Flags // ------------------------------------------ FbxPropertyFlags::EFlags GetFlags(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) const { FbxPropertyPage* lFoundIn = NULL; FbxPropertyFlags* lPropertyFlags = GetPropertyItem( FBX_TYPE(FbxPropertyFlags), pId, &lFoundIn ); FbxPropertyFlags::EFlags lFlags = FbxPropertyFlags::eNone; if( lPropertyFlags ) { if( !mInstanceOf ) // no inheritance. 
lFlags = lPropertyFlags->GetFlags(); else { lFlags = mInstanceOf->GetFlags(pId); lFlags = lPropertyFlags->GetMergedFlags(lFlags); } } return lFlags; } bool ModifyFlags(FbxInt pId=FBXSDK_PROPERTY_ID_ROOT,FbxPropertyFlags::EFlags pFlags=FbxPropertyFlags::eNone,bool pValue=true,bool pCheckFlagEquality=true) { if( pCheckFlagEquality ) { FbxPropertyPage* lFoundIn = NULL; FbxPropertyFlags* lFlag = GetPropertyItem( FBX_TYPE(FbxPropertyFlags), pId, &lFoundIn ); if( lFlag ) { if( lFoundIn == this ) { // set them in us. lFlag->ModifyFlags( pFlags, pValue ); // we override this entry, check if we need to revert FbxPropertyFlags* lInheritedFlags = mInstanceOf ? mInstanceOf->GetPropertyItem( FBX_TYPE(FbxPropertyFlags), pId ) : NULL; if( lInheritedFlags && lInheritedFlags->Equal( *lFlag, pFlags ) ) { lFlag->UnsetMask( pFlags ); if( lFlag->GetMask() == 0 ) ChangePropertyItemState( FBX_TYPE(FbxPropertyFlags), pId, FbxPropertyFlags::eInherit ); return true; } } else { // its not us. Just check if we need to set. FbxPropertyFlags lNewValues( pFlags ); if( lFlag->Equal( lNewValues, pFlags ) ) return true; } } } FbxPropertyFlags* lPropertyFlags = ChangePropertyItemState(FBX_TYPE(FbxPropertyFlags), pId, FbxPropertyFlags::eOverride); return lPropertyFlags ? lPropertyFlags->ModifyFlags( pFlags, pValue ) : false; } FbxPropertyFlags::EInheritType GetFlagsInheritType(FbxPropertyFlags::EFlags pFlags, bool pCheckInstanceOf, FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) const { FbxPropertyPage* lFoundIn = NULL; FbxPropertyFlags* lPropertyFlags = GetPropertyItem( FBX_TYPE(FbxPropertyFlags), pId, &lFoundIn ); if( !pCheckInstanceOf ) return lFoundIn != this ? FbxPropertyFlags::eInherit : ( lPropertyFlags ? lPropertyFlags->GetFlagsInheritType(pFlags) : FbxPropertyFlags::eInherit ); else { // This code basically counts the number of overrides for the // given flags. The original entry is always considered an override. // so if we see more than one, something overrode the original. // and thus we are an override. FbxPropertyPage* lRefPage = lFoundIn; bool lFoundOverride = false; while( lRefPage ) { lPropertyFlags = lRefPage->GetPropertyItem( FBX_TYPE(FbxPropertyFlags), pId ); if( !lPropertyFlags ) break; // gone too far, break. if( lPropertyFlags->GetFlagsInheritType( pFlags ) == FbxPropertyFlags::eOverride ) { if( this == lRefPage || lFoundOverride ) return FbxPropertyFlags::eOverride; // found two overrides or this page is the override. else lFoundOverride = true; // signal that we found the first override. } lRefPage = lRefPage->mInstanceOf; } return FbxPropertyFlags::eInherit; } } bool SetFlagsInheritType(FbxPropertyFlags::EInheritType pInheritType, FbxPropertyFlags::EFlags pFlags, FbxInt pId=FBXSDK_PROPERTY_ID_ROOT) { FbxPropertyPage* lFoundIn = NULL; FbxPropertyFlags* lPropertyFlags = NULL; if( FbxPropertyFlags::eOverride == pInheritType ) { lPropertyFlags = ChangePropertyItemState( FBX_TYPE(FbxPropertyFlags), pId, FbxPropertyFlags::eOverride ); // we should initialize our flag to the inherited value, if any. FbxPropertyFlags* lParentFlags = mInstanceOf ? 
mInstanceOf->GetPropertyItem( FBX_TYPE(FbxPropertyFlags), pId ) : NULL; if( lParentFlags && lPropertyFlags ) { FbxPropertyFlags::EFlags lParentValues = lParentFlags->GetFlags(); lPropertyFlags->SetFlags( pFlags, lParentValues ); return lPropertyFlags->SetMask( pFlags ); } return false; } else if( FbxPropertyFlags::eInherit == pInheritType ) { lPropertyFlags = GetPropertyItem(FBX_TYPE(FbxPropertyFlags), pId, &lFoundIn); if( !lPropertyFlags ) return false; if( lFoundIn != this ) return true; // not us lPropertyFlags->UnsetMask( pFlags ); if( lPropertyFlags->GetMask() == 0 ) // revert ChangePropertyItemState( FBX_TYPE(FbxPropertyFlags), pId, FbxPropertyFlags::eInherit ); return true; } return false; } inline void BeginCreateOrFindProperty() { if( 0 == mNameMap.mFirst ) { mNameMap.mSecond.Reserve(20); // push the existing properties into the map. Note: this includes the root property! FbxInt lFoundId = FBXSDK_PROPERTY_ID_ROOT; FbxPropertyEntry* lEntry = GetPropertyEntry(lFoundId); while(lFoundId != FBXSDK_PROPERTY_ID_NULL) { FbxPropertyInfo* lInfo = lEntry->Get(FBX_TYPE(FbxPropertyInfo)); //FBX_ASSERT( lInfo ); if (lInfo) { mNameMap.mSecond.Insert(FbxNameMapKey(lEntry->GetParentId(), lInfo->GetName()), lFoundId); } lFoundId = GetMinimumPropertyIdAndEntry(lFoundId, &lEntry); } mNameMap.mFirst++; } } inline void EndCreateOrFindProperty() { if( mNameMap.mFirst > 0 ) { if( --(mNameMap.mFirst) == 0 ) mNameMap.mSecond.Clear(); } } protected: FbxPropertyPage(FbxPropertyPage* pInstanceOf=0) : mInstanceOf(0) , mDataPtr(0) , mPropNextId(0) { mEntryMap.Reserve(32); mNameMap.mFirst = 0; // instances don't need to create a root property if( !pInstanceOf ) { mPropNextId = FbxNew< FbxPropertyIdGenerator >(); mPropNextId->IncRef(); // First item is the root information Add(FBXSDK_PROPERTY_ID_NULL,"",eFbxUndefined); } // Hook the instances // ------------------------ mInstanceOf = pInstanceOf; if (mInstanceOf) { mInstanceOf->mInstances.Add(this); mPropNextId = mInstanceOf->mPropNextId; mPropNextId->IncRef(); } } FbxPropertyPage(const char* pName, EFbxType pType) : mInstanceOf(0) , mDataPtr(0) , mPropNextId(0) { mEntryMap.Reserve(32); mNameMap.mFirst = 0; mPropNextId = FbxNew< FbxPropertyIdGenerator >(); mPropNextId->IncRef(); // First item is the root information Add(FBXSDK_PROPERTY_ID_NULL,pName,pType); } FbxPropertyPage(const char* pName, FbxPropertyPage* pTypeInfo) : mInstanceOf(0) , mDataPtr(0) , mPropNextId(0) { mEntryMap.Reserve(32); mNameMap.mFirst = 0; mPropNextId = FbxNew< FbxPropertyIdGenerator >(); mPropNextId->IncRef(); // First item is the root information Add(FBXSDK_PROPERTY_ID_NULL,pName,pTypeInfo); } ~FbxPropertyPage() { // Propagate our property entries. int i = 0, j = 0; for( i = 0; i < mInstances.GetCount(); ++i ) { for( j = 0; j < GetPropertyEntryCount(); ++j ) { if( mInstances[i]->ChangePropertyEntryState((FbxInt)j, FbxPropertyFlags::eOverride) ) { // Clone the info and values. Don't clone the connections, // since they aren't propagated. mInstances[i]->ChangePropertyItemState( FBX_TYPE(FbxPropertyInfo), (FbxInt)j, FbxPropertyFlags::eOverride ); mInstances[i]->ChangePropertyItemState( FBX_TYPE(FbxPropertyValue), (FbxInt)j, FbxPropertyFlags::eOverride ); // Since all entries have their own flags, just override the ones in the instance. mInstances[i]->SetFlagsInheritType(FbxPropertyFlags::eOverride, FbxPropertyFlags::eAllFlags, (FbxInt)j ); } } // Instances become their own copies. 
mInstances[i]->mInstanceOf = NULL; } FbxMapDestroy(mEntryMap); if (mInstanceOf) { int lIndex = mInstanceOf->mInstances.Find(this); mInstanceOf->mInstances.SetAt(lIndex, mInstanceOf->mInstances[mInstanceOf->mInstances.GetCount()-1]); mInstanceOf->mInstances.RemoveAt(mInstanceOf->mInstances.GetCount()-1); //mInstanceOf->mInstances.RemoveIt(this); } mPropNextId->DecRef(); mPropNextId = NULL; mInstanceOf = NULL; mInstances.Clear(); } inline bool Is(FbxPropertyPage* pPage) { // @@@@@@@@@@@@@@@ Must complete for sub types return this==pPage; } // Internal entry management private: /** Retrieves the smallest property id of which are larger than a specified one. * \param pId The specified property id * \param pIncrementIfNone Whether it returns FBXSDK_PROPERTY_ID_NULL or pId+1, if not found. * \return The property id described above. */ FbxInt GetMinimumPropertyId(FbxInt pId, bool pIncrementIfNone = true) const { if( pId == FBXSDK_PROPERTY_ID_NULL ) pId = FBXSDK_PROPERTY_ID_ROOT; FbxInt lMin = FBXSDK_PROPERTY_ID_NULL; const EntryMap::RecordType* lElement = mEntryMap.UpperBound(pId); if (NULL != lElement) { lMin = lElement->GetKey(); } FbxInt lParentMin = mInstanceOf ? mInstanceOf->GetMinimumPropertyId(pId,false) : FBXSDK_PROPERTY_ID_NULL; bool lParentNull = lParentMin == FBXSDK_PROPERTY_ID_NULL; bool lMinNull = lMin == FBXSDK_PROPERTY_ID_NULL; if( lParentNull && lMinNull ) return pIncrementIfNone ? pId+1 : FBXSDK_PROPERTY_ID_NULL; else if( lMinNull ) lMin = lParentMin; else if( !lParentNull ) lMin = lMin < lParentMin ? lMin : lParentMin; return lMin; } /** Retrieves the smallest property id of which are larger than a specified one, and retrieve its entry. * \param pId The specified property id * \param pEntry The returned property entry * \return The property id described above. */ FbxInt GetMinimumPropertyIdAndEntry(FbxInt pId, FbxPropertyEntry** pEntry) const { FbxInt lFoundId = FBXSDK_PROPERTY_ID_NULL; FbxPropertyEntry* lFoundEntry = NULL; if( pId == FBXSDK_PROPERTY_ID_NULL ) pId = FBXSDK_PROPERTY_ID_ROOT; const EntryMap::RecordType* lElement = mEntryMap.UpperBound(pId); if (NULL != lElement) { lFoundId = lElement->GetKey(); lFoundEntry = lElement->GetValue(); } FbxPropertyEntry* lParentEntry = NULL; FbxInt lParentMin = mInstanceOf ? mInstanceOf->GetMinimumPropertyIdAndEntry(pId, &lParentEntry) : FBXSDK_PROPERTY_ID_NULL; bool lParentNull = lParentMin == FBXSDK_PROPERTY_ID_NULL; bool lMinNull = lFoundId == FBXSDK_PROPERTY_ID_NULL; if( lMinNull && !lParentNull ) { lFoundId = lParentMin; lFoundEntry = lParentEntry; } else if( !lMinNull && !lParentNull ) { lFoundId = lFoundId < lParentMin ? lFoundId : lParentMin; lFoundEntry = lFoundId < lParentMin ? lFoundEntry : lParentEntry; } if (pEntry) *pEntry = lFoundEntry; return lFoundId; } int GetPropertyEntryCount() const { int lCount = 0; const EntryMap::RecordType* lElement = mEntryMap.Maximum(); if (NULL != lElement) { lCount = lElement->GetKey() + 1; } int lParentCount = mInstanceOf ? mInstanceOf->GetPropertyEntryCount() : 0; return lParentCount > lCount ? lParentCount : lCount; } FbxPropertyEntry* GetPropertyEntry(FbxInt pIndex,FbxPropertyPage **pFoundIn=0) const { const EntryMap::RecordType* lElement = mEntryMap.Find(pIndex); if (NULL != lElement) { if( pFoundIn ) { *pFoundIn = const_cast<FbxPropertyPage*>(this); } return lElement->GetValue(); } if( pFoundIn ) { *pFoundIn = 0; } return mInstanceOf ? 
            mInstanceOf->GetPropertyEntry(pIndex,pFoundIn) : 0;
    }

    FbxPropertyEntry* ChangePropertyEntryState(FbxInt pIndex, FbxPropertyFlags::EInheritType pInheritType)
    {
        FbxPropertyPage* lReferencePage = 0;
        FbxPropertyEntry* lReferenceEntry = GetPropertyEntry(pIndex, &lReferencePage);
        if (pInheritType == FbxPropertyFlags::eOverride) {
            if (lReferencePage == this) {
                return lReferenceEntry;
            } else if (lReferenceEntry) {
                // must create an entry
                FbxPropertyEntry* lEntry = FbxPropertyEntry::Create(lReferenceEntry->GetParentId(), 0, 0, 0);
                mEntryMap.Insert(pIndex, lEntry);
                return lEntry;
            }
        } else {
            if (lReferenceEntry && (lReferencePage == this)) {
                mEntryMap.Remove(pIndex);
                lReferenceEntry->Destroy();
            }
        }
        return 0;
    }

    FbxInt Add(FbxInt pParentId, FbxPropertyInfo* pInfo, FbxPropertyValue* pValue, FbxPropertyConnect* pConnect, bool pRecursive=true)
    {
        FbxInt lId = mPropNextId->GetNextIdAndInc();
        FbxPropertyEntry* lEntry = FbxPropertyEntry::Create(pParentId, pInfo, pValue, pConnect);

        // entries created through Add() are not overrides of another entry.
        // Thus, set all of their flags by default.
        FbxPropertyFlags* lFlags = lEntry->Get( FBX_TYPE(FbxPropertyFlags) );
        if( lFlags ) lFlags->ModifyFlags( FbxPropertyFlags::eAllFlags, false );

        mEntryMap.Insert( lId, lEntry );

        // We only add to the map if this Add is called after BeginCreateOrFindProperty()
        // in which case the size is always > 0 because it includes the root property
        if( mNameMap.mSecond.GetSize() > 0 )
            mNameMap.mSecond.Insert( FbxNameMapKey( pParentId, pInfo->GetName()), lId );

        // If the entry has multiple children (Struct Datatype)
        // Recurse for the entries and create an entry in this structure
        if (pRecursive) {
            FbxPropertyPage* lTypeInfo = pInfo->GetTypeInfo();
            if (lTypeInfo) {
                FbxInt lChildId;
                lChildId = lTypeInfo->GetChild();
                while (lChildId != FBXSDK_PROPERTY_ID_NULL) {
                    FbxPropertyInfo*    lPropertyInfo    = lTypeInfo->GetPropertyItem( FBX_TYPE(FbxPropertyInfo), lChildId );
                    FbxPropertyValue*   lPropertyValue   = lTypeInfo->GetPropertyItem( FBX_TYPE(FbxPropertyValue), lChildId );
                    FbxPropertyConnect* lPropertyConnect = lTypeInfo->GetPropertyItem( FBX_TYPE(FbxPropertyConnect), lChildId );
                    Add( lId,
                         lPropertyInfo ? lPropertyInfo->Clone(this) : 0,
                         lPropertyValue ? lPropertyValue->Clone(this) : 0,
                         lPropertyConnect ? lPropertyConnect->Clone(this) : 0 );
                    lChildId = lTypeInfo->GetSibling(lChildId);
                }
            }
        }
        return lId;
    }

    // Property management
    typedef FbxMap<FbxInt, FbxPropertyEntry*, FbxLessCompare<FbxInt>, FbxHungryAllocator> EntryMap;
    EntryMap mEntryMap;

    // instance management
    FbxPropertyPage* mInstanceOf;
    FbxArray<FbxPropertyPage*> mInstances;

    void* mDataPtr;

    // speed up structure
    typedef FbxMap<FbxNameMapKey, FbxInt, FbxNameMapCompare> NameMap;
    typedef FbxPair<unsigned int, NameMap> NameLookupPair;
    NameLookupPair mNameMap;

    FbxPropertyIdGenerator* mPropNextId;

    friend class FbxPropertyHandle;
};

#include <fbxsdk/fbxsdk_nsend.h>

#endif /* _FBXSDK_CORE_PROPERTY_PAGE_H_ */
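The header above only exposes the SDK's internal property-page plumbing. As a point of reference, here is a minimal, hedged sketch of how that machinery is normally reached from user code through the public FbxProperty interface; the scene and property names are invented for the example, and the calls shown are the ordinary public FBX SDK entry points rather than anything defined in this header.

#include <fbxsdk.h>

int main()
{
    // Every FbxObject owns a property page; FbxProperty is the public
    // handle over the FbxPropertyPage/FbxPropertyEntry machinery above.
    FbxManager* lManager = FbxManager::Create();
    FbxScene*   lScene   = FbxScene::Create(lManager, "ExampleScene");

    // Creating a user property adds an entry (info + value + flags)
    // to the owning object's property page.
    FbxProperty lProp = FbxProperty::Create(lScene, FbxDoubleDT, "MyUserValue");
    lProp.ModifyFlag(FbxPropertyFlags::eUserDefined, true);

    // Set()/Get() ultimately route through FbxPropertyPage::Set()/Get(),
    // including the value-equality check that avoids storing a redundant override.
    lProp.Set(42.0);
    FbxDouble lValue = lProp.Get<FbxDouble>();
    (void)lValue;

    lManager->Destroy();
    return 0;
}

The value-equality check in FbxPropertyPage::Set() is what keeps instanced pages cheap: an instance only stores an override entry when its value actually diverges from the page it inherits from.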
Learn How to Make The Most Popular East African Snack – Mandazi

Today I was craving Mandazi, so I decided to do a little bit of cooking. For those who are not familiar with it, Mandazi (in coastal areas people call it Mahamri) is an East African snack, available anywhere from street vendors and served in hotels, usually for breakfast. It goes well with tea or coffee. Although the closest comparison would be with Western donuts, the texture of Mandazi is much thicker, almost bread-like on the inside.

The recipe is very simple and leaves a lot of space for improvisation. When I was living on Zanzibar, a local lady taught me how to make Mandazi, but until today I had never made it on my own. To take advantage of this opportunity I decided to take some photos along the way, at least until the messy part :)

How to Make Mandazi

Ingredients – original Zanzibar Mandazi:

1 kilo of white flour
2 eggs
4 spoons of sugar
1 big pinch of cardamom
1 spoon of powdered yeast
1/2 a dcl of oil
1/2 a dcl of warm water
+ frying oil

Instead of water you can also use coconut milk. All ingredients must be at room temperature.

Mix all the ingredients together; you can do it by hand or use a dough mixer if you have one. I had time and used my hands. With kneading, the dough will become soft but not sticky. After kneading for about 10 to 15 minutes, the dough has to rest for 20 minutes in a dark bowl covered with a cloth, until it rises to double its size. Afterwards, divide the dough into smaller pieces and roll them into balls about the size of a golf ball. Use a rolling pin and flatten each ball into a patch about 0.5 cm thick. Cut vertically and horizontally to get small pieces. Set them aside and heat the oil. The secret of good Mandazi is that the dough rests twice: the first time right after kneading, and the second time a few minutes before frying.

Although I tried to stick to the original recipe, I couldn't help making some minor changes: instead of refined oil I used organic coconut oil, which is healthier, and I splurged with more spices. Along with exotic cardamom I love ginger and cinnamon, so I was generous with all of them. I used three spoons of sugar: two spoons of brown sugar and one spoon of vanilla-flavoured sugar. I was really tempted to use dark wholewheat flour, but this time I didn't want to risk a failure, so next time I will definitely be using dark flour.
Human And Driver Impairment Factors That Cause Auto Accidents

What Is Meant by Human Factors?

Human factors are one of the causes of motor vehicle accidents. This category encompasses all of the factors related to drivers and other users of the roadways (such as cyclists and pedestrians) that can possibly contribute to a collision. The following are examples of common human factors that cause auto accidents:

driver behavior and conduct
visual acuity
auditory acuity
decision-making skills and abilities
reaction speed

A popular 1985 published study reported crash data for British and American auto accidents. That study reported that driver error, drunkenness, and other similar human factors contribute, at least in part, to the vast majority of crashes (93 percent of them).

Often, drivers are more confident in their abilities than they should be. It should come as no big surprise, then, that almost all drivers who have experienced an auto accident did not consider themselves to be at fault in causing or contributing to the accident.

What Is Meant by Driver Impairment Factors?

Driver impairment is a descriptor that encompasses all of the factors that might prevent a driver from driving at his or her normal level of skill and competency. For example, some of the more common driver impairments include the following:

alcohol
drugs (illegal, prescription or over-the-counter)
otherwise driving under the influence of a substance
physical impairment
bad eyesight
advanced age
sleep deprivation
fatigue

Poor eyesight is a common cause of impaired driving. As a result, many state jurisdictions require simple vision screenings or testing when driver's licenses are first obtained and each time they are renewed. Many states restrict driver's licenses for those who need glasses or contact lenses and designate such restrictions on the driver's license card.

For parties with more chronic, long-term, or deteriorating physical conditions or impairments, the jurisdiction may require certain vehicle modifications before a particular driver is permitted to drive. An example of this latter situation occurs with a handicapped driver who may require hand controls or brakes on his or her vehicle instead of standard pedals.

Similarly, for older drivers past a given advanced age in a jurisdiction, drivers may be tested for reaction speeds and vision at each license renewal or at more frequent intervals. Behind-the-wheel tests at driver's license renewals or vision tests may be required of these older drivers to ensure their competency.

In the instance of impairment due to drugs, antihistamines, opioids, and muscarinic antagonists are some of the over-the-counter drugs that typically cause motor skill, coordination, and reaction time problems that may adversely impact a driver's abilities. Illegal drugs and the abuse of prescription drugs are also potential causes of dangerous driver impairment.
Distribution trends of gastric polyps: an endoscopy database analysis of 24 121 northern Chinese patients.

Traditionally, the most common gastric polyps are hyperplastic polyps (HPs). However, in the last two decades, fundic gland polyps (FGPs) have greatly increased in Western countries. We aimed to re-evaluate and compare the distribution of gastric polyps in a northern Chinese population in 2000 and 2010. Consecutive patients with gastric polyps detected in 2000 and 2010 were analyzed and biopsies were re-evaluated. Data including patients' age, sex, symptoms and the number, size, location, and Helicobacter pylori (H. pylori) infection status of polyps were recorded. A total of 6784 and 17 337 patients underwent esophagogastroduodenoscopy in 2000 and 2010, and 68 and 183 patients were diagnosed with gastric polyps, respectively. H. pylori infection decreased from 54.4% to 37.7% (P = 0.017). Overall, the spectrum of gastric polyps changed (P < 0.001). HPs accounted for 28.3% and decreased from 48.5% to 20.8%; adenoma/carcinoma and inflammatory polyps also decreased. FGPs were present in 50.6% and increased from 8.8% to 66.1%. The location of polyps also changed, with an increase of polyps in the gastric corpus. There was a high proportion of FGPs in females, while adenomas/adenocarcinomas were more common in males. The distribution pattern was similar in young and elderly patients. A change in the spectrum of gastric polyps was observed over the past 10 years in the northern Chinese population, most likely due to the higher proportion of FGPs. Further studies are required to investigate the reasons and confirm whether it will lead to a different management strategy in China.
NWS: High winds, strong rains to bear down on Hampton Roads Thursday

While Hampton Roads shouldn't expect any wintry weather, a rainstorm barreling toward the region could cause power outages and make the Thursday morning commute messy.

The National Weather Service at Wakefield has increased wind gust forecasts for Hampton Roads for winds as strong as 55 mph Thursday from 6 a.m. to noon, according to a National Weather Service news release.

"While we do not expect widespread severe wind, winds could be strong enough at times to cause some down trees and power outages," National Weather Service Meteorologist-in-Charge Jeff Orrock wrote in an email.

The wind will combine with nearly three-quarters of an inch of rain and could cause downed trees or tree limbs, the release stated. The rain is expected to start falling around 7 a.m. and continue until 1 p.m.

Localized flooding could occur, but there is no expectation of widespread flooding or river flooding in the region, according to the release.

Dominion Energy customers can report power failures by calling 1-866-DOM-HELP or dominionenergy.com/outage-center/report-and-check-outages.

Roberts can be reached at 757-604-1329, by email at [email protected] and on Twitter @SPRobertsJr.
Q: PHPExcel - Finding First Column With blank cell

I am attempting to locate the first blank cell in a column. My idea is to pick a column that I know has to have a value (in this case, JOB_NUMBER) and scan through it until a blank cell is found. My code below, in my mind, should do that. However, it never stops. I imagine it is stuck in the while loop, but I don't understand why.

Code:

<?php
require('./Classes/PHPExcel/IOFactory.php');

ini_set('max_execution_time', 800);
ini_set('memory_limit', 2000000000);

$inputFileType = 'Excel2007';
$inputFileName = $_FILES['file']['tmp_name'];

class MyReadFilter implements PHPExcel_Reader_IReadFilter
{
    public function __construct($fromColumn, $toColumn)
    {
        $this->columns = array();
        $toColumn++;
        while ($fromColumn !== $toColumn) {
            $this->columns[] = $fromColumn++;
        }
    }

    public function readCell($column, $row, $worksheetName = '')
    {
        // Read columns from 'A' to 'AF'
        if (in_array($column, $this->columns)) {
            return true;
        }
        return false;
    }
}

$filterSubset = new MyReadFilter('A', 'AF');

$objReader = PHPExcel_IOFactory::createReader($inputFileType);
$objReader->setReadFilter($filterSubset);
$objReader->setLoadSheetsOnly( array("NORTH") );
$objPHPExcelReader = $objReader->load($inputFileName);

$r = 3500;
while(isset($maxrow_north) != 1){
    $cellvalue = $objPHPExcelReader->getSheetByName('NORTH')->getCellByColumnAndRow(2, $r);
    if(isset($cellvalue) != 1){
        $maxrow_north = $r;
    } elseif($r > 4000) {
        echo "It's over 4000!";
    } else {
        $r = $r++;
    }
}
echo $maxrow_north;
?>

Some more background: I am having admins upload .xlsx, .xls or .csv files into an html form. The code, above, is the handler. I have limited the number of columns seen because the original creator of the .xlsx file thought it would be a great idea to have the columns go all the way out to XCF. The rows also go all the way out to somewhere around 10,000. So, I want to find the first blank row and stop there. TIA!

A: Don't use

if(isset($cellvalue) != 1){

A cell value always exists, even if it's an empty string or a null; and you're not testing the actual cell value, but the existence of a cell: simply get()ting a cell will create a new empty cell object if one didn't already exist. You need to test the actual value stored in the cell:

if($cellvalue->getValue() === NULL || $cellvalue->getValue() === '') {
    $maxrow_north = $r;

And if you're trying to find the first blank cell in the column, then break once you've found it rather than carry on iterating till you reach your max. (Note, doesn't check for rich text in cells.)

EDIT Example, that also allows for merged cells:

function testInMergeRangeNotParent($objWorksheet, $cell)
{
    $inMergeRange = false;
    foreach($objWorksheet->getMergeCells() as $mergeRange) {
        if ($cell->isInRange($mergeRange)) {
            $range = PHPExcel_Cell::splitRange($mergeRange);
            list($startCell) = $range[0];
            if ($cell->getCoordinate() !== $startCell) {
                $inMergeRange = true;
            }
            break;
        }
    }
    return $inMergeRange;
}

$column = 2; // Column to check
$max = 4000;
echo 'Get First blank row in column ', $column, PHP_EOL;

$r = 3500; // Starting row
while(true){
    $cell = $objPHPExcelReader->getSheetByName('NORTH')->getCellByColumnAndRow($column, $r);
    if ($cell->getValue() === NULL && !testInMergeRangeNotParent($objPHPExcelReader->getSheetByName('NORTH'), $cell)) {
        break;
    } elseif($r > $max) {
        echo "It's over $max !";
        break;
    }
    $r++;
}
echo 'First blank row in column ', $column, ' is ', $r, PHP_EOL;
{ "word": "Wadi", "definitions": [ "(in certain Arabic-speaking countries) a valley, ravine, or channel that is dry except in the rainy season." ], "parts-of-speech": "Noun" } | Mid | [
Recommended Posts The U.S. State Department is ramping up refugee admissions back to more normal levels after it had slowed to a trickle over the past month under President Donald Trump. WND has confirmed through a State Department spokesperson that the administration is set to more than double the number of refugees arriving in U.S. cities from the current 400 per week to 900 per week. On March 15 a federal judge, Derrick Watson in Hawaii, issued a nationwide injunction stopping the State Department from enforcing or implementing sections 2 and 6 of President Trump’s March 6 executive order. Section 6 lowers the cap on refugee arrivals to 50,000, down from the 110,000 level set by President Obama. After the court’s ruling, which was upheld Wednesday by the Ninth Circuit Court of Appeals, the cap reverts back to the Obama level of 110,000. Consequently, the State Department continues to accept refugees and this includes scheduling travel for refugees who have been screened and are otherwise approved for travel. “The Court Order issued on March 15 prohibits the enforcement or implementation of Section 6 of the EO,” the State Department spokesperson told WND. “Section 6 of the EO includes a cap on refugee admissions into the United States of 50,000 for FY 2017. In accordance with the Court Order, and consistent with both our operational capacity and our capacity under available funding, we have increased the current pace of refugee arrivals to approximately 900 individuals per week. ” Halfway through the fiscal year, the U.S. has already taken in 39,082 refugees, according to the State Department’s Refugee Processing Center database. If the 900 per week number holds through the remainder of the fiscal year it would mean the Trump administration would bring in 62,482 refugees for fiscal 2017, which ends on Sept. 30. What's the point of having an immigration policy if one person (not the president) can decide who's allowed in the country. Looks like we have been sold a bill of goods with Trump. On the one hand you have the administration talking about ending sanctuary cities while the same administration is flying in "refugees" in the middle of the night and dumping them in cities they deem are too white. Share on other sites In his book «Praktischer Idealismus», Kalergi explains that the citizens of the future “United States of Europe” will not be the people of the Old Continent, but a new mixed breed, the products of thorough and widespread miscegenation. He states that the peoples of Europe should interbreed with Asians and other non-White races, to create a multiracial population, with not clear sense of tradition or identity and therefore easily controlled by the ruling elite. Kalergi proclaims the need to abolish the right of nations to self-determination and outlines the break-up of nation states through the use of ethnic separatist movements and the destruction of the nations themselves through mass migration. In order for Europe to be easily controlled by the future elite, Kalergi proposes the creation of a homogeneous mixed breed population, and as to who should be the new elite? Kalergi is particularly illuminating on this point: The man of the future will be of mixed race. The races and classes of today will gradually disappear due to the elimination of space, time, and prejudice. The Eurasian-negroid race of the future, similar in appearance to the Ancient Egyptians, will replace the current diversity of peoples and the diversity of individuals. 
Instead of destroying European Judaism, Europe, against her will, refined and educated this people, driving them to their future status as a leading nation through this artificial evolutionary process. It’s not surprising that the people that escaped from the Ghetto-Prison, became the spiritual nobility of Europe. Thus, the compassionate care given by Europe created a new breed of aristocrats. This happened when the European feudal aristocracy crashed because of the emancipation of the Jews [due to the actions taken by the French Revolution] Although no textbook mentions Kalergi, his ideas are the guiding principles of the European Union. The belief that the peoples of Europe should be mixed with Africans and Asians, to destroy our identity, to break down traditional ways of living and create a single mixed race, is the reason for community policies that promote minority interests. The underlying motives are not really humanitarian, but because the power behind the ruthless regime dominating the EU plans the greatest genocide in history. A prestigious prize is awarded every two years by the Coudenhove-Kalergi Foundation to Europeans who have excelled in promoting this criminal plan. Among those awarded with such a prize are Angela Merkel and Herman Van Rompuy. The facilitation of genocide, is also the basis of the constant appeals from the United Nations, demanding that we accept millions of immigrants to help counter the low birth rate among Europeans. According to a report published in January 2000 by the population division of the United Nations in New York, under the title “Immigration replacement: A solution to declining and aging population,” Europe will need to accept 159,000,000 migrants by 2025. the citing of such precise numbers is evidence of a premeditated plan. Clearly a low birth-rate can easily be reversed with appropriate measures to support families and it is equally clear that the introduction of alien genes will do nothing to preserve our genetic heritage but destroy it. The consequence of current policies promoting multiracialism is to create a weakened disparate population without national, historical or cultural cohesion. In short, the policies of the Kalergi plan have been and still are, the basis of official government policies intent upon the genocide of the Peoples of Europe, through mass immigration. In his book «Praktischer Idealismus», Kalergi explains that the citizens of the future “United States of Europe” will not be the people of the Old Continent, but a new mixed breed, the products of thorough and widespread miscegenation. He states that the peoples of Europe should interbreed with Asians and other non-White races, to create a multiracial population, with not clear sense of tradition or identity and therefore easily controlled by the ruling elite. Kalergi proclaims the need to abolish the right of nations to self-determination and outlines the break-up of nation states through the use of ethnic separatist movements and the destruction of the nations themselves through mass migration. In order for Europe to be easily controlled by the future elite, Kalergi proposes the creation of a homogeneous mixed breed population, and as to who should be the new elite? Kalergi is particularly illuminating on this point: The man of the future will be of mixed race. The races and classes of today will gradually disappear due to the elimination of space, time, and prejudice. 
The Eurasian-negroid race of the future, similar in appearance to the Ancient Egyptians, will replace the current diversity of peoples and the diversity of individuals. Instead of destroying European Judaism, Europe, against her will, refined and educated this people, driving them to their future status as a leading nation through this artificial evolutionary process. It’s not surprising that the people that escaped from the Ghetto-Prison, became the spiritual nobility of Europe. Thus, the compassionate care given by Europe created a new breed of aristocrats. This happened when the European feudal aristocracy crashed because of the emancipation of the Jews [due to the actions taken by the French Revolution] Although no textbook mentions Kalergi, his ideas are the guiding principles of the European Union. The belief that the peoples of Europe should be mixed with Africans and Asians, to destroy our identity, to break down traditional ways of living and create a single mixed race, is the reason for community policies that promote minority interests. The underlying motives are not really humanitarian, but because the power behind the ruthless regime dominating the EU plans the greatest genocide in history. A prestigious prize is awarded every two years by the Coudenhove-Kalergi Foundation to Europeans who have excelled in promoting this criminal plan. Among those awarded with such a prize are Angela Merkel and Herman Van Rompuy. The facilitation of genocide, is also the basis of the constant appeals from the United Nations, demanding that we accept millions of immigrants to help counter the low birth rate among Europeans. According to a report published in January 2000 by the population division of the United Nations in New York, under the title “Immigration replacement: A solution to declining and aging population,” Europe will need to accept 159,000,000 migrants by 2025. the citing of such precise numbers is evidence of a premeditated plan. Clearly a low birth-rate can easily be reversed with appropriate measures to support families and it is equally clear that the introduction of alien genes will do nothing to preserve our genetic heritage but destroy it. The consequence of current policies promoting multiracialism is to create a weakened disparate population without national, historical or cultural cohesion. In short, the policies of the Kalergi plan have been and still are, the basis of official government policies intent upon the genocide of the Peoples of Europe, through mass immigration. But that is, by and large not happening. Sure, some degenerate whites are engaging in race mixing, but by the second or third generation, the offspring isn't some kind of mixed race thing, but nearly entirely mongoloid or negroid. They are eradicating the Caucasian's, and replacing them with far more hostile, and far more identity prone, mongoloids and negroids. The "Kalgeri Plan" needs to be commonly spoken about. There is one problem though: "blame the Jew's", is redundant, as they are the minor players, the middle men, the willful scapegoats of the Roman Catholic Church, the Royal and Noble Houses of Europe, and the international Bankers. The major players, need to be brought out into the light. Share this post Link to post Share on other sites But that is, by and large not happening. Sure, some degenerate whites are engaging in race mixing, but by the second or third generation, the offspring isn't some kind of mixed race thing, but nearly entirely mongoloid or negroid. 
They are eradicating the Caucasian's, and replacing them with far more hostile, and far more identity prone, mongoloids and negroids. The "Kalgeri Plan" needs to be commonly spoken about. There is one problem though: "blame the Jew's", is redundant, as they are the minor players, the middle men, the willful scapegoats of the Roman Catholic Church, the Royal and Noble Houses of Europe, and the international Bankers. The major players, need to be brought out into the light. Hassan al-Banna (1906-1949) in 1928 founded the Egyptian Society of the Muslim Brothers (also known as the Muslim Brotherhood, the Brotherhood, Ikwanul Muslimeen), which is Egypt’s oldest and most influential fundamentalist Sufi group. An offshoot of the Muslim Brothers, called the Egyptian Islamic Jihad (led by Egyptian Ayman Zawahiri), fused with Al Qaeda (led by Osama bin Laden) to form Qaeda al-Jihad in June 2001. What is less well known to some people is that the Muslim Brotherhood and Al Qaeda—and their fusion group Qaeda al-Jihad--are Sufi-based movements and that they became so through the influence of Sufi Egyptian Hassan Al-Banna. The strong Sufi influence helps explain the “disciplined emotionalism” of Qaeda al-Jihad leaders and their disciples, and its rejection of science, knowledge, and learning as the way to better the lives of Muslims around the world. The Masonic origins of the Islamists movements, and their true goal to undermine Islam and fight for Western Zionist Powers such as Britain and the United States of America. The Muslim Brotherhood has acted as a clever technique to recruit agent-provocateurs for the Illuminati. The lowest ranks may sincerely believe they are defending Islam, and confronting "Western imperialism". However, these various terrorist groups, through representing different factions, are part of a single network serving the same Illuminati cause. Hassan al-Banna (1906-1949) in 1928 founded the Egyptian Society of the Muslim Brothers (also known as the Muslim Brotherhood, the Brotherhood, Ikwanul Muslimeen), which is Egypt’s oldest and most influential fundamentalist Sufi group. An offshoot of the Muslim Brothers, called the Egyptian Islamic Jihad (led by Egyptian Ayman Zawahiri), fused with Al Qaeda (led by Osama bin Laden) to form Qaeda al-Jihad in June 2001. What is less well known to some people is that the Muslim Brotherhood and Al Qaeda—and their fusion group Qaeda al-Jihad--are Sufi-based movements and that they became so through the influence of Sufi Egyptian Hassan Al-Banna. The strong Sufi influence helps explain the “disciplined emotionalism” of Qaeda al-Jihad leaders and their disciples, and its rejection of science, knowledge, and learning as the way to better the lives of Muslims around the world. The Masonic origins of the Islamists movements, and their true goal to undermine Islam and fight for Western Zionist Powers such as Britain and the United States of America. The Muslim Brotherhood has acted as a clever technique to recruit agent-provocateurs for the Illuminati. The lowest ranks may sincerely believe they are defending Islam, and confronting "Western imperialism". However, these various terrorist groups, through representing different factions, are part of a single network serving the same Illuminati cause. There are no "Ethiopian Jew's". Ethiopian's who practice a religion similar to the Jew's, claim two different heritages and both are problematic: 1) Through Solomon via Sheba. 
One problem with that is, Queen Sheba wasn't from Africa, but Arabia, and there is no evidence she had sexual relations with Solomon, or married Solomon. 2) They claim heritage from Moses son's. Problem with that is, they where KICKED/EXILED/made to flee, from the Israelites because of their ungodly conduct. There is no evidence of "Jew's in Ethiopia", prior to the arrival of Christianity in Ethiopia, as the well traveled Jewish people, only discovered these lost "Ethiopian Jew's", after the 9th Century, a few hundred years after the arrival of a Christian missionary. Ethiopia, like every other part of Africa inhabited by blacks, was notorious for being rabid pirates that would plunder and murder every ship that passed by their harbors. The Christian missionary, was merely a Christian sailor, that was taken as a slave, and the blacks where highly impressed by his faith and conduct. The fact, "Ethiopian Jew's", where unheard of until the 9th Century, is important because: look at America. Slave owners taught blacks Christianity, and within 200 years of being exposed to Christianity, a minority of blacks to a degree, rejected it, and began to call themselves the "real Jew's". Akin to Ethiopia, in which, a small minority of Ethiopian Jew's, practiced a "primitive form of Judaism, reliant only on Scripture", without any oral or other Israelite or Jewish traditions. The Scripture, they would have received, from that Christian missionary. Lastly: There is no Jewish genetic markers, in the "Ethiopian Jew's". The only markers the "Black Jew's", lay claim to, are actually "Yemen" genetic markers, that Jew's in Yemen acquired through intermixing with the local Yemen population. Being a Jew is a matter of blood. An average, singular ethnic German, has more Jewish blood in that singular persons veins, then the total Ethiopian Jewish population combined. As for your other charges, Albert Pike was a homosexual devil worshiper, loyal to the British Monarchy. The Kabballah, is as Jewish as Pork Grinds, and is an abomination according to the Torah, as it is a form of divination, among other things. As for the origin of the Kabballah, it goes back to the time of the Maccabees, when the briefly independent Jewish Kingdom, conquered Edom, and forced Edom to convert to Judaism at the point of the sword. Later, when a half Edomite Herod, took voer the throne, the Jewish Temple, was purged of Jew's and replaced with Edomites. If an Edomite forced convert Rabbi, spoke of the Kabballah in the first Century, that Rabbi would of been stoned to death in under 10 seconds. That is why the Edomite Rabbi, waited until the Middle Ages, when the People of Europe, had enough, and began exiling the Jew's, and forcing them to live in the ghettos(urban reservations), forcing the average Jew to be under the total control, and at the mercy of, the Rabbi. At any time prior to that period of absolute control, if a Rabbi tried to speak of the kabbalah, the average Jew would of know what was what, and would of stoned that Rabbi to death. Biblically speaking, I wouldn't be surprised if the Kabballah was the religion of the Edomites, and the Talmud the "Wise sayings of the Elders of Edom", that the Torah strictly condemns. Even if Albert Pike was Jewish, homosexuals, have a deep seated hatred of any faith that doesn't affirm their actions, as they are constantly seeking affirmation from others that what they are doing is "acceptable". 
If a person needs acceptance from others, in order to form their own internal sense of "everything is alright with the world", then what they are doing is probably wrong, and their ego won't allow them to see it. Not all who claim to be Jew's, are Jew's. Remember in Scripture how it is taught that "God ordered the total genocide of the Canaanites"? Well.... The Israelites didn't listen. By the time of Maccabees, many of the Canaanite and a few other peoples, where "force converted as well". It is believed amongst some, that the Sephardi Jew's, are really Canaanite Converts. Looking at the behavior of the Sephardi, and the forced converts to Roman Catholicism from Spain, that formed the Mexican upper class, it is easy to see the Canaanite behavior in them. The Eastern European Jew's, are likely descendants of the Khazar. There is a difference between German Jew's, and Eastern European Jew's. A difference of heart and souls, that can readily be observed, if you learn to study body language to gauge other people's emotional state. There is Biblical passages, that refer to Israel being invaded/settled by peoples in the far future, using geographic terms, that seem to condemn many groups of "Jew's", and Scripture say's God will wipe them all out, only leaving the real Jew's alive. The Arab Jew's, are likely a byproduct of Jew's intermixing with Arabs, both by force, and willingly. Remember, the Arab's, when they conquer, rape everything with a womb. In such a situation, such Jew's would rightfully be considered perpetual mamzer's(hence why Jewish authorities, used them and other non Jew's claiming to be Jew's, for horrific radiation experiments). If you can't tell a Jew from a non Jew, the majority of the time your blind. The link between Jew's and Islam, is between the mamzer's, Canaanites and Edomite converts, as well as infiltration from the less then lovely devil worshipers, that resulted from the false messiah of 1666, that would love for nothing more, then to see Jew's who practice Torah to any degree, wiped out from the face of the Earth. Rabbi who praise islam to any degree, are auto suspects for falling into one of those categories. As for the Origins of Freemasonry, who knows. As for it being of Jewish origins, that is laughable, as the organization, claims to be originally a trades guild. Jew's in the Middle Ages, as the Freemasons are believed to originate from the 14th century, avoided manual labor like a plague in Europe. They routinely ignored or mocked, the various King's of Europe, who tried to give them land to become farmers, herders or tradesmen(useful to society). If the Freemasons have a hidden origin, it is likely Knight Templar survivors(Templars where exterminated in the 14th Century, Freemasons appeared in the 14th Century. Templars owned property, land and businesses, across Europe. It wouldn't be hard to imagine, that a few where "off the books", and the Templars fled to them.). That said, Freemasonry as it was, is dead, as it has been thoroughly infiltrated by followers of the false messiah of 1666/Bilderberg Group. As for the overarching plans of the Bilderberg Group, when they reach a certain point in their plans, THEY NEED a scapegoat. Jew's have one weakness: they turned Israel the land, into an idol. To great effect, before and during both world war's, the British where able to wave that idol around and get the Jew's to do whatever they wanted or needed them to do. 
The allure was so strong, that after being betrayed by the British after WW1 and after WW2, they will still bow before their idol, and do everything the British command of them, in exchange for "Greater Israel". Jew's don't care if they take the fall, and in some cases do the dirty work of the Bilderberg Group, all they care about is securing control of their idol. From their vantage point, if Jew's across the globe face persecution because they did the Bilderberg Group's dirty work, "great", as it increases the Jewish population of "Greater Israel". That is not going to happen though, as a leopard doesn't change it's spots, and they will be betrayed to horrific effect, a third time as well. | Low | [
0.5023364485981301,
26.875,
26.625
]
|
"Redline Smalltalk, Copyright (c) James C. Ladd. All rights reserved. See LICENSE in the root of this distribution." PositionableStream subclass: #WriteStream. | Low | [
0.5,
39.25,
39.25
]
|
More than 6 million Americans are drinking water polluted with highly fluorinated chemicals. The most-studied chemicals in the class are linked to kidney and testicular cancer, thyroid disease, decreased sperm quality, high cholesterol, decreased immune function in children, and other serious health problems. Highly fluorinated chemicals are found in firefighting foam used by military and domestic airports, furniture, carpets, outdoor gear, clothing, cosmetics, cookware and food packaging. The chemicals make their way from manufacturing facilities, consumer products and firefighting activities into the air, water, and food, and then into humans. The health impact of these chemicals, which are increasingly being found in our drinking water, is very concerning. Some scientists say we are carrying out an unintended chemical experiment on our population. Patrick Breysse, Director of the CDC’s National Center for Environmental Health, described highly fluorinated chemicals as “one of the most seminal public health challenges for the next decades.” A recent analysis indicated that the number of Americans with contaminated drinking water is much higher than scientists had estimated. One of the nation’s largest water testing labs estimated that a quarter of our nation’s water systems are likely to contain these chemicals at levels of concern to some scientists. A new, peer-reviewed letter calling for coordinated health research in U.S. communities with drinking water contaminated by highly fluorinated chemicals was published in the journal Environmental Health this week. Thirty-nine leading scientists and physicians signed this letter, which was sent to legislators on key committees in the House and Senate and in impacted regions to draw attention to the pollution of drinking water with these chemicals. The Pentagon and the Federal Aviation Administration also received this letter, because firefighting foams used at military bases and airports are responsible for a major share of the contamination. The government’s current response is not adequate. We need a coordinated strategy to study health effects and reduce exposure. This strategy would provide impacted communities with the information, blood testing, health studies and medical monitoring that they are urgently requesting. Members of Congress from both sides of the aisle have also been asking for action to address this pollution. Some good news is that the National Defense Authorization Act (NDAA) which passed the Senate and House this week includes a provision for $7 million to begin a five to seven-year-long health study of communities affected by these firefighting foam chemicals. The legislation would also establish the first-ever nationwide study on the human health effects of exposure to highly fluorinated chemicals from drinking water. The NDAA also calls for $72 million to be added to the Air Force and Navy’s environmental restoration accounts, to be spent on cleaning up impacted areas. Another provision in the NDAA meant to help prevent future contamination needs to be modified to achieve its goal. Highly fluorinated chemicals are currently required to be used for fighting aviation fires at both military and domestic airports because of a Department of Defense rule commonly referred to as the Milspec. According to the third provision in the NDAA, the department is required to submit a report on its development of new firefighting foams and phase out of old varieties of foam within six months.
The unfortunate reality is that the new varieties of foam currently in use are also highly fluorinated. These regrettable substitutes are equally persistent in the environment and may cause similar health and ecological problems as the older ones they are replacing. A better solution to the problem is switching to the fluorine-free foams increasingly used on oil drilling platforms and in military and domestic airports around the world. However, before our airports can consider switching to safer fluorine-free foams, the Defense Department’s Milspec rule needs to be updated to allow their use. With NDAA funding for coordinated health studies and remediation in impacted communities, as well as a change in the Milspec to allow fluorine-free firefighting foams, we would indeed be moving towards both solving the current problem and preventing future reoccurrences — for healthier drinking water and a healthier population. Arlene Blum, Ph.D., is a research associate in Chemistry at UC Berkeley and executive director of the Green Science Policy Institute, known for her scientific and policy work to limit the use of six classes of chemicals of concern. Tom Bruton, Ph.D., is a Science and Policy fellow at the Green Science Policy Institute. Bruton studies the remediation of soil and groundwater contaminated with highly fluorinated chemicals. | High | [
0.682170542635658,
33,
15.375
]
|
<?php /** * Cobub Razor * * An open source mobile analytics system * * PHP versions 5 * * @category MobileAnalytics * @package CobubRazor * @author Cobub Team <[email protected]> * @copyright 2011-2016 NanJing Western Bridge Co.,Ltd. * @license http://www.cobub.com/docs/en:razor:license GPL Version 3 * @link http://www.cobub.com * @since Version 0.1 */ /** * Resolutionmodel Model * * @category PHP * @package Model * @author Cobub Team <[email protected]> * @license http://www.cobub.com/docs/en:razor:license GPL Version 3 * @link http://www.cobub.com */ class Resolutionmodel extends CI_Model { /** * Construct funciton, to pre-load database configuration * * @return void */ function __construct() { parent::__construct(); } /** * GetActiveUsersPercentByOrientation * * @param string $fromTime fromTime * @param string $toTime toTime * @param string $productId productId * @param int $count count * * @return query */ function getActiveUsersPercentByOrientation($fromTime, $toTime, $productId, $count = REPORT_TOP_TEN) { $dwdb = $this -> load -> database('dw', true); $sql = "select r.deviceresolution_name, count(distinct f.deviceidentifier) / (select count(distinct ff.deviceidentifier) percent from " . $dwdb -> dbprefix('fact_clientdata') . " ff, " . $dwdb -> dbprefix('dim_date') . " dd, " . $dwdb -> dbprefix('dim_product') . " pp, " . $dwdb -> dbprefix('dim_deviceresolution') . " rr where ff.date_sk = dd.date_sk and ff.product_sk = pp.product_sk and ff.deviceresolution_sk = rr.deviceresolution_sk and dd.datevalue between '$fromTime' and '$toTime' and pp.product_id = $productId and pp.product_active = 1 and pp.channel_active = 1 and pp.version_active = 1) percentage from " . $dwdb -> dbprefix('fact_clientdata') . " f, " . $dwdb -> dbprefix('dim_date') . " d, " . $dwdb -> dbprefix('dim_product') . " p, " . $dwdb -> dbprefix('dim_deviceresolution') . " r where f.date_sk = d.date_sk and f.product_sk = p.product_sk and f.deviceresolution_sk = r.deviceresolution_sk and d.datevalue between '$fromTime' and '$toTime' and p.product_id = $productId and p.product_active = 1 and p.channel_active = 1 and p.version_active = 1 group by r.deviceresolution_name order by percentage desc limit 0,$count; "; $query = $dwdb -> query($sql); return $query; } /** * GetNewUsersPercentByOrientation * * @param string $fromTime fromTime * @param string $toTime toTime * @param string $productId productId * @param int $count count * * @return query */ function getNewUsersPercentByOrientation($fromTime, $toTime, $productId, $count = REPORT_TOP_TEN) { $dwdb = $this -> load -> database('dw', true); $sql = "select r.deviceresolution_name, count(distinct f.deviceidentifier) / (select count(distinct ff.deviceidentifier) percent from " . $dwdb -> dbprefix('fact_clientdata') . " ff, " . $dwdb -> dbprefix('dim_date') . " dd, " . $dwdb -> dbprefix('dim_product') . " pp, " . $dwdb -> dbprefix('dim_deviceresolution') . " rr where ff.date_sk = dd.date_sk and ff.product_sk = pp.product_sk and ff.deviceresolution_sk = rr.deviceresolution_sk and dd.datevalue between '$fromTime' and '$toTime' and pp.product_id = $productId and pp.product_active = 1 and pp.channel_active = 1 and pp.version_active = 1 and ff.isnew=1) percentage from " . $dwdb -> dbprefix('fact_clientdata') . " f, " . $dwdb -> dbprefix('dim_date') . " d, " . $dwdb -> dbprefix('dim_product') . " p, " . $dwdb -> dbprefix('dim_deviceresolution') . 
" r where f.date_sk = d.date_sk and f.product_sk = p.product_sk and f.deviceresolution_sk = r.deviceresolution_sk and d.datevalue between '$fromTime' and '$toTime' and p.product_id = $productId and p.product_active = 1 and p.channel_active = 1 and p.version_active = 1 and f.isnew = 1 group by r.deviceresolution_name order by percentage desc limit 0,$count; "; $query = $dwdb -> query($sql); return $query; } /** * GetTotalUsersPercentByResolution * * @param string $fromTime fromTime * @param string $toTime toTime * @param string $productId productId * @param int $count count * * @return query */ function getTotalUsersPercentByResolution($fromTime, $toTime, $productId, $count = REPORT_TOP_TEN) { $dwdb = $this->load->database('dw', true); $sql = "select r.deviceresolution_name,sum(sessions) sessions,sum(newusers) newusers from " . $dwdb->dbprefix('sum_deviceresolution') . " f, " . $dwdb->dbprefix('dim_date') . " d, " . $dwdb->dbprefix('dim_deviceresolution') . " r where d.datevalue between '$fromTime' and '$toTime' and f.product_id = $productId and d.date_sk = f.date_sk and f.deviceresolution_sk = r.deviceresolution_sk and r.deviceresolution_name<> 'unknown' group by r.deviceresolution_name order by sessions desc"; $query = $dwdb->query($sql); return $query; } /** * GetSessionNewuserByResolution * * @param string $fromTime fromTime * @param string $toTime toTime * @param string $productId productId * * @return query */ function getSessionNewuserByResolution($fromTime, $toTime, $productId) { $dwdb = $this->load->database('dw', true); $sql = "select IFNULL(sum(sessions),0) sessions,IFNULL(sum(newusers),0) newusers from " . $dwdb->dbprefix('sum_deviceresolution') . " f, " . $dwdb->dbprefix('dim_date') . " d, " . $dwdb->dbprefix('dim_deviceresolution') . " r where d.datevalue between '$fromTime' and '$toTime' and f.product_id = $productId and d.date_sk = f.date_sk and f.deviceresolution_sk = r.deviceresolution_sk and r.deviceresolution_name<> 'unknown' "; $query = $dwdb->query($sql); return $query; } /** * GetSessionByOrientiontop * * @param string $fromTime fromTime * @param string $toTime toTime * @param string $productId productId * @param int $count count * * @return query */ function getSessionByOrientiontop($fromTime, $toTime, $productId, $count = REPORT_TOP_TEN) { $dwdb = $this->load->database('dw', true); $sql = "select r.deviceresolution_name,sum(sessions) sessions from " . $dwdb->dbprefix('sum_deviceresolution') . " f, " . $dwdb->dbprefix('dim_date') . " d, " . $dwdb->dbprefix('dim_deviceresolution') . " r where d.datevalue between '$fromTime' and '$toTime' and f.product_id = $productId and d.date_sk = f.date_sk and f.deviceresolution_sk = r.deviceresolution_sk and r.deviceresolution_name<> 'unknown' group by r.deviceresolution_name order by sessions desc limit 10 "; $query = $dwdb->query($sql); return $query; } /** * GetNewuserByOrientiontop * * @param string $fromTime fromTime * @param string $toTime toTime * @param string $productId productId * @param int $count count * * @return query */ function getNewuserByOrientiontop($fromTime, $toTime, $productId, $count = REPORT_TOP_TEN) { $dwdb = $this->load->database('dw', true); $sql = "select r.deviceresolution_name,sum(newusers) newusers from " . $dwdb->dbprefix('sum_deviceresolution') . " f, " . $dwdb->dbprefix('dim_date') . " d, " . $dwdb->dbprefix('dim_deviceresolution') . 
" r where d.datevalue between '$fromTime' and '$toTime' and f.product_id = $productId and d.date_sk = f.date_sk and f.deviceresolution_sk = r.deviceresolution_sk and r.deviceresolution_name<> 'unknown' group by r.deviceresolution_name order by newusers desc limit 10 "; $query = $dwdb->query($sql); return $query; } } ?> | Low | [
0.52112676056338,
27.75,
25.5
]
|
It hasn't even been around a year, but Brooklyn's Barclays Center is already one of the top-grossing entertainment venues in the world. Brooklyn is the new Manhattan. Sort of. The Borough of Kings is closing the rent gap with Manhattan thanks to soaring prices in luxury buildings that are driving up the market in its more trendy neighborhoods. The highly sought-after borough, home to the new Barclays Center and the Brooklyn Nets, saw its average rent climb to $3,035 in July, a hefty 8.2 percent jump from July 2012, according to a report from real estate group Douglas Elliman. The average rent in Manhattan was $3,822 in July, up just 1.7 percent. Driving the surge was growing demand in Williamsburg, Greenpoint, Brooklyn Heights and Cobble Hill, fabulous news for Brooklyn's status-symbol crowd, but not so much for residents looking for affordable places to live. Actor Jim Callahan, whose wife is pregnant, said he's concerned about the cost of a bigger apartment for his family. "It is a great area and we feel very fortunate to live here," said Callahan, 33, of Cobble Hill. "But as far as the long run goes, it's getting more and more unrealistic. We just have to kind of shove a baby in there somewhere. "Even a block over, anything under $3000 is unheard of." "I couldn't find a studio for less than $1,300," said Paige Hexton, 23, who pays $975 a month to live with three roommates in Cobble Hill. Yuval Greenblatt, Douglas Elliman's director of sales, said Brooklyn used to be a renter's second choice. "Now it's a primary option," Greenblatt said. "There's a different lifestyle. Manhattan is congested. Manhattan has traffic. Brooklyn is a low-rise community." Manhattan rents have been on the rise for two years without a break, but the rate of growth has been slowing, as landlords and tenants are more likely to agree on lease renewals. Rents in Brooklyn are rising because of demand from those priced out of Manhattan, and would-be buyers who don't qualify for purchases with today's tight lending standards. Lesley Kunikis, 41, a graphic designer who lives in a one-bedroom apartment on Cheever Place in Cobble Hill, said her rent has doubled to $1,800 since she moved in.
0.5425531914893611,
31.875,
26.875
]
|
SAN FRANCISCO — A California college student has accused popular video-sharing app TikTok in a class-action lawsuit of transferring private user data to servers in China, despite the company’s assurances that it does not store personal data there. The allegations may deepen legal troubles in the United States for TikTok, which is owned by Beijing ByteDance Technology Co but operates entirely outside of China and has developed an especially devoted fan base among U.S. teenagers. The company is already facing a U.S. government national security probe over concerns about data storage and possible censorship of political sensitive content. The lawsuit, filed in the U.S. District Court for the Northern District of California last Wednesday and originally reported by The Daily Beast, alleges TikTok has surreptitiously “vacuumed up and transferred to servers in China vast quantities of private and personally-identifiable user data.” TikTok did not immediately respond to a request for comment on the allegations, but maintains that it stores all U.S. user data in the United States with backups in Singapore. The documents identify the plaintiff as Misty Hong, a college student and resident of Palo Alto, California, who downloaded the TikTok app in March or April 2019 but never created an account. Months later, she alleges, she discovered that TikTok had created an account for her without her knowledge and produced a dossier of private information about her, including biometric information gleaned from videos she created but never posted. According to the filing, TikTok transferred user data to two servers in China - bugly.qq.com and umeng.com - as recently as April 2019, including information about the user’s device and any websites the user had visited. Bugly is owned by Tencent, China’s largest mobile software company, which also owns social network WeChat, while Umeng is part of Chinese e-commerce giant Alibaba Group. The lawsuit also claims that source code from Chinese tech giant Baidu is embedded within the TikTok app, as is code from Igexin, a Chinese advertising service, which security researchers discovered in 2017 was enabling developers to install spyware on a user’s phone. The legal documents did not provide evidence of the data transfers or the existence of Baidu or Igexin source code in the app. Hong and her legal representatives could not immediately be reached for comment. | Mid | [
0.636559139784946,
37,
21.125
]
|
Q: drawRect - Draw Star I am drawing annotations on a view. The approach i am using to draw star is below. -(void)drawRect:(CGRect)rect { CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetLineWidth(context, 2.0); CGFloat xCenter = rect.size.width / 2; CGFloat yCenter = rect.size.height / 2; float width; if(rect.size.width > rect.size.height) { width = rect.size.height; } else { width = rect.size.width; } double r = width / 2.0; float flip = -1.0; double theta = 2.0 * M_PI * (2.0 / 5.0); // 144 degrees CGContextMoveToPoint(context, xCenter, r*flip+yCenter); for (NSUInteger k=1; k<5; k++) { float x = r * sin(k * theta); float y = r * cos(k * theta); CGContextAddLineToPoint(context, x+xCenter, y*flip+yCenter); } CGContextSetStrokeColorWithColor(context, self.color.CGColor); CGContextClosePath(context); CGContextStrokePath(context); } I want to have just borders of star removing other inner lines is there any way to achieve that? (apart from using moveToPoint and addLine methods) A: use this code: -(void)drawRect:(CGRect)rect { int aSize = 100.0; float color[4] = { 0.0, 0.0, 1.0, 1.0 }; // Blue CGColorRef aColor = CGColorCreate(CGColorSpaceCreateDeviceRGB(), color); CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetLineWidth(context, aSize); CGFloat xCenter = 100.0; CGFloat yCenter = 100.0; float w = 100.0; double r = w / 2.0; float flip = -1.0; CGContextSetFillColorWithColor(context, aColor); CGContextSetStrokeColorWithColor(context, aColor); double theta = 2.0 * M_PI * (2.0 / 5.0); // 144 degrees CGContextMoveToPoint(context, xCenter, r*flip+yCenter); for (NSUInteger k=1; k<5; k++) { float x = r * sin(k * theta); float y = r * cos(k * theta); CGContextAddLineToPoint(context, x+xCenter, y*flip+yCenter); } CGContextClosePath(context); CGContextFillPath(context); } This will create 1 blue star | Low | [
0.5211581291759461,
29.25,
26.875
]
|
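For the drawRect question quoted in the row above, one way to get only the outer border of a five-pointed star (no chords crossing the interior) is to walk ten vertices that alternate between the outer radius and the pentagram's inner-vertex radius, r*cos(72°)/cos(36°), roughly 0.382*r, and stroke that closed path. The sketch below reuses the questioner's drawRect: structure and is only a minimal illustration: it still relies on CGContextMoveToPoint/CGContextAddLineToPoint, which the question hoped to avoid, and the inner-radius ratio is an assumption that can be tuned for fatter or thinner points.

// Minimal sketch for a star outline; hypothetical UIView subclass method.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 2.0);

    CGFloat xCenter = rect.size.width / 2.0;
    CGFloat yCenter = rect.size.height / 2.0;
    CGFloat rOuter  = MIN(rect.size.width, rect.size.height) / 2.0;
    // Inner vertices of a regular pentagram sit at r*cos(72°)/cos(36°), about 0.382*r.
    CGFloat rInner  = rOuter * cos(2.0 * M_PI / 5.0) / cos(M_PI / 5.0);

    for (NSUInteger k = 0; k < 10; k++) {
        CGFloat radius = (k % 2 == 0) ? rOuter : rInner;  // alternate outer/inner vertex
        CGFloat angle  = k * M_PI / 5.0;                  // 36-degree steps
        CGFloat x = xCenter + radius * sin(angle);
        CGFloat y = yCenter - radius * cos(angle);        // flip y so the star points up
        if (k == 0) {
            CGContextMoveToPoint(context, x, y);
        } else {
            CGContextAddLineToPoint(context, x, y);
        }
    }
    CGContextClosePath(context);
    CGContextStrokePath(context);  // strokes the border only; no inner lines are drawn
}

Filling the same ten-point path with CGContextFillPath would give a solid star with the same silhouette, which is the effect the answer above approximates with an oversized line width.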
William Bowe is a Perth-based election analyst and occasional teacher of political science. His blog, The Poll Bludger, has existed in one form or another since 2004, and is one of the most heavily trafficked websites on Australian politics. A Galaxy poll for the Daily Telegraph has John Alexander clinging on to a 51-49 lead ahead of tomorrow’s Bennelong by-election, after a poll at the beginning of the campaign had it at 50-50. On the primary vote, Alexander is down two to 40% and Kristina Keneally is down one to 38%, with the Greens on 8%, Australian Conservatives on 7% and Christian Democratic Party on 3%. The sample is only 524, but the result is in line with a similar poll conducted by the same company but badged as Newspoll for The Australian earlier in the week. Another poll points to a cliffhanger in the make-or-break Bennelong by-election. | High | [
0.657142857142857,
28.75,
15
]
|
The most unpopular candidates in American history 5 September 2016 The Labor Day weekend marks the semi-official beginning of the final phase of the 2016 presidential election campaign. The next two months will feature four presidential or vice-presidential debates, hundreds of millions of dollars of mind-numbing attack ads, and new depths of political reaction, bombast and lies from both the Democratic and Republican parties. There is mounting evidence that the avalanche of political filth from the two main capitalist parties, which enjoy an effective political monopoly in the United States, has alienated record numbers of people. Opinion polls taken over the past week have shown support for both Democrat Hillary Clinton and Republican Donald Trump declining, with her numbers falling somewhat more rapidly than his, leading to headlines about Trump beginning to “close the gap.” Clinton and Trump are, it is now widely conceded, the two most unpopular presidential candidates in modern US history. Trump is viewed unfavorably by nearly two-thirds of voters polled, with 44 percent describing him as a racist and 59 percent saying his campaign appeals to bigotry. Yet Trump is only narrowly behind Clinton, whose unfavorability number hit 60 percent in polls last week. Nearly two-thirds of those polled—including many of those planning to vote for her—say that the former Secretary of State is corrupt, a liar and not to be trusted. What the corporate-controlled media cannot say, but which is undeniably true, is that the two candidates are so widely hated because they represent the increasingly right-wing policies of the US ruling elite, under conditions where American workers and youth are moving to the left. The actions of the two candidates last week only underscored the vast decay of the American political system. Trump gave a speech in Arizona on immigration which was an hour-long diatribe against undocumented immigrants, whom he blamed for unemployment, crime, budget deficits and terrorism. His fascistic rant concluded with a 10-point plan for the establishment of a police state in America, complete with detention camps for the millions whom Trump pledged to order rounded up in his first action as president. For her part, Clinton gave a speech on US military policy to the convention of the American Legion—a bulwark of right-wing anti-communism and militarism—in which she presented herself as a more aggressive and reliable commander-in-chief than Trump, whom she suggested was a Russian puppet. She threatened to use force in response to unsubstantiated charges of cyberattacks on the United States by Russia and China, and she hailed the growing list of Republican national security officials, including nearly all the architects of the Iraq War, who have endorsed her campaign. Clinton’s status as the consensus candidate of the US military-intelligence apparatus was further strengthened Friday by the public declaration of two former Republican Secretaries of State, Henry Kissinger and George Shultz, that they would not support Trump. They joined both living Republican former presidents, George H. W. Bush and his son George W. Bush, and the 2012 Republican presidential nominee, Mitt Romney, in refusing to endorse Trump. 
While the 93-year-old Kissinger and the 95-year-old Shultz did not explicitly endorse Clinton—perhaps at her request, since Kissinger is widely reviled as a war criminal for his role in the Vietnam War—their joint statement left little doubt about their preference: “We are dedicated to fostering a bipartisan foreign policy and we will devote ourselves to this effort now and after the election.” In the wake of the Trump and Clinton speeches, a Reuters/Ipsos poll showed the Democrat leading the Republican by only 40 percent to 39 percent, meaning that 21 percent favored a third-party candidate or were undecided. The rising poll numbers for Libertarian Gary Johnson and Green Jill Stein, the two media-recognized “third-party” candidates, demonstrate than tens of millions of voters, predominantly youth and workers, are disgusted by the “choice” offered by the two main corporate-controlled parties and are looking for an alternative. Polling averages show Johnson getting 10 percent and Stein 5 percent, for a combined 15 percent, 10 times what the same two candidates received when they represented the Libertarians and the Greens in the 2012 presidential election. Most of those planning to vote for either of these candidates fit the profile of supporters of Vermont Senator Bernie Sanders: a majority, 51 percent, were under age 34, and more described themselves as sympathetic to Sanders than to either Clinton or Trump. These figures underscore the political function of both the Libertarian campaign and especially the Green Party, the more superficially “left-wing.” They operate to contain the mass opposition to the Democrats and Republicans within the framework of capitalist politics. The Libertarians are an explicitly pro-capitalist party, advocating even more right-wing policies than the Democrats and Republicans in terms of dismantling public services and eliminating all restraints on the operations of corporate business. At the same time, they appeal to popular democratic and antiwar sentiments by claiming to oppose government spying and foreign wars. The Greens provide a critical service to the ruling elite in seeking to trap leftward-moving layers of the population, particularly those initially attracted to Sanders. Stein previously offered the Green Party nomination to Sanders if he would agree to continue his campaign in the general election. Her recent campaign appearances have been characterized by a high degree of demagogic antiwar posturing, aimed at capitalizing on Clinton’s embrace of the Pentagon and CIA. But the Green Party is no less a capitalist party than the Libertarians, Democrats or Republicans. They support the profit system, advocating only a higher degree of environmental protection. The only mention of socialism in the Green Party platform is an explicit disavowal of it. And Stein’s sudden indulging in antiwar rhetoric is neither convincing nor credible. Whenever Green parties have entered into bourgeois governments, they have embraced militarism wholeheartedly. Most notoriously, the German Greens paved the way for the reemergence of German imperialism as a significant military force, with Green Foreign Minister Joschka Fischer spearheading the German military role in the Kosovo war and then in Afghanistan. There is only one party in the 2016 election that truly speaks for the leftward movement among American workers and youth, and advances a clear socialist and antiwar program. 
That is the Socialist Equality Party and our candidates, Jerry White for President and Niles Niemuth for Vice President. The SEP candidates receive no plaudits or fawning interviews from the corporate media, for very good reason: our campaign advances a revolutionary socialist program to mobilize the working class and youth and put an end to the dictatorship of big business. We urge all those who are looking for a genuine alternative to capitalist politics to support and build our campaign. Patrick Martin | Mid | [
0.5383022774327121,
32.5,
27.875
]
|
Minister of Foreign Affairs Prince Saud Al-Faisal arrived in Doha, Qatar, today to head the Kingdom's delegation to the 25th meeting of the foreign ministers of member states of the Organization of the Islamic Conference (OIC). Inaugurating the event, the Emir of Qatar Sheikh Hamad Ibn Khalifa Al-Thani called for a fight against poverty, famine, backwardness, terrorism, environmental hazards and fatal diseases, and advised the leaders of Islamic countries to focus their attention on economic development, praising the OIC's specialized agencies for their assistance in this. Concerning the Middle East situation, the Qatari Emir said peace should be established in the region in accordance with UN resolutions and the land-for-peace formula. He went on to express alarm at ethnic conflicts, condemning all forms of terrorism and urging the meeting to draft an agreement to combat it. For his part, OIC Secretary-General Ezzedine Laraki underlined the necessity of a solution to the issue of Kuwaiti prisoners of war, and urged OIC member states to strive for an end to the suffering of the Iraqi people. He went on to say that the Palestinian question is of the highest priority, and called on Israel to abide by the land-for-peace principle and respect the agreements it signed with the Palestinians, in order to ensure security and stability in the region. Iranian Foreign Minister Dr. Kamal Kharazi, speaking on behalf of Iranian President Muhammad Khatami, declared that the Islamic nations are in great need of a new strategy for cooperation, and called for combating terrorism, aiding Islamic lands that are under occupation, ridding the Middle East of weapons of mass destruction, and urging Iraq to implement the UN resolutions in order to end the hardships of the Iraqi people. Foreign Minister of Guinea Lamine Camara, speaking on behalf of all African nations, urged Islamic states to rise to the challenge and find solutions to the problems of Iraq, Libya and Kosovo. In a speech delivered on his behalf by UN representative Lakhdar Al-Ibrahimi, Secretary-General of the United Nations Kofi Annan praised the unanimity of all members of the UN Security Council in supporting the latest deal concluded with Iraq on implementation of UN resolutions, and gave credit to those countries which have extended food and medicine to the Iraqi people. Concerning the Afghan issue, he expressed regret that no progress had been achieved, pointing out the UN is striving to bring all factions to the negotiating table with a view to establishing peace and putting an end to the sufferings of the Afghan people. Remarking that a final solution to the question of Bosnia-Herzegovina is still lacking, he said that close cooperation between the OIC, the UN and other international organizations would lead to the establishment of peace and security the world over. | Mid | [
0.644067796610169,
33.25,
18.375
]
|
--- abstract: 'Excitons in a complex organic molecular crystal were studied by inelastic x-ray scattering (IXS) for the first time. The dynamic dielectric response function is measured over a large momentum transfer region, from which an exciton dispersion of 130meV is observed. Semi-empirical quantum chemical calculations reproduce well the momentum dependence of the measured dynamic dielectric responses, and thus unambiguously indicate that the lowest Frenkel exciton is confined within a fraction of the complex molecule. Our results demonstrate that IXS is a powerful tool for studying excitons in complex organic molecular systems. Besides the energy position, the IXS spectra provide a stringent test on the validity of the theoretically calculated exciton wavefunctions.' author: - 'K. Yang' - 'L. P. Chen' - 'Y. Q. Cai' - 'N. Hiraoka' - 'S. Li' - 'J. F. Zhao' - 'D. W. Shen' - 'H. F. Song' - 'H. Tian' - 'L. H. Bai' - 'Z. H. Chen' - 'Z. G. Shuai' - 'D. L. Feng' title: 'Inelastic X-Ray Scattering Study of Exciton Properties in an Organic Molecular Crystal' --- In the past several decades, optical applications of organic materials have been extensively explored. [@optical; @pvc; @tft; @spiro; @sensor; @lumi; @polymer] Organic light emitting devices, photovoltaic cells, photochromic materials, biosensors, and nonlinear optical devices among many others have generated a lot of interests. Great effort has been invested on designing new functional materials with special optical properties. As the optical processes are largely dominated by excitonic excitations in these materials, the detailed information of excitons is crucial. Particularly, it would greatly benefit the material design if one knows where the exciton is located in a molecule, what its spatial extent is, and how the exciton energy and dispersion are associated with the local structure. These basic yet crucial questions on exciton cannot be addressed by conventional techniques such as Raman scattering and absorption, as they can only give information near zero momentum transfer. So far, high energy electron-energy-loss spectroscopy (EELS) has been applied successfully to thin film samples of conjugated oligomer or polymer such as $\alpha$-$6T$, $6P$, carotenoid, and TPD. [@fink99; @fink00; @fink04; @quasiband]. EELS is limited to the low momentum transfer $q$ due to strong multiple scattering at high $q$, which would severely smear out the spectrum, and the diminishing matrix element ($\sim q^{-4}$). As a result, EELS studies were mostly focused on planar $\pi$ conjugated molecules where the excitons are more extended in space, and information at low $q$ is sufficient for understanding the exciton behaviors. On the other hand, the majority of organic optical materials are composed of small molecules with complex local structures. Even for polymers, the optically functional portions are mostly small sectors attached to the backbones for better mechanical performance. For small molecules or small molecular clusters, the size of the excitons is presumably very small, and high-$q$ information is crucial for understanding their behaviors. The properties of excitons in these systems remain largely unexplored experimentally. Theoretically, although many quantum chemical methods have been employed to study these systems, their feasibility lacks strong experimental support. Inelastic X-ray scattering (IXS) has proven to be a powerful tool for investigating the electronic excitations in inorganic systems. 
[@schulkereview] For example, IXS data have helped our understanding of the metal-insulator transition [@IssacsMI], plasmon excitations [@shulkeplas], and band gap [@bandgap]. IXS is a clean and direct probe of the dynamic structure factor $S(q,\omega)$, which is proportional to $q^2\times Im(\epsilon^{-1})$, $\epsilon$ being the dynamic dielectric function. In comparison with EELS, IXS is almost free from multiple scattering effects, and it can generate reliable information at high $q$. Therefore, IXS is particularly advantageous for studying excitons that are more constrained in space. In this paper, we report the first IXS measurement of the Py-SO organic molecular crystal. The dynamic dielectric responses of such complex small molecules were measured over a large momentum transfer region. The dispersion of the lowest exciton was observed to be about 130 meV. Quantum chemical calculations based on ZINDO (Zerner’s intermediate neglect of differential overlap)/SCI (single configuration interaction)[@Ridley73] were performed on both a molecular aggregate and a single molecule. We found that the ZINDO/SCI method captured very well the internal structure of the excitons. As the momentum dependence of the dielectric function gives strong constraints on the exciton wavefunctions, the good agreement between the calculated IXS spectra and the experiments indicates unambiguously that the lowest Frenkel exciton distributes in a fraction of a single molecule. Our results provide comprehensive information for understanding the properties of excitons in such complex molecular systems. ![A unit cell of Py-SO molecular crystal.[]{data-label="structure"}](yfig1.eps){width="7cm"} Spirooxazines (SO) are a type of well-known photochromic compounds. The open-ring photomerocyanine form of spirooxazines (Py-SO, C$_{21}$N$_4$O$_2$H$_{22}$) was synthesized recently[@pyso], which exhibited alkali-induced chromism and thermochromism in an alkali medium. High quality single crystal Py-SO samples, typically in the size of $0.8mm \times 2mm \times 150\mu m$, were prepared by recrystallization. Fig.\[structure\] shows one unit cell, which contains two molecules. The bottom right Py-SO molecule, from left to right, consists of one phenyl ring, one aromatic ring, and a C-N bond bridge to the quinoid on the right. At room temperature, the triclinic lattice parameters are $a = 8.356 $Å, $b = 9.510 $Å, $c = 12.096 $Å, $\alpha =89.93^o$, $\beta =75.58^o$, $\gamma =81.92^o$. IXS measurements were carried out in transmission mode at the Taiwan Beamline BL12XU at SPring-8.[@CaiExp] A Si(444) spherical analyzer with 2m radius of curvature was used. A Si(400) high-resolution monochromator was used to scan incident photon energy around 7908.75eV. The total energy resolution was estimated to be about 170meV based on the FWHM of the quasi-elastic lines of the sample. Momentum resolution was about 0.14Å$^{-1}$. Data were taken at room temperature. Great care was taken to avoid beam damage to the sample during the experiments by changing the probing spot every scan, since the beam spot was only about $120\mu m \times 80\mu m$. The time for one scan was less than 4 hours, and the damage effects were negligible. Data count rate was about 10Hz. Absorption is only about $6\%$, therefore it is neglected in the following analysis. $Im(\epsilon^{-1})$ data of the Py-SO molecular crystal measured by IXS are shown in Fig.\[spectra\]a with momentum transfer *q* along the $a$ axis.
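As a reminder of the proportionality invoked above between the measured signal and the dielectric response, the standard relation can be written schematically as follows; prefactors and sign conventions vary between texts, so this is quoted only to fix the form, not the normalization used here:

$$\frac{d^{2}\sigma}{d\Omega\, d\omega}\;\propto\; S(q,\omega)\;\propto\; q^{2}\, Im\!\left[-\frac{1}{\epsilon(q,\omega)}\right].$$

The loss function $-Im[1/\epsilon(q,\omega)]$ is written with its conventional sign; the text above quotes $Im(\epsilon^{-1})$ up to that convention.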
The spectral features are determined by the internal structure of the optically excited singlet excitons and interband transitions. There are three distinct features at about 2.2eV, 4.6eV and 6.6eV in the measured energy window, labeled as I, II, and III respectively. After removing the quasi-elastic Rayleigh background, the resulting spectra are shown in Fig.\[spectra\]b-c. A weak feature II’ and rising spectral weight beyond 8 eV can also be observed in Fig.\[spectra\]c. Excitons, or excited states in general, have been a major challenge for computational physics or chemistry because of the difficulty in treating electron correlation for excited states. Hutchison and coworkers have made a systematic investigation of the excited states of 60 organic conjugated molecules with six commonly applied excited-state computational methods. When compared with experiments, ZINDO/SCI combined with Austin-Model-1-optimized geometry was found to be the best choice in predicting the low-lying excited states, even outperforming the most commonly applied first-principles time-dependent density functional theory.[@Hutchison02] Therefore it is adopted in our calculations. The IXS spectra calculated along the $a$ axis are shown in Fig.\[spectra\]d for a molecular aggregate of a six-unit-cell stack with shortest intermolecular separations, [@aggregate] and a width of 0.17eV was included in order to compare with the experiment. One can also clearly identify several main spectral features A, B and B$^\prime$, C and D. There is almost a one-to-one correspondence between the experimental features I-III and theoretical features A-C, respectively. The energy centroid positions of features B and C match those of features II and III almost perfectly. The weak feature B$^\prime$ at about 5eV becomes pronounced from 1.12Å$^{-1}$ and above. The corresponding feature II$^\prime$ becomes visible at high momentum transfers when feature II is weak. Both features C and D involve many excitations. Correspondingly, the spectra in the experiment exhibit a very broad feature III followed by rising spectral weight. These qualitative and, to a great extent, quantitative agreements between the theoretical spectra and the experimental results indicate that ZINDO/SCI calculations characterize the Py-SO system quite well. The experimental feature I disperses from 2.2eV at $q=0.28$Å$^{-1}$ to 2.07eV at $q=0.7$Å$^{-1}$. This is well reproduced in the theoretical spectra, where the peak position of A disperses by 0.12eV in the corresponding momentum range, except that the theoretical position is about 0.48eV higher. The small exciton dispersion reflects the weak intermolecular coupling. [@quasiband] In the aggregate calculation, the strongest intermolecular coupling between various orbitals in neighboring molecules is estimated to be 55meV. Therefore, the excitons are still Frenkel excitons that are confined mostly in a single molecule in this case. As a result, single molecule excitations could be computed to study the local distribution of the electrons and holes. In fact, $S(q,\omega)$ from single molecule calculations has been shown to agree very well with the EELS measurements for conjugated oligomers and polymers.[@Shuai98] The IXS spectra calculated for molecular excitations of a single Py-SO molecule along the $a$ axis are shown in Fig.\[spectra\]e, where energy levels of the excited states are indicated by straight lines. As expected, the spectra are very similar to the aggregate calculations.
The energy gap between the LUMO and HOMO orbitals is calculated to be 5.62 eV in ZINDO/SCI, denoted by the dashed line in Fig.\[spectra\]. Moreover, our calculation shows that the lowest energy feature A corresponds to a discrete exciton, unlike in long chain systems,[@mukamelprl; @mukameljcpa] where the peaks in the spectra might consist of many excitons. On the other hand, feature B contains mostly two major excitons, and features C and D are made up of tens of excitations above the gap. In the experiment, as individual excitations could not be distinguished in features II and III, their dispersion could not be resolved even if there were any. As the dynamic structure factor is *directly* determined by the ground state and excited state wavefunctions, it puts stronger constraints on theoretical exciton distributions than just the exciton energy positions derived from conventional optical measurements. The momentum dependence of the integrated weight is compared with the single-molecule calculation in Fig.\[orbital\]a-c for the three main features, respectively. After rescaling, the theory agrees with the experiments very well. However, as shown in Fig.2, the theoretical high energy features are more pronounced than in the experiment, producing higher integrated weights in Fig. 3b-c compared to feature A. We attribute this overestimation to the more delocalized nature of the high energy excitons, which are less bound than the lowest exciton. Therefore, our single-molecule calculation would overestimate their on-site occupation, which in turn causes a higher IXS matrix element. Indeed, the current aggregate calculation does give lower integrated weight, but a more detailed calculation on a larger aggregate is needed to fully address this issue. As the large intermolecular distance corresponds to a momentum smaller than the sampled range, the calculated local distribution of the high energy excitons in a single molecule could still be compared with the experimental data. For further confirmation, IXS spectra were sampled at two momentum transfers perpendicular to the $a$ direction in the $ac$ plane, which also show good agreement with the theory (Fig. 4). The fact that the theory based on a single molecule matches the experiment so well indicates that the exciton distribution within the molecule is well captured, as a result of the weak coupling between the molecules. Fig.\[orbital\]d-h show the exciton wavefunctions, presented such that the false color scale indicates the probability $P(x,y)$ of finding an electron at atom site $x$ and a hole at atom site $y$. The gray scale of the solid circles in the inset shows the probability $P(x)\equiv\sum_{y}P(x,y)$ of finding the electron or hole at site $x$ on the Py-SO molecule. These plots give clear information on how the electron and hole of a given exciton are distributed in the molecule. For feature A, the exciton is mostly situated in the middle region. For feature B, on the other hand, both of the two main excitons are extended over the entire molecule and are much larger than the lowest energy exciton (Fig. 3e-f). For comparison, the two main inter-gap excitations of feature III are shown in Fig. 3g-h, one being confined to the phenyl ring, the other being extended. ![(a) Experimental data with background removed. Momentum transfer $q$ is perpendicular to the $a$ axis. 
(b) ZINDO/SCI simulated $S(q,\omega)/q^2$ based on a single molecule with $q$ perpendicular to $a$ axis in the $ac$ plane[]{data-label="f4"}](yfig4.eps){width="8.5cm"} The measured intensity distribution of $S(q,\omega)$ corresponds to the distribution of the exciton center-of-mass in the system. For simple linear molecular systems such as oligomer and polymer, $S(q,\omega)$ along the chain direction relates directly with the exciton distribution over the molecule. The situation is more complicated for a molecular system with complex internal structures such as Py-SO. The structure of $S(q,\omega)$ measured along a certain $\hat{q}$ direction reflects distribution of the exciton center-of-mass in this direction *summed* across the entire molecule. Because the momentum transfer does not correspond to the relative electron-hole motion, quantum chemical calculations is needed to retrieve the full quantitative details of the exciton wavefunctions.[@mukamelprl] Nevertheless, there are still some qualitative correspondence. For example, the significant occupation of exciton A on atomic sites \#15,16,19,20,21,24 (Fig.3d) generally makes its center-of-mass more delocalized than the others along $\hat{a}$-direction (Fig.1), and thus the momentum distribution of $S(q,\omega)$ is narrower in Fig.3a. Inelastic x-ray scattering is a weak probe. From the experimental point of view, this is the first time that IXS is demonstrated to be feasible for organic molecular crystal in a third generation synchrotron. The experimental results are very clean and therefore can be directly compared with theory. The dispersion of the exciton feature gives a good measure of the strength of the intermolecular coupling. Combined with suitable quantum chemical calculations, reliable and comprehensive properties of excitons can be obtained, which are crucial for understanding their optical properties, and for designing materials of desired optical properties based on exciton transfer or dissociation properties. For example, if excitons in this useful energy range (2.2eV here) are localized within a particular complex structure, molecular clusters with a similar structure might be attached to polymers without affecting their local exciton (optical) behavior. [*Acknowledgements:*]{} DLF would like to thank Profs. G. A Sawatzky, X. Sun, C. Q. Wu, and H. Chen for very stimulating discussions. This work was supported by the NSFC, and the 973 Project of MOST of China(Grant No: 2002CB613406), and by the Shanghai Science and Technology Committee. Experiments at SPring-8 were partially supported by the NSC of Taiwan (Grant No.: NSC94-2112-M-213-012). The computation has been carried out in the CNIC Supercomputing Center of the CAS. M. Pope and C. E. Swenberg, *Electronic Processes in Organic Crystals and Polymers* (Oxford University Press, New York, 1999). J. J. M. Halls *et al.*, Nature (London) **376**, 498 (1995); C. J. Brabec, N. S. Sariciftci and J. C. Hummelen, Adv. Funct. Mater. **11**, 15 (2001). G. Horowitz, Adv. Mater. **10**, 365 (1998). G. Benkovic, V. Krongauz and V. Weiss, Chem. Rev. **100**, 1741 (2000). R. H. Friend *et al.*, Nature (London) **397**, 121 (1999). T. Nakano and Y. Okamoto, Chem. Rev. **101**, 4013 (2001) Q. Zhou, T. M. Swager, J. Am. Chem. Soc. **117**, 12593 (1995). D. T. McQuade, A. E. Pullen, T. M. Swager, Chem. Rev. **100**, 2537 (2000). M. Knupfer *et al.*, Phys. Rev. Lett. **83**, 1443 (1999). M. Knupfer, J. Fink, E. Zojer, G. Leising and J. L. Bredas, Phys. Rev. B **61**, 1662 (2000). M. 
Knupfer, J. Fink, Synth. Met. **141**, 21 (2004). E. Zojer *et al.*, J. Phys.: Condens. Matter **12** 1753 (2000). W. Schulke, J. Phys.: Condens. Matter **13**, 7557 (2001). E. D. Isaacs, P. M. Platzman, P. Metcalf and J. M. Honig, Phys. Rev. Lett. **76**, 4211 (1996). W. Schulke, H. Schulte-Schrepping, and J. R. Schmitz, Phys. Rev. B **47**, 12426 (1993). W. A. Caliebe, J. A. Soininen, Eric L. Shirley, C.-C. Kao and K. Hamalainen, Phys. Rev. Lett. **84**, 3907 (2000). J. Ridley, M. Zerner, Theor. Chim. Acta 32, 111 (1973). H. F. Song, K. C. Chen, and H. Tian, Dyes Pigm. **67**, 1 (2005). Y. Q. Cai et al., in Synchrotron Radiation Instrumentation: Eighth International Conference on Synchrotron Radiation Instrumentation, AIP Conf. Proc. No. 705 (AIP, New York, 2004), p. 340. G. R. Hutchison, M. A. Ratner, and T. J. Marks, J. Phys. Chem. A 106, 10596 (2002). L. P. Chen *et al*. (to be published). E. Zojer, Z. Shuai, G. Leising, and J. L. Brédas, J. Chem. Phys. 111, 1668 (1999). V. Chernyak, S. N. Volkov, and S. Mukamel, Phys. Rev. Lett. **86**, 995 (2000). V. Chernyak, S. N. Volkov, and S. Mukamel, J. Phys. Chem. A **105**, 1988 (2001). | Mid | [
0.6525423728813561, 38.5, 20.5 ] |
I'm hanging out passing the time Menu What do I look like? That was my first thought today when a young man, appearing to be about 15 years old, sitting in the front passenger seat of the car that pulled up next to me at the gas station asked me a question. Another boy, looking 16 or 17 hopped out and started pumping gas and as he did the younger boy yelled over to me, “hey, do you have a cigarette?” I was flabbergasted and said no as I was shaking my head in amazement at the question. No one has asked me that question in many years. Before I sit here in traction and analyze why he asked me, what do I look like, somebody that smokes, was it my sunglasses, my youthful appearance, nope, I have a shitty spinal column, but I’m not delusional. I’ve already recognized he simply found himself in a convenient situation to ask. Yet, I do find myself focused on how sad the random experience left me feeling. After I got over the shock of the question I wanted to ask him some questions, I wanted to preach to him, I wanted to tell him what a gift his health was, how fragile it will seem someday. I wanted to tell him, heck no I don’t have a cigarette, that stuff will kill you and you are too young, yada, yada, yada….but I felt like all he’d hear was the parental voice from the Charlie Brown animations and just see it as some crazy person at the gas station “yelling” at him. All the things any reasonable person would want to say to a teenager about the dangers of smoking crossed my mind, but also came with it the bigger question, WHY! Why on earth in 2011 is a kid still asking to bum a cigarette and in a gas station where we could blow up no less! | Mid | [
0.55813953488372, 33, 26.125 ] |
Tana, Hope you are well. I wondered whether you had had any progress on the final form of the Deutsche Schedule yet? or any progress on the FUNB documents? Thanks for your help in keeping us up to date on the status here. Best regards, Clare -----Original Message----- From: [email protected] [mailto:[email protected]] Sent: Monday, October 30, 2000 7:27 PM To: [email protected] Subject: RE: FW: Deutsche Bank ISDA Master Agreements Attached is the final version of the Enron Corp. Guaranty. Unfortunately, the ISDA Schedule is in the hands of Deutsche Bank and I need them to turn around the revised draft. I'm still working on it. (See attached file: Deutsche Bank Enron Guaranty4.doc) Clare.Godson@Alle nOvery.com To: [email protected] cc: [email protected], Denis.O'[email protected] 10/20/2000 12:22 Subject: RE: FW: Deutsche Bank ISDA Master Agreements PM Tana, Thanks for this. It would be very helpful if you could send me the latest versions of the Guarantee and the Schedule just indicating which is the only outstanding point. That way we can prepare the versions for EnronCredit.com Limited based on these and get things as close to finalised pending the outcome of this last credit point that you refer to. My fax number is 0207 330 9999. Many thanks for your assistance. Kind regards, Clare -----Original Message----- From: [email protected] [mailto:[email protected]] Sent: Thursday, October 19, 2000 7:40 PM To: [email protected] Subject: RE: FW: Deutsche Bank ISDA Master Agreements I bet you thought I forgot about you?! The lawyer up at Enron Corp. just called me and said he just got off the phone w/Deutsche Bank and they have come to an agreement as to the proposed form of guaranty. We have one more credit issue on the master that Credit would like us to revisit with them, but otherwise I think we close to being done. If you would like me to coordinate the execution of the guarantees you need with Enron Corp., I will be happy to assist you. Clare.Godson@Alle nOvery.com To: [email protected] cc: 10/10/2000 12:57 Subject: RE: FW: Deutsche Bank ISDA Master Agreements PM Many thanks for the update. -----Original Message----- From: [email protected] [mailto:[email protected]] Sent: Tuesday, October 10, 2000 1:31 PM To: [email protected] Subject: RE: FW: Deutsche Bank ISDA Master Agreements We just received what we hope is a final draft of the documents from Deutsche Bank, and I hope to respond back to you shortly, once I have had a chance to confirm they made all the changes. Clare.Godson@Alle nOvery.com To: [email protected] cc: [email protected] 10/10/2000 04:33 Subject: RE: FW: Deutsche Bank ISDA Master Agreements AM Many thanks. Apologies for the error, I obtained Tana's e-mail details from someone at Enron in London and they had misspelled it for me. Thanks again for all your help. Clare -----Original Message----- From: [email protected] [mailto:[email protected]] Sent: Monday, October 09, 2000 6:00 PM To: [email protected] Cc: [email protected] Subject: Re: FW: Deutsche Bank ISDA Master Agreements Clare, Thanks for your note. Your e-mail trouble is due to the spelling of Tana's name - should only be one 'n.' I am forwarding this on to her. Mark Clare.Godson@Alle nOvery.com To: [email protected] cc: 10/09/2000 04:50 Subject: FW: Deutsche Bank ISDA Master Agreements AM Mark, I am having some problems sending this to Tanna. her e-mail keeps bouncing back. Could you pass this on to her please? many thanks for your help. 
Kind regards, Clare > -----Original Message----- > From: Godson, Clare:ICM (LN) > Sent: Friday, October 06, 2000 3:16 PM > To: '[email protected]' > Cc: [email protected]; [email protected]; > [email protected]; '[email protected]'; > [email protected]; denis.o'[email protected]; [email protected] > Subject: RE: Deutsche Bank ISDA Master Agreements > > > Tanna, > > Could you please give me an update on where things stand on the > finalisation of the Guarantees and Schedule with Deutsche. I understood > that the guarantees were with Enron Corp. for final approval and > signature. Is the documentation now agreed? As you are aware the > negotiation of the documentation with EnronCredit.com is dependent upon > the finalisation of the documentation outstanding with Houston. > > Many thanks for your help. > > Kind regards, > > Clare > -----Original Message----- > From: Godson, Clare:ICM (LN) > Sent: Friday, September 22, 2000 3:06 PM > To: '[email protected]' > Subject: First Union National Bank (FUNB) ISDA Master Agreements > > Tanna, > > Thanks for your help on the Goldman Sachs and Deutsche negotiations. If > you could keep me posted on the timing of the Deutsche agreements being > put in place that would be really helpful. I also have another quick > request and that relates to the ISDA Master Agreement with FUNB. I have > spoken to Delene Travella at FUNB. They are negotiating an ECTRIC > Schedule with you in Houston which they want to finalise before commencing > EnronCredit.com's negotiation. Can you give me an update on where this > negotiation stands and how long before it is likely to be finalised? > > Many thanks, > > Clare Godson ====================================================================== This email is confidential and may also be privileged. If you are not the intended recipient please notify us immediately by telephoning +44 (20) 7330 3000 and requesting the Technology Services Helpdesk. You should not copy it or use it for any purpose nor disclose its contents to any other person. Allen & Overy One New Change London EC4M 9QQ Tel:+44 (20) 7330 3000 Fax: +44 (20) 7330 9999 General Email: [email protected] www: http://www.allenovery.com Allen & Overy is a solicitors' partnership. A list of the names of partners and their professional qualifications is open to inspection at the above office. The partners are either solicitors or registered foreign lawyers. ====================================================================== | Low | [
0.49627791563275403, 25, 25.375 ] |
It has been quite some start to the season for Lucas Torreira, and now he's been named as our November Player of the Month! The Uruguayan midfielder has been in dazzling form ever since joining the club, adding energy and bite to the base of our midfield. He started November in fine style, impressing with a man-of-the-match performance as we drew 1-1 with Liverpool before coming close to scoring in our draw with Wolves. Play video Watch Arsenal video online 01:09 What Torreira has added - on and off the field The 22-year-old then struck the post and claimed another man-of-the-match award as we won 2-1 away at Bournemouth. Lucas received 70 per cent of the votes cast, with Bernd Leno and Rob Holding finishing in second and third place respectively. | Mid | [
0.63563829787234, 29.875, 17.125 ] |
The invention relates to an apparatus and method of trading electric energy between utility companies and others. Specifically, the invention is a software product that, when coupled to a communications network, creates a trading environment for the electronic purchase and sale of electric energy, the scheduling and usage of the transmission system, and the automated invoicing and electronic funds transfer for the settlement of transactions. Electric energy is generated for public consumption by utility companies. Each utility company has a service area in which it enjoys near-monopoly status. The utility company is obligated to supply the electric energy needs of individual customers within the service area. Of course, the demand for electricity can vary according to a number of factors. In the long run, the demand for electricity is a function of the population and industries within the service area. In the short run, electrical demand varies according to many factors. Extreme weather, in particular, can significantly strain the generation capacity of the utility company. An electric energy grid exists which connects each utility""s generating facilities to those of adjacent utilities. Each circle represents an individual utility company. Each line represents high-voltage lines which form the grid between the various utilities. Electric energy is traded between utility companies and other market participants to meet shortfalls in capacity during unit outages, to achieve cost savings, or to increase revenues. xe2x80x9cBulk transactionsxe2x80x9d refers to the wholesale buying and selling of electrical energy. Typically, the parties involved in these trades are traditional electric utility companies. These companies wish to meet their obligations to provide reliable service to their customers in the most economically feasible manner. Often it is possible for a utility to purchase electricity from a neighboring utility more economically than it could produce it for itself. At other times, the power generator can sell excess generation at a price higher than its cost of generation. To determine which trades are economic, utilities produce sophisticated forecasts of load (required generation) so that they can schedule their generators to run efficiently. The system dispatcher then determines if demand is likely to be over or under projections during various times of the day. The dispatcher is also interested in the associated cost with each level of generation. Even though the load forecasts are sophisticated, actual conditions usually deviate from them. This may be due to a number of circumstances, such as having generating units go off-line unexpectedly, differences between forecast and actual weather conditions, or changes in the price of available fuel to run the generators. All of these events affect the costs to produce electricity. Because of changes in these forecasts, the dispatcher telephones neighboring utility companies to determine prices and quantities of energy available for upcoming hours. These calls occur many times a day, sometimes hourly. At the same time, dispatchers for other utilities are also making phone calls. If the dispatcher finds what he considers to be a good deal, a trade is consummated. The result is that deals are often struck before the phone surveys are complete. It is rare for a dispatcher to call beyond his direct neighbors, and almost never farther out than two companies. 
This means that the opportunity for more economic transactions may have been overlooked simply because the dispatcher did not know about them. A need exists for a system which creates substantial efficiency gains by automating this trading process over the current method of using the phone. This method of trading energy should allow utilities to simultaneously view real-time market prices and energy availabilities and to quickly consummate the best opportunities. The system should consider available transmission capacity, and calculate and schedule the least cost path for the energy. It should also report the transactions, invoice the participating parties, and facilitate rapid collection and disbursement of funds. Lastly, the system should allow for anonymous trading required of a true market. The present method, also known as CPEX, establishes a nationwide electronic information system that assists buyers and sellers of electricity to conduct business by providing a common marketplace. CPEX is an easy to use windows-based software and hardware system that enables Participants to gather market information and make energy transactions decisions based on the best available opportunities. CPEX involves a software application, a computer and communications network, and a central server. CPEX allows users to enter quantity and price information on energy that they have available to sell, wish to buy, or both. These offers are then sorted and presented to other CPEX Participants. These offers are sorted by lowest price to highest for purchase opportunities and sorted highest price to lowest for sale opportunities. Each Participant sees delivered price for purchases and total revenue for sales from its unique location in the electric grid. The purchase price of the energy is shown inclusive of any transmission charges, known as xe2x80x9cwheelingxe2x80x9d. Wheeling is a term used to refer to the transfer of electricity across a Participant""s transmission system. A Participant who provides wheeling services is referred to as a wheeler, and receives monetary compensation for providing this service. CPEX also allows the buyers and sellers of electrical energy to offer different degrees of firmness for their energy. Interruptible energy may be curtailed, or cut off, for any reason. Non-interruptible energy may only be curtailed to avoid or remedy an unreliable condition. Both types of energy have a distinct market. CPEX assists in maintaining the reliability of the electric grid by using a conservative method to schedule available transmission capacity. Each Participant maintains the amount of transmission capacity made available for CPEX transactions each hour. As transactions are consummated, this capacity is consumed and is no longer available for use by others. This helps assure that the transmission systems do not become unintentionally overloaded. Reliability is augmented by allowing simultaneous, electronic notification of all parties to a transaction upon a transaction""s curtailment. The current method of phone notification is inadequate when multiple parties are involved, as is common in buy/resell types of transactions. CPEX provides monthly billing and Electronic Funds Transfer (EFT) services for payments and disbursements to all Participants as part of the basic CPEX package. This feature allows Participants to trade with more companies than they would otherwise and to manage their invoicing and collections with their current levels of staffing. | Mid | [
0.5563063063063061, 30.875, 24.625 ] |
[DO NOT PUBLISH] IN THE UNITED STATES COURT OF APPEALS FOR THE ELEVENTH CIRCUIT FILED ___________________________ U.S. COURT OF APPEALS ELEVENTH CIRCUIT October 24, 2006 No. 06-10833 THOMAS K. KAHN Non-Argument Calendar CLERK ___________________________ D.C. Docket No. 05-00074-CR-1-1 UNITED STATES OF AMERICA, Plaintiff-Appellee, versus JACK VUE, a.k.a. Chai Vue OR VUE, a.k.a. Or Send Chandara Lor Defendants-Appellants. ____________________________ Appeals from the United States District Court for the Northern District of Georgia ____________________________ (October 24, 2006) Before TJOFLAT, BLACK and BARKETT, Circuit Judges. PER CURIAM: On February 5, 2005, a Northern District of Georgia grand jury returned a four-count indictment against appellants Jack Vue and Or Vue,1 Khamphanh Sisakda, Susan Sisakda,2 and Vongmath Thongliane charging them as follows: Count One, conspiracy to import opium; Count Two, importation of opium; Count Three, conspiracy to possess with intent to distribute opium; and Count Four, possession with intent to distribute opium. On November 14, 2005, the Vues and Khamphanh Sisakda stood trial.3 The jury convicted the Vues on Counts One and Three, and acquitted Khamphanh Sisakda on all counts.4 On January 20, 2006, the district court sentenced Jack Vue and Or Vue on Counts One and Three at the low end of the Guidelines sentence range — to concurrent prison terms of 97 and 87 months, respectively. They now appeal. Both challenge their convictions, contending that the district court erred (1) in denying their motions to suppress evidence — principally the opium described in the indictment — seized at their Minneapolis, Minnesota residence pursuant to a search warrant, and (2) in denying their motions for judgment of acquittal. Or Vue also challenges her sentences. 1 The Vues are husband and wife. 2 Like the Vues, the Sisakdas are husband and wife. 3 Susan Sisakda and Vongmath Thongliane pled guilty and testified for the Government against the remaining co-defendants. 4 The jury acquitted Khamphanh Sisakda. 2 The district court referred appellant*s motions to suppress to a magistrate judge, who, in his Report and Recommendation (“R & R”), recommended that the motions be denied. After considering appellants’ objections to the R & R, the district court adopted the R & R as the opinion of the court and denied appellants’ motions. Appellants contend that the court erred in doing so because the affidavit that provided the foundation for the search warrant contained information that had been falsified intentionally. The district court assumed that the challenged statements were inaccurate – and had been made so intentionally, as appellants had alleged. The court therefore redacted those statements from the affidavit. With these statements stricken, the court concluded that the statements remaining in the affidavit were sufficient to show probable cause, i.e., that Susan Sisakda and Vongmath Thongliane were carrying opium to appellants’ Minneapolis residence. We agree, and do so for the reasons stated in the R & R. It follows, then, that the court committed no error in denying appellants’ motions to suppress. Appellants claim that the evidence was insufficient to convict them on Counts One and Three because the testimony of co-defendants Susan Sisakda and Vongmath Thongliane was incredible. We think it was well within the jury’s province to believe what these two witnesses had to say. 
After all, they brought the opium from Thailand to Atlanta because Jack Vue said he would pay each of them 3 $10,000 to do so. When the Customs officers at the Atlanta airport detected the opium (it was in their luggage, contained in fabric that had been soaked in opium), they agreed to participate in a controlled delivery to the Vues’ residence in Minnesota, which was accomplished. In sum, the evidence was more sufficient to support the credibility of these two witnesses and, coupled with the other evidence presented to the jury, to convict appellants on Counts One and Three. Or Vue contends that the court clearly erred in determining her base offense level – based on 7.88 kilograms of opium – because it failed to separate out the weight of the acetaminophen that was mixed with the opium when calculating the opium’s weight. The Guidelines state that “the weight of a controlled substance. . . refers to the entire weight of any mixture or substance containing a detectable amount of the controlled substance.” U.S.S.G. § 2D1.1(c)n.(A)(2005). However, a mixture or substance does not include materials that must be separated from the controlled substance before it can be used. Id. An example would be the fiberglass in a cocaine/fiberglass bonded suitcase. The court did not clearly err in finding Or Vue was responsible for 7.88 kilograms of opium because a DEA chemist testified that acetaminophen would not impair the consumption of opium. And Vue points to nothing in the record that might contradict such testimony. The challenge to her sentences therefore fails. AFFIRMED. 4 | Low | [
0.510843373493975, 26.5, 25.375 ] |
Ask people to describe themselves politically, and many will choose the label “moderate” when the alternative is the stigmatized word “extreme.” But ask moderates if they are angry that billionaires have had tax cuts while millions of people don’t have health insurance, and you will find many are—often extremely so.1 Too much polling is designed to produce an answer that suits an argument. Questions are loaded. Samples are small. That’s why the analysis of actual electoral experience is so important.2 With that in mind, those grappling with the argument that Bernie Sanders is too left-wing to beat Donald Trump might like to cast their eyes across the Atlantic and consider Labour’s unexpected success in denying the Tories a majority in the United Kingdom’s 2017 general election.3 When Jeremy Corbyn became Labour leader in 2015, his opponents argued relentlessly that he would be an electoral disaster. That claim was the main pretext for Labour MPs to force a second leadership election a year later. Corbyn won again comfortably, but the attempt to oust him did the party no favors: Labour’s position in the polls slumped to about 25 percent.4 The Tories saw their chance, as insiders have now revealed, to finish off the Labour Party for the next 20 years. Riding high in the polls, Prime Minister Theresa May called a snap election in April 2017. Most pundits thought she would win by a landslide.5 As an adviser to the Labour leader, I was a member of the party’s strategy group, which included long-standing officials opposed to Corbyn. They told us that “campaigns don’t move opinion more than 2 or 3 percent,” that “young people don’t vote,” and that his policies wouldn’t have an impact because “no one reads manifestos.” Labour’s pollsters predicted the party would lose nearly half the 232 constituencies we held at the time.6 In eight weeks, the Corbyn campaign proved them totally wrong. 
The Labour vote increased from 9.3 million (30.4 percent) in 2015 to 12.9 million (40 percent)—our best vote since 1997 and the biggest increase in our vote share from one election to another since 1945.7 Current Issue View our current issue In making a net gain of 30 seats, Labour denied the Tories a parliamentary majority—and forced May to abandon plans to escalate austerity. In producing a hung Parliament, the election transformed the voting arithmetic around Brexit, which May finally realized in opening talks with Corbyn a few weeks ago.8 Corbyn’s success shocked his critics, but the outcome was no surprise to his team. A revolt against the political establishment—which had been locked in a neoliberal consensus since the Thatcher era—had been brewing well before voters rejected the advice of all the major parties in the 2016 EU referendum. The 2015 general election already saw disillusionment with the three main Westminster parties reach the point that they could muster the votes of only less than half the electorate between them. The other 23 million or so registered voters either abstained or supported smaller parties.9 In 2017, Corbyn succeeded in winning back many voters who had migrated to other parties and in attracting new ones; the campaign saw a 51 percent increase (compared with 2015) in 18-to-24-year-olds registering to vote.10 Labour’s manifesto, which was in some respects more radical than Sanders’s platform, was frequently described as the star of the show. Far from not being read, the hard copy became a collector’s item, and Labour’s website had 5.3 million new-user sessions in seven weeks.11 A British parliamentary election and a US presidential one are, of course, very different events. But there are parallels, too, not least in the emergence of aggressively “populist” variants of neoliberalism in both countries.12 Related Article Bernie Sanders Is Hitting Donald Trump Where It Hurts John Nichols If this politics is to be defeated, it will not be through a revamping of the Clinton-Blair formula of Chicago School economics, neocon foreign policy, and social liberalism. That brand has a diminishing following even in affluent cosmopolitan centers—never mind the electorally crucial rust belts of the two countries.13 Trump’s administration has actually been drawn from political insiders and the corporate elite. But he will try to deflect from the squalid reality of his presidency by using special counsel Robert Mueller’s investigation to play victim. He will portray any moves against him in Congress—however justified they may be—as a vendetta by the political establishment.14 This “Washington outsider” positioning will be much easier for him to sustain if his opponent is seen to be an insider—particularly one who campaigns on a similar narrative to Hillary Clinton, as Joe Biden appears to be doing.15 The Democrats will not win by piling up votes in California and New York. We need a candidate with the politics and authenticity to enthuse alienated voters in the states that will decide the outcome. Sanders did that in 2016, and the midterms demonstrated we can win on a bold progressive agenda. There is no reason at all to believe that strategy has yet reached its full potential. | Mid | [
0.6000000000000001, 33.75, 22.5 ] |
West Footscray, 508 Barkly Street Warm & Homely on Barkly Street - Prime Location! This recently renovated house is located in Footscray's finest street! - It comprises of two spacious bedrooms, both with built in robes. - Boasting a near new kitchen, bathroom and laundry facilities. - Timber venetians blinds throughout, roller shutters, lounge with air conditioning. - Kitchen incorporates near new gas appliances, range hood, dishwasher and cabinets - Fully tiled bathroom, toilet and laundry. - It has timber floor throughout - A car space - A good sized private backyard that is perfect for weekend entertaining. This home is close to schools, public transport and only a two minute drive to central Footscray. Be quick on this one as it won't last long. ** OPEN TIMES ARE FIXED WITHOUT THE ABILITY EXTEND AS SUCH, IF YOU ARE LATE OR MISS AN INSPECTION YOU WILL NEED TO ATTEND THE FOLLOWING OPEN ONCE BOOKED ** | High | [
0.675287356321839, 29.375, 14.125 ] |
[[cha:ngcgui]]
= NGCGUI

.NGCGUI embedded into Axis
image::images/ngcgui.png[align="center"]

== Overview

* 'NGCGUI' is a Tcl application to work with subroutines. It allows you to have a conversational interface with LinuxCNC. You can organize the subroutines in the order you need them to run and concatenate the subroutines into one file for a complete part program.
* 'NGCGUI' can run as a standalone application or can be embedded in multiple tab pages in the axis GUI.
* 'PYNGCGUI' is an alternate, Python implementation of ngcgui.
* 'PYNGCGUI' can run as a standalone application or can be embedded as a tab page (with its own set of multiple subroutine tabs) in any GUI that supports embedding of gladevcp applications: axis, touchy, gscreen and gmoccapy.

Using NGCGUI or PYNGCGUI:

* Tab pages are provided for each subroutine specified in the INI file
* New subroutine tab pages can be added on the fly using the <<ngcgui-ini,custom tab>>
* Each subroutine tab page provides entry boxes for all subroutine parameters
* The entry boxes can have a default value and a label that are identified by special comments in the subroutine file
* Subroutine invocations can be concatenated together to form a multiple step program
* Any single-file G code subroutine that conforms to ngcgui conventions can be used
* Any gcmc (G code-meta-compiler) program that conforms to ngcgui conventions for tagging variables can be used. (The gcmc executable must be installed separately, see: http://www.vagrearg.org/content/gcmc)

[NOTE]
NGCGUI and PYNGCGUI implement the same functions and both process .ngc and .gcmc files that conform to a few ngcgui-specific conventions. In this document, the term 'NGCGUI' generally refers to either application.

== Demonstration Configurations

A number of demonstration configurations are located in the sim directory of the Sample Configurations offered by the LinuxCNC configuration picker. The configuration picker is on the system's main menu: CNC > LinuxCNC.

Examples are included for the axis, touchy, gscreen, and gmoccapy GUIs. These examples demonstrate both 3-axis (XYZ) cartesian configurations (like mills) and lathe (XZ) setups. Some examples show the use of a pop-up keyboard for touch screen systems and other examples demonstrate the use of files created for the gcmc (G code Meta Compiler) application. The touchy examples also demonstrate incorporation of a gladevcp back plot viewer (gremlin_view). 
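Before working through the demonstrations, it may help to see the kind of file NGCGUI operates on. The sketch below is a minimal, hypothetical ngcgui-compatible subfile; the file name circle.ngc, the parameter names, and the default values are made up for illustration. The special comments after each numbered parameter are what produce labeled entry boxes with default values on a tab page; the conventions are described fully in the File Requirements section later in this chapter.

.circle.ngc (illustrative sketch only)
----
(info: circle example subfile)
o<circle> sub
  #<xcenter> = #1 (=0.0 X center)
  #<ycenter> = #2 (=0.0 Y center)
  #<radius>  = #3 (=0.5 Radius)
  ; body of the subroutine goes here, using only local named parameters
o<circle> endsub
----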
The simplest application is found as: Sample Configurations/sim/axis/ngcgui /ngcgui_simple A comprehensive example showing gcmc compatibility is at: Sample Configurations/sim/axis/ngcgui/ngcgui_gcmc A comprehensive example embedded as a gladevcp app and using gcmc is at: Sample Configurations/sim/gscreen/ngcgui/pyngcgui_gcmc The example sim configurations make use of library files that provide example G code subroutine (.ngc) files and G code-meta-compiler (.gcmc) files: * 'nc_files/ngcgui_lib' ** 'arc1.ngc' - basic arc using cutter radius compensation ** 'arc2.ngc' - arc speced by center, offset, width, angle (calls arc1) ** 'backlash.ngc' - routine to measure an axis backlash with dial indicator ** 'db25.ngc' - creates a DB25 plug cutout ** 'gosper.ngc' - a recursion demo (flowsnake) ** 'helix.ngc' - helix or D-hole cutting ** 'helix_rtheta.ngc' - helix or D-hole positioned by radius and angle ** 'hole_circle.ngc' - equally spaced holes on a circle ** 'ihex.ngc' - internal hexagon ** 'iquad.ngc' - internal quadrilateral ** 'ohex.ngc' - outside hexagon ** 'oquad.ngc' - outside quadrilateral ** 'qpex_mm.ngc' - demo of qpockets (mm based) ** 'qpex.ngc' - demo of qpockets (inch based) ** 'qpocket.ngc' - quadrilateral pocket ** 'rectangle_probe.ngc' - probe a rectangular area ** 'simp.ngc' - a simple subroutine example that creates two circles ** 'slot.ngc' - slot from connecting two endpoints ** 'xyz.ngc' - machine exerciser constrained to a box shape * 'nc_files/ngcgui_lib/lathe' ** 'g76base.ngc' - gui for g76 threading ** 'g76diam.ngc' - threading speced by major, minor diameters ** 'id.ngc' - bores the inside diameter ** 'od.ngc' - turns the outside diameter ** 'taper-od.ngc' - turns a taper on the outside diameter * 'nc_files/gcmc_lib' ** 'drill.gcmc' - drill holes in rectangle pattern ** 'square.gcmc' - simple demo of variable tags for gcmc files ** 'star.gcmc' - gcmc demo illustrating functions and arrays ** 'wheels.gcmc' - gcmc demo of complex patterns To try a demonstration, select a sim configuration and start the linuxCNC program. If using the axis gui, press the 'E-Stop' image:images/tool_estop.png[] then 'Machine Power' image:images/tool_power.png[] then 'Home All'. Pick a ngcgui tab, fill in any empty blanks with sensible values and press 'Create Feature' then 'Finalize'. Finally press the 'Run' image:images/tool_run.png[] button to watch it run. Experiment by creating multiple features and features from different tab pages. To create several subroutines concatinated into a single file, go to each tab fill in the blanks, press 'Create Feature' then using the arrow keys move any tabs needed to put them in order. Now press 'Finalize' and answer the prompt to create Other guis will have similar functionality but the buttons and names may be different. .Notes [NOTE] =============================== The demonstration configs create tab pages for just a few of the provided examples. Any gui with a <<ngcgui-ini,custom tab>> can open any of the library example subroutines or any user file if it is in the linuxCNC subroutine path. To see special key bindings, click inside an ngcgui tab page to get focus and then presss Control-k. The demonstration subroutines should run on the simulated machine configurations included in the distribution. A user should always understand the behavior and purpose of a program before running on a real machine. 
=============================== == Library Locations In linuxCNC installations installed from deb packages, the simulation configs for ngcgui use symbolic links to non-user-writable LinuxCNC libraries for: * 'nc_files/ngcgui_lib' ngcgui-compatible subfiles * 'nc_files/ngcgui_lib/lathe' ngcgui-compatible lathe subfiles * 'nc_files/gcmc_lib' ngcgui-gcmc-compatible programs * 'nc_files/ngcgui_lib/utilitysubs' Helper subroutines * 'nc_files/ngcgui_lib/mfiles' User M files These libraries are located by ini file items that specify the search paths used by linuxCNC (and ngcgui): ---- [RS274NGC] SUBROUTINE_PATH = ../../nc_files/ngcgui_lib:../../nc_files/gcmc_lib:../../nc_files/ngcgui_lib/utilitysubs USER_M_PATH = ../../nc_files/ngcgui_lib/mfiles ---- [NOTE] These are long lines (not continued on multiple lines) that specify the directories used in a search patch. The directory names are separated by colons (:). No spaces should occur between directory names. A user can create new directories for their own subroutines and M-files and add them to the search path(s). For example, a user could create directories from the terminal with the commands: ---- mkdir /home/myusername/mysubs mkdir /home/myusername/mymfiles ---- And then create or copy system-provided files to these user-writable directories. For instance, a user might create a ngcgui-compatible subfile named: ---- /home/myusername/mysubs/example.ngc ---- To use files in new directories, the ini file must be edited to include the new subfiles and to augment the search path(s). For this example: ---- [RS274NGC] ... SUBROUTINE_PATH = /home/myusername/mysubs:../../nc_files/ngcgui_lib:../../nc_files/gcmc_lib:../../nc_files/ngcgui_lib/utilitysubs USER_M_PATH = /home/myusername/mymfiles:../../nc_files/ngcgui_lib/mfiles [DISPLAY] ... NGCGUI_SUBFILE = example.ngc ... ---- LinuxCNC (and ngcgui) use the first file found when searching directories in the search path. With this behavior, you can supersede an ngcgui_lib subfile by placing a subfile with an identical name in a directory that is found earlier in the path search. More information can be found in the INI chapter of the Integrators Manual. == Standalone Usage === Standalone NGCGUI For usage, type in a terminal: ---- ngcgui --help Usage: ngcgui --help | -? ngcgui [Options] -D nc_files_directory_name ngcgui [Options] -i LinuxCNC_inifile_name ngcgui [Options] Options: [-S subroutine_file] [-p preamble_file] [-P postamble_file] [-o output_file] [-a autosend_file] (autosend to axis default:auto.ngc) [--noauto] (no autosend to axis) [-N | --nom2] (no m2 terminator (use %)) [--font [big|small|fontspec]] (default: "Helvetica -10 normal") [--horiz|--vert] (default: --horiz) [--cwidth comment_width] (width of comment field) [--vwidth varname_width] (width of varname field) [--quiet] (fewer comments in outfile) [--noiframe] (default: frame displays image) ---- [NOTE] As a standalone application, ngcgui handles a single subroutine file which can be invoked multiple times. Multiple standalone ngcgui applications can be started independently. 
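As a concrete illustration of the options listed in the usage text above, a standalone session might be started from a terminal like this (the file and path names are examples only; subfile and preamble names must be on the LinuxCNC search path or given as full paths):

----
ngcgui -i /path/to/machine.ini -S simp.ngc -p in_std.ngc
----

With the default autosend behavior described in the usage text, the finalized G code is written to auto.ngc and sent to a running axis session; pass --noauto to only save the file.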
=== Standalone PYNGCGUI For usage, type in a terminal: ---- pyngcgui --help Usage: pyngcgui [Options] [sub_filename] Options requiring values: [-d | --demo] [0|1|2] (0: DEMO standalone toplevel) (1: DEMO embed new notebook) (2: DEMO embed within existing notebook) [-S | --subfile sub_filename] [-p | --preamble preamble_filename] [-P | --postamble postamble_filename] [-i | --ini inifile_name] [-a | --autofile auto_filename] [-t | --test testno] [-K | --keyboardfile glade_file] (use custom popupkeyboard glade file) Solo Options: [-v | --verbose] [-D | --debug] [-N | --nom2] (no m2 terminator (use %)) [-n | --noauto] (save but do not automatically send result) [-k | --keyboard] (use default popupkeybaord) [-s | --sendtoaxis] (send generated ngc file to axis gui) Notes: A set of files is comprised of a preamble, subfile, postamble. The preamble and postamble are optional. One set of files can be specified from cmdline. Multiple sets of files can be specified from an inifile. If --ini is NOT specified: search for a running linuxCNC and use its inifile ---- [NOTE] As a standalone application, pyngcgui can read an ini file (or a running linuxCNC application) to create tab pages for multiple subfiles. == Embedding NGCGUI === Embedding NGCGUI in Axis The following INI file items go in the [DISPLAY] section. (See additional sections below for additional items needed) * 'TKPKG = Ngcgui 1.0' - the NGCGUI package * 'TKPKG = Ngcguittt 1.0' - the True Type Tracer package for generating text for engraving (optional, must follow TKPKG = Ngcgui). * 'TTT = truetype-tracer' - name of the truetype tracer program (it must be in user PATH) * 'TTT_PREAMBLE = in_std.ngc' - Optional, specifies filename for preamble used for ttt created subfiles. (alternate: mm_std.ngc) [NOTE] The optional truetype tracer items are used to specify an ngcgui-compatible tab page that uses the application truetype-tracer. The truetype-tracer application must be installed independently and located in the user PATH. === Embedding PYNGCGUI as a gladevcp tab page in a gui The following INI file items go in the [DISPLAY] section for use with the axis, gscreen, or touchy guis. (See additional sections below for additional items needed) .EMBED_ Items .... EMBED_TAB_NAME = Pyngcgui - name to appear on embedded tab EMBED_TAB_COMMAND = gladevcp -x {XID} pyngcgui_axis.ui - invokes gladevcp EMBED_TAB_LOCATION = name_of_location - where the embeded page is located .... [NOTE] The EMBED_TAB_LOCATION specifier is not used for the axis gui. While pyngcgui can be embedded in axis, integration is more complete when using ngcgui (using TKPKG = Ngcgui 1.0). To specify the EMBED_TAB_LOCATION for other guis, see the <<sec:display-section,DISPLAY Section>> of the INI Configuration Chapter. [NOTE] The truetype tracer gui front-end is not currently available for gladevcp applications. [[ngcgui-ini]] === Additional INI File items required for ngcgui or pyngcgui The following INI file items go in the [DISPLAY] section for any gui that embeds either ngcgui or pyngcgui. * 'NGCGUI_FONT = Helvetica -12 normal' - specifices the font name,size, normal|bold * 'NGCGUI_PREAMBLE = in_std.ngc' - the preamble file to be added in front of the subroutines. When concatenating several common subroutine invocations, this preamble is only added once. For mm-based machines, use mm_std.ngc * 'NGCGUI_SUBFILE = filename1.ngc' - creates a tab from the filename1 subroutine * 'NGCGUI_SUBFILE = filename2.ngc' - creates a tab from the filename2 subroutine * '... 
etc' * 'NGCGUI_SUBFILE = gcmcname1.gcmc' - creates a tab from the gcmcname1 file * 'NGCGUI_SUBFILE = gcmcname2.gcmc' - creates a tab from the gcmcname2 file * '... etc' * 'NGCGUI_SUBFILE = ""' - creates a custom tab that can open any subroutine in the search path * 'NGCGUI_OPTIONS = opt1 opt2 ...' - NGCGUI options ** 'nonew' - disallow making a new custom tab ** 'noremove' - disallow removing any tab page ** 'noauto' - no autosend (use makeFile, then save or manually send) ** 'noiframe' - no internal image, display images on separate top level widget ** 'nom2' - do not terminate with m2, use % terminator. This option eliminates all the side effects of m2 termination * 'GCMC_INCLUDE_PATH = dirname1:dirname2' - search directories for gcmc include files This is an example of embedding NGCGUI into Axis. The subroutines need to be in a directory specified by the [RS274NGC]SUBROUTINE_PATH. Some example subroutines use other subroutines so check to be sure you have the dependences, if any, in a SUBROUTINE_PATH directory. Some subroutines may use custom Mfiles which must be in a directory specified by the [RS274NGC]USER_M_PATH. The Gcode-meta-compiler (gcmc) can include statements like: include("filename.inc.gcmc"); By default, gcmc includes the current directory which, for linuxCNC, will be the directory containing the linuxCNC ini file. Additional directories can be prepended to the gcmc search order with the GCMC_INCLUDE_PATH item. .Sample axis-gui-based INI ---- [RS274NGC] ... SUBROUTINE_PATH = ../../nc_files/ngcgui_lib:../../ngcgui_lib/utilitysubs USER_M_PATH = ../../nc_files/ngcgui_lib/mfiles [DISPLAY] TKPKG = Ngcgui 1.0 TKPKG = Ngcguittt 1.0 # Ngcgui must precede Ngcguittt NGCGUI_FONT = Helvetica -12 normal # specify filenames only, files must be in [RS274NGC]SUBROUTINE_PATH NGCGUI_PREAMBLE = in_std.ngc NGCGUI_SUBFILE = simp.ngc NGCGUI_SUBFILE = xyz.ngc NGCGUI_SUBFILE = iquad.ngc NGCGUI_SUBFILE = db25.ngc NGCGUI_SUBFILE = ihex.ngc NGCGUI_SUBFILE = gosper.ngc # specify "" for a custom tab page NGCGUI_SUBFILE = "" #NGCGUI_SUBFILE = "" use when image frame is specified if # opening other files is required # images will be put in a top level window NGCGUI_OPTIONS = #NGCGUI_OPTIONS = opt1 opt2 ... # opt items: # nonew -- disallow making a new custom tab # noremove -- disallow removing any tab page # noauto -- no auto send (makeFile, then manually send) # noiframe -- no internal image, image on separate top level GCMC_INCLUDE_PATH = /home/myname/gcmc_includes TTT = truetype-tracer TTT_PREAMBLE = in_std.ngc PROGRAM_PREFIX = ../../nc_files ---- [NOTE] The above is not a complete axis gui INI -- the items show are those used by ngcgui. Many additional items are required by LinuxCNC to have a complete INI file. === Truetype Tracer Ngcgui_ttt provides support for truetype-tracer (v4). It creates an axis tab page which allows a user to create a new ngcgui tab page after entering text and selecting a font and other parameters. (Truetype-tracer must be installed independently). To embed ngcgui_ttt in axis, specify the following items in addition to ngcgui items: .... Item: [DISPLAY]TKPKG = Ngcgui_ttt version_number Example: [DISPLAY]TKPKG = Ngcgui_ttt 1.0 Note: Mandatory, specifies loading of ngcgui_ttt in an axis tab page named ttt. Must follow the TKPKG = Ngcgui item. Item: [DISPLAY]TTT = path_to_truetype-tracer Example: [DISPLAY]TTT = truetype-tracer Note: Optional, if not specified, attempt to use /usr/local/bin/truetype-tracer. 
Specify with absolute pathname or as a simple executable name in which case the user PATH environment will used to find the program. Item: [DISPLAY]TTT_PREAMBLE = preamble_filename Example: [DISPLAY]TTT_PREAMBLE = in_std.ngc Note: Optional, specifies filename for preamble used for ttt created subfiles. .... === INI File Path Specifications Ngcgui uses the linuxCNC search path to find files. The search path begins with the standard directory specified by: [DISPLAY]PROGRAM_PREFIX = directory_name followed by multiple directories specified by: [RS274NGC]SUBROUTINE_PATH = directory1_name:directory1_name:directory3_name ... Directories may be specified as absolute paths or relative paths. .... Example: [DISPLAY]PROGRAM_PREFIX = /home/myname/linuxcnc/nc_files Example: [DISPLAY]PROGRAM_PREFIX = ~/linuxcnc/nc_files Example: [DISPLAY]PROGRAM_PREFIX = ../../nc_files .... An absolute path beginning with a "/" specifies a complete filesystem location. A path beginning with a "\~/" specifies a path starting from the user's home directory. A path beginning with "~username/" specifies a path starting in username's home directory. .Relative Paths Relative paths are based on the startup directory which is the directory containing the INI file. Using relative paths can facilitate relocation of configurations but requires a good understanding of linux path specifiers. .... ./d0 is the same as d0, e.g., a directory named d0 in the startup directory ../d1 refers to a directory d1 in the parent directory ../../d2 refers to a directory d2 in the parent of the parent directory ../../../d3 etc. .... Multiple directories can be specified with [RS274NGC]SUBROUTINE_PATH by separating them with colons. The following example illustrates the format for multiple directories and shows the use of relative and absolute paths. .Multiple Directories Example: ---- [RS274NGC]SUBROUTINE_PATH = ../../nc_files/ngcgui_lib:../../nc_files/ngcgui_lib/utilitysubs:/tmp/tmpngc` ---- This is one long line, do not continue on multiple lines. When linuxCNC and/or ngcgui searches for files, the first file found in the search is used. LinuxCNC (and ngcgui) must be able to find all subroutines including helper routines that are called from within ngcgui subfiles. It is convenient to place utility subs in a separate directory as indicated in the example above. The distribution includes the ngcgui_lib directory and demo files for preambles, subfiles, postambles and helper files. To modify the behavior of the files, you can copy any file and place it in an earlier part of the search path. The first directory searched is [DISPLAY]PROGRAM_PREFIX. You can use this directory but it is better practice to create dedicated directory(ies) and put them at the beginning of the [RS274NGC]SUBROUTINE_PATH. In the following example, files in /home/myname/linuxcnc/mysubs will be found before files in ../../nc_files/ngcgui_lib. .Adding User Directory Example: ---- [RS274NGC]SUBROUTINE_PATH = /home/myname/linuxcnc/mysubs:../../nc_files/ngcgui_lib:../../nc_files/ngcgui_lib/utilitysubs` ---- New users may inadvertently try to use files that are not structured to be compatible with ngcgui requirements. Ngcgui will likely report numerous errors if the files are not coded per its conventions. Good practice suggests that ngcgui-compatible subfiles should be placed in a directory dedicated to that purpose and that preamble, postamble, and helper files should be in separate directory(ies) to discourage attempts to use them as subfiles. 
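As a worked example of the precedence rule described above (all directory names are illustrative), a library subfile can be superseded by copying it into a personal directory that is listed earlier in [RS274NGC]SUBROUTINE_PATH. The copy is then the one found by LinuxCNC and NGCGUI and can be edited freely; <ngcgui_lib_path> below stands for wherever the ngcgui_lib directory is located on your system.

----
# create a personal subfile directory (list it first in SUBROUTINE_PATH)
mkdir -p /home/myname/linuxcnc/mysubs

# copy the library version so the local copy is found first
cp <ngcgui_lib_path>/simp.ngc /home/myname/linuxcnc/mysubs/simp.ngc
----

After restarting LinuxCNC, the simp.ngc tab uses the local copy because /home/myname/linuxcnc/mysubs appears before the library directory in the search path, as in the 'Adding User Directory' example above.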
Files not intended for use as subfiles can include a special comment: "(not_a_subfile)" so that ngcgui will reject them automatically with a relevant message. === Summary of INI File item details for NGCGUI usage .... Item: [RS274NGC]SUBROUTINE_PATH = dirname1:dirname2:dirname3 ... Example: [RS274NGC]SUBROUTINE_PATH = ../../nc_files/ngcgui_lib:../../nc_files/ngcgui_lib/utilitysubs Note: Optional, but very useful to organize subfiles and utility files Item: [RS274NGC]USER_M_PATH = dirname1:dirname2:dirname3 ... Example: [RS274NGC]USER_M_PATH = ../../nc_files/ngcgui_lib/mfiles Note: Optional, needed to locate custom user mfiles Item: [DISPLAY]EMBED_TAB_NAME = name to display on embedded tab page Example: [DISPLAY]EMBED_TAB_NAME = Pyngcgui Note: The entries: EMBED_TAB_NAME,EMBED_TAB_COMMAND,EMBED_TAB_LOCATION define an embedded application for several linuxCNC guis Item: [DISPLAY]EMBED_TAB_COMMAND = programname followed by arguments Example: [DISPLAY]EMBED_TAB_COMMAND = gladevcp -x {XID} pyngcgui_axis.ui Note: For gladevcp applications, see the <<cha:glade-vcp,GladeVCP Chapter>> Item: [DISPLAY]EMBED_TAB_LOCATION = name_of_location Example: [DISPLAY]EMBED_TAB_LOCATION = notebook_main Note: See example INI files for possible locations Not required for the axis gui Item: [DISPLAY]PROGRAM_PREFIX = dirname Example: [DISPLAY]PROGRAM_PREFIX = ../../nc_files Note: Mandatory and needed for numerous linuxCNC functions It is the first directory used in the search for files item: [DISPLAY]TKPKG = Ngcgui version_number Example: [DISPLAY]TKPKG = Ngcgui 1.0 Note: Required only for axis gui embedding, specifies loading of ngcgui axis tab pages Item: [DISPLAY]NGCGUI_FONT = font_descriptor Example: [DISPLAY]NGCGUI_FONT = Helvetica -12 normal Note: Optional, font_descriptor is a tcl-compatible font specifier with items for fonttype -fontsize fontweight Default is: Helvetica -10 normal Smaller font sizes may be useful for small screens Larger font sizes may be helpful for touch screen applications Item: [DISPLAY]NGCGUI_SUBFILE = subfile_filename Example: [DISPLAY]NGCGUI_SUBFILE = simp.ngc Example: [DISPLAY]NGCGUI_SUBFILE = square.gcmc Example: [DISPLAY]NGCGUI_SUBFILE = "" Note: Use one or more items to specify ngcgui-compatible subfiles or gcmc programs that require a tab page on startup. A "Custom" tab will be created when the filename is "". A user can use a "Custom" tab to browse the file system and identify preamble, subfile, and postamble files. Item: [DISPLAY]NGCGUI_PREAMBLE = preamble_filename Example: [DISPLAY]NGCGUI_PREAMBLE = in_std.ngc Note: Optional, when specified, the file is prepended to a subfile. Files created with "Custom" tab pages use the preamble specified with the page. Item: [DISPLAY]NGCGUI_POSTAMBLE = postamble_filename Example: [DISPLAY]NGCGUI_POSTAMBLE = bye.ngc Note: Optional, when specified, the file is appended to a subfiles. Files created with "Custom" tab pages use the postamble specified with the page. Item: [DISPLAY]NGCGUI_OPTIONS = opt1 opt2 ... Example: [DISPLAY]NGCGUI_OPTIONS = nonew noremove Note: Multiple options are separated by blanks. 
By default, ngcgui configures tab pages so that: 1) a user can make new tabs 2) a user can remove tabs (except for the last remaining one) 3) finalized files are automatically sent to linuxCNC 4) an image frame (iframe) is made available to display an image for the subfile (if an image is provided) 5) the ngcgui result file sent to linuxCNC is terminated with an m2 (and incurs m2 side-effects) The options nonew, noremove, noauto, noiframe, nom2 respectively disable these default behaviors. By default, if an image (.png,.gif,jpg,pgm) file is found in the same directory as the subfile, the image is displayed in the iframe. Specifying the noiframe option makes available additional buttons for selecting a preamble, subfile, and postamble and additional checkboxes. Selections of the checkboxes are always available with special keys: Ctrl-R Toggle "Retain values on Subfile read" Ctrl-E Toggle "Expand subroutine" Ctrl-a Toggle "Autosend" (Ctrl-k lists all keys and functions) If noiframe is specified and an image file is found, the image is displayed in a separate window and all functions are available on the tab page. The NGCGUI_OPTIONS apply to all ngcgui tabs except that the nonew, noremove, and noiframe options are not applicable for "Custom" tabs. Do not use "Custom" tabs if you want to limit the user's ability to select subfiles or create additional tab pages. Item: [DISPLAY]GCMC_INCLUDE_PATH = dirname1:dirname2:... Example: [DISPLAY]GCMC_INCLUDE_PATH = /home/myname/gcmc_includes:/home/myname/gcmc_includes2 Note: Optional, each directory will be included when gcmc is invoked using the option: --include dirname .... == File Requirements for NGCGUI Compatibility === Single-File Gcode (.ngc) Subroutine Requirements An NGCGUI-compatible subfile contains a single subroutine definition. The name of the subroutine must be the same as the filename (not including the .ngc suffix). LinuxCNC supports named or numbered subroutines, but only named subroutines are compatible with NGCGUI. For more information see the <<cha:o-codes,O-Codes>> Chapter. The first non-comment line should be a sub statement. The last non-comment line should be a endsub statement. .examp.ngc: ---- (info: info_text_to_appear_at_top_of_tab_page) ; comment line beginning with semicolon ( comment line using parentheses) o<examp> sub BODY_OF_SUBROUTINE o<examp> endsub ; comment line beginning with semicolon ( comment line using parentheses) ---- The body of the subroutine should begin with a set of statements that define local named parameters for each positional parameter expected for the subroutine call. These definitions must be consecutive beginning with #1 and ending with the last used parameter number. Definitions must be provided for each of these parameters (no omissions). .Parameter Numbering ---- #<xparm> = #1 #<yparm> = #2 #<zparm> = #3 ---- LinuxCNC considers all numbered parameters in the range #1 thru #30 to be calling parameters so ngcgui provides entry boxes for any occurence of parameters in this range. It is good practice to avoid use of numbered parameters #1 through #30 anywhere else in the subroutine. Using local, named parameters is recommended for all internal variables. Each defining statement may optionally include a special comment and a default value for the parameter. 
.Statement Prototype ---- #<vname> = #n (=default_value) or #<vname> = #n (comment_text) or #<vname> = #n (=default_value comment_text) ---- .Parameter Examples ---- #<xparm> = #1 (=0.0) #<yparm> = #2 (Ystart) #<zparm> = #3 (=0.0 Z start setting) ---- If a default_value is provided, it will be entered in the entry box for the parameter on startup. If comment_text is included, it will be used to identify the input instead of the parameter name. .Global Named Parameters Notes on global named parameters and ngcgui: (global named parameters have a leading underscore in the name, like #<_someglobalname>) As in many programming languages, use of globals is powerful but can often lead to unexpected consequences. In LinuxCNC, existing global named parameters will be valid at subroutine execution and subroutines can modify or create global named parameters. Passing information to subroutines using global named parameters is discouraged since such usage requires the establishment and maintenance of a well-defined global context that is difficult to maintain. Using numbered parameters #1 thru #30 as subroutine inputs should be sufficient to satisfy a wide range of design requirements. While input global named parameters are discouraged, linuxCNC subroutines must use global named parameters for returning results. Since ngcgui-compatible subfiles are aimed at gui usage, return values are not a common requirement. However, ngcgui is useful as a testing tool for subroutines which do return global named parameters and it is common for ngcgui-compatible subfiles to call utility subroutine files that return results with global named parameters. To support these usages, ngcgui ignores global named parameters that include a colon (:) character in their name. Use of the colon (:) in the name prevents ngcgui from making entryboxes for these parameters. .Global Named Parameters ---- o<examp> sub ... #<_examp:result> = #5410 (return the current tool diameter) ... o<helper> call [#<x1>] [#<x2>] (call a subroutine) #<xresult> = #<_helper:answer> (immediately localize the helper global result) #<_helper:answer> = 0.0 (nullify global named parameter used by subroutine) ... o<examp> endsub ---- In the above example, the utility subroutine will be found in a separate file named helper.ngc. The helper routine returns a result in a global named parameter named #<_helper:answer. For good practice, the calling subfile immediately localizes the result for use elsewhere in the subfile and the global named parameter used for returning the result is nullified in an attempt to mitigate its inadvertent use elsewhere in the global context. (A nullification value of 0.0 may not always be a good choice). Ngcgui supports the creation and concatenation of multiple features for a subfile and for multiple subfiles. It is sometimes useful for subfiles to determine their order at runtime so ngcgui inserts a special global parameter that can be tested within subroutines. The parameter is named #<_feature:>. Its value begins with a value of 0 and is incremented for each added feature. .Additional Features A special 'info' comment can be included anywhere in an ngcgui-compatible subfile. The format is: ---- (info: info_text) ---- The info_text is displayed near the top of the ngcgui tab page in axis. Files not intended for use as subfiles can include a special comment so that ngcgui will reject them automatically with a relevant message. ---- (not_a_subfile) ---- An optional image file (.png,.gif,.jpg,.pgm) can accompany a subfile. 
The image file can help clarify the parameters used by the subfile. The image file should be in the same directory as the subfile and have the same name with an appropriate image suffix, e.g., the subfile examp.ngc could be accompanied by an image file examp.png. Ngcgui attempts to resize large images by subsampling to a size with maximum width of 320 and maximum height of 240 pixels. None of the conventions required for making an ngcgui-compatible subfile preclude its use as a general purpose subroutine file for LinuxCNC. The LinuxCNC distribution includes a library (ngcgui_lib directory) that includes both example ngcgui-compatible subfiles and utility files to illustrate the features of LinuxCNC subroutines and ngcgui usage. Another library (gcmc_lib) provides examples of subroutine files for the Gcode meta compiler (gcmc). Additional user-submitted subroutines can be found on the Forum in the Subroutines Section.

=== Gcode-meta-compiler (.gcmc) file requirements

Files for the Gcode-meta-compiler (gcmc) are read by ngcgui and it creates entry boxes for variables tagged in the file. When a feature for the file is finalized, ngcgui passes the file as input to the gcmc compiler and, if the compile is successful, the resulting gcode file is sent to linuxCNC for execution. The resulting file is formatted as a single-file subroutine; .gcmc files and .ngc files can be intermixed by ngcgui. The variables identified for inclusion in ngcgui are tagged with lines that will appear as comments to the gcmc compiler.

.Example variable tag formats
----
//ngcgui: varname1 =
//ngcgui: varname2 = value2
//ngcgui: varname3 = value3, label3;
----

.Examples:
----
//ngcgui: zsafe =
//ngcgui: feedrate = 10
//ngcgui: xl = 0, x limit
----

For these examples, the entry box for varname1 will have no default, the entry box for varname2 will have a default of value2, and the entry box for varname3 will have a default of value3 and a label label3 (instead of varname3). The default values must be numbers. To make it easier to modify valid lines in a gcmc file, alternate tag line formats are accepted. The alternate formats ignore trailing semicolons (;) and trailing comment markers (//). With this provision, it is often possible to just add the //ngcgui: tag to existing lines in a .gcmc file.

.Alternate variable tag formats
----
//ngcgui: varname2 = value2;
//ngcgui: varname3 = value3; //, label3;
----

.Examples:
----
//ngcgui: feedrate = 10;
//ngcgui: xl = 0; //, x limit
----

An info line that will appear at the top of a tab page may be optionally included with a line tagged as:

.Info tag
----
//ngcgui: info: text_to_appear_at_top_of_tab_page
----

When required, options can be passed to the gcmc compiler with a line tagged:

.Option line tag format
----
//ngcgui: -option_name [ [=] option_value]
----

.Examples:
----
//ngcgui: -I
//ngcgui: --imperial
//ngcgui: --precision 5
//ngcgui: --precision=6
----

Options for gcmc are available with the terminal command:
----
gcmc --help
----

A gcmc program by default uses metric mode. The mode can be set to inches with the option setting:
----
//ngcgui: --imperial
----

A preamble file, if used, can set a mode (g20 or g21) that conflicts with the mode used by a gcmc file.
To ensure that the gcmc program mode is in effect, include the following statement in the .gcmc file: ---- include("ensure_mode.gcmc") ---- and provide a proper path for gcmc include_files in the ini file, for example: ---- [DISPLAY] GCMC_INCLUDE_PATH = ../../nc_files/gcmc_lib ---- == DB25 Example The following shows the DB25 subroutine. In the first photo you see where you fill in the blanks for each variable. image::images/ngcgui-db25-1.png[align="center"] This photo shows the backplot of the DB25 subroutine. image::images/ngcgui-db25-2.png[align="center"] This photo shows the use of the new button and the custom tab to create three DB25 cutouts in one program. image::images/ngcgui-db25-3.png[align="center"] | Mid | [
0.645,
32.25,
17.75
]
|
A Maricopa County judge has dismissed an order keeping the state from demolishing a historic building on the state fairgrounds, according to the City of Phoenix. The move comes after a state board agreed to halt the building’s destruction. The latest Rocky Mountain poll from Phoenix-based Behavior Research Center shows, with less than a month before the election, most people don't know who they want to vote for in the Republican Primary for Governor. A compromise bill designed to improve the Veterans Affairs health care system overwhelmingly passed the U.S. House of Representatives on Wednesday. The final vote was 420-5. It now heads to the Senate for a vote on Thursday. The University of Arizona has denied an appeal by a medical marijuana researcher who wants to keep her faculty position. The professor was set to begin testing the drug on veterans with Post Traumatic Stress Disorder. Slide Rock State Park is suffering a 98 percent decrease in visitors in the wake of a recent wildfire near the popular recreation area. But, park officials hope to attract more people to the park with reduced entrance fees. For questions or comments about this website, please contact the KJZZ webmaster. For general comments or questions see the Contact KJZZ page for a listing of contacts by topic.Please note: Station policy mandates that listeners who win on-air giveaways on this station are not eligible to win again for 30 days. Email regarding NPR's coverage, ethics, and funding can be sent to the NPR Ombudsman, who maintains an informative web page. For comments or concerns regarding NPR programs, listeners with a general inquiry may send an email to [email protected] | Low | [
0.5082508250825081,
38.5,
37.25
]
|
<!DOCTYPE html> <head> <title>No-op requestAnimationFrame loop</title> </head> <style> div { outline: solid thin cornflowerblue; } </style> <p> This page has many compositor layers and a no-op requestAnimationFrame loop. </p> <script> for (var i = 0; i < 10000; ++i) { var div = document.body.appendChild(document.createElement("div")); div.style.transform = "translateZ(0)"; div.style.width = "10px"; div.style.height = "10px"; } function noOp() { requestAnimationFrame(noOp); } noOp(); </script> | Low | [
0.474093264248704,
22.875,
25.375
]
|
Thermostatic mixing valves commonly employ a valve shuttle movable between hot and cold seats to control the relative proportions of hot and cold water supplied to an outlet in accordance with user selection of the outlet water temperature and a thermal control system to adjust the position of the valve shuttle to compensate for changes in the temperature and/or pressure of one or both supplies tending to change the set temperature. The known valve shuttles typically have a very small stroke, for example movement of the valve shuttle from full cold to full hot is generally less than 1 mm and is typically only 0.6 mm. As a result, misalignment of the valve shuttle affects the flows of hot and cold water and this can have a significant effect on the operation of the valve. For example, if the valve shuttle lifts off the hot seat unevenly, more of the hot water flows through one side of the valve and vice versa more of the cold water flows through the opposite side of the valve giving rise to asymmetric streams of hot and cold water producing incomplete mixing of the streams that affects the response of the thermal control system to correct any deviation in the outlet water temperature from the selected temperature. It has been proposed to employ close fit sliding guides to keep the valve shuttle aligned with the seats but the sliding parts add complexity, increase manufacturing costs and are susceptible to corrosion and lime-scale causing friction. Misalignment of the valve shuttle may also result in vibrations of the valve shuttle generating noise, especially under high pressure operating conditions. Thus, the water velocity at the edge of the valve shuttle produces a low pressure region that tends to pull the valve shuttle towards its seat and any misalignment of the valve shuttle causes the pull to be uneven and this can start vibration of the shuttle valve against its seat in what we believe is a nutating motion generating noise. Typically, the valve shuttle is mounted on a thermostat and the thermostat is displaced against the biasing of a return spring. Traditionally, the return spring is a helical coil spring of wire of circular cross-section and this may contribute to misalignment of the valve shuttle. In particular, the final turn of wire at either end of the spring coils around, not as desired in a plane perpendicular to the helical axis, but at an angle to the perpendicular. As a result, the valve shuttle mounted on the thermostat can be forced out of line with the valve seats by the inclination of the final turn of the helical wire at the ends of the spring causing the thermostat, and thus the valve shuttle carried by the shuttle, to be tilted slightly relative to the axial direction. This problem persists even if the best quality helical wire springs are used. Generally, thermostatic mixing valves can correct for inlet water temperature changes much better than inlet pressure changes. If the flow rate is reduced by restricting the valve outlet, then inlet pressure changes become much more severe for the valve to correct. FIG. 6 shows a graph of pressure loss ratio versus temperature of the mixed water at the outlet of a typical thermostatic mixing valve for a set temperature of 40° C. typically chosen for showering. Pressure loss ratio is the ratio of the higher inlet pressure drop to the lower inlet pressure drop across the mixing valve. 
Normally, higher hot water pressure results in increases in the temperature of the mixed water at the outlet and higher cold water pressure results in decreases in the temperature of mixed water at the outlet. The temperature deviations for pressure loss ratios tending to increase the set water temperature are higher than those tending to reduce the set water temperature because the set temperature is usually closer to the hot water inlet temperature than the cold water inlet temperature. As shown the overall spread of temperature variation is about 6° C. This is unacceptable for many applications, for example in healthcare installations, and currently the performance requirements for these applications are met by skewing the response of the valve to reduce the size of hot deviations which may give rise to a risk of scalding with a consequential increase in the size of cold deviations which although noticeable to the user present less risk. The hot and cold water streams are often incompletely mixed as they flow past the thermostat and changes to the waterway geometry can alter the temperature at the thermostat. As a result, skewing the response is usually done on a trial and error basis until a response is achieved that meets the standard. This is inefficient and there is still a possibility that a valve could be used under conditions in which the cold water and hot water pressures are not equal resulting in hotter temperature deviations than intended. Moreover, it may not always be possible to meet the performance requirements by skewing the response of the valve. | Low | [
0.525423728813559,
31,
28
]
|
Healthcare Spending Overshoots a Threat to Sustainability Growth in healthcare spending by provinces and territories has been accelerating over the last four years as governments continue to overshoot their budget targets, says a report from the C.D. Howe Institute. In “Healthcare Spending Overshoots a Threat to Sustainability,” author William B.P. Robson warns that, while governments are typically budgeting growth in healthcare budgets that should be fiscally sustainable, actual growth in healthcare costs is outrunning what Canada’s economy and tax base can support over the long run. Bill Robson took office as President and CEO of the C.D. Howe Institute in July 2006, after serving as the Institute’s Senior Vice President since 2003 and Director of Research since 2000. He has written more than 230 monographs, articles, chapters and books on such subjects as government budgets, pensions, healthcare financing, inflation and currency issues. | Mid | [
0.5913043478260871,
34,
23.5
]
|
---
abstract: 'In this paper, we address the estimation of a time-varying spatial field of received signal strength (RSS) by relying on measurements from randomly placed and not very accurate sensors. We employ a radio propagation model where the path loss exponent and the transmitted power are unknown with Gaussian priors whose hyper-parameters are estimated by applying the empirical Bayes method. We consider the locations of the sensors to be imperfectly known, which entails that they represent another source of error in the model. The propagation model includes shadowing, which is considered to be a zero-mean Gaussian process where the correlation of attenuation between two spatial points is quantified by an exponential function of the distance between the points. The location of the transmitter is also unknown and is estimated from the data. We propose to estimate time-varying RSS fields by a recursive Bayesian method and crowdsourcing. The method is based on Gaussian processes, and it produces the joint distribution of the spatial field. Further, it summarizes all the acquired information by keeping the size of the needed memory bounded. We also present the Cramér-Rao bounds of the estimated parameters. Finally, we illustrate the performance of our method with experimental results on synthetic and real data sets.'
author:
- 'Irene Santos, Juan José Murillo-Fuentes, Petar M. Djurić[^1][^2][^3]'
bibliography:
- 'allBib.bib'
- 'bounds.bib'
- 'dynamics4.bib'
- 'rssMeasurement.bib'
title: '[Recursive Estimation of Dynamic RSS Fields Based on Crowdsourcing and Gaussian Processes]{}'
---

sensor networks, Bayesian estimation, spectrum sensing, RSS, Gaussian processes for regression, time-varying fields, crowdsourcing, Cramér-Rao bound.

Introduction
============

Spectrum sensing has gained significant interest for research due to the rapid growth of wireless communication systems. Its main function is to map the distribution of signals within a specific area, detecting intruders in a particular spectrum band and/or free channels that are not being used by any user [@Li12; @Portelinha13]. For this purpose, spectrum sensing relies on measurements from sensors. Current techniques for spectrum management require these measurements to be quite accurate, i.e., the techniques need an expensive infrastructure where sensors are sparsely and strategically deployed. These sensors provide precise measurements of received power, and their positions are perfectly known. Due to the cost of this setting, approaches based on measurements obtained by expensive equipment do not scale well and cannot be extended to large areas. An alternative and appealing solution is to exploit [*crowdsourcing*]{}, where many low-quality sensors acquire much less accurate measurements [@Pfammatter15; @Molinari15]. For example, one can use not very accurate measurements obtained by a large number of smart phones, and yet can create a more accurate spectrum map than that obtained by a few sparsely located expensive sensors.

Most of the current spectrum monitoring techniques rely on RSS measurements. Given the RSS values at some known locations, the goal of spectrum sensing is to estimate the field of RSS at any location within an area of interest. Spectrum monitoring is not the only application where RSS measurements play a central role. Others include indoor localization [@Pak11; @Chang17], tracking [@Dashti15], distance estimation [@Mahapatra16] and distributed asynchronous regression [@Garrido15].
Methods that use measurements for making inference are based on radio propagation path loss models. These models depend on different parameters including the locations of the sensors, the path loss exponent, the transmitted power, and shadowing. The more these parameters fit the reality, the more accurate the model is. Furthermore, due to the dynamic nature of the signal propagation, the learning of the model parameters that describe the propagation should allow for their adaptation. There are two groups of papers where the propagation loss model is used. In one group, the authors consider the path loss exponent to be known and static [@Zhang17; @Li12; @Wen16; @Vaghefi13b; @Vaghefi13]. In the other, the exponent is unknown, is possibly dynamic, and is estimated [@Mazuelas09; @Li06; @Liang16]. Often, when indoor environments are studied, different values of the path loss exponent are assumed and estimated, e.g., with nonlinear least-squares techniques. The Bayesian methods allow for taking into account the estimation error of the exponent while estimating other more important unknowns or in making decisions. Shadowing is another important notion in these studies. It has been commonly modeled by log-normal distributions. Usually, no spatial correlation due to shadowing fading has been assumed and i.i.d. log-normal distributions with the same variance have been used [@Zhang17; @Mazuelas09; @Li06; @Liang16; @Vaghefi13b]. However, it is well-known that shadowing effects at different locations can change significantly due to varying propagation conditions. For this reason, the authors of [@Wen16] address the correlation between measurements and propose to estimate at specific locations as the value of nearby sensors. In [@Portelinha13; @Li12; @Ferris06; @Vaghefi13] and [@Romero17], a full covariance matrix is introduced to model the spatially correlated shadowing. While in [@Vaghefi13] this matrix is known, in [@Ferris06; @Li12; @Portelinha13] and [@Romero17] a multivariate zero-mean Gaussian is used to approximate it. In [@Ferris06; @Li12] and [@Portelinha13], an exponentially distributed coefficient of correlation is used. Most of the approaches proposed in the literature require the [transmitted power]{} to be available at a central unit where all the measurements are processed [@Li12; @Portelinha13; @Romero17; @Vaghefi13; @Li06; @Liang16]. However, the transmitted power may be unavailable and may change with time [@Vaghefi13b]. In this situation, one can estimate the transmitted power [@Vaghefi11; @Kim07] or eliminate the dependence of the propagation model on it by computing differences between sensors [@Vaghefi11; @Lee09; @Wang09]. We point out that the transmitter and sensor locations also affect the radio propagation model. Approaches in the literature assume that these locations are perfectly known or, if estimated locations are available, they are considered as if they are true. To the best of our knowledge, there are no models that introduce errors in distances between transmitters and receivers. Further, one may have an increasing number of measurements at different locations at each time instant, and this can increase considerably the complexity and size of the system [@Perez13]. In our paper, we address measurements by sensors whose locations are only approximately known, and we consider a scenario with a large number of measurements that is due to crowdsourcing. The [variability]{} in the and/or the parameters that affect these measurements are rarely discussed. 
In [@Romero17], an adaptive kernel-based approach is proposed, but there is no discussion on the variations of RSS in practice. In [@Chapre13], some results on RSS measurements over time are reported for a WLAN, while in [@Chang17] some variations over a few seconds and days are reported to emphasize the need for updating the fingerprint for localization purposes. In both cases, indoor WiFi signals are considered.

In this paper, we propose a GP-based approach, where the uncertainties in the positions of the sensors and the transmitter as well as the correlation due to shadowing effects are included in the model. We also deal with an unknown path loss exponent and transmitted power by modeling them as Gaussian random variables whose hyper-parameters are learned according to the empirical Bayesian approach [@Carlin00]. Preliminary results of this approach for static fields can be found in [@Santos17b]. We cope with the temporal changes of the measurements from some sensors by using a recursive technique where the solution is updated with the arrival of new data. This solution also allows for adaptation to accommodate changes in both propagation and movement of sensors, as shown in [@Santos17c] with a synthetic dataset. As in previous approaches [@Santos17b; @Santos17c], we fix the locations where estimates of RSS are needed. These locations are represented as [*nodes*]{} of a predefined grid. The computational complexity is fixed and determined by the number of nodes. This differs from other approaches, e.g., in [@Perez13], where new available measurements are included sequentially in the system, thereby increasing its size and complexity. Furthermore, in our work the estimates of the field at the nodes serve as priors for the field estimates at the next time instant when new measurements are acquired. In this way, we make the approach scalable. To account for the time-variations of the field, we include a forgetting factor in the method which determines the relevance of the previous and current data. We also find the Cramér-Rao bounds of our estimates, where we rely on some recent results on bounds for GPs [@Waagberg17]. Finally, we demonstrate the performance of the method on synthetic datasets. We compare the performance with previous static approaches based on GPs [@Santos17b] and interpolation techniques [@Molinari15].

The paper is organized as follows. We first introduce the notation and the system model. Next, we explain how to estimate the location of the transmitter and how to find the Gaussian distribution of the path loss exponent and transmitted power. We then develop the GP implementation for RSS estimation of static fields, present the Cramér-Rao bound of the estimates, and describe the method for estimating time-varying fields. Finally, we provide simulation results and our concluding remarks.

System Model
============

In this section we propose a model for the measurements of the sensors. This model will be used to generate synthetic data and to propose an estimation algorithm.

We focus on the estimation of RSS at $M$ fixed grid nodes with perfectly known locations given by $\matr{X}_g=[\vect{x}_{g_1}\cdots \vect{x}_{g_M}]\trs\in \mathbb{R}^{M\times2}$. We denote this estimated field at instant $t$ as ${\bf f}_g^{[t]}\in{\mathbb R}^M$. We will use the superscript $^{[t]}$ to indicate that the variable is a function of time. At time $t$, we consider that $N$ random low-cost sensors[^4] appear within the area at locations $\matr{X}^{[t]}=[\vect{x}_1^{[t]} \cdots \vect{x}_N^{[t]}]\trs\in \mathbb{R}^{N\times2}$.
We denote the distance between sensors $i$ and $j$ at time $t$ by $d_{ij}^{[t]}$. The distance between them is computed from their locations $\vect{x}_i$ and $\vect{x}_j$ by $$\begin{aligned} \label{eq:distance} d_{ij}^{[t]}=\sqrt{(\vect{x}_i^{[t]}-\vect{x}_j^{[t]})\trs(\vect{x}_i^{[t]}-\vect{x}_j^{[t]})}. \end{aligned}$$ and they are defined analogously to . All the distances are measured in meters. The sensors also report their measurements of (in dBm), and they are denoted by $\vect{z}^{[t]}=[z_1^{[t]} \cdots z_N^{[t]}]^\top$. Assuming a log-normal path loss model, we express the as $$\begin{aligned} \LABEQ{measurementsNoNoise} \vect{z}^{[t]}&= {\sf \vect{1}} P^{[t]} - 10\alpha^{[t]} \log_{10}(\vect{d}^{[t]}) + \vect{v}^{[t]} + \vect{w}^{[t]},\end{aligned}$$ where ${\sf \vect{1}}$ is an $N\times 1 $ vector of 1’s, $\alpha^{[t]}$ is the path loss exponent at time $t$, $\vect{v}^{[t]}$ is an attenuation due to shadowing effects and modeled according to $$\begin{aligned} \vect{v}^{[t]}&\sim \gauss{\vect{v}^{[t]}}{\vect{0}}{\matr{\Sigma}_v^{[t]}}, \label{eq:v}\end{aligned}$$ where the notation $\gauss{\vect{v}^{[t]}}{\vect{0}}{\matr{\Sigma}_v^{[t]}}$ signifies that $\vect{v}^{[t]}$ has a Gaussian distribution with a mean vector $\vect{0}$ and a covariance matrix $\matr{\Sigma}_v^{[t]}$. The covariance matrix $\matr{\Sigma}_v^{[t]}$ is comprised of elements given by $$\begin{aligned} \LABEQ{correlation} {\rm Cov}\left(v_i^{[t]} v_j^{[t]}\right) & = {\sigma_v^2} {\rm exp}\left(-{d}_{ij}^{[t]}/D_{\rm corr}\right),\end{aligned}$$ where $D_{\rm corr}$ is a parameter that models the correlation in the measurements, and $\vect{w}^{[t]}$ is some unrelated additive noise, $$\begin{aligned} \vect{w}^{[t]}&\sim \gauss{\vect{w}^{[t]}}{\vect{0}}{\sigma_w^2 \matr{I}_N}. \label{eq:w}\end{aligned}$$ We reiterate that the sensors do not perfectly know their locations (or distances) and instead they only have the estimates of the distances. More specifically, based on the estimates of their positions, $\widehat{\matr{X}}^{[t]}=[\widehat{\vect{x}}_1^{[t]}\cdots \widehat{\vect{x}}_N^{[t]}]$, one can obtain the estimates of their distances from the transmitter, $\widehat{\vect{d}}^{[t]}=[\widehat{d}_1^{[t]} \cdots \widehat{d}_N^{[t]}]$ (see below). We model the estimated distance between the $i$th sensor and the transmitter according to $$\begin{aligned} %\LABEQ{dnoise} \label{location} \widehat{{d}}_i^{[t]} = d_i^{[t]}+\epsilon_d &\sim \gauss{\widehat{{d}}_i^{[t]}}{{d}_i^{[t]}}{\sigma_d^2},\end{aligned}$$ where $\sigma_d^2$ is known. To reflect the uncertainty in $d_i^{[t]}$, we modify to $$\begin{aligned} \label{measurements} \vect{z}^{[t]}&= {\sf \vect{1}} P^{[t]} - \widehat{\vect{q}}^{[t]} \alpha^{[t]} + \vect{u}^{[t]} + \vect{v}^{[t]} + \vect{w}^{[t]}, \end{aligned}$$ where $$\begin{aligned} \widehat{\vect{q}}^{[t]}&= \left[10\log_{10}(\widehat{d}_1^{\;[t]}) \cdots 10\log_{10}(\widehat{d}_N^{\;[t]})\right]^\top,\end{aligned}$$ and $\vect{u}^{[t]}$ is the error that reflects the imprecisely known location of the sensors, $$\begin{aligned} \LABEQ{erroru} \vect{u}^{[t]}&\sim \gauss{\vect{u}^{[t]}}{\vect{0}}{(\rho_u^{[t]})^2 \widehat{\matr{D}}^{[t]}},\end{aligned}$$ with $\rho_u^{[t]}=10\alpha^{[t]}\sigma_d\log_{10}(e)$ (see the Appendix; $\rho_u^{[t]}$ is expressed in mdB) and $\widehat{\matr{D}}^{[t]}={\rm diag}\left\{1/\widehat{d}_1^{2[t]}\cdots 1/\widehat{d}_N^{2[t]}\right\}$. 
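For concreteness, the measurement model above can be simulated in a few lines of code. The sketch below (using numpy) draws sensor locations, builds the exponentially correlated shadowing term, and generates RSS measurements from the log-distance path loss model; the numerical values roughly follow the simulated scenario described later in the paper, except for $\sigma_d$, which is an assumed value used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Values roughly follow the simulated scenario used later in the paper;
# sigma_d is an assumed value for the sensor-location (distance) error.
N, P, alpha = 218, -10.0, 3.5                  # sensors, EIRP [dBm], path loss exponent
sigma_w, sigma_v, sigma_d, D_corr = 7**0.5, 10**0.5, 1.0, 50.0

x0 = np.array([250.0, 250.0])                  # transmitter at the center of the area
X = rng.uniform(0.0, 500.0, size=(N, 2))       # true sensor locations

d = np.linalg.norm(X - x0, axis=1)             # true sensor-transmitter distances
d_hat = d + sigma_d * rng.standard_normal(N)   # distances reported by the sensors

# Spatially correlated shadowing: zero-mean Gaussian with exponential covariance.
pair = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
Sigma_v = sigma_v**2 * np.exp(-pair / D_corr)
v = rng.multivariate_normal(np.zeros(N), Sigma_v)

w = sigma_w * rng.standard_normal(N)           # uncorrelated sensor noise

# Log-distance path loss model for the RSS measurements [dBm].
z = P - 10.0 * alpha * np.log10(d) + v + w
```

When the analysis is instead written in terms of the reported distances $\widehat{d}_i$, the mismatch between $d_i$ and $\widehat{d}_i$ is precisely what gives rise to the additional error term $\vect{u}^{[t]}$ introduced above.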
In the rest of the paper, we assume that both $P^{[t]}$ and $\alpha^{[t]}$ are unknown, where the EIRP can change with time and the path loss exponent is constant. We consider that both variables are Gaussian distributed as explained in . [We also assume that the location of the transmitter, $\vect{x}_0$, is unknown and static]{}. We let the remaining parameters to be known with values and $\sigma_w=\sqrt{7}$ dB. . Given $\vect{z}^{[t]}$, $\widehat{\matr{X}}^{[t]}$, $\widehat{\vect{d}}^{[t]}$, the model in , and all the assumptions, we need to estimate the RSS, $\vect{f}_g^{[t]}$, at the $M$ grid locations, $\matr{X}_g$. Note that the model in can be simplified by ignoring the estimation errors of the distances. In that case, we remove $\vect{u}^{[t]}$ from the equation and work with $$\begin{aligned} \label{mod3} \hspace{-0.7cm}\vect{z}^{[t]}&= {\sf \vect{1}} P^{[t]} - 10\alpha^{[t]} \log_{10}(\widehat{\vect{d}}^{[t]}) + \vect{v}^{[t]} + \vect{w}^{[t]}. \end{aligned}$$ In , we will compare the performances of the models based on and , respectively. Errors of sensor locations -------------------------- In , we plotted the measured power at distance $d_i$ from the transmitter as described by $\alpha = 3.5$ and distances from $0$ to $250$ m when the distance is known correctly (red solid line). The dotted blue curve shows the measurements according to the model when the incorrect distance has an error of $+10$ m, i.e., $\widehat{d}_i=d_i+10$ m. We also included two more curves, $-10\alpha \log_{10} \widehat{d}_i\pm\rho_u/\widehat{d}_i$, where $\rho_u=200$. It can be observed that the error of the model at small distances is quite high. The curves suggest how the incorrect estimate of the sensor’s distance to the transmitter affects the estimates of the remaining unknowns. {width="3.5in"} [Estimation of the unknown parameters]{} ======================================== Estimation of the transmitter location -------------------------------------- We estimate the location of the transmitter from the measurements of RSS at the reporting sensors by the weighted centroid approach [@Nurminen17]. where $$\begin{aligned} \LABEQ{weights} w_{i}^{[t]}&={10^{z_{i}^{[t]}/10}}, \\ \hat{w}^{[t]}&=\hat{w}^{[t-1]}+\sum_{i=1}^{N} w_{i}^{[t]}.\end{aligned}$$ Thus, $$\begin{aligned} \label{dist-1-est} \widehat{d}_{i}^{\newt{[t]}}&=|\widehat{\vect{x}}_i^{\newt{[t]}}-\widehat{\vect{x}}_0^{\newt{[t]}}|,\;\;\;\; i=1, 2, \ldots, N,\\ \label{dist-2-est} \widehat{d}_{g_i}^{\newt{[t]}}&=|\vect{x}_{g_i}-\widehat{\vect{x}}_0^{\newt{[t]}}|,\;\;\; i=1, 2, \ldots, M. \end{aligned}$$ We note that the error in estimating $\widehat{\bf x}_0^{\newt{[t]}}$ propagates in the estimates of the distances defined in and . Bayesian estimation of $\alpha^{[\MakeLowercase{t}]}$ and $P^{[\MakeLowercase{t}]}$ ----------------------------------------------------------------------------------- We assume that both variables, $\alpha^{[\MakeLowercase{t}]}$ and $P^{[\MakeLowercase{t}]}$, are independent and Gaussian distributed, i.e., $$\begin{aligned} \label{eq:P} p(P^{[t]}|\boldsymbol{\theta}_P)&=\gauss{P^{[t]}}{\mu_P^{[t]}}{\sigma_P^{2\,[t]}}, \\ \label{eq:al} p(\alpha^{[t]}|\boldsymbol{\theta}_\alpha)&=\gauss{\alpha^{[t]}}{\mu_\alpha^{[t]}}{\sigma_\alpha^{2\,[t]}}, \end{aligned}$$ where the hyper-parameters $\boldsymbol{\theta}^{[t]}=[\mu_\alpha^{[t]},\sigma^{[t]}_\alpha,\mu_P^{[t]},\sigma^{[t]}_P]$ have to be estimated from the current set of measurements, $\vect{z}^{[t]}$. 
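Returning to the transmitter-location step above, a rough numerical sketch of an RSS-weighted centroid is shown below. The weights follow the definition $w_i^{[t]}=10^{z_i^{[t]}/10}$ given in the text; the particular way the running weight sum is blended with the previous estimate is an illustrative choice of ours and not necessarily the exact recursion of the cited reference.

```python
import numpy as np

def weighted_centroid(X, z, x0_prev=None, w_prev=0.0):
    """Sketch of an RSS-weighted centroid estimate of the transmitter location.

    X : (N, 2) reported sensor positions; z : (N,) RSS measurements in dBm.
    The recursive blending with (x0_prev, w_prev) is illustrative only.
    """
    w = 10.0 ** (np.asarray(z) / 10.0)          # w_i = 10^(z_i/10), linear-scale power
    w_sum = w_prev + w.sum()                     # running sum of weights
    centroid = (w[:, None] * np.asarray(X)).sum(axis=0)
    if x0_prev is not None:
        centroid = centroid + w_prev * np.asarray(x0_prev)
    return centroid / w_sum, w_sum
```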
To ease the reading, we remove the superscript ${[t]}$ in the remaining of this section. The joint posterior distribution of $\alpha$ and $P$ given a set of measurements, $\vect{z}$, from sensors with given positions, $\widehat{\matr{X}}$, can be computed by using the Bayes’ rule, or $$\begin{aligned} p(\alpha,P|\vect{z},\widehat{\matr{X}},\boldsymbol{\theta}) &= \frac{p(\vect{z}|\alpha,P,\widehat{\matr{X}}) p(\alpha|\boldsymbol{\theta})p(P|\boldsymbol{\theta})}{p(\vect{z}|\widehat{\matr{X}},\boldsymbol{\theta})},\end{aligned}$$ where $p(\vect{z}|{\alpha,P},\widehat{\matr{X}})$ is a Gaussian distribution of $\vect{z}$, $p(\vect{z}|{\alpha,P},\widehat{\matr{X}})=\gauss{\vect{z}}{\boldsymbol{\mu}_{z|\alpha,P}}{\matr{\Sigma}_{z|\alpha,P}}$, with $$\begin{aligned} \boldsymbol{\mu}_{z|\alpha,P}&=\vect{1}P-\widehat{\vect{q}}{\alpha}, \\ \matr{\Sigma}_{z|\alpha,P}&=\newt{\rho_u^2}\widehat{\matr{D}}+\matr{\Sigma}_v+\sigma_w^2 \matr{I}_N. %\E(z_i)=P-10\widehat{\alpha}(\widehat{d}_i),\end{aligned}$$ The marginalized distribution of the measurements, $p(\vect{z}|\widehat{\matr{X}},\boldsymbol{\theta})$, can be computed by $$\begin{aligned} \LABEQ{marginalaP} p(\vect{z}|\widehat{\matr{X}},\boldsymbol{\theta})&=\int p(\vect{z}|\alpha,P,\widehat{\matr{X}}) p(\alpha|\boldsymbol{\theta})p(P|\boldsymbol{\theta}) \df{\alpha}\df{P},\end{aligned}$$ which is also a Gaussian distribution [@Bishop06], $p(\vect{z}|\widehat{\matr{X}},\boldsymbol{\theta})\sim\gauss{\vect{z}}{\boldsymbol{\mu}_z}{\matr{\Sigma}_z}$, with $$\begin{aligned} \boldsymbol{\mu}_z&=\vect{\rm \vect{1}}\mu_P - \widehat{\vect{q}}\mu_\alpha, \LABEQ{meanz} \\ \matr{\Sigma}_z&=\matr{\Sigma}_{z|\alpha,P}+ \sigma^2_\alpha \widehat{\matr{Q}} + \sigma_P^2 \matr{J}_N, \LABEQ{varz} %\Sigma_z&=\Sigma_{z|\alpha}+ Q\trs C_\alpha Q, \\ \LABEQ{m1} %\mu_z&=\Sigma_z \Sigma_{z|\alpha}\inv Q (Q\trs \Sigma_{z|\alpha}\inv Q+C_\alpha\inv)\inv(Q\trs \Sigma_{z|\alpha}\inv P + C_\alpha\inv \mu_\alpha)\end{aligned}$$ where $\widehat{\matr{Q}}=\widehat{\vect{q}} \widehat{\vect{q}}\trs$ and $\matr{J}_N$ is an $N\times N$ matrix with all elements equal to one. We can obtain estimates of the hyper-parameters in and from the data by applying the empirical Bayes method [@Carlin00; @Santos17b]. More specifically, we approximate $\boldsymbol{\mu}_z$ with $\vect{z}$ and solve the system of equations in to obtain the estimated hyper-parameters of the mean as the following optimization problem: $$\begin{aligned} \LABEQ{meanPalpha} \begin{bmatrix} \widehat{\mu}_P\\ \widehat{\mu}_\alpha \end{bmatrix} &= \min_{\mu_P,\mu_\alpha} \norm{ \begin{bmatrix} {\sf \vect{1}} & -\widehat{\vect{q}} \end{bmatrix} \begin{bmatrix} {\mu}_P\\ {\mu}_\alpha \end{bmatrix} - \vect{z} }^2 \; \;\; \mbox{subject to } \mu_\alpha\geq2.\end{aligned}$$ [We recall that the sensors close to the transmitter introduce higher errors in the estimation of $\alpha$ and $P$ than the sensors that are further away, as already explained in . 
For this reason, we used a weighting scheme to account for this in the sum of the squares in as follows: $$\begin{aligned} \LABEQ{meanPalphaChi} \begin{bmatrix} \widehat{\mu}_P\\ \widehat{\mu}_\alpha \end{bmatrix} &= \min_{\mu_P,\mu_\alpha} \sum_{i=1}^N \left(\frac{\mu_P -q_i\mu_\alpha -z_i}{\newt{1/\widehat{d}_i}}\right)^2 \; \;\; \mbox{subject to } \mu_\alpha\geq2 \nonumber \\ &= \min_{\mu_P,\mu_\alpha} \norm{ \begin{bmatrix} {\sqrt{\widehat{\matr{D}}}\inv\sf \vect{1}} & -\sqrt{\widehat{\matr{D}}}\inv\widehat{\vect{q}} \end{bmatrix} \begin{bmatrix} {\mu}_P\\ {\mu}_\alpha \end{bmatrix} - \sqrt{\widehat{\matr{D}}}\inv\vect{z} }^2.\end{aligned}$$ ]{} Next, we approximate $\matr{\Sigma}_z$ with $(\vect{z}-\widehat{\boldsymbol{\mu}}_z)(\vect{z}-\widehat{\boldsymbol{\mu}}_z)\trs$, where $\widehat{\boldsymbol{\mu}}_z=\vect{\rm \vect{1}}\widehat{\mu}_P - \widehat{\vect{q}}\widehat{\mu}_\alpha$, and obtain the estimates of the hyper-parameters of the variance from by solving $$\begin{aligned} \begin{bmatrix} \widehat{\sigma}^{2}_P\\ \widehat{\sigma}^2_\alpha \end{bmatrix} &= \min_{\sigma_P^2,\sigma_\alpha^2} \norm{ \begin{bmatrix} {\sf \vect{1}} & \vect{b} \end{bmatrix} \begin{bmatrix} {\sigma}^2_P\\ {\sigma}^2_\alpha \end{bmatrix} - \vect{a} }^2 \; \;\; \mbox{subject to } \sigma_\alpha^2,\sigma_P^2\geq0, %[\widehat{\mu}_P, \widehat{\mu}_\alpha]=\min_{\alpha,P} || A \LABEQ{varPalpha}\end{aligned}$$ where $\vect{a}$ and $\vect{b}$ are vectors formed by the diagonal elements of $\matr{A}=(\vect{z}-\widehat{\boldsymbol{\mu}}_\vect{z}) (\vect{z}- \widehat{\boldsymbol{\mu}}_\vect{z})\trs- \boldsymbol{\Sigma}_{\vect{z}|\alpha,P}$ and $\widehat{\matr{Q}}$, respectively. [Note that the estimation of the variances in requires knowledge of $\matr{\Sigma}_v$, i.e., the values of the parameters $D_{corr}$ and $\sigma_v^2$. When these parameters are not available, the previous variances can be learned from the GP, as proposed in the next section. ]{} GP for regression in static fields ================================== In this section, we present an approach to estimate the spatial distribution of based on a . We have a set of RSS measurements $\vect{z}^{[t]}$, obtained at locations $\widehat{\matr{X}}^{[t]}$, and we want to estimate the levels $\vect{f}_g^{[t]}$ at a set of given nodes. Note that for now we do not use any information from previous time instants. 
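Before turning to the GP machinery, the constrained fit of the previous subsection can be sketched numerically. The snippet below (assuming scipy is available) solves a weighted least-squares problem for $\mu_P$ and $\mu_\alpha$ with the constraint $\mu_\alpha\geq2$; it is only one way to implement the fit, with the row weighting chosen to down-weight sensors close to the transmitter, as discussed above.

```python
import numpy as np
from scipy.optimize import lsq_linear

def estimate_mu_P_alpha(z, d_hat):
    """Sketch of the empirical-Bayes fit of (mu_P, mu_alpha) with mu_alpha >= 2.

    Rows are weighted by d_hat, so sensors very close to the transmitter,
    whose distance errors are most damaging, contribute less to the fit.
    """
    q = 10.0 * np.log10(d_hat)
    A = np.column_stack([np.ones_like(q), -q])   # model: z ~ mu_P - q * mu_alpha
    W = np.diag(d_hat)                            # residual_i scaled by d_hat_i
    res = lsq_linear(W @ A, W @ np.asarray(z),
                     bounds=([-np.inf, 2.0], [np.inf, np.inf]))
    mu_P, mu_alpha = res.x
    return mu_P, mu_alpha
```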
An equivalent formulation of is given by $$\begin{aligned} \LABEQ{sysmodGP} %z(X)\equiv \vect{z}^{[t]}&=&{f}(\widehat{\matr{X}}^{[t]}) + \vect{\noise}^{[t]},\end{aligned}$$ where $\vect{\noise}^{[t]}$ is an additive Gaussian noise vector defined by $$\begin{aligned} \vect{\noise}^{[t]} &=&\vect{u}^{[t]}+\vect{w}^{[t]} \nonumber \\ &\sim& \gauss{\vect{\noise}^{[t]}}{\vect{0}}{\matr{\Sigma}_\noise^{[t]}=\sigma_w^2\matr{I}_N+\rho_u^2\widehat{\matr{D}}^{[t]}},\end{aligned}$$ and the function $f(\widehat{\matr{X}}^{[t]})={\rm \vect{1}}P^{[t]}-\widehat{\vect{q}}^{[t]}\alpha^{[t]}+\vect{v}^{[t]}$ is modeled as a GP, i.e., $$f(\widehat{\matr{X}}^{[t]})\sim\GP{f(\widehat{\matr{X}}^{[t]})}{\vect{m}_{\widehat{\matr{X}}}^{[t]}}{\matr{K}_{\widehat{\matr{X}}}^{[t]}},$$ whose mean and covariance functions are given by $$\begin{aligned} \vect{m}_{\widehat{\matr{X}}}^{[t]}&=&\vect{\rm \vect{1}}\widehat{\mu}_P^{[t]} - \widehat{\vect{q}}^{[t]}\widehat{\mu}_\alpha^{[t]}, \LABEQ{meanfx}\\ %[m_{\widehat{\vect{x}}_1}^{[t]}\cdots m_{\widehat{\vect{x}}_N}^{[t]}]\trs, \\ \matr{K}_{\widehat{\matr{X}}}^{[t]}&= &\matr{K}(\widehat{\matr{X}}^{[t]},\widehat{\matr{X}}^{[t]}), %+\widehat{\sigma}_{\alpha}^{2\;[t]} \widehat{\matr{Q}}^{[t]}+\widehat{\sigma}_P^{2\;[t]}\matr{J}_N,\end{aligned}$$ where $\matr{K}(\widehat{\matr{X}}^{[t]},\widehat{\matr{X}}^{[t]})$ is an $N\times N$ matrix with elements $k(\widehat{\vect{x}}_i^{[t]},\widehat{\vect{x}}_j^{[t]})$ determined by a specific kernel, defined in , and $\widehat{\boldsymbol{\theta}}^{[t]}=[\widehat{\mu}_\alpha^{[t]},\widehat{\sigma}^{[t]}_\alpha,\widehat{\mu}_P^{[t]},\widehat{\sigma}^{[t]}_P,{\widehat{\vect{x}}_0^{[t]}}]$ are the estimated hyper-parameters of the GPR computed as described in . Note that we have not adopted zero-mean functions, as commonly done [@Rasmussen06], because this assumption would violate the model from . [As already discussed in , the hyper-parameters $\widehat{\sigma}^{[t]}_P$ and $\widehat{\sigma}^{[t]}_\alpha$ can be computed if the values of the parameters $\sigma_v^2$ and $D_{corr}$ are known. If $\sigma_v^2$ and $D_{corr}$ are not known, we propose to estimate them as parameters of the kernel which will be learned from the GP (see ). ]{} Given a set of training points, ($\vect{z}^{[t]}, \widehat{\matr{X}}^{[t]}$), we want to estimate the spatial field, $\vect{f}_g^{[t]}$, at a set of test points, $\matr{X}_g$, whose prior is distributed according to $$\begin{aligned} p(\vect{f}_g^{[t]}|\boldsymbol{\theta}^{[t]})&= \gauss{\vect{f}_g^{[t]}}{\vect{m}_{{\matr{X}_g}}^{[t]}}{\matr{K}_{{\matr{X}}_g}^{[t]}},\end{aligned}$$ where $$\begin{aligned} \vect{m}_{{\matr{X}_g}}^{[t]}&=\vect{\rm \vect{1}}\widehat{\mu}_P^{[t]} - \vect{q}_g\widehat{\mu}_\alpha^{[t]},\LABEQ{muXg} \\ %m_{\widehat{\vect{x}}_{g_i}}^{[t]}&=&\widehat{\mu}_{P}^{[t]}-10\widehat{\mu}_{\alpha}^{[t]}\log_{10}({d}_{g_i}^{[t]}), \\ \matr{K}_{{\matr{X}}_g}^{[t]}&=\matr{K}({\matr{X}_g},{\matr{X}_g}), \\%+\widehat{\sigma}_{\alpha}^{2\;[t]} \vect{q}_g{\vect{q}_g}\trs +\widehat{\sigma}_P^{2\;[t]}\matr{J}_M, \\ \vect{q}_g& = [10\log_{10}(\widehat{d}_{g_1})\cdots 10\log_{10}(\widehat{d}_{g_M})]\trs.\end{aligned}$$ We note that in the last equation we have a hat symbol above ${d}_{g_i}$ because even though we know the exact locations of the nodes where we estimate the RSS, we do not know the exact location of the transmitter. 
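The conditioning used in what follows is the standard GP predictive computation. As a rough illustration, the snippet below builds the log-distance prior mean and a simplified stationary kernel (the kernel actually used in the text also carries terms for the uncertainty in $\alpha$ and $P$), and then computes the posterior mean and covariance at the grid nodes; the kernel parameters shown are placeholders, not values from the paper.

```python
import numpy as np

def exp_kernel(A, B, sigma_k=3.0, ell=50.0):
    """Illustrative exponential kernel on 2-D locations (placeholder parameters)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return sigma_k**2 * np.exp(-D / ell)

def gp_predict(Xs, z, Xg, x0_hat, mu_P, mu_alpha, Sigma_eps):
    """Standard GP conditioning with the log-distance prior mean (a sketch)."""
    q  = 10.0 * np.log10(np.linalg.norm(Xs - x0_hat, axis=1))
    qg = 10.0 * np.log10(np.linalg.norm(Xg - x0_hat, axis=1))
    m, mg = mu_P - mu_alpha * q, mu_P - mu_alpha * qg   # prior means (sensors / grid)

    K, Kg, Kgs = exp_kernel(Xs, Xs), exp_kernel(Xg, Xg), exp_kernel(Xg, Xs)

    C = K + Sigma_eps                                    # noisy training covariance
    mu_g = mg + Kgs @ np.linalg.solve(C, z - m)          # posterior mean at the grid
    Sigma_g = Kg - Kgs @ np.linalg.solve(C, Kgs.T)       # posterior covariance
    return mu_g, Sigma_g
```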
The joint distribution of the training outputs, $\vect{z}^{[t]}$, and the test outputs, $\vect{f}_g^{[t]}$, that fits the model and priors above is $$\begin{aligned} \LABEQ{jointGP} %\hspace{-0.7cm} \begin{bmatrix} \vect{z}^{[t]} \\ \vect{f}_g^{[t]} \end{bmatrix} \sim \gauss{\begin{bmatrix} \vect{z}^{[t]} \\ \vect{f}_g^{[t]} \end{bmatrix}}{\begin{bmatrix} \vect{m}_{\widehat{\matr{X}}}^{[t]} \\ \vect{m}_{{\matr{X}_g}}^{[t]} \end{bmatrix}}{\begin{bmatrix} \matr{K}_{\widehat{\matr{X}}}^{[t]}+\matr{\Sigma}_\noise^{[t]} & \matr{K}_{{\matr{X}}_g,{\widehat{\matr{X}}}}^{[t]\,\top} \\ \matr{K}_{{\matr{X}}_g,{\widehat{\matr{X}}}}^{[t]} & \matr{K}_{{\matr{X}}_g}^{[t]} \end{bmatrix}}.\end{aligned}$$ The prediction of the RSS at the desired nodes is presented by their posterior distribution, $\vect{f}_g^{[t]}$. This distribution is obtained by conditioning on the observations, $\vect{z}^{[t]}$, estimated locations of the sensors, $\widehat{\matr{X}}^{[t]}$, and the estimated hyper-parameters, $\widehat{\boldsymbol{\theta}}^{[t]}$, i.e., $$\begin{aligned} \LABEQ{condunav2} p\left(\vect{f}_{g}^{[t]}|\matr{X}_{g},{\widehat{\matr{X}}}^{[t]},\vect{z}^{[t]},\widehat{\boldsymbol{\theta}}^{[t]}\right)&= \gauss{\vect{f}_{g}^{[t]}}{\boldsymbol{\mu}_g^{[t]}}{\boldsymbol{\Sigma}_g^{[t]}},\end{aligned}$$ where $$\begin{aligned} \LABEQ{mGP} \boldsymbol{\mu}_g^{[t]}&=\vect{m}_{\matr{X}_g}^{[t]}\!+\! \matr{K}_{\matr{X}_g,{\widehat{\matr{X}}}}^{[t]}{\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \;\!\!\!\!\left(\vect{z}^{[t]}-\vect{m}_{\widehat{\matr{X}}}^{[t]}\right),\\ % \boldsymbol{\Sigma}_g^{[t]}&=\matr{K}_{{\matr{X}}_g}^{[t]}\!-\!\matr{K}_{{\matr{X}}_g,{\widehat{\matr{X}}}}^{[t]}{\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \matr{K}_{{\matr{X}}_g,{\widehat{\matr{X}}}}^{[t]\,\top},\LABEQ{vGP} \\ \matr{C}_{\widehat{\matr{X}}}^{[t]}&=\matr{K}_{\widehat{\matr{X}}}^{[t]}+\boldsymbol{\Sigma}_\noise^{[t]}. \end{aligned}$$ We refer to this solution as static GP-based approach (sGP), and it is summarized in . Given a specific time instant $t$ and the RSS measured by sensors ($\vect{z}^{[t]}$) at positions $\widehat{\matr{X}}^{[t]}$: Obtain the estimates of the hyper-parameters, $$\begin{aligned} \widehat{\boldsymbol{\theta}}^{[t]}&=&\left[\widehat{\mu}_\alpha^{[t]},\widehat{\sigma}^{[t]}_\alpha,\widehat{\mu}_P^{[t]},\widehat{\sigma}^{[t]}_P,{\widehat{\vect{x}}_0^{[t]}}\right], \nonumber\end{aligned}$$ by solving and . Compute the posterior distribution at the grid nodes $p\left(\vect{f}_{d}^{[t]}|\matr{X}_{d},{\widehat{\matr{X}}}^{[t]},\vect{z}^{[t]},\widehat{\boldsymbol{\theta}}^{[t]}\right)$ in . The kernel and its hyper-parameters ----------------------------------- [In implementing the GPR, we need to select the kernel of the GPR which measures the similarity between points and is used to predict values of RSS at the nodes of interest from the measured RSS. We chose to work with a combination of different kernels given by $$\begin{aligned} k(\vect{x}_i,\vect{x}_j) &= {\sigma_k^2} {\rm exp}\left(-\frac{\sqrt{(\vect{x}_i-\vect{x}_j)(\vect{x}_i-\vect{x}_j)\trs}}{2l^2}\right) + \nonumber \\ &+{\sigma}_\alpha^2\widehat{q}(\vect{x}_i)\widehat{q}(\vect{x}_j) + {\sigma}_P^2,\end{aligned}$$ where $\widehat{q}(\vect{x}_i)=10\log_{10}\sqrt{(\vect{x}_i-\widehat{\vect{x}}_0)(\vect{x}_i-\widehat{\vect{x}}_0)\trs}$. The parameters $\sigma_k$, $l$, ${\sigma}_\alpha$ and ${\sigma}_P$ are obtained by the GP by minimizing the log marginal likelihood. ]{} [Here, we make an important point about the proposed approach. 
It is based on a detailed description of the system by a mathematical model. The objective of the model is to explain as much of the observed data as possible. This also allows the GP to learn more quickly and to correct for the modeling errors. In other words, GPs, being quite robust, will compensate for errors in the estimation of the mean term, i.e., in the path loss exponent and the transmitted power. ]{}

Bayesian Cramér-Rao Bound
=========================

In this section we focus on a particular node of the grid, say $\vect{x}_{g_i}^{[t]}$. The GP provides us with a Bayesian estimate of the true RSS at this grid node, $f_{g_i}^{[t]}$,
$$\begin{aligned}
\widehat{f}_{g_i}^{[t]}&\sim&\gauss{\widehat{f}_{g_i}^{[t]}}{{\mu}_{g_i}^{[t]}}{{\sigma}_{g_i}^{2\,[t]}},\end{aligned}$$
i.e., our GP approach obtains the mean, ${\mu}_{g_i}^{[t]}$, and the variance, ${\sigma}_{g_i}^{2\,[t]}$, as given in the previous section. We want to find the lower bound of the MSE of $\widehat{f}_{g_i}^{[t]}$, $\E_{\vect{z},f_{g_i}}[(f_{g_i}^{[t]}-\widehat{f}_{g_i}^{[t]})^2]$. We obtain this bound by computing the Bayesian Cramér-Rao bound [@Van07] according to [@Waagberg17]
$$\begin{aligned}
\LABEQ{CRB1}
\E_{\vect{z},f_{g_i}}[(f_{g_i}^{[t]}&-\widehat{f}_{g_i}^{[t]})^2] \\
&\geq\left(\E_{\vect{z},f_{g_i}}\left[\left(\frac{\partial}{\partial f_{g_i}}\ln p(\vect{z}^{[t]},f_{g_i}^{[t]})\right)^2\right]\right)\inv ={\sigma}_{g_i}^{2\,[t]}. \nonumber\end{aligned}$$
We note that apart from the estimated variable, $f_{g_i}^{[t]}$, we have three more parameters, $\alpha^{[t]}$, $P^{[t]}$ and $\vect{x}_0$, that introduce errors in the estimation.

In [@Waagberg17], the authors develop a Cramér-Rao bound for GP regression with deterministic hyper-parameters. The authors conclude that a term must be added to the variance in the bound above, as shown below. This term is a function of the derivative with respect to the hyper-parameters of the mean of the GP. The result is obtained under the assumption that the estimates of the hyper-parameters are unbiased, or at least asymptotically unbiased. We can cast our solution as a GPR where the deterministic hyper-parameters of the mean are $\boldsymbol{\mu}_{P,\alpha,\vect{x}_0}^{[t]}=[\widehat{\mu}_P^{[t]}, \; \widehat{\mu}_\alpha^{[t]},{\widehat{\vect{x}}_0\trs}]\trs$. Hence we may apply, in a straightforward way, the result in [@Waagberg17] to develop a hybrid Cramér-Rao bound (HCRB) where the hyper-parameters $\widehat{\mu}_\alpha^{[t]}$, $\widehat{\mu}_P^{[t]}$ and ${\widehat{\vect{x}}_0}$ are considered as deterministic. The expression for the HCRB is
$$\begin{aligned}
\LABEQ{HCRB}
\E_{\vect{z},f_{g_i}}[(f_{g_i}^{[t]}-\widehat{f}_{g_i}^{[t]})^2]&\geq&{\sigma}_{g_i}^{2\,[t]}+ \vect{g}_{g_i}^{[t]\;\top}{\matr{M}_{g_i}^{[t]}}\inv\vect{g}_{g_i}^{[t]},\end{aligned}$$
where
$$\begin{aligned}
\vect{g}_{g_i}^{[t]}\;\;&=\;\;\frac{\partial}{\partial \boldsymbol{\mu}_{P,\alpha,\vect{x}_0}^{[t]}}\left(m_{{\vect{x}_{g_i}}}-\vect{m}_{\widehat{\matr{X}}}^{[t]\,\top}{\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \vect{k}_{\vect{x}_{g_i}{\widehat{\matr{X}}}}^{[t]\,\top}\right), \\
\matr{M}_{g_i}^{[t]}\;\;&=\;\;\frac{\partial \vect{m}_{\widehat{\matr{X}}}^{[t]\,\top}}{\partial \boldsymbol{\mu}_{P,\alpha,\vect{x}_0}^{[t]}} {\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \frac{\partial\vect{m}_{\widehat{\matr{X}}}^{[t]}}{\partial \boldsymbol{\mu}_{P,\alpha,\vect{x}_0}^{[t]}}.\end{aligned}$$
We point out that the derivatives are with respect to the hyper-parameters $\widehat{\mu}_\alpha^{[t]}$ and $\widehat{\mu}_P^{[t]}$ and not with respect to $\alpha^{[t]}$, $P^{[t]}$.
We can show that [$$\begin{aligned} \vect{g}_{g_i}^{[t]}&= \begin{bmatrix} 1 \\ -10\log_{10}({d}_{g_{i}}) \\ {\frac{c}{d_{g_i}^2 } (\widehat{\vect{x}}_{0}-\vect{x}_{g_i})} \end{bmatrix} - \nonumber \\ &- \begin{bmatrix} {\sf \vect{1}} & -\widehat{\vect{q}}^{[t]} & {\matr{A}}\\ \end{bmatrix}\trs {\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \vect{k}_{\vect{x}_{g_i},{\widehat{\matr{X}}}}^{[t]\,\top} + \nonumber \\ &+ \begin{bmatrix} 0 \\ 0 \\ \vect{m}_{\widehat{\matr{X}}}^{[t]\,\top} {\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \matr{A}_1{\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \vect{k}_{\vect{x}_{g_i},{\widehat{\matr{X}}}}^{[t]\,\top}\\ \vect{m}_{\widehat{\matr{X}}}^{[t]\,\top} {\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \matr{A}_2{\left(\matr{C}_{\widehat{\matr{X}}}^{[t]}\right)}\inv \vect{k}_{\vect{x}_{g_i},{\widehat{\matr{X}}}}^{[t]\,\top} \end{bmatrix}, \\ \matr{M}_{g_i}^{[t]}&= \begin{bmatrix} {\sf \vect{1}} & -\widehat{\vect{q}}^{[t]} & {\matr{A}}\\ \end{bmatrix}\trs {\matr{C}_{\widehat{\matr{X}}}^{[t]}}\inv \begin{bmatrix} {\sf \vect{1}} & -\widehat{\vect{q}}^{[t]} & {\matr{A}}\\ \end{bmatrix},\end{aligned}$$ where $$\begin{aligned} c&=-10\mu_{\alpha}\log_{10}(e), \\ \matr{A}&=c \widehat{\matr{D}} (\matr{J}_{N\times2}\mbox{diag}(\widehat{\vect{x}}_{0})-\widehat{\matr{X}}) \in \mathbb{R}^{N\times2}, \\ \matr{A}_1&=-2\sigma_u^2 \mbox{diag}\{\widehat{\matr{D}}^2 (\vect{\rm \vect{1}} \widehat{\vect{x}}_{0}(1) - \widehat{\vect{X}}(:,1))\} \in \mathbb{R}^{N\times N}, \\ \matr{A}_2&=-2\sigma_u^2 \mbox{diag}\{\widehat{\matr{D}}^2 (\vect{\rm \vect{1}} \widehat{\vect{x}}_{0}(2) - \widehat{\vect{X}}(:,2))\} \in \mathbb{R}^{N\times N},\end{aligned}$$]{} and $\widehat{\vect{x}}_{0}(j)$ is the $j$th element of the vector $\widehat{\vect{x}}_{0}$ and $\widehat{\vect{X}}(:,j)$ represents the $j$th column of the matrix $\widehat{\vect{X}}$. At this point, we emphasize that our GPR has hyper-parameters, $\alpha^{[t]}$ and $P^{[t]}$, with priors that depend on their own hyper-parameters, $\boldsymbol{\theta}^{[t]}=[\mu_\alpha^{[t]},\sigma^{[t]}_\alpha,\mu_P^{[t]},\sigma^{[t]}_P,{\widehat{\vect{x}}_0}]$. We are using the results in [@Waagberg17] for the latter hyper-parameters. Therefore, we are somehow assuming that the variance of the GPR already includes the uncertainties of $\alpha^{[t]}$ and $P^{[t]}$ by averaging in a Bayesian way. In turn, the HCRB provides the overall effect of $\mu_\alpha^{[t]}$ and $\mu_P^{[t]}$ on the of the RSS. Gaussian processes for time-varying fields ========================================== We also consider the possibility that the RSS field may vary with time. For example, the , the orientation of the antennas, the objects around the sensors, among others, may change with time, and they may affect the RSS at the locations of the sensors. Also, some sensors may become unavailable, or they can move and change their positions. One possibility to address this is to simply use the approach from the previous section only on the current data and ignore the rest. An alternative is to include all the past information but enforcing reduced influence of older data on the estimates. As explained in , the main interest here is not finding how the field evolves at every position with time, which is already dealt by other approaches [@Perez13]. Instead, we just want to estimate the at the grid positions, i.e., at a set of predefined locations. We propose to update the estimates at these locations with information from the new observations. 
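The precise update is given next; as a preview, the following sketch shows how the information carried over from time $t-1$ is blended with the posterior terms computed from the current measurements through a forgetting factor $\lambda$. The variable names are ours, but the arithmetic mirrors the update equations that follow in the text.

```python
def recursive_grid_update(m_prev, mu_prev, K_prev, Sigma_prev,
                          m_now, K_now, mu_post, Sigma_post, lam=0.5):
    """Sketch of the recursive update at the M grid nodes (lam = forgetting factor).

    (m_*, K_*) are GP prior means/covariances at the grid nodes, (mu_*, Sigma_*)
    posterior quantities; mu_post and Sigma_post are the current-data terms
    K_{Xg,X} C^{-1}(z - m_X) and K_{Xg,X} C^{-1} K_{Xg,X}^T of the static GP.
    lam = 1 discards all past information and reduces to the static solution.
    """
    mu_prior = mu_prev - m_prev                  # information carried over from t-1
    Sig_prior = K_prev - Sigma_prev
    mu_g = m_now + (1 - lam) * mu_prior + lam * mu_post
    Sigma_g = K_now - ((1 - lam) * Sig_prior + lam * Sigma_post)
    return mu_g, Sigma_g
```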
We compute the posterior distribution at the grid nodes, which is then used as a prior of the for the next instant time. In order to emphasize the information in the more recent samples and “forget” the information from old ones, we inject a forgetting factor, $0<\lambda\leq1$. As a result, our solution is a linear combination of past and current information weighted by $\lambda$ and $1-\lambda$, i.e., $$\begin{aligned} \LABEQ{postRGP} p\left(\vect{f}_{g}^{[t]}|\matr{X}_{g},{\widehat{\matr{X}}}^{[1:t]},\vect{z}^{[1:t]}\right)&= \gauss{\vect{f}_{g}^{[t]}}{\boldsymbol{\mu}_g^{[t]}}{\boldsymbol{\Sigma}_g^{[t]}}, %\nonumber\end{aligned}$$ where $$\begin{aligned} \boldsymbol{\mu}_g^{[t]}&= \vect{m}_{\matr{X}_g}^{[t]}+ (1-\lambda) \boldsymbol{\mu}_{prior}^{[t]} + \lambda \boldsymbol{\mu}_{post}^{[t]} , \LABEQ{meanRGP} %\nonumber \\ \matr{\Sigma}_g^{[t]}&=\matr{K}_{\matr{X}_g}^{[t]}- \Big((1-\lambda) \matr{\Sigma}_{prior}^{[t]} + \lambda\matr{\Sigma}_{post}^{[t]}\Big), %\nonumber \LABEQ{covRGP}\end{aligned}$$ and $$\begin{aligned} \boldsymbol{\mu}_{post}^{[t]}&=\matr{K}_{\matr{X}_g,{\widehat{\matr{X}}}}^{[t]}{\left(\matr{K}_{\widehat{\matr{X}}}^{[t]}+\boldsymbol{\Sigma}_\noise^{[t]}\right)}\inv \left(\vect{z}^{[t]}-\vect{m}_{\widehat{\matr{X}}}^{[t]}\right), %\nonumber \\ \matr{\Sigma}_{post}^{[t]} &= \matr{K}_{\matr{X}_g,{\widehat{\matr{X}}}}^{[t]}{\left(\matr{K}_{\widehat{\matr{X}}}^{[t]}+\boldsymbol{\Sigma}_\noise^{[t]}\right)}\inv \matr{K}_{\matr{X}_g,{\widehat{\matr{X}}}}^{[t]\,\top}, %\nonumber \\ \boldsymbol{\mu}_{prior}^{[t]}&= \boldsymbol{\mu}_g^{[t-1]}-\vect{m}_{\matr{X}_g}^{[t-1]}, %(1-\lambda) \boldsymbol{\mu}_{prior}^{[t-1]} + \lambda \boldsymbol{\mu}_{post}^{[t-1]} %\nonumber \\ \boldsymbol{\Sigma}_{prior}^{[t]}&= \matr{K}_{\matr{X}_g}^{[t-1]}- \matr{\Sigma}_g^{[t-1]}. %\nonumber\end{aligned}$$ Due to the dynamic nature of the scenario, a new estimation of $\alpha$ and $P$ is computed at every time instant following . Note that we are ensuring that the covariance matrix in remains positive definite due to the linear combination weighted by $\lambda$ and $1-\lambda$. Our approach is summarized in , which we refer to as recursive Gaussian process (rGP)-based algorithm. Compute the posterior probability at the grid nodes for $t=0$ by executing and set $\boldsymbol{\mu}_{prior}^{[0]}=\boldsymbol{\mu}_g^{[0]}$, $\matr{\Sigma}_{prior}^{[0]} =\boldsymbol{\Sigma}_g^{[0]}$. Compute the posterior distribution at the grid nodes at time instant $t$, i.e., $p\left(\vect{f}_{g}^{[t]}|\matr{X}_{g},{\widehat{\matr{X}}}^{[1:t]},\vect{z}^{[1:t]}\right)$ as in . Note that when $\lambda= 1$, we forget all the prior knowledge about the previous data and compute the mean and covariance matrix of the GP with just the current data, yielding the approach developed in [@Santos17b]. Experimental results ==================== In this section we include results that illustrate the performance of the proposed approach. First, we explain the setup. Simulated scenario ------------------ The setting is similar to the one used in [@Santos17b]. We simulated an area of 500 m $\times$ 500 m where a single transmitter is placed at the center of the area. We considered a fixed and uniform grid with $M=1088$ nodes, where we want to estimate the RSS. We randomly placed $N=218$ sensors within the area, and their RSS measurements were generated according to with $\hat{d}_i^{[t]}$ replaced by ${d}_i^{[t]}$ and with $\vect{u}^{[t]}$ removed, $\alpha^{[t]}=3.5$ and $P^{[t]}=-10$ dBm. 
The rest of the parameters were set to $\sigma_w=\sqrt{7}$ dB, $\sigma_v=\sqrt{10}$ dB, $\rho_u=200$ mdB, and $D_{corr}=50$ m. [We assumed that the values of the parameters $\sigma_v$ and $D_{corr}$ are not available and that they would be estimated from the GP. ]{} One example of this scenario is shown in , where we represent the grid nodes with red squares, the transmitter with a green triangle and the sensors with circles. The different values of RSS at the sensors are represented with a scale of colors, where yellow means the highest and blue the lowest values.

[Figure: simulated scenario with grid nodes (red squares), the transmitter (green triangle) and sensors (colored circles).]

To quantify the estimation error, we used the mean squared error (MSE). The metric was computed for each time instant as $$\begin{aligned} \mbox{MSE}^{[t]}&=\frac{1}{M}\sum_{i=1}^{M} \left(\mu_{g_i}^{[t]}-f_{g_i}^{[t]}\right)^2,\end{aligned}$$ where $f_{g_i}$ is the true value of the RSS at the $i$th node and $\mu_{g_i}^{[t]}$ is its estimate.

Error in the location
---------------------

In this subsection we show the performance of our GP algorithm for static fields considering the three different cases explained in , i.e.,

1. Case 1: Using the true sensor locations and the model in .
2. Case 2: Using the sensor locations with errors and accounting for the errors as in .
3. Case 3: Using the sensor locations with errors as if they were accurate and following .

In we show the MSEs for these three cases. Clearly, when there is no error in the location (Case 1), we obtain the lowest error. On the other hand, when considering locations with errors (Cases 2 and 3), the performance is only slightly degraded in comparison to Case 1. Specifically, our proposal of introducing one additional source of error to model the error of the user location (Case 2) improves on the traditional approach of ignoring this source of error (Case 3).

[Figure: MSE for Cases 1–3.]

Static GP
---------

In this subsection we analyze the performance of the sGP approach for static fields. One example of such a setting is depicted in . We first estimated the path loss exponent and the transmitter power following and . To show the robustness of our method for estimating the mean of the path loss exponent and transmitted power, we introduced a high error in the location of the closest sensor to the transmitter. The obtained values of the means are shown in , and they are close to the true values.

[Figure: estimated means of the path loss exponent and transmitted power.]

We also include , where we compare the estimated and true RSS along different distances from the transmitter. The figure also displays the noisy RSS measurements of the sensors (plotted with circles). The results show that the estimated values (red dashed line) are approximately the same as the true ones (black solid line), and in agreement with the results from . [Note that the closest measurement to the transmitter (whose RSS is around $-$50 dBm) did not negatively affect the estimates of $\alpha$ and $P$ because of the precaution we took with . ]{}

[Figure: estimated vs. true RSS as a function of distance from the transmitter.]

The results of the RSS field estimates are shown in . We represent the RSS measurements of the sensors with red circles and the estimated mean of the posterior distribution of the GP over the coverage area with a blue solid surface. The graph demonstrates that the GP smoothly approximates the RSS field.

[Figure: estimated posterior mean of the RSS field over the coverage area.]

Finally, in we present the true and estimated values of the hyperparameters of the kernel.

[Figure: true and estimated values of the kernel hyperparameters.]
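As a complement to the static-field results above, the following sketch shows how the synthetic measurements and the MSE metric can be reproduced. The spatially correlated shadowing and the location-error term are omitted for brevity, so this is a simplified generator under stated assumptions, not the exact simulator used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rss(Xs, x0, alpha=3.5, P=-10.0, sigma_w=7**0.5):
    """Noisy RSS readings from the log-distance model (i.i.d. noise only)."""
    d = np.linalg.norm(Xs - x0, axis=1)
    return P - 10.0 * alpha * np.log10(d) + rng.normal(0.0, sigma_w, size=len(d))

def mse(mu_g, f_g):
    """Mean squared error over the M grid nodes at one time instant."""
    return float(np.mean((mu_g - f_g) ** 2))

# Example: N = 218 sensors in a 500 m x 500 m area, transmitter at the center.
Xs = rng.uniform(0.0, 500.0, size=(218, 2))
x0 = np.array([250.0, 250.0])
z = simulate_rss(Xs, x0)
```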
Recursive GP
------------

Here we provide some results with the recursive GP. In , we display results from two different scenarios: 1) an intermittent setting where 20% of the sensors are unavailable at each time instant, and 2) a setting where the sensors are moving from their previous positions. The figure shows plots of MSEs for different shadowing variances and at two time instants ($t=1$ and $t=10$). The results were averaged over 100 different experiments and compared to the technique of [@Molinari15; @Montero15], applied at $t=1$. As expected, the recursive approach gets better estimates with time. Also as expected, the error of the rGP method with intermittent data is slightly higher than with data from moving sensors. Note that when we use intermittent data at $t=1$, the performance is not as good as when we use data from moving sensors because there are fewer available measurements.

[Figure: MSE of the rGP for intermittent and moving sensors at $t=1$ and $t=10$.]

Finally, in , we illustrate the time variability of the RSS at a specific location when [*dynamic transmitter powers*]{} are considered. We changed the value of $P^{[t]}$ two times during the observation interval so that the RSS varied as shown by the green dashed line. The blue solid line represents the mean of the estimated posterior obtained from the GP following , and the gray shadowed error bars are bands around the mean whose widths are equal to three times the square root of the HCRB in . The applied value of $\lambda$ was 0.5.

[Figure: tracking of the RSS at a fixed location under dynamic transmitter powers.]

[Figure: a graphical representation of the dataset from [@Chakraborty15]. The location of the transmitter is marked with a red triangle and the locations of the sensors with circles. The colors of the circles reflect the RSS.]

Note that in a real-world scenario, the model in (\[measurements\]) does not perfectly fit the measurements. Specifically, the RSS can also be affected by obstacles that obstruct the line of sight, the path loss exponent might depend on the position where a measurement is taken, or the antenna gain diagram might not be perfectly omnidirectional, among others. All these factors cause the value of the MSE to increase in comparison to the MSEs obtained from simulated measurements, where we essentially have a perfect model match.

Conclusions
===========

In this paper, we use crowdsourcing to solve the problem of estimating a possibly time-varying RSS field. We rely on measurements provided by different and inexpensive sensors that are randomly placed within an area of interest. We developed a Bayesian framework based on Gaussian processes and a model with several unknown parameters. The unknown path loss exponent and transmitted power were modeled as Gaussian variables. The hyper-parameters of their Gaussian distributions were estimated from the data. We also assumed that the user locations were not perfectly known, which introduced an additional source of error in the model. Further, the location of the transmitter was unknown too, and it was estimated from the data, which had its own error. We also addressed the problem where the field may vary with time. In our solution we used a forgetting factor which determines the relevance of previous and current data. In all our solutions, the needed memory of the algorithm is fixed and a function of the number of grid nodes, and it is independent of the number of sensors. Finally, we derived the HCRB of the estimated parameters, and we showed the performance of our approach with experimental results on synthetic datasets.
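As an illustrative wrap-up, kept out of the main derivations, the earlier sketches can be chained into the intermittent-sensor experiment described in the results: at every instant a random 20% of the sensors is dropped, correction terms are computed from the surviving measurements, and the recursive blend is applied. All function names refer to the previous sketches and remain assumptions; in particular, the paper initializes $t=0$ with the full static posterior, which is simplified here.

```python
import numpy as np

def run_intermittent_experiment(Xg, Xs, x0, f_true, T=10, lam=0.5,
                                drop_frac=0.2, noise_var=10.0, rng=None):
    """Recursive-GP tracking when a random 20% of sensors is missing at each instant."""
    rng = rng or np.random.default_rng(1)
    m_g, K_g = path_loss_mean(Xg, x0), rbf_kernel(Xg, Xg)
    mu, Sigma = m_g.copy(), K_g.copy()                 # prior before any data
    errors = []
    for t in range(T):
        keep = rng.random(len(Xs)) > drop_frac         # roughly 80% of sensors report
        Xk = Xs[keep]
        z = simulate_rss(Xk, x0)                       # earlier sketch
        # Correction terms from the current batch (same algebra as grid_posterior).
        K_ss = rbf_kernel(Xk, Xk) + noise_var * np.eye(len(Xk))
        K_gs = rbf_kernel(Xg, Xk)
        sol = np.linalg.solve(K_ss, np.column_stack([z - path_loss_mean(Xk, x0), K_gs.T]))
        mu_post, Sigma_post = K_gs @ sol[:, 0], K_gs @ sol[:, 1:]
        # Blend with the carried-over posterior (applying the blend from t=0 onward
        # is a simplification of the paper's initialization).
        mu, Sigma = rgp_step(mu, Sigma, m_g, K_g, m_g, K_g, mu_post, Sigma_post, lam)
        errors.append(mse(mu, f_true))
    return errors
```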
We start by taking the logarithm of $$\begin{aligned} \log_{10}\left(\widehat{{d}}_i^{[t]}\right) &= \log_{10}\left(d_i^{[t]}+\epsilon_d\right) = \log_{10}\left(d_i^{[t]}\left(1+\frac{\epsilon_d}{d_i^{[t]}}\right)\right)\nonumber\\ &=\log_{10}\left(d_i^{[t]}\right)+\log_{10}\left(1+\frac{\epsilon_d}{d_i^{[t]}}\right).\end{aligned}$$ Let $x={\epsilon_d}/{d_i^{[t]}}$, which is a variable that takes small values. Then we expand $f(x)=\log_{10}(1+x)$ by Taylor expansion around $x=0$ and get $$\begin{aligned} f(x)&=\log_{10}(1+x)|_{x=0}+\frac{\log_{10}(e)}{1+x}|_{x=0}x+R \nonumber\\ &=\log_{10}(e)x+R=\frac{\log_{10}(e)\epsilon_d}{d_i^{[t]}}+R,\end{aligned}$$ where $R$ is the residual . Thus, $$\begin{aligned} \log_{10}\left(\widehat{{d}}_i^{[t]}\right) \approx \log_{10}\left(d_i^{[t]}\right)+\frac{\log_{10}(e)\epsilon_d}{d_i^{[t]}},\end{aligned}$$ or $$\begin{aligned} \label{eq:app} \log_{10}\left({{d}}_i^{[t]}\right) \approx \log_{10}\left(\widehat{d}_i^{[t]}\right)-\frac{\log_{10}(e)\epsilon_d}{d_i^{[t]}}.\end{aligned}$$ Replacing in , we obtain that $u_i^{[t]}$ in is $$\begin{aligned} \LABEQ{u} u_i^{[t]} = -\frac{10\alpha^{[t]}\log_{10}(e)}{\widehat{d}_i^{[t]}-\epsilon_d}\epsilon_d.\end{aligned}$$ If we assume that $\widehat{d}_i^{[t]}>>\epsilon_d$ and $\epsilon_d\sim \gauss{\epsilon_d}{0}{\sigma_d^2}$, we can write $$\begin{aligned} \label{eq:u_t} u_i^{[t]}\sim\gauss{u_i^{[t]}}{0}{\frac{\left(10\alpha^{[t]} \log_{10}(e)\right)^2}{\widehat{d}_i^{2[t]}}\sigma_d^2},\end{aligned}$$ Thus, for the standard deviation of $u_i^{[t]}$, we have $\sigma_u^{[t]}=\rho_u^{[t]}/\widehat{d}_i^{[t]}$, where $$\begin{aligned} \rho_u^{[t]}={10\alpha^{[t]}\sigma_d\log_{10}(e)}, \end{aligned}$$ which is what we have in . [^1]: I. Santos and J. J. Murillo-Fuentes are with the Dept. Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimiento s/n, 41092 Sevilla, Spain. E-mail: [{irenesantos,murillo}@us.es]{} [^2]: P. M. Djurić is with the Dept. Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY 11794, USA. E-mail: [[email protected]]{}. [^3]: This manuscript has been submitted to IEEE Transactions on Signal Processing on May 11, 2018; revised on August 9, 2018 and November 15, 2018; accepted on December 10, 2018. The work of I. Santos and J.J. Murillo-Fuentes was partially funded by Spanish Government (Ministerio de Economía y Competitividad TEC2016-78434-C3-2-R) and by the European Union (FEDER). P. M. Djurić was supported by the National Science Foundation under Award CNS 1642965. [^4]: Note that we have removed the superscript $t$ from $N$ for easier reading. | High | [
0.6650602409638551,
34.5,
17.375
]
|
While mucins are found on normal cell surfaces, and are proposed to have multiple roles such as providing protection from environmental exposures and carcinogens, and involvement in cell adhesion and migration, altered forms of the MUC1 glycoprotein have been found heavily expressed in human breast cancer tissue, and other tumors. In our laboratory mammary tumor cells expressing the transmembrane isoform of MUC1 develop tumors in vivo and lead to death of the host. In contrast, tumor cells expressing a MUC1 secreted isoform fail to develop tumors by an immunologically regulated mechanism. Furthermore, MUC1 secreting mammary tumor cells confer protection in mice against the parental mammary tumor line, which does not secrete MUC1, and other highly tumorigenic cell lines unrelated to the mammary tumor line. The following proposal aims to dissect some of the molecular mechanisms involved in this phenomenon. The first aim will examine characteristics of the MUC1 secreting tumor cells which may lead to rejection in vivo or failure to grow. Immunological surface molecules, and production of tumor derived factors, such as chemoattractant molecules and extracellular matrix degrading proteins, will be screened by various assays such as flow cytometry, ELISA, and zymography assays. These techniques will identify if there is a variable expression of these molecules by the tumor lines, which may aid in immunoregulation of tumor growth or elimination. The second aim will dissect the effect of these tumor cells differing in MUC1 expression on various immunological parameters including among others migration and activation of different types of lymphoreticular cells, dendritic cell maturation and function, monocyte migration, expansion of immature myeloid suppressor cells and their ability to inhibit anti-tumor immune responses, and induction of immunological tolerance to tumors. This aim will be achieved by different assays including but not limited to flow cytometry, gel matrix cellular recruitment in vivo, T cell activation and proliferation, and use of cellular migration chambers. Furthermore, the purified secreted isoform of MUC1 or an Immuno Enhancing Peptide (IEP) only found in the secreted isoform will be used in parallel experiments to test the ability of changing the parameters analyzed in the two aims. The objective of this proposal is to investigate the immunological and cellular mechanisms involved in the growth of tumors, and examine the immunoregulatory characteristics of the MUC1 glycoprotein, which may have immunoenhancing and anti-tumor properties. Understanding how MUC1 is involved in tumor immunoregulation may potentially lead to its use as an immunotherapeutic agent against various cancers. | High | [
0.686605981794538,
33,
15.0625
]
|
In my post “God TV: Send Us Four Million Dollars!” I pointed out how Wendy and Rory Alec are begging for $4 million dollars by May 31, 2010, or they will have to pack up and leave Jerusalem. (That would be a good thing, actually, so as I said in the other post, hide your wallets.) In fact, God TV wants your money so badly, this is what you see when you click on their website right now. In the video below, Wendy Alec is speaking on God TV’s fundraising program on May 17, 2010, and she looks like she could be a drinking buddy of John Crowder. With this kind of behavior, she would fit in quite comfortably with John Crowder and his “New Mystics.” It angers me that people think this woman and those like her actually represents Biblical Christianity. What a joke, folks. Jesus didn’t die so you could get a snoot full and stagger around like a blithering idiot. I guess they never read the second chapter of Titus. Let me remind them: Titus 2:1-15: (NASB) But speak thou the things which become sound doctrine: That the aged men be sober, grave, temperate, sound in faith, in charity, in patience. The aged women likewise, that they be in behavior as becometh holiness, not false accusers, not given to much wine, teachers of good things; That they may teach the young women to be sober, to love their husbands, to love their children, To be discreet, chaste, keepers at home, good, obedient to their own husbands, that the word of God be not blasphemed. Young men likewise exhort to be sober minded. In all things showing thyself a pattern of good works: in doctrine showing uncorruptness, gravity, sincerity, sound speech, that cannot be condemned; that he that is of the contrary part may be ashamed, having no evil thing to say of you. Exhort servants to be obedient unto their own masters, and to please them well in all things ; not answering again; Not purloining, but showing all good fidelity; that they may adorn the doctrine of God our Savior in all things. For the grace of God that bringeth salvation hath appeared to all men, teaching us that, denying ungodliness and worldly lusts, we should live soberly, righteously, and godly, in this present world; looking for that blessed hope, and the glorious appearing of the great God and our Savior Jesus Christ; who gave himself for us, that he might redeem us from all iniquity, and purify unto himself a peculiar people, zealous of good works. These things speak, and exhort, and rebuke with all authority. Let no man despise thee. People who stumble around in a drunken manner portray themselves as spiritual. But the Bible says these things are works of the flesh: Galatians 5:19-21: 19 Now the works of the flesh are manifest, which are these; Adultery, fornication, uncleanness, lasciviousness, 20 Idolatry, witchcraft, hatred, variance, emulations, wrath, strife, seditions, heresies, 21 Envyings, murders, drunkenness, revellings, and such like: of the which I tell you before, as I have also told you in time past, that they which do such things shall not inherit the kingdom of God. If those who do such things will not inherit the Kingdom of God, why would the Holy Spirit of God manifest in people in such a way to make them act in a way that He detests? The answer is simple, He wouldn’t. To say He would is blasphemy! The Bible says these are the fruits of the Spirit: Galatians 5:22-24: 22 But the fruit of the Spirit is love, joy, peace, longsuffering, gentleness, goodness, faith, 23 Meekness, temperance: against such there is no law. 
24 And they that are Christ’s have crucified the flesh with the affections and lusts. If you see someone claim to be under the influence of the Holy Spirit and they don’t possess the traits listed in Galatians 5:22-24, then it’s not the Holy Spirit. Watch the video and tell me what spirit Wendy Alec is influenced by. Update: The gentleman who posted the video was forced to take it down because of copyright claims from God TV. It makes me wonder why Wendy Alec wants it taken down if she was truly in the “spirit.” What’s she so afraid of? Could it be she finds it embarrassing? If I can find the video, I’ll relink it here. Update #2: Philip C. was kind enough to upload this video to his blog page. Not only did he upload the video of Alec “drunk in the glory,” but he uploaded another video of her speaking in tongues, which sounds more like a tribal chant. Word of Faith/Prosperity teacher Steve Munsey is in that video as well. Please click here to watch both videos. 21 Responses to Wendy Alec Drunk in the “Glory” It is almost impossible for me to believe that people actually fall for this stuff. If I didn’t have to just seperate from my own church that used to be so sweet and is now following after “prophets” like these… Brother, it angers me too that such behavior is displayed and believed to be genuine, when it is not. Their behavior of a sign of God’s judgment that is soon to come. I would not be surprised if the entire project/business collapsed in a short period of time, hopefully soon. False teachers gaining more false converts. There is no gospel preaching, therefore they are false. Anyone who is involved with this, get out now, fall on your knees and repent. They are under God’s judgment, support them and you will share in this judgment. The way to the Father is Through His Son our precious Lord and Savior Jesus Christ. They are on the broad road leading to destruction anyone following them will be lead the same way I would be surprised if they fell apart because this is a ministry straight from Satan. God has control of everything but this is part of the judgement that is already taking place. Any God fearing Bible believing person would not be taken by this false ministry, but they are being taken by the millions. God is allowing them to be handed over to these false ministries because they refuse to believe the truth. We are at the end and God is making a final sweep for his believers and then all that will be left will be these false ministries to explain what happened to the people in the rapture. I clicked on the link to God TV. It reminded me of when I call the phone company. Their automated system won’t let you do anything except pay your bill. Every option they offer somehow circles back to the “pay your bill” option. I watched the link here before it was taken down, and believe me, you ain’t missing much. ( Crystal, is the link available at God.tv ? ) Just more proof that these people misrepresent the Gospel, and are making utter & complete fools of themselves. Now they might argue they are being made fools for Christ, so let me use some stronger language – these people are portraying themselves as complete idiots. You wonder sometimes why God had to use a donkey to get His message out ( not really ). I would agree that people like Wendy Alec are either under the influence of a demonic spirit, are actually drunk, or in some type of a self induced stupor, if that is indeed possible. 
One thing is certain, this type of behaviour is on the rise, and to those who are truly searching, makes our ‘ religion ‘ look reprehensible. Let us continue to pray that God will bring down these false teachers and their false ministries. What a stench they are in our Lord God’s nostrils. May God open the blind eyes of those who continue to support these false teachers and lead them into His truth because they are living in deception and need to be set free by our Lord Jesus. How God Tv can call their viewers a “family” is beyond me – who extorts money from their families – what a joke!! People please wake up they do not care how much credit card debt you have run up by giving to them – they only care about enriching themselves further on YOUR MONEY. God Tv has helped thousands into debt that God has never wanted you to have. I know of one woman who has given over £5,000 to these false teachers on the Gospel channels. When will they see the light!!!!!! Steve Munsey is a completely wicked and evil man – all he does is spew forth rubbish from his mouth. Reminds me of a completely mad person. There is no way to stop false teachers today such as the Alecs (Stephens!!). Peter, John and Paul were able to wield supernatural discipline, such as on Ananias/Sapphira; Elymas; the fornicator of Corinth; Hymanaeus and Alexander and Diotrophes. Even those represented only a few of the problem people that plagued the original apostolic churches. Most of them were handled by doctrinal disputation and apostolic testimony such as in Galatians, 2 Corinthians 11, 2 Peter, 1 John, etc. The sad fact is the wolves outnumber the sheep of God’s pasture by great multiples. Their name is Legion for they are many. The saved are few. But, they are commanded by the Lord to be wise as serpents and harmless as doves as they go out among them. Sadly, God’s sheep disobey Him by retaining sheeplike trust of all ministers, compounded by a spirit of fear of offending God by being critical of them who inhabit mega-church and media pulpits. The pew-sheep don’t have the training in righteousness that they should have. God help them. Websites like these are extremely helpful in doing that kind of work though. We need more of them. Excellently discussed by all here! I’ve tried to send you another copy of the video: – hope you will be able to download it? I really can’t watch any more the Stephens’s antics. Sadly at 6pm local time here they are boasting that they have “banked” over a million and a half dollars already. They will realise their dream of reaping a “billion souls”, but they will be leading them all to Hell. When are these people going to be stopped? They are playing on vulnerable people who have very little money themselves but feel they need to send it to these charlatans. Why do they need $4??? Can’t they sell a few houses? Who needs 15???? @ Jap: You might as well ask who’s going to stop the masses whom God has given the Alecs of the world to because they do not receive the love of the truth so as to be saved from their sinful, lustful ways. The market for such deceivers is created by the huge demand of the many with itching ears. 
So, the Alecs are not going to be stopped because the many are departing from sound doctrine in droves to put their hopes in fables of unverifiable miracles; tall tales about angels and visits to heaven and even hell; prophecies more in keeping with new age philosophy and Gnosticism; promises of a better earthly life, all which the Alecs (Stephens, that is) and others like them provide amply for them. The prosperous countries especially, such as America, Australia and Europe, are filled with Christians who seek an imaginary Jesus who will grant them their lust for even more money, better health and ecstatic mystical experiences. Voila! Out pops TBN, God TV, Daystar, Inspiration Network and many others, to accommodate them with a parade of the worst wolves ever whose cleverness and greed knows no bounds. The money doesn’t stop flowing into their covetous little hands so they can fly in jets, live in gated mansions, drive BMWs because the people love to have it so. They want to believe that all of their heroes’ prosperity is coming from God, when it is really going out to them from their own covetous little hearts. Consequently, there is no stopping them. God will take care of them when He’s through with them unless they repent, which is highly unlikely. The truly unsuspecting among their supporters who are saved are in danger of turning back into wolves themselves the longer they’re under their doctrine and give to them. They need prayer that the Good Shepherd will leave the 99 and go after them and rescue them by the saving power of His eternal truth. And like the woman who searched her house until she found the lost coin, caring ministers of God must patiently instruct and correct them so that God may grant them repentance unto acknowledgment of the truth. Only then can they come out from the snare of the devil. But, ultimately, as with the prodigal son, they must come to themselves and depart from all those flesh-satisfying teachings and return to the Father for the sake of their own souls. For, if they don’t care for their own spiritual survival, who of us can make them care? Stan Comments are closed. Hebrews 1:1-2 God, after He spoke long ago to the fathers in the prophets in many portions and in many ways, in these last days has spoken to us in His Son, whom He appointed heir of all things, through whom also He made the world. | Low | [
0.5,
30.5,
30.5
]
|
A temporary three-day cease-fire between Israel and Palestinian fighters in Gaza has brought a few days of peace to southern Israel but many residents have yet to return home. They say that in addition to the threat of rockets from above, the latest conflict brought a new fear: from under the ground. An uneasy quiet has settled over Nahal Oz. Eighty of the community’s residents stayed during the recent conflict. They keep in close touch, but mostly stay indoors. The community’s 300 cows survived the shells that fell here for one month. They still provide the dairy products the kibbutz is known for. But the fields outside the perimeter go largely unattended. And most of the 400 residents who evacuated at the start of the conflict have yet to return. This 60-year-old farming community lies 800 meters from the Gaza Strip. Even in quieter times, it is hit regularly by rockets and mortars from across the border. Dov Hartuv, the kibbutz archivist, has lived here for more than 50 years. He says the conflict was different this time. “The tunnels. It was always something that was known but never taken seriously. And now, in this war, suddenly 33 tunnels were destroyed," he said. "And one tunnel was destroyed opposite Nahal Oz.” Some tunnels, in fact, were used by Gaza militants during the recent hostilities to cross into Israel. Israeli officials say there may be other tunnels still out there. But that, says Hartuv, is not the only difference. “There’s also a generation change. The mood of the country has changed. It’s acceptable to be in doubt,” he said. This is because many young families are debating whether to stay. One father brought his children back during the cease-fire, but just for a visit, he says. They were homesick. Hartuv says the uncertainty places many strains on a family. “I think it’s a whole gambit of emotions. It’s personal. It’s in the marriage. It’s with the children," he said. "It’s in relation to living on a kibbutz or moving to the city. All these questions arise now, and sometimes all at one time.” Haruyv’s oldest son moved away two years ago, the last of his four children to leave. Hartuv, however, says he will stay. | Mid | [
0.6099290780141841,
32.25,
20.625
]
|
Sunday, April 20, 2014 Chansung of the group 2PM has not hidden his frustration over the handling of the Jindo ferry Sewol disaster. He also pointed out the inefficiency of the rescue operation on his Twitter account. He said, "Accidents can happen, but how we deal with an accident reflects our society. Through this disaster we can now see how our society operates, and an accident like this can happen to anyone. I feel like this society is sick." He added, "I feel so sad. I have not felt right since the accident. The disaster itself is hard enough, but the vicious behavior after the accident makes me sadder. People who are hurting the families of the missing should feel the same pain as those families." He had also expressed his sympathy on April 16, saying, "Please let there be more survivors in the ferry disaster." The Jindo ferry accident happened around 9 a.m. on April 16 near Jindo-gun, South Jeolla Province. Of the 476 passengers, 174 survived, 46 are confirmed dead, and 256 remain unaccounted for as of this morning. 
| Mid | [
0.573770491803278,
35,
26
]
|
Disclaimer: This company is a Free member of AsianProducts. If you want to buy anything from this supplier, we strongly suggest you use our escrow service, EZ Order. This will ensure a 100% safe transaction. Products Detail Product Feature: * Flexible Printed Circuits (FPC) are used as interconnects that transmit signals in electronic products; the circuits are developed on flexible copper-clad laminate through processes such as exposure and etching. FPC is mainly used to surface-mount electronic components and to connect PCB to PCB or PCB to LCD, so that the electronic product can carry out its function. * Because FPC is thin and flexible, demand for it will keep growing as the electronics market continues to ask for thinner and smaller products. In turn, the development of FPC products will help the electronics market flourish as well. * Our R&D team is continuously and actively researching higher-level FPC to meet the demands of higher-level clients. We are also pursuing more economical production costs and greater efficiency, helping the whole electronics industry move toward diverse, high-level development
| Low | [
0.534117647058823,
28.375,
24.75
]
|
Tag Archives: customer experience Post navigation The 2015 Consumer Electronics Show is definitely missing a bet. The CES press is talking breathlessly about drones, futuristic self-driving cars, and glamorous wristbands telling me what I don’t really need to know every minute of every day. I must admit that the report about a refrigerator with eight USB ports from General Electric’s First Build had me going—until I saw the $3000 ticket that First Build wants for this toy. Sorry, GE. No sale. Connectivity as the big watch-word? (Yawn.) That’s not only not even yesterday’s catch word, it’s older than last year’s catch word. So the Girl Scouts are in your show. A thrill for the girls, but not for homeowners like me. Obviously, the participants in the show have not spent five minutes a day at home. What this country really needs is genuine connectivity from the grocery store to the parking lot, to my shelves at home, and into my fridge. And make it affordable! Sorry, but the big chain-store delivery truck drivers don’t bring the goods to my kitchen, figure out where everything should be placed and place it there. Where is development of an affordable household inventory system for such items as toilet tissue, shoe strings, tooth paste, light bulbs and laundry detergents? Where is the app that reads and translates the weird codes that still show up on products? And I don’t just mean the bar codes either. I don’t want or need a car that drives itself. I’m not quite certain I want to see my friendly postal carrier replaced by a drone. I don’t want or need a 3-D printer that produces the Teddy Bear of the Month. I would rather have self-cleaning drains, a driveway and walkway that melt the ice that coats them in a storm, and self-storing groceries, thank you very much. I’ll be watching from here next year in case the media starts talking about robots for teaching two-year-olds how to build drones. Recently I have been under a lot of stress. My good friend and quintessential practical networker Ramona, seeing that I was definitely in need of a therapeutic experience, invited me to join her and another friend to visit West Virginia’s Berkeley Springs spa. Never having visited any spa—and pretty well burned out—I agreed to the adventure. Soaking for 30 minutes in a huge ceramic tub of minueral water heated to 102 degrees was a delightful experience I will never forget. I’m hooked on the experience. I recommend the experience. I’ll be back. DUB-DUB-DUB (Now Optional) A while back, my Toastmaster friend from California George—a webmaster by profession—pointed out that for anyone to visit my website, the infamous “WWW” (aka “dub-dub-dub”) had to precede PEQUODSYSTEMS. All that has now been changed, and you can now get to our website by simply entering PEQUODSYSTEMS.COM in your browser. Today’s blog came to me in a flash of insight. My other half, struggling with his tablet, presented me with today’s topic: the Swipe Generation. (and you thought I am some other generation? read on…) He was trying and trying and trying to set the correct date and time on his tablet. The more he tried, the more frustrated he got. Finally, I realized what his problem was: He’s still in the PointandShoot/Click-Here generation. I, with my smart phone, had discovered Swipe a long time ago. Many years ago, Microsoft and friends taught us to Point and Shoot. Or at least to ClickHere. Many of us still belong to that Point-and-Shoot/Click-Here Generation. The Point-and-Shoot Generation’s challenge? 
To learn that a down arrow means to swipe down rather thanclick on something and expect a result we want. The Swipe Generation’s challenges? There are two. A little compassion for our friends who have not yet mastered “The Swipe” will go a long way to maintaining friendships. Also, tablets and smart phones may still have some features that are quite similar to Point-and- Shoot. Upgrading your tablet or your smart phone? What will you swipe next? Recently Dan Rex, the CEO of Toastmasters International, announced that the TI Board of Directors had decided to institute new District officer titles that, among other reasons, would “Create a parallel between district leadership and leadership in the corporate and volunteer sectors.” Basically, the idea is to help volunteers easily explain to current and potential employers what knowledge, skills and abilities they were likely to have acquired by participating in these roles. All very nice and mostly window-dressing, insofar as many members have thought. Recently, I sat down with George Marshall, whose online Toastmaster Tools are used by members around the globe. I asked him that very question, and here is what he said. During my year as Toastmasters Area Governor, I became very interested in the big differences in club quality, and as I gathered data about each of my clubs to try to help them, I realized that the information I wanted was sometimes hard to gather in useful form. I learned a lot that year about downloading the reports and doing my own analysis in spreadsheets. After a while, I decided to automate the more time-consuming tasks. I started working on what eventually became the Tools for Toastmasters website, summarizing some of the reports in real-time. After a year or so, I realized that the data would be more useful if it were in a database, which I knew nothing about. But I sat out to learn how, and with the help of mentors, within a year or so, the core of today’s site was in place, with built-in summaries and analysis of several types of Toastmaster data. I have learned a lot about databases with this project, some of which I have been able to apply to our business. [Freemont Web Solutions]. Often in social media, the highly-talented and dedicated people whose services make life a whole lot better are forgotten. That’s because they are not the ones who show up in social media. They are the ones who often show up at our homes at our convenience to make the fixes and repairs we cannot do ourselves. This post is about one very special master plumber who has made our lives better and who is not here in social media. I live in a house that was built before certain plumbing standards were in effect. An outside spigot broke. This was a real problem for us, since we had invested in a number of ornamental plants around our yard, and just one dry season would be the end of them. According to several other so-called experts repair would have been expensive beyond belief. Some of those so-called experts said there was no way to repair the spigot. Then came the day we needed some other plumbing repairs. We had made a point of asking for the individual contact information of one of the plumbers who had completed a number of other plumbing jobs in a very satisfactory manner. He was the one who explained to us what the specific plumbing issues were, what caused them and how they might be prevented in the future. We looked up James. He arrived on time and got straight to work. We showed him the “impossible” spigot repair. 
James, a creative sort, looked at the job and proposed what no other plumber had proposed: plugging the original line to the spigot and installing a new line with a new spigot. All for a price we were willing and able to pay. Shortly thereafter, the original line was plugged. He had drilled a new line through our cinderblock basement wall and installed a new line and spigot. James. What a pro who really thinks differently than “the other guys.” You are the best! You never know where the next great idea might come from. I sometimes get ideas for this blog from comments by friends in various social media. But who would have ever thought that the idea for today’s blog about a new idea for technology would come from my alltime favorite wine connoisseur and longtime friend Heidi McLain? Heidi is the CEO and founder of the To Your Taste!®Wine Party Kit, an educational kit of tools to help those who may not feel confident about buying wine, ordering it in a restaurant, or just talking about it. So I was surprised to see a video post from Heidi about Phonebloks.com, a company pointing out an obvious aspect of cell phones. Not built to last, thousands of cell phones are being thrown away daily simply because one component of the phone does not work. Or that it is out of date. The idea behind Phonebloks is that phones should be modular, and enable users to easily upgrade or modify a phone built on an open platform. Basically, the idea is for companies working together to build the best phone in the world. Personally, I had never once thought about what happened to the components of my previous cell phones. That’s a little strange for me, because I have thought of myself as a great believer in a greener earth and as someone who likes to put things together to make them work. Recognizing that getting phone manufacturers to work together will not be an easy task, Phonebloks takes full advantage of social media. The plan is that on October 29 at 10:00 AM Eastern Daylight Time, all who like that idea send out eMail blasts through Thunderclap. Messages will go to our FaceBook friends and Twitter followers saying that this modular type phone is a phone worth keeping. (and developing, since the phone has not yet been developed!) Presumably these messages will reach manufacturers such as Apple and Samsung. As of the date of this blog, Thunderclap lists some 856,800 supporters of a goal of 900,000 supporters and a social reach of 331,641,218. For a team of perhaps three people, this is a ginormous goal. On his help-out FAQ page Developer Dave Hakkens says >How can you help out and make Phonebloks become something more than just a concept? Do not send money! At least not yet. Dave writes on his facebook page Recently, I looked at a lengthy LinkedIn list of “Thought Leaders.” Presumably, these are people whom unspecified others recognize as an authority in a specialized field and whose expertise is sought and often rewarded. The extensive LinkedIn list included such notables as Richard Branson (2,272,487 followers), Tony Robbins (588,125 followers), Guy Kawasaki (262,572 followers) and so many others that the bottom of the LinkedIn page of 90 notables said “show more” at the bottom. I was definitely underwhelmed. For the past four days, I have been trying to figure out what have these thought leaders actually done for me or my family and friends lately? Nothing came to my mind. Then DOVE CANADA came to my attention. 
According to the August 5 Canadian issue of Huffpost Style, Dove Canada says it has created a Photoshop Action that reverts edited images back to their original, un-airbrushed state. The local division of the skincare company went black ops recently for its latest “Campaign for Real Beauty” stunt, going so far as to create and post the downloadable Action file to social media sites like Reddit (the post has since been removed by its user). While the file promises to beautify images with a single click, in reality it reverts the edits that had been made to the photo, while adding a banner that says, “Don’t manipulate our perceptions of Real Beauty.” As a woman in a profession which only relatively recently has included more women, I deeply appreciate the Dove Canada Real Beauty (inner beauty) campaign. Frankly, for a long time, women in my profession who appeared to be physically attractive were often not taken seriously by men in technical training classes and in professional meetings. We often got the message that our questions were less than worth paying attention to, and answers were often short, and not necessarily sufficient. The man next to us was likely to be called on very quickly. The Dove campaign for girls and women to appreciate ourselves and nourish our self-esteem has resonated with me for many years. I have used Dove products since I was in college. Detractors aside, I find it refreshing to see a large, well-known company take bold and creative action which backs up a campaign of words. It’s one thing to be a “thought leader” with a list of tens or hundreds of thousands of LinkedIn followers. It’s another thing altogether to lead not only with thought, but also with action to match. Now that’s leadership! Dear Malala,We at Pequod Systems hear you loud and clear. And we were deeply moved by your recent speech at the U.N. Youth Assembly in New York City. We look forward to the day there is a documentary about your efforts to encourage the education of all girls, women and children. While we are blessed to be in a country where women are not shot for trying to get an education, we have also been around long enough to have watched a dramatic change in the numbers of girls and women being encouraged to enter technical fields astechniciansrather than as secretaries. Malala, as a young girl, I was encouraged only to be a secretary to someone who would be far more intelligent than I was assumed to be. Enter my husband and first computing mentor Grant. He knew I have a mind of my own and gently encouraged me to learn to use his first computer—an Apple II+. Later, he bought a server on which I managed a database created by my second mentor, Ed Fox. Ed taught me one of the best lessons I would ever learn about data management: Where does the data come from, who will benefit by its use, and what is your plan for managing it when your first plan does not exactly work the way you thought it would? David Rorabaugh was my third computing mentor. David had no truck with those who minimized women for any reason, and was a visionary who understood and talked about the future of Windows. He was a Certified NetWare Engineer when I was on a government contract with him. Eventually we both were taking—and passing—the same professional examinations and comparing notes with each other. 
Today, while the number of women computer technicians is still significantly lower than the number of men in the field, I believe there has been a generational attitude shift among younger men about women and computing. A Google search shows a lot of articles about women in computing. Most encouraging (to me, at least) there is a Philadelphia-based Network of Women in Computer Technology which focuses on mentoring young girls who might want to enter the field. Malala, keep speaking out as you did on your birthday. In some parts of the world, women are making progress. In others, we still need an army of your friends who believe in supporting the education of all women, girls and children just as you do. Thank you for your inspirational example. Before I joined Pequod Systems, I worked as a contractor for on several different contracts. Inevitably, some situation would arise in which I had to use a customer’s fax machine. (Remember those?) More often than not, access to those fax machines was ruled by a Queen Bee who had programmed specific codes into the machine so that only faxes from her boss could be sent to specific recipients, whose fax numbers were also hard-wired into the fax machines. Only by talking to a more experienced fellow contractor (who might or might not be present when one needed to send a fax) could one discover the one remaining set of magical codes with which one could send a weekly status report to one’s offsite project manager. I began to hate faxing and loathe the Queen Bees. Today I am grateful for the pending total demise of fax as the Queen Bees managed it. Fast forward to a recent blog by my friend Ann Bevans. She says that “in programming (and I would argue, in any job), you can’t know everything you may one day have to know. You have to be able to figure it out on the fly.” She goes on to say that “My first year in business, some of my former colleagues had spun off from that company and asked me if I could build a system like that for them.I said “YES!”Then I went to Barnes and Noble and bought a book called Data-Driven Websites or something like that.When you have the ability to figure shit out, you can do that and get away with it.You’re not a fraud.I have many friends who are entrepreneurs of all stripes, and they ALL say the same thing.When somebody asks if you can do a thing, you say “Yes!” And then you go figure it out.These days, the interwebs being what they are, it’s a lot easier to figure stuff out on the fly. Use that.” We have a wide range of customers. Some of them are like the end user who, at the age of 50 and with no training whatsoever, was suddenly placed in front of a modern computer for the first time. Others are application programmers with a lot of courage and confidence—and thankfully, enough sense to know when not to go voyaging so far into computer systems that they get into trouble they can’t get out of. In each case, we look for ways to help folks figure things out. We act on the value that one size does not fit all. We educate you and learn from you. There are no Queen Bees at Pequod Systems. And no old-fashioned fax machines.Contact us for respectful and personalized technical support. And to tell us if we are really educating you—and learning from you at the same time. WANT TO COMMENT? FILL IN OUR FORM, SO WE KNOW WHO YOU ARE. COMMENT AWAY! 
Several years ago when I was new to the Toastmasters International organization, I complained to a fellow member about an Area Governor who seemed to be completely out of touch with the half dozen clubs he was supposed to be serving. My friend, a wise and experienced member, said “Well, you can always learn from a bad example what NOT to do.” Over the past three months, LinkedIn has provided a great example of what not to do. LinkedIn appears to have abandoned providing technical support for those who use it. Its announcement that “As of January 31, 2013, the LinkedIn Answers feature will be retired from LinkedIn. We’ll be focusing our efforts on the development of new and more engaging ways to share and discuss professional topics across LinkedIn. In the meantime, you can still pose questions and facilitate professional discussions through other popular LinkedIn channels including LinkedIn Polls, Groups, or status updates.” has not exactly won friends and favorably influenced people. The LinkedIn data export utility has not worked as illustrated for over two months. In what used to be a help forum, there are comments such as “this screw-your-customer policy needs to be changed.” and “I did try to call the corporate office, but you no longer get a human. Such arrogance. I did manage to send an email to a supposed support contact, but, not surprisingly, have received no reply. We’re all just left hanging.” The cockles of my heart were not warmed one bit when, after sending a message asking for help, I received an automated message with a trouble ticket number. I am reminded of the late Charles M. Schulz character Lucy, who just won’t listen to anyone other than herself. His March 2, 1985 strip says it all. “What?” For a social media platform in which users have posted blog after blog and post after post talking about listening to one’s customers, it’s pretty sad to see a major player in the social media world turning a deaf ear even to its paying customers. LinkedIn has provided a great example of what NOT to do. | Low | [
0.521929824561403,
29.75,
27.25
]
|
Tuesday, November 6, 2012 Transition Animations I'm starting to give some thought to transition animations in the game. Like everything else in the game, my decision to add a cape to my character is adding a lot of extra work for me. Luckily, the animation is so small on the screen and everything is moving so fast that I can probably get away with just a couple of inbetweens for each transition. I just don't know how many transitions I will need. As of now, I'd love to have a couple of frames for when you are switching from moving left to right while you're flying and running. I'm probably also going to need some for when I'm flying down and then go up. After I get those done, I'll have a better idea of the kind of things I'll need to flesh out the rest. 3 comments: Wow! Transition animations aren't a common thing in games. I think your "multianimated" hero character may be a strong point (I think all of your characters will be made perfectly, from what I saw in your earlier sketches) :-] Btw, I'd like to ask you: have you made any background graphics for your game? For now, I see that you created sketches of buildings, and I would like to know if you already have some more colored backgrounds. I ask because at the moment I'm getting tired of this type of problem in my game, so if you have experience creating backgrounds, or if you have some links to tutorials (personally I haven't found any), could you send them to me? I mean something like the background in Ashley Gullen's platform game tutorial on the Scirra forum - simply, whether there are some tricks to make a background like that, or whether we just need to draw it pixel by pixel. Personally I used to make drawings on paper first and then transfer them to the PC, but I managed to use this method only for characters (not for backgrounds). I haven't actually started on the background art, but plan to very soon. I keep going back and forth on whether or not to do it completely flat or have a little bit of perspective (tilt the camera). I think I'm going to take the plunge and just start pretty soon though. I would suggest drawing it by hand, scanning it into the computer, and then coloring it there. I found a blog a while back that has a lot of tutorials on drawing for games: http://2dgameartforprogrammers.blogspot.com/ I'm pretty sure that he is using a free vector program called Inkscape to draw with. You can find it here: http://inkscape.org/ You might want to check out some of these too, though I'm not sure how many deal specifically with backgrounds: http://www.hongkiat.com/blog/pixel-art-tutorials/ But I would start by thinking about what you want in your background and finding a few photos of each thing. Having photo references in front of you helps give you ideas about how to draw these things. Feel free to show me anything you come up with and I'll give you some pointers. Just keep trying and don't get frustrated. Sometimes it takes a little while to get warmed up. Also, I'm sorry I haven't been able to get a picture of that glitch over to you. I'll try and get it to you by the end of the week. Good luck and keep me posted!
| Low | [
0.518518518518518,
35,
32.5
]
|
Q: Using result from Scalas "fromURL" throws Exception I'm trying to get some webpages using Scala's scala.io.Source object. Getting the iterator works fine but i cant do anything with it without getting an exception: scala> scala.io.Source.fromURL("http://google.com") res0: scala.io.BufferedSource = non-empty iterator scala> scala.io.Source.fromURL("http://google.com").length java.nio.charset.MalformedInputException: Input length = 1 at java.nio.charset.CoderResult.throwException(CoderResult.java:277) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:338) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) at java.io.InputStreamReader.read(InputStreamReader.java:184) at java.io.BufferedReader.fill(BufferedReader.java:154) at java.io.BufferedReader.read(BufferedReader.java:175) at scala.io.BufferedSource$$anonfun$iter$1$$anonfun$apply$mcI$sp$1.apply$mcI$sp(BufferedSource.scala:38) at scala.io.Codec.wrap(Codec.scala:64) at scala.io.BufferedSource$$anonfun$iter$1.apply$mcI$sp(BufferedSource.scala:38) at scala.io.BufferedSource$$anonfun$iter$1.apply(BufferedSource.scala:38) at scala.io.BufferedSource$$anonfun$iter$1.apply(BufferedSource.scala:38) at scala.collection.Iterator$$anon$14.next(Iterator.scala:150) at scala.collection.Iterator$$anon$25.hasNext(Iterator.scala:562) at scala.collection.Iterator$$anon$19.hasNext(Iterator.scala:400) at scala.io.Source.hasNext(Source.scala:238) at scala.collection.Iterator$class.foreach(Iterator.scala:772) at scala.io.Source.foreach(Source.scala:181) at scala.collection.TraversableOnce$class.size(TraversableOnce.scala:104) at scala.io.Source.size(Source.scala:181) at scala.collection.Iterator$class.length(Iterator.scala:1071) at scala.io.Source.length(Source.scala:181) at .<init>(<console>:8) at .<clinit>(<console>) at .<init>(<console>:11) at .<clinit>(<console>) at $print(<console>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:704) at scala.tools.nsc.interpreter.IMain$Request$$anonfun$14.apply(IMain.scala:920) at scala.tools.nsc.interpreter.Line$$anonfun$1.apply$mcV$sp(Line.scala:43) at scala.tools.nsc.io.package$$anon$2.run(package.scala:25) at java.lang.Thread.run(Thread.java:745) So as you can see obtaining the buffer works, i can do something with it scala> scala.io.Source.fromURL("http://google.com").next res7: Char = < But it seems I cant iterate over it. I'm using scala v 2.9.2 but the problem recurs in 2.11.2 as well. Further I'm running java version "1.7.0_75" OpenJDK Runtime Environment (IcedTea 2.5.4) (7u75-2.5.4-2) OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode) Any help getting this to work would be greatly appreciated A: You have an encoding issue here. The Encoding needed for interpreting the response is latin1, also known as ISO-8859-1. Use Source.fromURL("url")("encoding") to solve your problem. Source.fromURL("http://google.com")("ISO-8859-1").mkString res4: String = <!doctype html><html itemscop A little background: When no encoding is given in a http request the standard behaviour is to retun everything encoded in Latin-1. For in depth info see http://www.ietf.org/rfc/rfc2045.txt | Low | [
0.476614699331848,
26.75,
29.375
]
|
This is the second part of our exclusive documentary on Chiba Tsugutaka Sensei, the last Daito-ryu master of Shikoku. In this section, he explains the origins of Daito-ryu and tells us about his experience learning intensively in Hokkaido, at Takeda Tokimune's Daito-kan. Video: Documentary on Daito-ryu Aiki-jujutsu from the Takumakai part 2 Second part of the documentary on the life of Chiba Tsugutaka Sensei, the Daito-ryu Aikijujutsu master of Shikoku (Click on "CC" to display subtitles) Interview transcript: Over there, at the Daito-kan, we had to sign an admission form to be allowed to learn. It was the first time that we learned techniques up to 5th kajo. Before that, we only got to 2nd kajo. Olivier Gaurin: What kind of keiko were we doing at Tokimune's dojo? Chiba Tsugutaka: We used to do shikko and work on katatedori, ryotedori and also some katame waza. This is what we used to do, what we call warm-up these days. Over there, they used to grab like that [showing with his hand], it was very different from the way we used to do it. These were exercises on joints by controlling the wrists. Before, we used to grab directly, flat like that. But over there, they first took hold from the small finger. So the hand that was being grabbed could not open up. It could not use any strength. We used to work like that, on controlling the wrists, we got control of the elbow. That is what they made me do as well as some grabs such as ryotedori. They told me: "Come over here!" I asked why and the answered: "Because you don't do Ken". They meant not like Kendo. So that is when I started to work with a big log of wood, as thick as a leg! We also used metal bars or wood logs, training in rhythm. We used to strike like this to build up our muscle frame. Olivier Gaurin: This is also what Tokimune Sensei used to do in the morning right? Chiba Tsugutaka: Yes, all of it. Olivier Gaurin: This work is important isn't it? Chiba Tsugutaka: Yes, especially this way of twisting. It is not just about swinging like that (waving). You have to pinch like this and because the handle is so big, there is some remaining space that you cannot hold. Like if you were waving a big pole. So you have to synchronize the pinching of each hand to have them together. You have to raise straight up. Everybody ended up as massive as Suzuki Sensei. Suzuki Sensei never lost, not even against Sumo or Judoka. When he grabbed you, you couldn't move. He would grab and the other guy went flying. Olivier Gaurin: What was the training schedule? Chiba Tsugutaka: Takeda Soke started at 5 a.m. because he had to go to work at 6 a.m. at the Yamada Suisan company. Olivier Gaurin: Did you continue at the dojo after 6 a.m.? Chiba Tsugutaka: He would stop a bit before 6 a.m. and we would go back to the nearby hotel. We quickly took our breakfast over therea nd then, Suzuki Sensei would teach us while doing some calligraphy on his own. It lasted until lunchtime. So about 6 h of training until 12 a.m. That is why we could learn all that, it never stopped! All the classes, without a break. We couldn't even drink tea even though we really felt like having some. we had all our meals at the hotel, in the morning, lunch, and dinner. In the evenings, Suzuki Sensei was teaching. With Soke we only learnt until 2nd kajo and when we finally managed to do 1st and 2nd kajo, we asked Suzuki Sensei, showing him the board: "We would like to also learn 3rd and 4th kajo, would it be possible?" But he said that it was not possible because we weren't deshi. 
However, he spoke to Tokimune and one day, our names got written on the board as "Shikoku students". Then he asked some deshi to start training with us too. We started to train with the others that night. Acknowledgements: Many thanks to Sato Hideaki Sensei for helping us conduct this interview. Founder of the site in 2007, Guillaume has a passion for Japanese culture and martial arts. After having practiced Judo during childhood, he started studying Aikido in 1996, and Daito-ryu Aiki-jujutsu in 2008. He currently holds the ranks of 4th Dan in Aikido (Aikikai) and 2nd Dan in Daito-ryu Aiki-jujutsu (Takumakai). Guillaume is also passionate about science and education and has held a PhD in Molecular and Cell Biology since 2010. He currently lives in Tokyo and works as a consultant for medical research. > View Full Profile | High | [
0.660465116279069,
35.5,
18.25
]
|
En Español. Earlier this year, BART announced it would completely redesign its fleet by 2017, but the rapid transit system has a few projects to finish before that great day. For one, there’s the issue of the bacteria-infested wool seats. In 2011, BART began a project to replace most of the wool seats with cleaner vinyl ones and remove the carpeting from cars. But as anyone riding BART can attest, those gross wool ones are still in the system and continue to be thriving homes of microbiotic life…but for how much longer?Mission Local checked in with BART spokesperson James Allison to find out if and when all the old seats will be replaced. Mission Local: A few years ago BART promised that if riders like the new seats, 200 out of 669 cars will have vinyl seats. Currently, how many of the total cars have vinyl seats? James Allison: There are currently 439 cars complete with vinyl seats and seats for the remaining 230 cars on order. BART continues to replace the old wool seats with easier-to-clean, wipeable vinyl covers and all seats will be converted by the end of the year, marking the end of wool seats systemwide. BART crews also continue to rip out old carpets and replace them with new easier-to-clean flooring. Flooring is scheduled to be replaced on all cars by next summer. ML: Is the transition to vinyl seats going as planned or was there a delay since you started this project three years ago? JA: There was a delay of several months because the company that supplies the seats had some sort of manufacturing issue and couldn’t keep up with the demand. ML: The switch is supposed to save money since the seats are easier to clean than dry-cleaning. How much money is being saved? JA: The easier-to-clean seats save $40,000 a month. The wool seats need to be removed, cushion and all. They are then sent to be dry-cleaned. The vinyl seats can simply be wiped down; no need to remove them and send them to a third party. ML: Why does BART have carpets at all? JA: The plan is to remove all carpet because it is easier to maintain and a majority of our customers tell us they prefer the non-carpeted floor. It’s important to remember that when BART began service in 1972, the concept was to provide mass transit that would rival the airline experience—hence the cloth seats and carpeted floor. Now, more than 40 years later, our ridership has grown exponentially. Some of the things that made sense in 1972 when we were carrying tens of thousands a day are no longer the best option. ML: Do busier train lines have a certain priority for the new seats? How is it determined which lines get more vinyl seats? JA: No, it’s completely random on which line the cars will run and varies day by day. The cars are not assigned to specific lines; instead they go to the maintenance yards each night and the next morning they are reassembled into new trains. Therefore, the car you ride from Richmond to Fremont today could be running from Pittsburg to SFO tomorrow. It all depends on how the trains are assembled. ML: Does the plan for the new designed cars impede the switch to the vinyl seats? JA: The fleet of the future, which are scheduled to be in service in 2017, has no impact on the switch to vinyl seats in our current fleet. ML: How have patrons responded to the new seats? Allison: The overwhelming response has been positive. Seven thousand people took surveys on the new car design and 84 percent rated the vinyl seats excellent to good. (Survey information can be found here.) Got a question that needs answering? Let us know! 
Send your burning questions to missionlocalATgmailDOTcom. | Mid | [
0.6141078838174271,
37,
23.25
]
|
Q: Prove sequence defined by recurrence relation using induction

Confused at this question; from what I gather, strong induction is necessary here to prove this, but the algebraic step after the inductive hypothesis is where I'm not too sure. The sequence is defined by $a_1 = a_2 = 2$ and $a_n = \frac{1}{2}\left(a_{n-1} + \frac{8}{a_{n-2}}\right)$ for $n \geq 3$, and the claim to prove is that $2 \leq a_n \leq 4$ for all $n$.

Basis: $2 \leq a_1 = 2 \leq 4$ and $2 \leq a_2 = 2 \leq 4$.

Inductive hypothesis: Assume that for any $2 < n \leq k$ we have $2 \leq a_n \leq 4$.

Show that the bound also holds for $a_{k+1}$.

A: Your base case is correct. Now, fix an $n\geq 3$ and suppose that for all $k<n$, $2\leq a_{k}\leq 4$. We want to show that $2\leq a_{n}\leq 4$. Note that $$ a_{n}=\frac{1}{2}\left(a_{n-1}+\frac{8}{a_{n-2}}\right)\leq\frac{1}{2}\left(4+\frac{8}{2}\right)=4 $$ and $$ a_{n}=\frac{1}{2}\left(a_{n-1}+\frac{8}{a_{n-2}}\right)\geq \frac{1}{2}\left(2+\frac{8}{4}\right)=2, $$ by the inductive hypothesis. Hence, $2\leq a_{n}\leq 4$, as desired. | High | [
0.6793478260869561,
31.25,
14.75
]
|
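For readers who want a quick numerical spot-check of the bound proved in the Q&A above, here is a minimal Python sketch. It is not part of the original post; the function name and the number of terms checked are illustrative choices. It simply iterates the recurrence ($a_1 = a_2 = 2$, $a_n = \frac{1}{2}(a_{n-1} + 8/a_{n-2})$) and confirms that every computed term stays within $[2, 4]$.

def check_bounds(num_terms=1000):
    # a_1 = a_2 = 2; verify 2 <= a_n <= 4 for each term we generate.
    a_prev2, a_prev1 = 2.0, 2.0
    for _ in range(3, num_terms + 1):
        a_n = 0.5 * (a_prev1 + 8.0 / a_prev2)
        if not (2.0 <= a_n <= 4.0):
            return False
        a_prev2, a_prev1 = a_prev1, a_n
    return True

print(check_bounds())  # expected to print True under these assumptions

This does not replace the induction argument, of course; it is only a sanity check that the claimed invariant holds for the first thousand terms.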
Monday, November 13, 2006 Monday 11/13 A.M. Quickie:Bears Showtime in Primetime Complete NFL Wrap-Up coming later this morning, but a few thoughts on yesterday's games: (1) Devin Hester's 108-yard TD return off of a Giants missed FG was the Play of the Year in the NFL. (Sort of like how Nathan Vasher's similar play was the Play of the Year in the NFL in 2005. Coincidence?) (2) The Chargers' 21-point comeback (via 42 points in the second half) was the Half of the Year in the NFL. Not quite sure if that's an actual award, but it was an absolutely sick game, particularly from Philip Rivers. Any more doubters about him? Didn't think so. (3) Who had protege-turned-nemesis Eric Mangini beating Bill Belichick in Foxboro? The student has become the master, and you could tell by the look on Darth Hoodies' face as he refused to look Mangini in the eye (let alone say anything) during the post-game handshake that Bill B. didn't think it would happen. More NFL later this a.m. BCS Update: USC leap-frogs Florida. You can read the post below for my expanded take on all things BCS. It looks like the USC-Notre Dame winner has the inside track on the second BCS title-game spot (provided that USC beats Cal next week, which is a strong possibility but no gimme). FanIQ has the type of column I love to see: A detailed analysis of individual AP voters' ballots. They've ID'ed some really f'ed up ballots that make you question why these folks get to be part of a group that determines its own national champ. It's worth a look. (Meanwhile, I was thinking about the strength-of-schedule hit that BCS No. 4 Florida will take for playing Western Carolina this Saturday, and I had an idea: What if they paid WCU to NOT play the game? Not a forfeit. Just a cancellation. I'm sure the Gators fans would understand; computer rankings would never know; and how could poll voters begrudge them a move to reject the type of cupcakes that ice the other contenders' schedules?) Ohio St-Michigan Countdown: Yes, you WILL be completely sick of this story by, say, Wednesday. But it's the closest thing college football will get to an actual playoff: The winner will be in the national title game. (It remains to be seen if the system is so broken that it would allow for a ridiculous rematch in the national-title game. Honestly, that would suck.) MLB Awards Season! Today: Rookie AL: If it wasn't for injuries, the race between Verlander, Liriano and Papelbon would have been epic. (And remember my mid-summer obsession with Liriano?) But the winner/survivor was easy to pick. My ballot: 1. Justin Verlander2. Francisco Liriano3. Jonathan Papelbon NL: Ryan Zimmerman might have the best career, but this season, the success of the "AAA Marlins" was remarkable, making the NL award a run-off between a Florida's ace pitcher and one of its top hitters. My ballot: Manny Acta to manage Nats: Great hire to go the opposite direction of Frank Robinson's old/stodgy and import Acta's young/fresh. He's only 37. He coached a few of these players (as the Expos) earlier this decade. And he brings a winner's vibe from the Mets. Aramis Ramirez re-signs with Cubs: For whopping 5Y/$73M, putting him in the "cornerstone NL 3B" category with others like David Wright and Ryan Zimmerman. That the Cubs re-signed Kerry Wood is almost an afterthought. CBB: Virginia shocks No. 10 Arizona. (But not really, given that Virginia had all the energy from opening a new arena in Charlottesville. Still, wasn't this Zona team supposed to be a Final Four contender? 
(I recognize I haven't done much CBB previewing. More on that this week.) Duke wins opener: Most notably, three Duke frosh accounted for 42 of Duke's 86 points. (12 points on four 3's from my favorite CBB frosh of the year, Jon Scheyer, who is apparently the new Redick. You can appreciate my conflicted feelings -- on so many levels -- about this.) NBA: I'm still getting over Michael Redd's sick scoring binge from the other night. Meanwhile: (1) Yao is sizzling. His 34/14 last night gives him 69/31 over the past two games. (2) Could Kevin Martin (24 pts last night, 24 ppg in 2006) be this year's most unlikely All-Star? (3) The Clippers won their 5th straight – hottest team in the NBA... again?! 39 comments: I wasn't surprised by the Pats' post-Colts hangover, especially against a team who was lying in wait with a coach who knew what to expect, anyway. The big concern for the Pats should be their third down defense....the Jets picked up every conversion they needed to in the second half. Too bad college football got rid of ties in the first place. Imagine what the fallout of an OSU-UM tie could have been (read: they could "agree" for the tie and play it out for real in January) The officiating in the Ravens/Titans game was horrible particularly the fumble that was not a fumble. Isn't Triplett the same ref who blew a couple of games for the Jets about 6 or 7 years ago. How is this guy still an official. He said Anderson was down by contact when he was standing upright and then to cover for that blown call he said there was inconclusive evidence of who recovered the fumble although the Titans Robaire Smith came away with the the ball. What a bunch of bullshit that game was. Anyone else notice that Div II announced their college football playoff bracket yesterday? They play 11 games then have a 24 team playoff (top 2 seeds in the 4 regions get bye's). Oh wait, these schools aren't ruled by the $$$ coming from bowl games. That is why there'll never be a Div-1 playoff. Come on, the refs in the Ravens/Titans game were all for the Titans. They didnt call any of the cheap shots the Titans gave. The late hit on Musa Smith sent him off the field on a back board. They did miss one fumble, but when your kicker cant make chip shot field goals, you deserve to lose. Just looked at the cfb rankings. And mind you, this is a middle of the pack kind of observation. But how is it that Boston College is ranked below Virginia Tech? We kicked whatever teeth they have left down their throat. IN BLACKSBURG!!! And after all that, we get to be behind them in the rankings. wrong. Apparently, head to head matchups don't matter as much as who else you play. Might as well just play the game on paper then. I'll question the bears being the team to beat in the NFC. They should have lost to the Cardinals and did get blown out by the Dolphins. Just because Rex Grossman didn't turn into Aaron Brooks against a decimated defense is no reason to celebrate. The no call on Bulluck for hitting Smith was bad and you never like to see someone get hurt. Bironas should have made the FGs but on the potential game winner the left side of the line gave up blocking and the dude blocked it with his chest. But the fumble call changed the tide of the game. The Titans would have had the ball in great field position because of the personal foul and a field goal there would have been the difference in the game. 
That is what should have happened but I told someone today that all the calls seem to go against you when you are a bad team and that is what the Titans are right now. The thing I love about the BCS is the hypocrisy the NCAA has about Div. I kids missing school if they had a playoff. I grew up in a town with a small college that plays in Div. III. Last time I checked schools that play at the DIII level are very small and have excellent academic repuations. Yet they are somehow able to play a full regular season and then have a playoff and these non-scholarship kids still graduate in very, very high numbers. But kids that go to the football factories would lose valuable classroom time if they shortened the regular season by a game and then started a playoff the next week. Now, I'm a UGA fan of course, was there saturday for the big win. I know it's cliche to say it, but i knew we could win that game. Everyone else seems to think it's a stunner, a complete shock, but I just don't get that. UGA has had very highly ranked recruiting classes. Talent galore, with tons and tons of athletes all over the place. Turnovers and lack of execution have killed us this year, but it there have been a handful of plays that just didnt' go our way all season long that hamstrung us. What are y'alls thoughts on how big a shock this is, considering this team did win the SEC last year. to be fair, the ravens did all they could to throw that game away and tennessee STILL couldn't put it away... you don't see Third and 35 too often at this level. Especially when the first fifteen penalty yards were for unsportsmanlike conduct by the COACH. Sure, there were some crappy calls, but i don't think tennessee can blame the officiating for that loss. Speaking of inexplicable calls made by referees. What about the Roughing the passer penalty in the Steelers V. Saints game? The lineman literally grabbed Drew Brees's foot... HIS FOOT! And they call a roughing the passer penalty. HIS FOOT!!!! He didn't pull it, I think even Brees wa surprised to see the flag on that one. I didn't like the call in the Jets game either (partially because i'm a Jet fan) but I at least could understand where it was coming from. As a Ravens fan, I know they were lucky. Even I admit the fumble call was BS. Sometimes teams just have lucky seasons. But as for the roughing the passer call against the Steelers. I have to think that the Palmer injury last season had something to do with that call. The ref had to have the shot on Palmer in the back of his mind when he saw the Steeler lineman go low on Brees, even if he didnt actually hit him. I have a hard time believing Notre Dame would jump a one-loss UF--or even an undefeated Rutgers--especially if Michigan loses to OSU. I understand that you're "the season is a de facto playoff" theory makes you want to think of ND-USC as a national semi-final, but I think it takes Arkasas and Florida and Rutgers and OSU losing for ND to get a shot--and even then, there's the "do we really want a rematch" question. As a Tech alum, I'm never shocked when anyone beats Auburn. We kicked the crap out of them the last couple times we played them. That game is an example of what true freshmen QB's offer. They are incredibly inconsistant. But don't compare your team to where it was last year. Last season you had a legit QB and this year you have a giant question mark every week. The only shocking thing to me about your season is that your idiot fans that yell for Richt to be fired. 
If you can find a way to make them shut up, it would be beneficial. See you in Athens in 2 weeks. I'll be the one wearing gold watching Reggie Ball do something stupid and Calvin Johnson trying to bail him out as they always do. in case you didn't... Sean Taylor came untoughed straight up the middle on a safetly blitz at Tony Romo. Taylor dove head first and his shoulder/helmet hit his abs/upper thigh area. As Romo was going down he threw the ball away. The ref called two fouls on the play Roughing The Passer AND Intentional Grounding. The look on Joe Gibbs face on the sideline was hilarious. He looked like he was watching gravity reverse itself, pigs fly, Peyton Manning win a Super Bowl, whatever. After the game Gibbs was asked about that call and said he didn't even know that was a possibility. The Elias Sports Bureau reported that the only other time that someone was called for ROUGHING THE PASSER and INTENTIONAL GROUNDING on the same play was this time that Warren Sapp picked up a QB and body-slammed him while the guy was trying to get rid of the ball. ND Fan here....and an undefeated Rutgers def. deserve to be in the Title Game, over Florida, Arkansas, USC, and (yes) Notre Dame. The fact is an undefeated team from a BCS conference ALWAYS deserves to be in the title game if there are only 2 unbeatens. (Boise St doesn't count b/c they are NOT in a BCS conference)... On The Herd with Colin Cowherd on ESPN Radio Colin was having a session called "you give me circumstances, I give you a title contender" where people called up and gave him scenarios and he said who played the OSU/MICHIGAN winner. This guy called up and started off with "What if Purdue had gone undefeated, and Boise State..." Colin then cut him off and said this... "If Pudue was unbeaten is like if Purdue was the Colts, and if mayonaise was mustard, and if relish was a batman costume, would superman still be alergic to kryptonite? Or maybe that's unfair and it's more like if halloween was in december, and christmas was in june, would valentines be in july?" Why do people not understand how the football polls work? Because they believe this to be true: If X beats Y and Y beats Z, X will beat Z. When Z beats X (hello SEC!) people start bitching because the poll system has no good way of dealing with it except to move Z up and X down according to the result. Just accept that polls are imperfect and move on. Yet Liriano will probably come in second, while Weaver has recieved no discussion whatsoever. Probably due to what I call the Dontrelle Willis Theory - if a player (esp. a pitcher or a rookie) has a great September, it will go virtually unnoticed b/c all members of the media pick their ROY/Cy Young favorites in August. I'm not saying Weaver should win or be ahead of Liriano, but he should at least get some consideration... People, people: a poll is supposed to determine who the pollster thinks will beat whom if the game happened today. This X>Y>Z crap can't fly when you consider that upsets happen. Now, I am in no way defending any and all polls, because some of them are stupefyingly wrong, but shoddy logic is being used when you assume that one game can define a team's abilities. The only way to be certain that one team is better than another is for those teams to play their best on a neutral field without injuries, and with a roster that has been figured out by the coaches. 
This is why Arkansas is currently not *necessarily* worse than USC, and it's why two teams that have not played each other can only be judged on their team's perceived qualities, not on one win here or one win there. I'm talking about Rutgers and WVU. I've said it before: the transitive property is for algebra, not college football, and you've got a lot of explaining to do if you believe otherwise. Connect With Me Quickish About This Blog DanShanoff.com is a sports-blog spin-off of my long-time ESPN.com column, "The Daily Quickie." Anchored by an early-morning post of must-know topics, the blog is updated frequently throughout the day with new posts and user comments. | Low | [
0.521739130434782,
28.5,
26.125
]
|
IN — Thirumal Nagar, Tirunelveli A man who identifies as a woman was beaten by locals in Thirumal Nagar and held for police after his husband, who also identifies as a woman, told locals that the man had attempted to assault an underage relative who had been left in their care. The six-year-old victim had been staying with her uncle and his husband, both of whom are described as 'transgender,' after the girl's mother went into labor and was admitted to Tirunelveli Government Medical College Hospital. The girl's father had been accompanying his wife at the hospital. The locals held the suspect for police, who promptly released him onto the streets. Frustrated residents then alerted the child protection unit and senior police officers. "Locals said that on Thursday evening, the accused sent the other children out to play around when his wife (the transwoman) went out. As the victim was alone at home, he tried to sexually abuse her. The transwoman wife who came to know about it after she had returned alerted the locals," B S Dev Ananth, District Child Protection Officer, stated. A Saravanan, Deputy Commissioner of Police (Law & Order), assured the public, "We are investigating the matter and appropriate action will be taken." Featured image credit: Pixabay Read more on this story Transgender man attempts to sexually assault girl in TN Times of India TIRUNELVELI: Residents of Thirumal Nagar in Tirunelveli city beat a transgender man for trying to sexually assault a six-year-old girl. | Mid | [
0.559055118110236,
35.5,
28
]
|
High school wrestling: Carthage's Brady seeks second chance CARTHAGE Shayne Brady sat in a dark, empty hallway in the back of the Times Union Center in Albany a year ago. Brady had his head in his hands after losing a 6-3 decision in the Division I, 170-pound state championship match. But he wasn't feeling sorry for himself, and he wasn't thinking about how he had just become the first wrestler in the history of the Carthage program to compete in a state title match, which he reached with a 4-3 overtime victory in a thrilling semifinal match just a few hours earlier. Instead, Brady was beginning to shift his focus to this season, where the senior will enter this weekend's state tournament as the top seed in the 182-pound Division I bracket. "I was thinking that I didn't work hard enough, and I didn't wrestle good and I wasn't the best in the state," Brady said. "It was kind of depressing to think that, but it was knowing that I still had a lot to work on and I wasn't quite there yet. I thought I was, but I wasn't, and I had a lot more to do." Brady is 32-1 as he enters his third straight state tournament, which begins with the preliminaries and quarterfinal rounds on Friday, with semifinals, consolations and championship matches scheduled for Saturday at the Times Union Center. He has 205 career wins, more than any wrestler in Carthage history. For the second straight year, Brady won a Frontier League, Section 3 Division I and Section 3 Class A title. And in a way, Brady believes it all goes back to that loss in the state final. "I haven't had that title yet and I want it," said Brady, who will wrestle for Division I North Carolina State next year. "Instead of getting it (last year), then kind of coasting through to try and get it again and not really work as hard toward it. It made me work really hard and want it more." "You hate to say it, but I think in a way it helped him," Carthage coach Don Dorchester said. "It kind of gave him that drive and showed him what he needed to do to get better. It fueled him over the summer and through the whole offseason, and it kind of gave him that edge that you need." Brady's only loss this year came by injury default in the finals of the Herkimer Tournament on Dec. 8. Brady said that his knee popped while scrambling to avoid a takedown, and he eventually learned that he strained a ligament in his knee. He missed a tournament and a dual meet, but was back to winning matches a few weeks later. "He's in a good place," Dorchester said of the senior. "He's mentally prepared. He's been battle-tested. Obviously it's the state tournament and everyone is going to be good, but I think Shayne is at the head of that class." And while Brady will finish his high school wrestling career this weekend as the school's first wrestler to reach a state final and possibly its initial state champion, he hopes he won't be the last. "It means quite a bit to me," Brady said. Commenting rules: Stick to the topic of the article/letter/editorial. When responding to issues raised by other commenters, do not engage in personal attacks or name-calling. Comments that include profanity/obscenities or are libelous in nature will be removed without warning. Violators' commenting privileges may be revoked indefinitely. By commenting you agree to our full Terms of Use. | Mid | [
0.636363636363636,
35,
20
]
|
Jeep Build Parts Make your time in the shop count by building your rig with M.O.R.E. products. We've got the brake lines, bushings and tabs and brackets to get you back out on the trails stronger than ever. With our "RockProof" Lifetime Warranty, you'll head out on your next off-road adventure in confidence. | Low | [
0.48960739030023004,
26.5,
27.625
]
|
I just read somewhere that the Pentagon has chosen Burlington, VT as one of the first two deployment sites for the new F-35A. I don't know if this is true or not, so please confirm as necessary. What is the F-35A's mission and why would Burlington, VT be a natural base for it? I know 'The Green Mountain Boys' now fly F-16s. Aside from those, and the F-15s based at Westfield (Massachusetts), I don't know if there are any jet fighters to be found anywhere else in New England. Is that true? Maybe Connecticut has A-10s also...not sure. I'm not from the Burlington area (I live in S. New Hampshire). But a Google Map image shows only a modest amount of space to store airframes...certainly nothing expansive or impressive. It is, after all, a commercial airport on the other side of the runway. I was thinking that you'd want the fastest possible jets up here in New England...able to get in the air and to wherever the 'hot-spot' is...be it an airliner with a mayday or whatever, a stray airliner, etc. I was just trying to fathom the curiosity of BTV being one of the first two airports to have them. And what the supposed 'scoring' is that said BTV is where they should locate the F-35. There are a lot of other places with more concrete if 'storage' was the reason. It does not make sense to put F-35s in BTV for the VTANG. They do fine with their F-16s. Does the VTANG still have an air defense alert commitment at BGR and the MEANG? Since the F-16 (and MAANG F-15s) have longer legs than the F-35, it makes no sense for it to be a coastal defense interceptor. The MAANG also has the capability to position alert F-15s at PSM with the NHANG, as well as at its former home at FMH. I know the 177th Fighter Wing of the New Jersey ANG wants them, they have an alert mission at Atlantic City Airport that is right between NYC and DC. In fact it's the closest fighter aircraft to NYC. Quoting ChrisNH (Thread starter):I don't know if there are any jet fighters to be found anywhere else in New England What's alarming is that with the closing of Brunswick NAS there are no more active duty airfields in the New England, unless you count Fort Drum. The closest active duty airfields are the airfields at JB MDL in New Jersey. The term deployment site doesn't mean that the ANG fighter unit in BTV would be equipped with the F-35 but to me means a F-35 unit would deploy a detachment (say 4 fighters) to BTV to stand their alert status there in accordance with the NORAD air tasking order. That's what comes to my mind when you use the term "deployment site." Quoting kanban (Reply 1):How much ramp space do you have for active aircraft storage? Since they keep grinding out units that will need updating before they are combat ready, you might be the closest. I suspect you'll see those updates done during depot maintenance. In the meantime, those new birds will go to units that can train with them and begin to explore what the airplane can do, even at this early stage. That's what the Europeans did with Typhoon. The early models were limited in combat capability but the air forces of the nations that bought the airplane flew them to get familiar with what they could do, in anticipation of what it would later be modified to do. Same will likely hold true for the F-35. Has anyone considered that the F-35A will go to Air National Guard units concurrently with assignment to active duty squadrons? Quoting ebj1248650 (Reply 8):I suspect you'll see those updates done during depot maintenance. 
In the meantime, those new birds will go to units that can train with them and begin to explore what the airplane can do, even at this early stage. That's what the Europeans did with Typhoon. The early models were limited in combat capability but the air forces of the nations that bought the airplane flew them to get familiar with what they could do, in anticipation of what it would later be modified to do. Same will likely hold true for the F-35. Correct. F-35 will be produced and updated through block updates. A odd numbered block update is a software update while even numbered updates will also include hardware updates along with software updates. There's some sort of "scoring" going on here. So politics is creating the scoring methodology, and then politics is changing it when you don't like the result... however, I'm still not sure what the criteria are. Wasn't easy to google. Quoting ebj1248650 (Reply 8):I suspect you'll see those updates done during depot maintenance. In the meantime, those new birds will go to units that can train with them and begin to explore what the airplane can do, even at this early stage. That's what the Europeans did with Typhoon. The early models were limited in combat capability but the air forces of the nations that bought the airplane flew them to get familiar with what they could do, in anticipation of what it would later be modified to do. Same will likely hold true for the F-35. I suspect the Typhoon is not a great model to emulate, given that the UK won't be bothering to update the first tranche Eurofighters and will be retiring them by 2019, due to, you guessed it - excessive cost. Inspiration, move me brightly! Light the song with sense and color.Hold away despair, more than this I will not ask.Faced with mysteries dark and vast, statements just seem vain at last.Some rise, some fall, some climb, to get to Terrapin! Quoting SSTeve (Reply 12):There's some sort of "scoring" going on here. So politics is creating the scoring methodology, and then politics is changing it when you don't like the result... however, I'm still not sure what the criteria are. Wasn't easy to google. It's the Guard, politics is everything. There are three things that determine (four, if you want to include size which really goes towards how big the Guard is, and not necessarily what they get) what assets the National Guard in a particular state get: Politics, Mission and Politics (The PiMP factors): How much clout your delegation has on the Hill, how often the units in your state Guard deploy, and how effectively your governor can beg/demand/grovel. Whether they want to admit it or not, it benefits state governments tremendously to have Guard units that deploy regularly. It means that big Army/Air Force is paying the troops, they'll get new facilities (immediate construction jobs and mid/long term Tech/ADOS/ADSW/AGR jobs) and those soldiers away from home are collecting a paycheck that in many cases they are not collecting when they're back home (Unemployment among for seven reserve component personnel runs just under 10%, take a guess at which one has the highest unemployment rate among them... Army National Guard). Quoting ChrisNH (Thread starter): Aside from those, and the F-15s based at Westfield (Massachusetts), I don't know if there are any jet fighters to be found anywhere else in New England. Is that true? Maybe Connecticut has A-10s also...not sure. It's just the F-15s at Barnes ANG Base and at the F-16s at Burlington. 
Well, in the reading I've been doing it appears as though this 'decision' will come in November. Also, it seems as though a whole bunch more F-35s would be based at Burlington than the number of F-16s they have there now. If that is correct, then the 'economic impact' should be quite positive. The noise footprint of the F-35 is also mentioned as being 'significantly' higher than the F-16...which is curious since they are both single-engine aircraft. | Low | [
0.506410256410256,
29.625,
28.875
]
|
The characteristics of hGH binding to the liver macrophages. Macrophages isolated from female rat liver as well as hepatocytes bind 125I-hGH. This study compares the effect of sex of the rat, hypophysectomy (hypox) and preincubation of the cells with oPrl on the binding of 125I-hGH to the cells. The percent of 125I-hGH bound to the hepatocytes was decreased in cells from hypox female and male rats, and hepatocytes preincubated with oPrl, to 0.43, 0.21 and 0.39, respectively, of that observed in hepatocytes from normal female rats. In the hepatocytes from normal female, hypox female, and male rats, hGH was the most effective competitor for 125I-hGH binding with an ID50 of 0.73-0.99 nM. The concentration of oPrl, bGH and rGH that produced half-maximal inhibition (ID50) of 125I-hGH binding to hepatocytes from female rat liver was 6.3, 100, and 420 nM, respectively. In hepatocytes from male and hypox female rats, and hepatocytes preincubated with oPrl, the ID50 for bGH and rGH varied from 2.1 to 15.9 nM. The percent of 125I-hGH bound by the macrophages from hypox female and male rats, and macrophages preincubated with oPrl was 0.06, 0.15 and 0.18, respectively, of that bound by macrophages from normal female rat liver. In contrast to hGH binding to the hepatocytes, the ID50 for hGH was 6 to 180-fold greater in macrophages from hypox female and male rats, and macrophages preincubated with oPrl, compared to that observed in macrophages from normal female rats. Rat GH was the most effective competitor for 125I-hGH binding in the macrophages from the hypox female and male rat liver, with ID50 of 5.5 and 85, respectively. (ABSTRACT TRUNCATED AT 250 WORDS) | Mid | [
0.608315098468271,
34.75,
22.375
]
|