Because the best marketers deserve great content.

5 Reasons Transition Is the Marketing Conference You Shouldn’t Miss

The reasons to attend Transition 2017 in New York City are pretty much endless. If past Transitions are any indication, the atmosphere, programming, and location of this marketing conference will draw some of the most innovative minds in marketing, both as speakers and attendees. But if you — or your boss — still need convincing, here are five reasons Transition is a must on any marketer’s calendar.

1. A front-row seat to CMO insights
Transition has a history of bringing illustrious and interesting business leaders from around the world – from Keith Weed of Unilever to Marc Mathieu of Samsung to Nick Denton of Gawker – and 2017 is no different. We’ve got Beth Comstock, Vice Chair and former CMO at GE, sharing insights from her experience leading the brand’s digital transformation; Raja Rajamannar, CMO of Mastercard, talking about forging connections with today’s consumers; and Kim Metcalf-Kupres, CMO of Johnson Controls, presenting on embracing digital change at a legacy brand. Whether you’re a CMO or eyeing the title, these are talks that any marketer thinking about the future of digital will not want to miss.

2. Network with 300+ marketers
You can learn a lot from talks by CMOs and executive thought leaders, but one of the best resources for improving the way you build your brand is to learn firsthand how your peers approach their jobs and swap success stories. Transition is the perfect opportunity to connect with marketers from some of the world’s biggest brands. And the Transition agenda has over four hours of networking time built in, so there’ll be ample opportunity to introduce yourself and talk shop with your fellow attendees.

3. Spend a day in the heart of Tribeca
Having played host to everything from the Tribeca Film Festival to New York Fashion Week to Microsoft’s latest global product launch, the eclectic Spring Studios is the perfect location for Transition 2017. With gorgeous views of New York’s iconic cityscape and access to some of the best restaurants (and coffee shops) around, you’ll have a marketing conference experience that will both surprise and delight.

4. Witness a special product announcement from our CEO
He probably won’t be wearing a black turtleneck and dark wash jeans, but our co-founder and CEO Noah Brier will be making an announcement that will change the way you think about our platform and how you do your job on a daily basis. Consider this your invitation to a sneak peek at the future of how marketers at global brands work.

5. The Percolate Awards are back
Everyone loves an awards show – and marketers are no exception. While we can’t promise an impromptu speech from Kanye West or a performance from a hologram of an iconic singer, we will be giving out our annual award to marketers who have been brewing up change in 2017. Last year we honored innovators at Kickstarter, the Wharton School, and Madison Square Garden, so attend Transition to see who gets the trophies and network with the winners afterward.

6. BONUS: Learn@Transition
Current Percolate customers can also attend Learn@Transition – a special day of user-focused programming to help you and your team up-level your Percolate knowledge and make the most of the platform. Head over to the Success Center for details.
DWP has been forced to withdraw a leaflet on benefit sanctions after comments from supposed claimants were revealed to be, errrr, made up and stuff.

The department ‘fessed up after a freedom of information request from Welfare Weekly asked them to provide evidence that photos and comments on a disability benefits leaflet were real. They responded: “The photos used are stock photos and along with the names do not belong to real claimants … The stories are for illustrative purposes only.”

With an inquiry by MPs hearing evidence of arbitrary justifications for sanctions, the literature featured quotes from two ‘claimants’ — including one docked benefits after she failed to produce a CV.

The pictures look to be DWP’s own stock photos. Scrapbook has found the same picture of “Sarah” on a Universal Credit website — cast, ironically enough, not as lazy dole-dossing scum but as a dynamic jobseeker “standing out from the crowd”.

The news comes after DWP were accused of planting fake tweets praising the beleaguered Universal Credit scheme:

This doesn't look at all like the DWP trying to plant fake tweets and getting the wrong account. Nope. pic.twitter.com/Bw7Mj3setT — Latent Ella (@latentexistence) October 27, 2014
Q: Geotools conditional Style based on attribute

How do I style different elements of a layer differently, depending on the value of an attribute? For example, I have time zone data which includes an attribute associating an integer from 1 to 8 to each time zone, for the purposes of colouring the time zones on a map. How can I associate a style to each value of the attribute, and use it to colour the time zones in different colours?

A: This answer is for Geotools 9.2. Below is an example which you can adapt; see below for some ideas on how to apply it to your requirements. The example is about polygons. It has different styles for selected and visible features and for small and large scales:

protected final StyleFactory styleFactory =
    CommonFactoryFinder.getStyleFactory(GeoTools.getDefaultHints());
protected final FilterFactory2 filterFactory =
    CommonFactoryFinder.getFilterFactory2();

protected Rule createLayerRule(Color outlineColor, float strokeWidth,
        Color fillColor, float opacity, Filter filter,
        double minScaleDenominator, double maxScaleDenominator) {
    Stroke stroke = outlineColor != null
        ? styleFactory.createStroke(
            filterFactory.literal(outlineColor),
            filterFactory.literal(strokeWidth),
            filterFactory.literal(opacity))
        : null; // Stroke.NULL
    Fill fill = fillColor != null
        ? styleFactory.createFill(
            filterFactory.literal(fillColor),
            filterFactory.literal(opacity))
        : null; // Fill.NULL
    PolygonSymbolizer symbolizer =
        styleFactory.createPolygonSymbolizer(stroke, fill, null);
    return createRule(filter, minScaleDenominator, maxScaleDenominator, symbolizer);
}

// IDs of visible features, programmatically updated.
protected final Set<FeatureId> visibleFeatureIDs = new HashSet<FeatureId>();
// IDs of selected features, programmatically updated.
protected final Set<FeatureId> selectedFeatureIDs = new HashSet<FeatureId>();

protected Style createLayerStyle() {
    Filter selectionFilter = filterFactory.id(selectedFeatureIDs);
    Filter visibilityFilter = filterFactory.and(
        Arrays.asList(
            filterFactory.not(selectionFilter),
            filterFactory.id(visibleFeatureIDs)));
    FeatureTypeStyle fts = styleFactory.createFeatureTypeStyle(
        new Rule[] {
            // hope the constants below are self-explanatory
            createLayerRule(SELECTED_OUTLINE_COLOR, STROKE_WIDTH_LARGE_SCALE,
                SELECTED_FILL_COLOR, SELECTED_OPACITY,
                selectionFilter, STYLE_SCALE_LIMIT, Double.NaN),
            createLayerRule(UNSELECTED_OUTLINE_COLOR, STROKE_WIDTH_LARGE_SCALE,
                UNSELECTED_FILL_COLOR, UNSELECTED_OPACITY,
                visibilityFilter, STYLE_SCALE_LIMIT, Double.NaN),
            createLayerRule(SELECTED_OUTLINE_COLOR, STROKE_WIDTH_SMALL_SCALE,
                SELECTED_FILL_COLOR, SELECTED_OPACITY,
                selectionFilter, Double.NaN, STYLE_SCALE_LIMIT),
            createLayerRule(UNSELECTED_OUTLINE_COLOR, STROKE_WIDTH_SMALL_SCALE,
                UNSELECTED_FILL_COLOR, UNSELECTED_OPACITY,
                visibilityFilter, Double.NaN, STYLE_SCALE_LIMIT)
        });
    Style style = styleFactory.createStyle();
    style.featureTypeStyles().add(fts);
    return style;
}

// layer creation
FeatureLayer someMethode() {
    ...
    FeatureLayer layer = new FeatureLayer(
        dataStore.getFeatureSource(), createLayerStyle(), "Zipcodes");
    ...
    return layer;
}

// style update if visible or selected features have changed
void someOtherMethod(FeatureLayer layer) {
    ...
    // update selectedFeatureIDs or visibleFeatureIDs
    layer.setStyle(createLayerStyle());
}

Of course optimizations are possible and welcome.

For your requirement the following may help: You need 8 rules (see createRule(..) above) if you want to have one style for all scales (or 16 rules for small and large scales). Define 8 filters using FilterFactory.equals(Expression, Expression), where the first expression is of type AttributeExpressionImpl and the second of type LiteralExpressionImpl. Be aware that there is another method, equal (without the s), in FilterFactory2.
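Building on that, the following is a minimal, untested sketch of the attribute-based variant, reusing the createLayerRule(..) helper and the styleFactory/filterFactory fields from the example above. The attribute name "ZONE", the colour choices, and the use of Double.NaN for both scale limits (intended to mean "valid at all scales") are assumptions you will need to adapt to your data:

// One rule per time-zone value 1..8, each with its own fill colour.
protected Style createZoneStyle() {
    Color[] zoneColors = {
        Color.RED, Color.ORANGE, Color.YELLOW, Color.GREEN,
        Color.CYAN, Color.BLUE, Color.MAGENTA, Color.PINK };
    Rule[] rules = new Rule[8];
    for (int zone = 1; zone <= 8; zone++) {
        // FilterFactory.equals(Expression, Expression):
        // property("ZONE") is the attribute, literal(zone) the value to match.
        Filter zoneFilter = filterFactory.equals(
            filterFactory.property("ZONE"),   // hypothetical attribute name
            filterFactory.literal(zone));
        rules[zone - 1] = createLayerRule(
            Color.BLACK, 0.5f,                // outline colour and stroke width
            zoneColors[zone - 1], 0.8f,       // fill colour and opacity
            zoneFilter,
            Double.NaN, Double.NaN);          // no scale limits (assumption)
    }
    FeatureTypeStyle fts = styleFactory.createFeatureTypeStyle(rules);
    Style style = styleFactory.createStyle();
    style.featureTypeStyles().add(fts);
    return style;
}

The resulting Style can then be handed to the FeatureLayer exactly as in someMethode() above, and rebuilt whenever the attribute-to-colour mapping changes.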
Heavyweight Numbered Ducks by Big Duck Canvas

We are excited to offer a wide range of Heavyweight Numbered Ducks! Single fill and double fill are both constructed with a flat weave, but the double fill (The Numbered Ducks) is stronger because the warp and weft are made of plied (twisted) yarns. These fabrics are extremely dense, stiff and durable. #1 is the heaviest of all at 30oz! #4 is the next heaviest and is available in a wide selection of widths from 24″-120″. If these fabrics sound perfect for your next sewing project, make sure you have an industrial sewing machine and heavy thread. We recommend a tex 69 thread. I personally use #4 to make floor cloths and tote bags. I have had a lot of luck dyeing the #4. One thing I have learned about this super heavy canvas is that the wrinkles are very hard to remove. I have found the best way to wash #4 is to lay it flat in the driveway and use a hose and scrub brush. After I wash and rinse, I move the canvas to another flat spot in the sun. It dries really quickly and completely flat! Keeping the fabric flat through the washing process saves a lot of time trying to remove stubborn wrinkles made in the washing machine. If you absolutely want to use your washing machine to wash this heavyweight canvas, the best solution is to remove the wet canvas and lay it flat on a few layers of towels. Cover it with more towels with some flat heavy weights on top. Change out the towels every few hours and eventually your fabric will look smooth.
“[The] Indonesia-Africa Forum is the embodiment of the commitment of Indonesia and African countries to advance and prosper together and become the first forum where Africa and Indonesia can meet and discuss concrete cooperation involving various stakeholders,” said Indonesian Minister for Foreign Affairs H.E. Retno Marsudi in her report during the opening of the Indonesia-Africa Forum (IAF) 2018 at Bali Nusa Dua Convention Center, Denpasar. The Minister for Foreign Affairs hoped that the IAF event can be maximally utilized to explore more potential cooperation that will benefit Indonesia and Africa in order to achieve mutual prosperity. “This time for Africa,” she affirmed.

The first-ever IAF event, on the theme “Developing Sustainable Economic and Investment Cooperation”, was officially opened by Vice President Jusuf Kalla. “Let’s [walk] hand in hand in strengthening the relations between Indonesia and Africa and build a just and prosperous world together,” he said in his keynote speech at the opening session of the IAF 2018.

The event was attended by over 500 participants, including 240 delegates from 46 African countries, international organizations, and development partners, while from Indonesia about 200 people were present from the government, the private sector, and business players. The Indonesian Coordinating Minister for Home Affairs, Minister for Trade, and Minister for Foreign Affairs were also present at the opening. The series of IAF 2018 meeting programs will include discussion forums, industry fairs, and business deals. On the sidelines of the meeting there will also be a number of bilateral meetings between Indonesia and African countries.

In his opening speech, the Vice President expressed his hope that the forum will be able to establish real economic cooperation in various fields and continue to map cooperation in infrastructure, strategic industries, and financing facilities. Currently, Indonesia needs oil, cotton, and cocoa beans from Africa, while Africa requires palm oil, motor vehicles, and instant noodles from Indonesia. In the field of investment, more than 30 companies in pharmaceuticals, textiles, energy, and other fields are operating in Africa.

IAF 2018 is a showcase of concrete economic cooperation between Indonesia and African countries, aiming to translate the political and historical closeness of Indonesia and Africa into close and tangible economic cooperation. During the IAF 2018 event, business deals worth USD 586.56 million will be signed, and Indonesia also expressed its commitment to strengthen cooperation with Africa, among others through the enhancement of technical cooperation and capacity building in the Africa region; improvement of scholarship cooperation; development of competitive export credit facilities; increased connectivity cooperation; and the exploration of trade agreements through the establishment of a Preferential Trade Agreement. In addition, as an effort to follow up on the IAF 2018 meeting, there is a plan to hold an Indonesia-Africa Infrastructure Dialogue in August/September 2019. The meeting will also discuss the potential and opportunities of cooperation between the two sides, such as cooperation in food security, the creative and digital economy, energy, and construction, as well as action steps to realize possible cooperation programs that have not been explored so far.

Vice President Jusuf Kalla officially opened the first Indonesia-Africa Forum (IAF 2018) at Bali Nusa Dua Convention Center on 10 April 2018.
The Vice President conveyed that Indonesia and Africa have a long history, beginning with the organizing of the Asian-African Conference (KAA) in Bandung in 1955 and subsequently with the Non-Aligned Movement in 1965. The spirit of cooperation between Indonesia and Africa has been continuous, marked by the 50-year and 60-year anniversaries of the Asia-Africa Conference in 2005 and 2015, respectively. Indonesia and Africa have since kept working together to promote economic cooperation. Indonesia also considers Africa a strategic partner in its foreign policy. African countries, likened to Wakanda in the Black Panther film, have uncharted potential and resources that are not widely known by the international community. Like Africa, Indonesia – as a growing economy in Asia, the largest in Southeast Asia, as well as a member of the G20 – is developing infrastructure to boost economic growth. In addition, the Vice President also encouraged the continuation of the Kerja Sama Selatan-selatan dan Triangular (KSST) (South-South and Triangular Cooperation) program of Indonesia and Africa, which may explore the potential of economic cooperation in the future. The value of Indonesia-Africa trade, according to the Vice President, continues to increase. According to the latest statistics, in 2017 the value of Indonesia-Africa trade was USD 4.86 billion in exports and USD 3.97 billion in imports, with a trade balance surplus for Indonesia of USD 887.28 million and an upward trend of 15.25% from 2016. The main export commodities of RI to Africa include palm oil, processed food and beverages, soaps, paper, garments, and motor vehicles and spare parts. Meanwhile, Indonesia’s main import commodities from Africa are petroleum, cotton, cocoa beans, pulp, and chemicals for fertilizers and industry. The trade value still has great potential to keep growing, considering that African countries still need Indonesian export goods such as palm oil, motor vehicles, and mass transportation vehicles, in addition to existing ones such as Indomie. “Such effort needs to be supported by cooperation on export policy, connectivity, trade agreements, and infrastructure improvement,” said the Vice President in front of the invitees. Before officially opening IAF 2018, Vice President Jusuf Kalla advised that, in order for the hard work to be continued, cooperation agreements that have been made together should be followed up for the sake of the common welfare of Indonesia and Africa.
Expression of RB2/p130 tumor-suppressor gene in AIDS-related non-Hodgkin's lymphomas: implications for disease pathogenesis. In this study we examined 21 cases of AIDS-related lymphomas for genomic organization and expression of RB2/p130 oncosuppressor gene and compared the results with the proliferative features of these neoplasms. We found no mutations in the RB2/p130 gene and unusually high percentages of cells expressing nuclear pRb2/p130 in tumors with a high proliferative activity, such as AIDS-related lymphomas. These findings might suggest that a molecular mechanism usually observed in viral-linked oncogenesis could be involved. We performed in vitro and in vivo binding assays to investigate whether the human immunodeficiency virus (HIV) gene product Tat and Rb2/p130 could interact. The results of these assays revealed that the HIV-1 Tat protein binds specifically to pRb2/p130. This may result in the inactivation of its oncosuppressive properties and the induction of genes needed to proceed through the cell cycle including p107, cyclin A, and cyclin B. Using single-cell polymerase chain reaction (PCR) assay, we found HIV-1 DNA in the neoplastic cells of only 2 of the 21 cases examined, whereas PCR on whole tissue revealed HIV-1 DNA in all of the cases. Furthermore, a diffuse and nuclear stain was observed in tissue sections with anti-Tat monoclonal antibody. These findings are in accordance with the notion that soluble Tat protein could function as a biologically active extracellular protein released by infected cells and taken up readily by uninfected B cells. In conclusion, our results seem to suggest that pRb2/p130 oncosuppressor protein may be a target in the interaction between the HIV-1 gene products and host proteins.
The Association of Dietary Intake of Calcium and Vitamin D to Colorectal Cancer Risk among Iranian Population. Background: Vitamin D and calcium may have a protective effect against rectal cancer. Vitamin D is an important nutrient that is vital for regulating calcium absorption and bone mineralization. In this case-control study in Iran, we investigated the relationship between the dietary intake of vitamin D and calcium and the risk of rectal cancer. Methods: 363 subjects (162 cases and 201 controls) participated in the case-control study from March 2017 to November 2018. Dietary intake of calcium and vitamin D was calculated using a 148-item food-frequency questionnaire. Results: After adjusting for strong confounding factors, the multivariate odds ratio for dietary vitamin D intake among cases was OR=0.2, 95%CI 0.1-0.5, P-value <0.001. There was no association between calcium intake and rectal cancer. Conclusions: Taken together, a possible reduction in the risk of rectal cancer with dietary intake of vitamin D among Iranian patients was observed.
This weekend fans will be watching another BLAST Pro Series event, this time in Madrid. However, this event might have more implications than we think. After all, this might be the point where the Astralis era starts fading away. Why is that? Let’s dig in below.

Did skipping events really lead Astralis to this point?

One of the biggest controversies in recent times in CSGO has been Astralis’ participation, or lack thereof, in big events. The number one ranked team in the world has consistently skipped events in favour of BLAST Pro Series tours. Be it due to contractual obligations, since RFRSH owns both BLAST and Astralis, or not, it doesn’t matter. Astralis has skipped StarSeries Season 7 and IEM Sydney 2019, premier tournaments that they should’ve attended. Now Liquid, the second ranked team in the world, won IEM Sydney 2019. With this victory, the team also gained many valuable points in the HLTV world ranking, to the point where they got closer to Astralis than any other team has been this year. So, with a third event skip coming down the line, Astralis might see themselves going down in the ranking. And of course, this would be the first time in one year that Astralis would drop from first place.

Astralis isn’t invincible anymore, and BLAST is the perfect place to show this

BLAST Pro Series features a different event format that focuses on best-of-ones. BO1 matches have been criticized by fans and analysts for a long time, and for good reason. One-map games don’t allow teams to fully explore their tactics, which turns matches into momentum games above everything else. Astralis have achieved an amazing balance between firepower and tactical prowess, but those BO1s aren’t the best for the Danes. As we saw in the last event Astralis attended, BLAST Miami, MiBR was able to dominate the Danes. MiBR took the momentum and won over them with a 16 – 2 score. Liquid and FaZe capitalized on this and also beat Astralis in the same fashion. Coming to Madrid this weekend, ENCE and Na’Vi must be eager to follow in MiBR’s steps. After all, both teams know how to put their best players to use and test their opponents in every aspect. Astralis will surely be put through a drill in Madrid.

Even winning might not be enough to stop Liquid’s rise

Of course, the possibility of Astralis winning this event hasn’t been discarded. After all, they are the best team in the world, and their Miami result might just have been a bad day. However, even winning here might not be enough. As stated, Liquid is very close to Astralis in the world rankings. If the North Americans put on a decent show in Madrid, they will be really close to the Danes. Then, if they follow with victories at cs_summit 4 and DreamHack Masters Dallas, which Astralis will skip, Liquid will probably reach first place in the ranks. Even if not, they will still have another shot at reaching the top if they manage to qualify for the ECS Finals. Overall, Astralis’ days on top are numbered. Unless Liquid has a giant downfall starting from Madrid, Astralis will be taking a break from the throne. And Liquid’s downfall is not something to count on; after all, they just won IEM Sydney.

What does this mean for the Astralis era?

Falling from first place won’t instantly mean the end of an era. It will mean the end of a record that won’t be beaten any time soon, sure, but that is it. However, this will open the first major hole in Astralis’ reign.
Since they established their dominance, they have been beaten a few times but always came back stronger. This time, though, they don’t have events booked to show that the previous one was a fluke. Still, Liquid will be tested by fire should they take Astralis’ place. After all, after Dallas, Astralis will be present at the next three major events. While Liquid’s presence hasn’t been confirmed for the ECS Finals, they will be at EPL and ESL Cologne. Those two events will certainly be of utmost importance. Liquid knows that they can win, but so does Astralis. If Astralis win in the bigger picture, they might return to first place, if shakily. If Liquid wins, though, the Astralis era will be nearing its end. So, what this all means for the Astralis era is that simply missing out on events might’ve triggered a snowball effect. As we saw in Miami, teams are starting to figure them out. And while playing many events in a short while is bad, so is ignoring tournaments. This snowball effect will, of course, depend on Liquid.

The Astralis and Liquid rivalry is heating up as summer comes in

While Astralis’ dominance back at IEM Katowice 2019 was undisputed, this time around they have a real challenger. In fact, there’s a chance that at the upcoming summer Major we see Astralis arrive as the second-ranked team in the world. For CSGO viewers, this couldn’t be better. Fans have been waiting for a rival for Astralis for a long time, and the year-long dominance might be closer to its end than we think.
Q: Why does LINQ to SQL report a conflict on Windows Phone, when the database doesn't appear to have one?

I've got a web service that is returning objects for me, along with a candidate key, which I've marked up with:

[Column(IsPrimaryKey = true)]
public int EventId { get; set; }

All is fine with loading the data back from the webservice, but whilst iterating through the new & updated items to put them in a SQL CE database to act as a cache, like so:

foreach (var e in results)
{
    var ev = (from evt in context.Events
              where evt.EventId == e.EventId
              select evt).FirstOrDefault();
    if (ev == null)
    {
        // Brand new
        context.Events.InsertOnSubmit(e);
    }
    else
    {
        // Update data
        ...
    }
}

For some reason, it occasionally thinks an event is brand new but it throws an exception during the InsertOnSubmit:

System.InvalidOperationException was unhandled
Cannot add an entity that already exists.

I've pulled the database off of the emulator with the Windows Phone Power Tools, loaded the database up in Visual Studio, and there doesn't seem to be any conflicting value for the primary key, so why am I getting an exception that implies there is a conflict, when the debugger shows there aren't any cross-thread issues?

EDIT

One thing I did spot is that my entity has an overridden Equals() that didn't cover the primary key (it did a comparison on a natural key), and it appears the web service has two records on something that is documented as a candidate key. If I adjust the Equals method to account for the primary key as well, the exception changes to be a SqlCeException and it tells me that:

A duplicate value cannot be inserted into a unique index. [ Table name = SpecialEvent,Constraint name = PK_SpecialEvent ]

Even though the primary key still hasn't been duplicated, which leaves me more confused (especially as that type of exception cannot be easily caught).

EDIT2

I've even tried using a lock() {} around the code performing the update, but I'm still getting odd conflicts, so I'm confused why I'm occasionally getting conflicts, especially when the SDF doesn't reflect the same conflicts.

A: In my case, it turns out that it was down to a couple of factors combining -- my entities did not have a property with IsVersion = true, nor did they have UpdateCheck = UpdateCheck.Never on columns that did not make up part of my primary key. It appears there was a timing issue which resulted in it only trying to update when the columns matched what it was expecting the old values to be.
[Parallel DNA helices. Conformational analysis of regular poly(dG).poly(dC) helices with different variants of base binding]. We have performed a conformational analysis of DNA double helices with parallel directed backbone strands. The calculations were made for the homopolymers poly(dG).poly(dC). All possible models of base binding were checked. By potential energy optimization, the dihedral angles and helix parameters of stable conformations of parallel double polynucleotides were calculated. The dependences of the conformational energy on the base pair structure were studied. Possible structures of parallel helices with various nucleotide compositions are discussed.
Larry Brown: Time for Doc Rivers to move on Mark MurphySunday, June 30, 2013 Doc Rivers has contorted himself in an attempt to put responsibility for his departure as much on Danny Ainge and Celtics ownership as his own clear desire to leave. But he also sought out the advice of Bill Parcells, Lou Holtz and Rivers’ mentor, Larry Brown — advocates, one and all, of the idea that leaving a team is good, even beneficial. Better to make the first move than to pay for it later. Perhaps Rivers needed validation that it was OK to walk away. Brown, who has left more jobs than arguably any other coach in the history of the sport, joked with Rivers about his own reputation. Rivers laughed and said Brown was probably the last person he should be talking to. Brown’s advice was going to be predictable. But for a coach who once said he wanted to be the Jerry Sloan of Boston, Rivers’ staying power eroded over the last year. Brown said that in today’s coaching climate — where making the playoffs no longer guarantees job security — it’s the right of a coach to protect himself. “You know that those things change,” the 72-year-old Brown, now head coach at Southern Methodist University, said last week. “Look at the guys who got fired — George Karl, Vinny Del Negro, Larry Drew, Lionel Hollins, Alvin Gentry. “We went to the playoffs in Charlotte and I got fired,” Brown said of his last NBA job. “Doc’s not silly. We can talk about rebuilding, and I do understand that he was there for nine years, and I understand the relationship he had with the city and the team. “But there is absolutely no loyalty in the NBA anymore. Look at the new GMs who are coming in — a lot of them never even played ball. And now you have analytics ruling the way things are done,” he said. “I know that Doc and Danny (Ainge) were attached at the hip, but how do you know that wouldn’t change? It just doesn’t happen that way. I wanted to be like coach (Dean) Smith and stay in one place forever, believe it or not, but that’s just not how it works.” Self-preservation explains a lot about Brown. In addition to SMU, he has held college jobs at Kansas, where he won the 1988 national title, UCLA and, technically, Davidson, though he left the summer before what would have been his first season there (1969-70). Brown, fired by Michael Jordan from his Charlotte job midway through the 2010-11 season, has coached nine NBA teams. He won the 2004 NBA title in Detroit. He also coached the Carolina Cougars of the ABA. “I’ve left places where I had a good reason to leave, and there were some where I wanted to stay but was let go,” he said. For all of the bad history involving Rivers’ new team, the Clippers, Brown said he enjoyed his 11⁄2 seasons there. For starters, that’s when he got to know a very reluctant Clipper named Doc Rivers. And like it or not, Rivers felt reluctance this summer, for different reasons. But according to Brown, Rivers was extremely divided at the time of their conversation in June. “At the end of the day, when you see old coaches getting fired, that can be tough, and I could see that playing in Doc’s mind,” he said. “I told him, you’d better be in charge wherever you go. The worst experiences I’ve had was when there was a disconnect between the coach, general manager and owner. So it’s good to see that Doc has that kind of control now. “But I saw Danny’s press conference, and it was good to hear when he said he was Doc’s assistant coach, and Doc was his assistant GM — I know that had to be a special situation for Doc,” Brown said. 
“But I know Doc was really torn up with this thing. He was seriously talking about stepping away from coaching for a while, and I told him that when I stepped away and spent time watching other coaches coach, (it) was a real blessing for me. I learned. But watching also made me realize that I would never not coach.” Rivers may have reached the same conclusion, without stepping away. “Doc loved Boston, and his relationship with Danny,” Brown said. “I told him to do what was in his heart. I told him that stuff happens. You have to move ahead.”
--- abstract: 'Based on the framework of nonrelativistic quantum chromodynamics, we carry out next-to-leading-order (NLO) QCD corrections to the decay of $Z$ boson into $\chi_c$ and $\chi_b$, respectively. The branching ratio of $Z \to \chi_{c}(\chi_b)+X$ is about $10^{-5}(10^{-6})$. For the color-singlet (CS) $^3P_J^{[1]}$ state, the heavy quark-antiquark pair associated process serves as the leading role. However the process of $Z \to Q\bar{Q}[^3P_J^{[1]}]+g+g$ can also provide non-negligible contributions, especially for the $\chi_b$ cases. In the case of the color-octet (CO) $^3S_1^{[8]}$ state, the single-gluon-fragmentation diagrams that first appear at the NLO level can significantly enhance the leading-order results. Consequently the CO contributions account for a large proportion of the total decay widths. Moreover, including the CO contributions will thoroughly change the CS predictions on the ratios of $\Gamma_{\chi_{c1}}/\Gamma_{\chi_{c0}}$, $\Gamma_{\chi_{c2}}/\Gamma_{\chi_{c0}}$, $\Gamma_{\chi_{b1}}/\Gamma_{\chi_{b0}}$, and $\Gamma_{\chi_{b2}}/\Gamma_{\chi_{b0}}$, which can be regarded as an outstanding probe to distinguish between the CO and CS mechanism. Summing over all the feeddown contributions from $\chi_{c}$ and $\chi_b$, respectively, we find $\Gamma(Z \to J/\psi+X)|_{\chi_c-\textrm{feeddown}}=(0.28 - 2.4) \times 10^{-5}$ and $\Gamma(Z \to \Upsilon(1S)+X)|_{\chi_b-\textrm{feeddown}}=(0.15 - 0.49) \times 10^{-6}$.' author: - Zhan Sun$^1$ - 'Hong-Fei Zhang$^2$' title: 'Next-to-leading-order QCD corrections to the decay of $Z$ boson into $\chi_c(\chi_b)$' --- Introduction ============ As one of the most successful theories describing the production of heavy quarkonium, nonrelativistic quantum chromodynamics (NRQCD) [@Bodwin:1994jh] has proved its validity in many processes [@Braaten:1994vv; @Cho:1995ce; @Cho:1995vh; @Han:2014jya; @Zhang:2014ybe; @Gong:2013qka; @Feng:2015wka; @Han:2014kxa; @Wang:2012is; @Butenschoen:2009zy; @Sun:2017nly; @Sun:2017wxk]. Despite these successes, NRQCD still faces some challenges. For example the NRQCD predictions significantly overshoot the measured total cross section of $e^+e^- \to J/\psi+X_{\textrm{non}-c\bar{c}}$ released from the $BABAR$ and Belle collaborations [@Zhang:2009ym]. In addition, the polarization puzzle of the hadroproduced $J/\psi$ ($\psi(2S)$) is still under debate [@Butenschoen:2012px; @Chao:2012iv; @Gong:2012ug]. One key factor responsible for these problems is that there are three long distance matrix elements (LDMEs) to be determined, which will bring about difficulties in drawing a definite conclusion. In comparison with $J/\psi$, $\chi_c$ has its own advantages. First, within the NRQCD framework, in the expansion of $v$ (the typical relative velocity of quark and antiquark in quarkonium) we have $$\begin{aligned} |\chi_{QJ}\rangle=\mathcal O(1)|Q\bar{Q}[^3P_{J}^{[1]}]\rangle+\mathcal O(v)|Q\bar{Q}[^3S_{1}^{[8]}]g\rangle+...~.\end{aligned}$$ $^3S_1^{[8]}$ is the unique color-octet (CO) state involved at the leading-order (LO) accuracy in $v$. From this point of view, $\chi_c$ is more “clean" comparing to $J/\psi$. In the second place, since the branching ratio of $\chi_c \to J/\psi+\gamma$ is sizeable, the $\chi_c$ feeddown may have a significant effect on the yield and/or polarization of $J/\psi$. For instance including the $\chi_c$ feeddown will obviously make the polarization trend of the hadroproduced $J/\psi$ more transverse. 
On the experiment side, $\chi_c$ can be easily detected by hunting the ideal decay process, $\chi_c \to J/\psi \to \mu^+\mu^-$. In conclusion, $\chi_c$ is beneficial for studying heavy quarkonium, deserving a separate investigation. In the past few years, there have been a number of literatures concerning the studies of the $\chi_c$ and $\chi_b$ productions [@Cho:1995vh; @Cho:1995ce; @Chen:2014ahh; @Braaten:1999qk; @Sharma:2012dy; @Shao:2014fca; @Li:2011yc; @Feng:2015wka; @Han:2014jya]. Ma $et$ $al$. [@Ma:2010vd] for the first time accomplished the next-to-leading-order (NLO) QCD corrections to the $\chi_c$ hadroproductions. Later on Zhang $et$ $al$. [@Jia:2014jfa] carried out a global analysis of the copious experimental data on the $\chi_c$ hadroproduction and pointed out that almost all the existing measurements can be reproduced by the NLO predictions based on NRQCD. To further check the validity and universality of the $\chi_c$ related LDMEs, it is indispensable to utilize them in other processes. Considering that copious $Z$ boson events can be produced at LHC, the axial vector part of the $Z$-vertex allows for a wider variety of processes, and the relative large mass of $Z$ boson can make the perturbative calculations more reliable, we will for the first time perform a systematic study on the decay of $Z$ boson into $\chi_c$ within the framework of NRQCD. Due to the larger mass of the $b\bar{b}$ mesons, the typical coupling constant and relative velocity of bottomonium are smaller than those of charmonium, subsequently leading to better convergent results over the expansion in $\alpha_s$ and $v^2$ than the charmonium cases. Thus, in this article, the $\chi_b$ productions via $Z$ boson decay will also be systematically investigated. The rest of the paper is organized as follows: In Sec. II, we give a description on the calculation formalism. In Sec. III, the phenomenological results and discussions are presented. Section IV is reserved as a summary. Calculation Formalism ===================== ![image](Feyn1.eps){width="95.00000%"} ![\[fig:Feyn2\] Some simple Feynman diagrams for the $\textrm{NLO}^{*}$ processes of $^3S_1^{[8]}$.](Feyn2.eps){width="60.00000%"} ![\[fig:Feyn3\] Some simple Feynman diagrams for the processes of $^3P_J^{[1]}$, including $Z \to c\bar{c}[^3P_J^{[1]}]+g+g$ and $Z \to c\bar{c}[^3P_J^{[1]}]+c+\bar{c}$.](Feyn3.eps){width="60.00000%"} Within the NRQCD framework, the decay width of $Z \to \chi_c(\chi_b)+X$ can be written as: $$\begin{aligned} d\Gamma=\sum_{n}d\hat{\Gamma}_{n}\langle \mathcal O ^{H}(n)\rangle,\end{aligned}$$ where $d\hat{\Gamma}_n$ is the perturbative calculable short distance coefficients, representing the production of a configuration of the $Q\bar{Q}$ intermediate state with a quantum number $n(^{2S+1}L_J^{[1,8]})$. $\langle \mathcal O ^{H}(n)\rangle$ is the universal nonperturbative LDME. According to NRQCD, for $\chi_c$ and $\chi_b$ related processes, only two states should be taken into considerations at LO accuracy in $v$ , namely $^3S_1^{[8]}$ and $^3P_J^{[1]}$. 
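Spelled out for $\chi_{cJ}$ with these two states (this is just the factorization formula above made explicit, not an additional assumption), the decay width reads
$$\begin{aligned}
d\Gamma(Z \to \chi_{cJ}+X)=d\hat{\Gamma}_{^3P_J^{[1]}}\langle \mathcal O ^{\chi_{cJ}}(^3P_J^{[1]})\rangle+d\hat{\Gamma}_{^3S_1^{[8]}}\langle \mathcal O ^{\chi_{cJ}}(^3S_1^{[8]})\rangle,\end{aligned}$$
with higher Fock states suppressed by further powers of $v$.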
Taking $\chi_c$ as an example, up to $\alpha\alpha_s^2$ order, for $n=^3S_1^{[8]}$ we have $$\begin{aligned} \textrm{LO}:&Z& \to c\bar{c}[^3S_1^{[8]}]+g, \nonumber \\ \textrm{NLO}:&Z& \to c\bar{c}[^3S_1^{[8]}]+g~(\textrm{virtual}), \nonumber \\ &Z& \to c\bar{c}[^3S_1^{[8]}]+g+g, \nonumber \\ &Z& \to c\bar{c}[^3S_1^{[8]}]+u_g+\bar{u}_g~(\textrm{ghost}), \nonumber \\ &Z& \to c\bar{c}[^3S_1^{[8]}]+u+\bar{u}, \nonumber \\ &Z& \to c\bar{c}[^3S_1^{[8]}]+d(s)+\bar{d}(\bar{s}), \nonumber \\ \textrm{NLO}^{*}:&Z& \to c\bar{c}[^3S_1^{[8]}]+c+\bar{c}, \nonumber \\ &Z& \to c\bar{c}[^3S_1^{[8]}]+b+\bar{b}. \label{3s18 channels}\end{aligned}$$ The label “$\textrm{NLO}^{*}$" represents the heavy quark-antiquark pair associated processes, which are free of divergence. In the case of $n=^3P_J^{[1]}$, there are two involved channels as listed below: $$\begin{aligned} &Z& \to c\bar{c}[^3P_J^{[1]}]+g+g, \nonumber \\ &Z& \to c\bar{c}[^3P_J^{[1]}]+c+\bar{c}. \label{3pj1 channels}\end{aligned}$$ Some simple Feynman diagrams corresponding to Eqs. (\[3s18 channels\]) and (\[3pj1 channels\]) are presented in Figs. \[fig:Feyn1\], \[fig:Feyn2\], and \[fig:Feyn3\], including 51 diagrams for $^3S_1^{[8]}$ (2 LO diagrams, 6 counterterms, 15 one-loop, 18 diagrams for real corrections, and 10 NLO\* diagrams), and 10 diagrams for $^3P_J^{[1]}$. Note that, as shown in Eq. (\[3s18 channels\]), the real correction process $Z \to c\bar{c}[^3S_1^{[8]}]+q+\bar{q}$ has been divided into two categories, namely $q=u$ and $q=d(s)$. In addition, in Fig. 1(e) the diagrams involving fermion loops of $u,c$ and $d,s,b$ are also divided into two groups. For the $\chi_b$ cases, one should replace the charm quark of Eqs. (\[3s18 channels\]) and (\[3pj1 channels\]) with the bottom quark. Of special attention is that the coupling of $Z c\bar{c}$ is different from $Zb\bar{b}$. In the following, we will present the calculation formalisms for $Z \to Q\bar{Q}[^3S_1^{[8]}]+X$ and $Z \to Q\bar{Q}[^3P_J^{[1]}]+X$, respectively. $Z \to Q\bar{Q}[^3S_1^{[8]}]+X$ ------------------------------- To the next-to-leading order in $\alpha_s$, the decay width of $Z \to Q\bar{Q}[^3S_1^{[8]}]+X$ is $$\begin{aligned} \Gamma=\Gamma_{\textrm{Born}}+\Gamma_{\textrm{Virtual}}+\Gamma_{\textrm{Real}}+\mathcal O(\alpha\alpha_s^3),\end{aligned}$$ where $$\begin{aligned} &&\Gamma_{\textrm{Virtual}}=\Gamma_{\textrm{Loop}}+\Gamma_{\textrm{CT}}, \nonumber \\ &&\Gamma_{\textrm{Real}}=\Gamma_{\textrm{S}}+\Gamma_{\textrm{HC}}+\Gamma_{\textrm{H}\overline{\textrm{C}}}.\end{aligned}$$ $\Gamma_{\textrm{Virtual}}$ is the virtual corrections, consisting of the contributions from the one-loop diagrams ($\Gamma_{\textrm{Loop}}$) and the counterterms ($\Gamma_{\textrm{CT}}$). $\Gamma_{\textrm{Real}}$ means the real corrections, including the soft terms ($\Gamma_{S}$), hard-collinear terms $(\Gamma_{\textrm{HC}})$, and hard-noncollinear terms $(\Gamma_{\textrm{H}\overline{\textrm{C}}})$. For the purpose of isolating the ultraviolet (UV) and infrared (IR) divergences, we adopt the dimensional regularization with $D=4-2\epsilon$. The on-mass-shell (OS) scheme is employed to set the renormalization constants for the heavy quark mass ($Z_m$), heavy quark filed ($Z_2$), and gluon filed ($Z_3$). 
The modified minimal-subtraction ($\overline{MS}$) scheme is for the QCD gauge coupling ($Z_g$), as listed below ($Q=c,b$) [@Klasen:2004tz] $$\begin{aligned} \delta Z_{m}^{OS}&=& -3 C_{F} \frac{\alpha_s N_{\epsilon}}{4\pi}\left[\frac{1}{\epsilon_{\textrm{UV}}}-\gamma_{E}+\textrm{ln}\frac{4 \pi \mu_r^2}{m_Q^2}+\frac{4}{3}+\mathcal O(\epsilon)\right], \nonumber \\ \delta Z_{2}^{OS}&=& - C_{F} \frac{\alpha_s N_{\epsilon}}{4\pi}\left[\frac{1}{\epsilon_{\textrm{UV}}}+\frac{2}{\epsilon_{\textrm{IR}}}-3 \gamma_{E}+3 \textrm{ln}\frac{4 \pi \mu_r^2}{m_Q^2} \right. \nonumber\\ && \left.+4+\mathcal O(\epsilon)\right], \nonumber \\ \delta Z_{3}^{\overline{MS}}&=& \frac{\alpha_s N_{\epsilon}}{4\pi}\left[\beta_{0}(n_{lf})-2 C_{A}\right]\left[(\frac{1}{\epsilon_{\textrm{UV}}}-\frac{1}{\epsilon_{\textrm{IR}}}) \right. \nonumber\\ && \left. -\frac{4}{3}T_F(\frac{1}{\epsilon_{\textrm{UV}}}-\gamma_E+\textrm{ln}\frac{4\pi\mu_r^2}{m_c^2}) \right. \nonumber\\ && \left. -\frac{4}{3}T_F(\frac{1}{\epsilon_{\textrm{UV}}}-\gamma_E+\textrm{ln}\frac{4\pi\mu_r^2}{m_b^2})+\mathcal O(\epsilon)\right], \nonumber \\ \delta Z_{g}^{\overline{MS}}&=& -\frac{\beta_{0}(n_f)}{2}\frac{\alpha_s N_{\epsilon}}{4\pi}\left[\frac{1} {\epsilon_{\textrm{UV}}}-\gamma_{E}+\textrm{ln}(4\pi)+\mathcal O(\epsilon)\right], \label{CT}\end{aligned}$$ where $\gamma_E$ is the Euler’s constant, $\beta_{0}(n_f)=\frac{11}{3}C_A-\frac{4}{3}T_Fn_f$ is the one-loop coefficient of the $\beta$-function, and $\beta_{0}(n_{lf})$ is identical to $\frac{11}{3}C_A-\frac{4}{3}T_Fn_{lf}$. $n_f$ and $n_{lf}$ are the number of active quark flavors and light quark flavors, respectively. $N_{\epsilon}= \Gamma[1-\epsilon] /({4\pi\mu_r^2}/{(4m_c^2)})^{\epsilon}$. In ${\rm SU}(3)_c$, the color factors are given by $T_F=\frac{1}{2}$, $C_F=\frac{4}{3}$, and $C_A=3$. To subtract the IR divergences in $\Gamma_{\textrm{Real}}$, the two-cutoff slicing strategy [@Harris:2001sx] is utilized. To calculate the D-dimension trace of the fermion loop involving $\gamma_5$, under the scheme described in [@Korner:1991sx], we write down all the amplitudes from the same starting point (such as the $Z$-vertex) and abandon the cyclicity. As a crosscheck for the correctness of the treatments on $\gamma_5$, we have calculated the QCD NLO corrections to the similar process, $Z \to c\bar{c}[^3S_1^{[1]}]+\gamma$, obtaining exactly the same $K$ factor as in [@Wang:2013ywc]. $Z \to Q\bar{Q}[^3P_J^{[1]}]+X$ ------------------------------- The heavy quark-antiquark associated process $Z \to Q\bar{Q}[^3P_J^{[1]}]+Q+\bar{Q}$ ($Q=c,b$) is finite, thus one can calculate it directly. Now we are to deal with the other process of $Z \to Q\bar{Q}[^3P_J^{[1]}]+g+g$ ($Q=c,b$), which has soft singularities. Taking $\chi_c$ as an example, we first divide $\Gamma(Z \to c\bar{c}[^3P_J^{[1]}]+g+g)$ into two terms, $$\begin{aligned} &&d\Gamma(Z \to c\bar{c}[^3P_J^{[1]}]+g+g)=d\hat{\Gamma}_{^3P_J^{[1]}} \langle \mathcal O^{\chi_c}(^3P_J^{[1]}) \rangle+d\hat{\Gamma}_{^3S_1^{[8]}}^{LO} \langle \mathcal O^{\chi_c}(^3S_1^{[8]}) \rangle ^{NLO}.\end{aligned}$$ Then we have $$\begin{aligned} d\hat{\Gamma}_{^3P_J^{[1]}} \langle \mathcal O^{\chi_c}(^3P_J^{[1]}) \rangle&=&d\Gamma(Z \to c\bar{c}[^3P_J^{[1]}]+g+g) -d\hat{\Gamma}_{^3S_1^{[8]}}^{LO} \langle \mathcal O^{\chi_c}(^3S_1^{[8]}) \rangle ^{NLO} \nonumber \\ &=&d{\Gamma}_F+(d{\Gamma}_S-d\hat{\Gamma}_{^3S_1^{[8]}}^{LO} \langle \mathcal O^{\chi_c}(^3S_1^{[8]}) \rangle ^{NLO}) \nonumber \\ &=&d{\Gamma}_F+d{\Gamma}^{*}. 
\label{3pj1 SDC}\end{aligned}$$ $d{\Gamma}^{*}$ denotes the sum of $d{\Gamma}_S$ and $-d\hat{\Gamma}_{^3S_1^{[8]}}^{LO} \langle \mathcal O^{\chi_c}(^3S_1^{[8]}) \rangle ^{NLO}$. $d\Gamma_F$ is the finite terms in $d\Gamma(Z \to c\bar{c}[^3P_J^{[1]}]+g+g)$, and $d\Gamma_S$ is the soft part which can be written as $$\begin{aligned} &&d{\Gamma}_S=-\frac{\alpha_s}{3 \pi m_c} u^{s}_\epsilon \frac{N_c^2-1}{N_c} d\hat{\Gamma}^{LO}_{^3S_1^{[8]}} \langle \mathcal O^{\chi_c}(^3P_J^{[1]}) \rangle, \label{3pj1 soft}\end{aligned}$$ with $$\begin{aligned} u^{s}_\epsilon=\frac{1}{\epsilon_{IR}}+\frac{E}{|\textbf{p}|} \textrm{ln}(\frac{E+|\textbf{p}|}{E-|\textbf{p}|}) + \textrm{ln}(\frac{4 \pi \mu_r^2}{s\delta_s^2})-\gamma_E-\frac{1}{3}. \label{us}\end{aligned}$$ $N_c$ is identical to 3 for $SU(3)$ gauge field. $E$ and $\textbf{p}$ denote the energy and 3-momentum of $\chi_c$, respectively. $\delta_s$ is the usual “soft cut" employed to impose an amputation on the energy of the emitted gluon. Now we are to calculate the transition rate of $^3S_1^{[8]}$ into $^3P_J^{[1]}$. From Ref. [@Jia:2014jfa], under the dimensional regularization scheme we have $$\begin{aligned} \langle \mathcal O^{\chi_c}(^3S_1^{[8]}) \rangle ^{NLO}=-\frac{\alpha_s}{3 \pi m_c} u^{c}_\epsilon \frac{N_c^2-1}{N_c} \langle \mathcal O^{\chi_c}(^3P_J^{[1]}) \rangle. \label{3s18to3pj1}\\end{aligned}$$ On the basis of $\mu_{\Lambda}$-cutoff scheme [@Jia:2014jfa], $u^{c}_\epsilon$ has the form of $$\begin{aligned} u^{s}_\epsilon=\frac{1}{\epsilon_{IR}}-\gamma_E-\frac{1}{3}- \textrm{ln}(\frac{4 \pi \mu_r^2}{\mu_{\Lambda}^2}). \label{uc}\end{aligned}$$ $\mu_{\Lambda}$ is the upper bound of the integrated gluon energy, rising from the renormalization of the LDME. Substituting Eqs. (\[3pj1 soft\]), (\[us\]), (\[3s18to3pj1\]), and (\[uc\]) into Eq. (\[3pj1 SDC\]), the soft singularities in $d{\Gamma}_S$ and $d\hat{\Gamma}_{^3S_1^{[8]}}^{LO} \langle \mathcal O^{\chi_c}(^3S_1^{[8]}) \rangle ^{NLO}$ cancel each other. Consequently $d{\Gamma}^{*}$ is free of divergence. For the $\chi_b$ cases, one should replace the charm quark with the bottom quark. In addition, the $Zc\bar{c}$ coupling should changed into the $Zb\bar{b}$ form. Numerical results and discussions ================================= Before presenting the phenomenological results, we first demonstrate the choices of the parameters in our calculations. To keep the gauge invariance, the masses of $\chi_c$ and $\chi_b$ are set to be $2m_c$ and $2m_b$, respectively. $m_c=1.5 \pm 0.1$ GeV and $m_b=4.9 \pm 0.2$ GeV. $m_Z=91.1876$ GeV. $\alpha=1/137$. In the calculations for the NLO, the $\textrm{NLO}^{*}$, and the two $^3P_J^{[1]}$ processes, we employ the two-loop $\alpha_s$ running, and one-loop $\alpha_s$ running for LO. We take $m_c(m_b)$ as the value of $\mu_{\Lambda}$ for $\chi_c(\chi_{b})$. The values of $\langle \mathcal O^{\chi_{c}(\chi_{b})}(^3S_1^8) \rangle$ are taken as $$\begin{aligned} &&\langle \mathcal O^{\chi_{c0}}(^3S_1^{[8]}) \rangle=2.15 \times 10^{-3}~\textrm{GeV}^3, \nonumber \\ &&\langle \mathcal O^{\chi_{b0}}(^3S_1^{[8]}) \rangle=9.40 \times 10^{-3}~\textrm{GeV}^3,\end{aligned}$$ from Refs. [@Feng:2015wka] and [@Jia:2014jfa]. In the case of the $^3P_J^{[1]}$ channels, the relation $\langle \mathcal O^{\chi_{cJ}(\chi_{bJ})}(^3P_J^{[1]}) \rangle=\frac{9}{2\pi}(2J+1)|R^{'}_p(0)|^2$ is adopted with $|R^{'}_p(0)|^2=0.075~\textrm{GeV}^5$ for $\chi_c$ and $|R^{'}_p(0)|^2=1.417~\textrm{GeV}^5$ for $\chi_b$. 
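As a quick numerical illustration of the sizes involved (simply evaluating the relation quoted above, shown for $J=0$; the $J=1,2$ values follow from the factor $2J+1$),
$$\begin{aligned}
\langle \mathcal O^{\chi_{c0}}(^3P_0^{[1]}) \rangle=\frac{9}{2\pi}|R^{'}_p(0)|^2 \simeq \frac{9}{2\pi}\times 0.075~\textrm{GeV}^5\simeq 0.11~\textrm{GeV}^5, \nonumber \\
\langle \mathcal O^{\chi_{b0}}(^3P_0^{[1]}) \rangle=\frac{9}{2\pi}|R^{'}_p(0)|^2 \simeq \frac{9}{2\pi}\times 1.417~\textrm{GeV}^5\simeq 2.03~\textrm{GeV}^5.\end{aligned}$$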
![image](IIorder.eps){width="49.00000%"} ![image](Iorder.eps){width="49.00000%"} ![image](scut3s18.eps){width="49.00000%"} ![image](ccut3s18.eps){width="49.00000%"} ![image](scut3p01.eps){width="49.00000%"} ![image](scut3p11.eps){width="49.00000%"} ![image](scut3p21.eps){width="49.00000%"} In our calculations, the mathematica package $\textbf{Malt@FDC}$ [@Feng:2017bdu; @Sun:2017nly; @Sun:2017wxk; @Sun:2018rgx] is employed to obtain $\Gamma_{\textrm{Virtual}}$, $\Gamma_{\textrm{\textrm{S}}}$ and $\Gamma_{\textrm{\textrm{HC}}}$. $\textbf{FDC}$ [@Wang:2004du] package serves as the agent to evaluate the contributions of the hard-noncollinear part of the real corrections, namely $\Gamma_{\textrm{H}\overline{\textrm{C}}}$. Both the cancellation of divergence and the independence on cutoff have been checked carefully. By taking $\chi_c$ as an example, we present the verifications in Figs. \[fig:div\], \[fig:cut3s18\], and \[fig:cut3pj1\]. Note that, for $Z \to c\bar{c}[^3S_1^{[8]}]+q+\bar{q}$ (as displayed in Figures. \[fig:Feyn1\](i) and \[fig:Feyn1\](j)), the contributions of the single-gluon-fragmentation (SGF) diagrams (\[fig:Feyn1\](j)) are free of divergence. Moreover, the SGF contribution is about 2 orders of magnitude bigger than that of Fig. \[fig:Feyn1\](i). In order to clearly demonstrate the verification of the independence on the cutoff parameters ($\delta_s,\delta_c$), the $\Gamma_{\textrm{H}\overline{\textrm{C}}}$ in Fig. \[fig:cut3s18\] does not include the SGF contributions. Phenomenological results for $\chi_c$ ------------------------------------- The NRQCD predictions for $\Gamma(Z \to \chi_{cJ}+X)$ ($J=0,1,2$) are demonstrated in Tables. \[xc0\], \[xc1\], and \[xc2\], respectively. $\mu_r$ $m_c(\textrm{GeV})$ $^3S_1^{[8]}|_{\textrm{LO}}$ $^3S_1^{[8]}|_{\textrm{NLO}}$ $^3S_1^{[8]}|_{\textrm{NLO}^{*}}$ $^3P_0^{[1]}|_{gg}$ $^3P_0^{[1]}|_{c\bar{c}}$ $\Gamma_{\textrm{total}}$ $\textrm{Br}(10^{-5})$ --------- --------------------- ------------------------------ ------------------------------- ----------------------------------- ----------------------- --------------------------- --------------------------- ------------------------ $~$ $1.4$ $1.20 \times 10^{-2}$ $14.9$ $8.26$ $5.63 \times 10^{-2}$ $27.0$ $50.2$ $2.02$ $2m_c$ $1.5$ $1.09 \times 10^{-2}$ $10.9$ $6.05$ $4.27 \times 10^{-2}$ $18.1$ $35.1$ $1.41$ $~$ $1.6$ $9.99 \times 10^{-3}$ $8.12$ $4.53$ $3.30 \times 10^{-2}$ $12.5$ $25.1$ $1.01$ $~$ $1.4$ $5.30 \times 10^{-3}$ $2.99$ $1.66$ $1.13 \times 10^{-2}$ $5.43$ $10.1$ $0.41$ $m_Z$ $1.5$ $4.95 \times 10^{-3}$ $2.31$ $1.28$ $9.06 \times 10^{-3}$ $3.84$ $7.45$ $0.30$ $~$ $1.6$ $4.64 \times 10^{-3}$ $1.82$ $1.01$ $7.36 \times 10^{-3}$ $2.78$ $5.61$ $0.23$ $\mu_r$ $m_c(\textrm{GeV})$ $^3S_1^{[8]}|_{\textrm{LO}}$ $^3S_1^{[8]}|_{\textrm{NLO}}$ $^3S_1^{[8]}|_{\textrm{NLO}^{*}}$ $^3P_1^{[1]}|_{gg}$ $^3P_1^{[1]}|_{c\bar{c}}$ $\Gamma_{\textrm{total}}$ $\textrm{Br}(10^{-5})$ --------- --------------------- ------------------------------ ------------------------------- ----------------------------------- --------------------- --------------------------- --------------------------- ------------------------ $~$ $1.4$ $3.60 \times 10^{-2}$ $44.6$ $24.8$ $1.47$ $29.9$ $101$ $4.06$ $2m_c$ $1.5$ $3.27 \times 10^{-2}$ $32.6$ $18.2$ $1.09$ $20.0$ $71.9$ $2.89$ $~$ $1.6$ $3.00 \times 10^{-2}$ $24.4$ $13.6$ $0.819$ $13.7$ $52.5$ $2.11$ $~$ $1.4$ $1.59 \times 10^{-2}$ $8.98$ $4.98$ $0.296$ $6.01$ $20.3$ $0.82$ $m_Z$ $1.5$ $1.49 \times 10^{-2}$ $6.94$ $3.85$ $0.231$ $4.23$ $15.3$ $0.61$ $~$ $1.6$ 
$1.39 \times 10^{-2}$ $5.45$ $3.03$ $0.183$ $3.05$ $11.7$ $0.47$ $\mu_r$ $m_c(\textrm{GeV})$ $^3S_1^{[8]}|_{\textrm{LO}}$ $^3S_1^{[8]}|_{\textrm{NLO}}$ $^3S_1^{[8]}|_{\textrm{NLO}^{*}}$ $^3P_2^{[1]}|_{gg}$ $^3P_2^{[1]}|_{c\bar{c}}$ $\Gamma_{\textrm{total}}$ $\textrm{Br}(10^{-5})$ --------- --------------------- ------------------------------ ------------------------------- ----------------------------------- --------------------- --------------------------- --------------------------- ------------------------ $~$ $1.4$ $6.00 \times 10^{-2}$ $74.3$ $41.3$ $1.03$ $11.7$ $128$ $5.14$ $2m_c$ $1.5$ $5.46 \times 10^{-2}$ $54.4$ $30.3$ $0.780$ $7.84$ $93.2$ $3.74$ $~$ $1.6$ $4.99 \times 10^{-2}$ $40.6$ $22.6$ $0.601$ $5.39$ $69.2$ $2.78$ $~$ $1.4$ $2.65 \times 10^{-2}$ $15.0$ $8.30$ $0.208$ $2.35$ $25.8$ $1.04$ $m_Z$ $1.5$ $2.48 \times 10^{-2}$ $11.6$ $6.42$ $0.166$ $1.66$ $19.8$ $0.80$ $~$ $1.6$ $2.32 \times 10^{-2}$ $9.08$ $5.05$ $0.134$ $1.20$ $15.5$ $0.62$ One can see that the branching rations are on the order of $10^{-5}$, indicating a detectable prospect of these decay processes at LHC or other platforms. To be specific, considering the uncertainties induced by the choices of the values of $\mu_r(2m_c \sim M_Z)$ and $m_c(1.4\ \sim 1.6~\textrm{GeV})$, we have $$\begin{aligned} \textrm{Br}(Z \to \chi_{c0}+X)&=&(0.23 - 2.02) \times 10^{-5}, \nonumber \\ \textrm{Br}(Z \to \chi_{c1}+X)&=&(0.47 - 4.06) \times 10^{-5}, \nonumber \\ \textrm{Br}(Z \to \chi_{c2}+X)&=&(0.62 - 5.14) \times 10^{-5}.\end{aligned}$$ For the color-singlet $^3P_J^{[1]}$ ($J=0,1,2$) state cases, the process of $Z \to c\bar{c}[^3P_J^{[1]}]+c+\bar{c}$ serves as the leading role in the total CS prediction, due to the $c$-quark fragmentation mechanism. The other CS process, namely $Z \to c\bar{c}[^3P_J^{[1]}]+g+g$, contributes moderately, accounting for about $0.24\%,5\%$, and $10\%$ of the total CS prediction for $J=0,1,2$, respectively. In the case of the color-octet $^3S_1^{[8]}$ state, the QCD NLO corrections can enhance the LO results significantly, by $2 - 3$ orders. This can be attributed to the kinematic enhancements via the $^3S_1^{[8]}$ single-gluon-fragmentation diagrams, including the one-loop triangle anomalous diagrams (Fig. \[fig:Feyn1\](e)) and the diagrams associated with a final $q\bar{q}$ ($q=u,d,s$) pair (Fig. \[fig:Feyn1\](j)), which first emerge at the NLO level. By the same token, the $\textrm{NLO}^{*}$ channels can also provide considerable contributions, about one half of the NLO results. Consequently the CO channels will play a vital role in the decay process of $Z \to \chi_c+X$. To show the CO significance obviously, we introduce the following ratios $$\begin{aligned} \Gamma^{\chi_{c0}}_{\textrm{CO}} / \Gamma^{\chi_{c0}}_{\textrm{\textrm{CS+CO}}}&=&(46.1 - 50.3)\%, \nonumber \\ \Gamma^{\chi_{c1}}_{\textrm{CO}} / \Gamma^{\chi_{c1}}_{\textrm{\textrm{CS+CO}}}&=&(68.9 - 72.4)\%, \nonumber \\ \Gamma^{\chi_{c2}}_{\textrm{CO}} / \Gamma^{\chi_{c2}}_{\textrm{\textrm{CS+CO}}}&=&(90.1 - 91.4)\%.\end{aligned}$$ $\Gamma^{\chi_{cJ}}_{\textrm{CO}}$ ($J=0,1,2$) denotes the sum of the NLO and $\textrm{NLO}^{*}$ result. 
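As an illustrative cross-check of these fractions, taking the central values in Table \[xc0\] ($\mu_r=2m_c$, $m_c=1.5$ GeV) gives
$$\begin{aligned}
\Gamma^{\chi_{c0}}_{\textrm{CO}} / \Gamma^{\chi_{c0}}_{\textrm{CS+CO}}=(10.9+6.05)/35.1\simeq 48\%,\end{aligned}$$
which indeed falls inside the quoted $(46.1 - 50.3)\%$ range.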
In addition to the crucial impact on the total widths, the CO channels also significantly influence the predictions for the ratios $\Gamma_{\chi_{c1}}/\Gamma_{\chi_{c0}}$ and $\Gamma_{\chi_{c2}}/\Gamma_{\chi_{c0}}$, as shown below $$\begin{aligned}
\textrm{CS}&:&~~~\Gamma_{\chi_{c1}} / \Gamma_{\chi_{c0}} = 1.159 - 1.162, \nonumber \\
\textrm{CS+CO}&:&~~~\Gamma_{\chi_{c1}} / \Gamma_{\chi_{c0}} = 2.007 - 2.087, \nonumber \\
\textrm{CS}&:&~~~\Gamma_{\chi_{c2}} / \Gamma_{\chi_{c0}} = 0.471 - 0.480, \nonumber \\
\textrm{CS+CO}&:&~~~\Gamma_{\chi_{c2}} / \Gamma_{\chi_{c0}} = 2.558 - 2.756.\end{aligned}$$ One can see that the CS results are thoroughly changed by including the CO states. These conspicuous differences can be regarded as an outstanding probe to distinguish between the CO and CS mechanisms. Considering that the branching ratios of $\chi_c$ to $J/\psi$ are not small [@Tanabashi:2018oca], $$\begin{aligned}
\textrm{Br}(\chi_{c0} \to J/\psi+\gamma)&=&1.4\%, \nonumber \\
\textrm{Br}(\chi_{c1} \to J/\psi+\gamma)&=&34.3\%, \nonumber \\
\textrm{Br}(\chi_{c2} \to J/\psi+\gamma)&=&19.0\%,\end{aligned}$$ the $\chi_c$ feeddown may have a substantial impact on the production of $J/\psi$. Adding together the contributions from $\chi_{c0}$, $\chi_{c1}$, and $\chi_{c2}$ (i.e., weighting each $\textrm{Br}(Z \to \chi_{cJ}+X)$ by the corresponding $\textrm{Br}(\chi_{cJ} \to J/\psi+\gamma)$ and summing over $J$), we finally obtain $$\begin{aligned}
\Gamma(Z \to J/\psi+X)|_{\chi_c-\textrm{feeddown}}=(0.28 \sim 2.4) \times 10^{-5}.\end{aligned}$$ This result is about one order of magnitude smaller than the experimental data released by the L3 Collaboration at LEP [@Acciarri:1998iy].

Phenomenological results for $\chi_b$
-------------------------------------

  $\mu_r$   $m_b(\textrm{GeV})$   $^3S_1^{[8]}|_{\textrm{LO}}$   $^3S_1^{[8]}|_{\textrm{NLO}}$   $^3S_1^{[8]}|_{\textrm{NLO}^{*}}$   $^3P_0^{[1]}|_{gg}$   $^3P_0^{[1]}|_{b\bar{b}}$   $\Gamma_{\textrm{total}}$   $\textrm{Br}(10^{-7})$
  --------- --------------------- ------------------------------ ------------------------------- ----------------------------------- ----------------------- --------------------------- --------------------------- ------------------------
  $~$       $4.7$   $9.76 \times 10^{-3}$   $0.272$   $0.148$                 $8.84 \times 10^{-3}$   $0.677$   $1.11$    $4.46$
  $2m_b$    $4.9$   $9.26 \times 10^{-3}$   $0.225$   $0.121$                 $7.46 \times 10^{-3}$   $0.535$   $0.888$   $3.57$
  $~$       $5.1$   $8.82 \times 10^{-3}$   $0.187$   $9.95 \times 10^{-2}$   $6.34 \times 10^{-3}$   $0.426$   $0.719$   $2.89$
  $~$       $4.7$   $6.34 \times 10^{-3}$   $0.119$   $6.26 \times 10^{-2}$   $3.74 \times 10^{-3}$   $0.286$   $0.472$   $1.90$
  $m_Z$     $4.9$   $6.08 \times 10^{-3}$   $0.101$   $5.22 \times 10^{-2}$   $3.22 \times 10^{-3}$   $0.231$   $0.387$   $1.55$
  $~$       $5.1$   $5.85 \times 10^{-3}$   $8.62 \times 10^{-2}$   $4.37 \times 10^{-2}$   $2.79 \times 10^{-3}$   $0.187$   $0.320$   $1.29$

  $\mu_r$   $m_b(\textrm{GeV})$   $^3S_1^{[8]}|_{\textrm{LO}}$   $^3S_1^{[8]}|_{\textrm{NLO}}$   $^3S_1^{[8]}|_{\textrm{NLO}^{*}}$   $^3P_1^{[1]}|_{gg}$   $^3P_1^{[1]}|_{b\bar{b}}$   $\Gamma_{\textrm{total}}$   $\textrm{Br}(10^{-7})$
  --------- --------------------- ------------------------------ ------------------------------- ----------------------------------- ----------------------- --------------------------- --------------------------- ------------------------
  $~$       $4.7$   $2.92 \times 10^{-2}$   $0.814$   $0.445$   $0.153$                 $0.653$   $2.06$    $8.28$
  $2m_b$    $4.9$   $2.78 \times 10^{-2}$   $0.674$   $0.362$   $0.128$                 $0.512$   $1.68$    $6.74$
  $~$       $5.1$   $2.64 \times 10^{-2}$   $0.562$   $0.299$   $0.109$                 $0.405$   $1.37$    $5.50$
  $~$       $4.7$   $1.90 \times 10^{-2}$   $0.357$   $0.188$   $6.47 \times 10^{-2}$   $0.276$   $0.886$   $3.56$
  $m_Z$     $4.9$   $1.83 \times 10^{-2}$   $0.303$   $0.157$   $5.54 \times 10^{-2}$   $0.221$   $0.736$   $2.96$
  $~$       $5.1$   $1.75 \times 10^{-2}$   $0.258$   $0.131$   $4.77 \times 10^{-2}$   $0.178$   $0.616$   $2.47$

  $\mu_r$   $m_b(\textrm{GeV})$   $^3S_1^{[8]}|_{\textrm{LO}}$   $^3S_1^{[8]}|_{\textrm{NLO}}$   $^3S_1^{[8]}|_{\textrm{NLO}^{*}}$   $^3P_2^{[1]}|_{gg}$   $^3P_2^{[1]}|_{b\bar{b}}$   $\Gamma_{\textrm{total}}$   $\textrm{Br}(10^{-7})$
  --------- --------------------- ------------------------------ ------------------------------- ----------------------------------- ----------------------- --------------------------- --------------------------- ------------------------
  $~$       $4.7$   $4.88 \times 10^{-2}$   $1.360$   $0.743$   $0.160$                 $0.270$                 $2.53$    $10.2$
  $2m_b$    $4.9$   $4.63 \times 10^{-2}$   $1.130$   $0.605$   $0.136$                 $0.212$                 $2.08$    $8.35$
  $~$       $5.1$   $4.41 \times 10^{-2}$   $0.937$   $0.497$   $0.116$                 $0.168$                 $1.72$    $6.91$
  $~$       $4.7$   $3.18 \times 10^{-2}$   $0.595$   $0.312$   $6.78 \times 10^{-2}$   $0.114$                 $1.09$    $4.38$
  $m_Z$     $4.9$   $3.04 \times 10^{-2}$   $0.504$   $0.261$   $5.87 \times 10^{-2}$   $9.12 \times 10^{-2}$   $0.915$   $3.69$
  $~$       $5.1$   $2.92 \times 10^{-2}$   $0.430$   $0.219$   $5.12 \times 10^{-2}$   $7.36 \times 10^{-2}$   $0.774$   $3.11$

Based on NRQCD, the predicted decay widths for $Z \to \chi_{bJ}+X$ ($J=0,1,2$) are presented in Tables \[xb0\], \[xb1\], and \[xb2\]. It is observed that the branching ratio for $Z \to \chi_{bJ}+X$ is around $10^{-7} - 10^{-6}$. Taking into account the uncertainties induced by $\mu_r$ ($2m_b \sim M_Z$) and the mass of the $b$ quark ($4.7 \sim 5.1$ GeV), we have $$\begin{aligned}
\textrm{Br}(Z \to \chi_{b0}+X)&=&(1.29 - 4.46) \times 10^{-7}, \nonumber \\
\textrm{Br}(Z \to \chi_{b1}+X)&=&(2.47 - 8.28) \times 10^{-7}, \nonumber \\
\textrm{Br}(Z \to \chi_{b2}+X)&=&(0.31 - 1.02) \times 10^{-6}.\end{aligned}$$ In contrast to the moderate size, noted above, of the contributions via $Z \to c\bar{c}[^3P_J^{[1]}]+g+g$, the channel $Z \to b\bar{b}[^3P_J^{[1]}]+g+g$ contributes significantly, $$\begin{aligned}
&&\Gamma_{^3P_0^{[1]}}^{gg}/\Gamma_{^3P_0^{[1]}}^{\textrm{CS}} \sim 1.5\%, \nonumber \\
&&\Gamma_{^3P_1^{[1]}}^{gg}/\Gamma_{^3P_1^{[1]}}^{\textrm{CS}} \sim 20\%, \nonumber \\
&&\Gamma_{^3P_2^{[1]}}^{gg}/\Gamma_{^3P_2^{[1]}}^{\textrm{CS}} \sim 40\%.\end{aligned}$$ Here, $\Gamma_{^3P_J^{[1]}}^{gg}$ denotes $\Gamma(Z \to b\bar{b}[{^3P_J^{[1]}}]+g+g)$, and $\Gamma_{^3P_J^{[1]}}^{\textrm{CS}}$ is the total color-singlet prediction, including both $\Gamma(Z \to b\bar{b}[{^3P_J^{[1]}}]+g+g)$ and $\Gamma(Z \to b\bar{b}[{^3P_J^{[1]}}]+b+\bar{b})$. It is worth mentioning that, to satisfy the conservation of $C$-parity, the process $e^+e^- \to \gamma^{*} \to b\bar{b}[^3P_J^{[1]}]+g+g$ is forbidden at $B$ factories. Moreover, the center-of-mass energy at $B$ factories (10.6 GeV) is too small to allow for $e^+e^- \to \gamma^{*} \to (b\bar{b})[^3P_J^{[1]}]+b\bar{b}$. From these points of view, the decay of the $Z$ boson seems more suitable for the study of $\chi_b$. For the $^3S_1^{[8]}$ state, the NLO QCD corrections also enhance the LO results significantly, by a factor of 10-20. The contributions of the $\textrm{NLO}^{*}$ channels are, as before, sizeable.
Similar to $Z \to \chi_c+X$, the CO contributions still account for a large proportion of the total decay width, as listed below $$\begin{aligned}
\Gamma^{\chi_{b0}}_{\textrm{CO}} / \Gamma^{\chi_{b0}}_{\textrm{\textrm{CS+CO}}}&=&(37.8 - 40.6)\%, \nonumber \\
\Gamma^{\chi_{b1}}_{\textrm{CO}} / \Gamma^{\chi_{b1}}_{\textrm{\textrm{CS+CO}}}&=&(51.5 - 63.3)\%, \nonumber \\
\Gamma^{\chi_{b2}}_{\textrm{CO}} / \Gamma^{\chi_{b2}}_{\textrm{\textrm{CS+CO}}}&=&(83.0 - 83.9)\%.\end{aligned}$$ Here, $\Gamma^{\chi_{bJ}}_{\textrm{CO}}$ represents the sum of the NLO and $\textrm{NLO}^{*}$ contributions. Regarding the ratios $\Gamma_{\chi_{b1}}/\Gamma_{\chi_{b0}}$ and $\Gamma_{\chi_{b2}}/\Gamma_{\chi_{b0}}$, the NRQCD predictions are still far from those built on the CS mechanism alone, $$\begin{aligned}
\textrm{CS}&:&~~~\Gamma_{\chi_{b1}} / \Gamma_{\chi_{b0}}= 1.175 - 1.188, \nonumber \\
\textrm{NRQCD}&:&~~~\Gamma_{\chi_{b1}} / \Gamma_{\chi_{b0}} = 1.868 - 1.923, \nonumber \\
\textrm{CS}&:&~~~\Gamma_{\chi_{b2}} / \Gamma_{\chi_{b0}} = 0.626 - 0.657, \nonumber \\
\textrm{NRQCD}&:&~~~\Gamma_{\chi_{b2}} / \Gamma_{\chi_{b0}} = 2.286 - 2.420,\end{aligned}$$ which can be utilized to check the validity of the CO mechanism. By adopting the branching ratios of $\chi_b$ to $\Upsilon(1S)$ [@Tanabashi:2018oca], $$\begin{aligned}
\textrm{Br}(\chi_{b0} \to \Upsilon(1S)+\gamma)&=&1.94\%, \nonumber \\
\textrm{Br}(\chi_{b1} \to \Upsilon(1S)+\gamma)&=&35.0\%, \nonumber \\
\textrm{Br}(\chi_{b2} \to \Upsilon(1S)+\gamma)&=&18.8\%,\end{aligned}$$ we obtain $$\begin{aligned}
\Gamma(Z \to \Upsilon(1S)+X)|_{\chi_b-\textrm{feeddown}}=(0.15 \sim 0.49) \times 10^{-6}.\end{aligned}$$ Considering that the NLO QCD corrections enhance the LO results quite significantly for $Z \to Q\bar{Q}[^3S_1^{[8]}]+X$, it is interesting and natural to briefly discuss the NNLO effect. As stated before, this significant enhancement can be attributed to the kinematic enhancements via the $^3S_1^{[8]}$ single-gluon-fragmentation diagrams. Since the SGF topology already emerges at the NLO level, the NNLO-level diagrams are not expected to enhance the NLO results by further orders of magnitude. Of course, whether this is indeed the case can only be settled by a complete NNLO calculation in the future.

Summary
=======

In this paper, we have systematically investigated the decay of the $Z$ boson into $\chi_c$ and $\chi_b$. We find that the branching ratio for $Z \to \chi_c+X$ is on the order of $10^{-5}$, and $10^{-6}$ for the $\chi_b$ case, which implies that these decay processes should be detectable. It is observed that the $^3S_1^{[8]}$ single-gluon-fragmentation diagrams that first emerge at the NLO level enhance the LO results by about 2-3 orders of magnitude for $c\bar{c}$, and by a factor of 10-20 for $b\bar{b}$. For the same reason, the $\textrm{NLO}^{*}$ processes also contribute considerably, about $50\%$ of the NLO results. Consequently, the CO contributions play a vital (even dominant) role in the decay process of $Z \to \chi_c(\chi_b)+X$. Moreover, including the CO channels thoroughly changes the CS predictions for the ratios $\Gamma(\chi_{c2})/\Gamma(\chi_{c0})$, $\Gamma(\chi_{c1})/\Gamma(\chi_{c0})$, $\Gamma(\chi_{b1})/\Gamma(\chi_{b0})$, and $\Gamma(\chi_{b2})/\Gamma(\chi_{b0})$, which can be regarded as an outstanding probe to distinguish between the CS and CO mechanisms. For the CS channels, the heavy quark-antiquark pair associated process, $Z \to Q\bar{Q}[^3P_J^{[1]}]+Q\bar{Q}$, plays a leading role.
However, the process of $Z \to Q\bar{Q}[^3P_J^{[1]}]+g+g$ can also provide non-negligible contributions, especially for the $\chi_b$ cases. Taking into account the $\chi_{cJ}$ and $\chi_{bJ}$ feeddown contributions respectively, we find $\Gamma(Z \to J/\psi+X)|_{\chi_c-\textrm{feeddown}}=(0.28 - 2.4) \times 10^{-5}$ and $\Gamma(Z \to \Upsilon(1S)+X)|_{\chi_b-\textrm{feeddown}}=(0.15 - 0.49) \times 10^{-6}$. In summary, the decay of $Z$ boson into $\chi_c(\chi_b)$ is an ideal laboratory to further identify the significance of the color-octet mechanism. Acknowledgments =============== [**Acknowledgments**]{}: We would like to thank Wen-Long Sang for helpful discussions on the treatments on $\gamma_5$. This work is supported in part by the Natural Science Foundation of China under the Grant No. 11705034., by the Project for Young Talents Growth of Guizhou Provincial Department of Education under Grant No. KY\[2017\]135., and the Project of GuiZhou Provincial Department of Science and Technology under Grant No. QKHJC\[2019\]1160.\ [1]{} G. T. Bodwin, E. Braaten and G. P. Lepage, “Rigorous QCD analysis of inclusive annihilation and production of heavy quarkonium, Phys. Rev. D [**51**]{} (1995) 1125 Erratum: \[Phys. Rev. D [**55**]{} (1997) 5853\] doi:10.1103/PhysRevD.55.5853, 10.1103/PhysRevD.51.1125. E. Braaten and S. Fleming, Color octet fragmentation and the psi-prime surplus at the Tevatron, Phys. Rev. Lett.  [**74**]{} (1995) 3327 doi:10.1103/PhysRevLett.74.3327. P. L. Cho and A. K. Leibovich, Color octet quarkonia production, Phys. Rev. D [**53**]{} (1996) 150 doi:10.1103/PhysRevD.53.150. P. L. Cho and A. K. Leibovich, Color octet quarkonia production. 2., Phys. Rev. D [**53**]{} (1996) 6203 doi:10.1103/PhysRevD.53.6203. H. Han, Y. Q. Ma, C. Meng, H. S. Shao and K. T. Chao, $\eta_c$ production at LHC and indications on the understanding of $J/\psi$ production, Phys. Rev. Lett.  [**114**]{} (2015) no.9, 092005 doi:10.1103/PhysRevLett.114.092005. H. F. Zhang, Z. Sun, W. L. Sang and R. Li, Impact of $\eta_c$ hadroproduction data on charmonium production and polarization within NRQCD framework, Phys. Rev. Lett.  [**114**]{} (2015) no.9, 092006 doi:10.1103/PhysRevLett.114.092006. B. Gong, L. P. Wan, J. X. Wang and H. F. Zhang, Complete next-to-leading-order study on the yield and polarization of $\Upsilon(1S,2S,3S)$ at the Tevatron and LHC, Phys. Rev. Lett.  [**112**]{} (2014) no.3, 032001 doi:10.1103/PhysRevLett.112.032001. Y. Feng, B. Gong, L. P. Wan and J. X. Wang, An updated study of $\Upsilon$ production and polarization at the Tevatron and LHC, Chin. Phys. C [**39**]{} (2015) no.12, 123102 doi:10.1088/1674-1137/39/12/123102. K. Wang, Y. Q. Ma and K. T. Chao, $\Upsilon(1S)$ prompt production at the Tevatron and LHC in nonrelativistic QCD, Phys. Rev. D [**85**]{} (2012) 114003 doi:10.1103/PhysRevD.85.114003. H. Han, Y. Q. Ma, C. Meng, H. S. Shao, Y. J. Zhang and K. T. Chao, $\Upsilon(nS)$ and $\chi_b(nP)$ production at hadron colliders in nonrelativistic QCD, Phys. Rev. D [**94**]{} (2016) no.1, 014028 doi:10.1103/PhysRevD.94.014028. M. Butenschoen and B. A. Kniehl, Complete next-to-leading-order corrections to J/psi photoproduction in nonrelativistic quantum chromodynamics, Phys. Rev. Lett.  [**104**]{} (2010) 072001 doi:10.1103/PhysRevLett.104.072001. Z. Sun and H. F. Zhang, QCD corrections to the color-singlet $J/\psi$ production in deeply inelastic scattering at HERA, Phys. Rev. D [**96**]{} (2017) no.9, 091502 doi:10.1103/PhysRevD.96.091502. Z. Sun and H. F. 
Zhang, QCD leading order study of the $J/\psi$ leptoproduction at HERA within the nonrelativistic QCD framework, Eur. Phys. J. C [**77**]{} (2017) no.11, 744 doi:10.1140/epjc/s10052-017-5323-6. Y. J. Zhang, Y. Q. Ma, K. Wang and K. T. Chao, QCD radiative correction to color-octet $J/\psi$ inclusive production at B Factories, Phys. Rev. D [**81**]{} (2010) 034015 doi:10.1103/PhysRevD.81.034015. M. Butenschoen and B. A. Kniehl, J/psi polarization at Tevatron and LHC: Nonrelativistic-QCD factorization at the crossroads, Phys. Rev. Lett. [**108**]{} (2012) 172002 doi:10.1103/PhysRevLett.108.172002. K. T. Chao, Y. Q. Ma, H. S. Shao, K. Wang and Y. J. Zhang, $J/\psi$ Polarization at Hadron Colliders in Nonrelativistic QCD, Phys. Rev. Lett. [**108**]{} (2012) 242004 doi:10.1103/PhysRevLett.108.242004. B. Gong, L. P. Wan, J. X. Wang and H. F. Zhang, Polarization for Prompt $J/\psi$ and $\psi(2S)$ Production at the Tevatron and LHC, Phys. Rev. Lett. [**110**]{} (2013) no.4, 042002 doi:10.1103/PhysRevLett.110.042002. L. B. Chen, J. Jiang and C. F. Qiao, NLO QCD Corrections for $\chi_{cJ}$ Inclusive Production at $B$ Factories, Phys. Rev. D [**91**]{} (2015) no.9, 094031 doi:10.1103/PhysRevD.91.094031. E. Braaten, B. A. Kniehl and J. Lee, Polarization of prompt $J/\psi$ at the Tevatron, Phys. Rev. D [**62**]{} (2000) 094005 doi:10.1103/PhysRevD.62.094005. R. Sharma and I. Vitev, High transverse momentum quarkonium production and dissociation in heavy ion collisions, Phys. Rev. C [**87**]{} (2013) no.4, 044905 doi:10.1103/PhysRevC.87.044905. H. S. Shao, Y. Q. Ma, K. Wang and K. T. Chao, Polarizations of $\chi_{c1}$ and $\chi_{c2}$ in prompt production at the LHC, Phys. Rev. Lett. [**112**]{} (2014) no.18, 182003 doi:10.1103/PhysRevLett.112.182003. D. Li, Y. Q. Ma and K. T. Chao, $\chi_{cJ}$ production associated with a $c\bar c$ pair at hadron colliders, Phys. Rev. D [**83**]{} (2011) 114037 doi:10.1103/PhysRevD.83.114037. Y. Q. Ma, K. Wang and K. T. Chao, QCD radiative corrections to $\chi_{cJ}$ production at hadron colliders, Phys. Rev. D [**83**]{} (2011) 111503 doi:10.1103/PhysRevD.83.111503. H. F. Zhang, L. Yu, S. X. Zhang and L. Jia, Global analysis of the experimental data on $\chi_c$ meson hadroproduction, Phys. Rev. D [**93**]{} (2016) no.5, 054033 Addendum: \[Phys. Rev. D [**93**]{} (2016) no.7, 079901\] doi:10.1103/PhysRevD.93.054033, 10.1103/PhysRevD.93.079901. M. Klasen, B. A. Kniehl, L. N. Mihaila and M. Steinhauser, $J/\psi$ plus jet associated production in two-photon collisions at next-to-leading order, Nucl. Phys. B [**713**]{} (2005) 487 doi:10.1016/j.nuclphysb.2005.02.009. B. W. Harris and J. F. Owens, The Two cutoff phase space slicing method, Phys. Rev. D [**65**]{} (2002) 094032 doi:10.1103/PhysRevD.65.094032. J. G. Korner, D. Kreimer and K. Schilcher, A Practicable gamma(5) scheme in dimensional regularization, Z. Phys. C [**54**]{} (1992) 503. doi:10.1007/BF01559471. X. P. Wang and D. Yang, The leading twist light-cone distribution amplitudes for the S-wave and P-wave quarkonia and their applications in single quarkonium exclusive productions, JHEP [**1406**]{} (2014) 121 doi:10.1007/JHEP06(2014)121. Y. Feng, Z. Sun and H. F. Zhang, Is the color-octet mechanism consistent with the double $J/\psi$ production measurement at B-factories?, Eur. Phys. J. C [**77**]{}, 221 (2017). Z. Sun, X. G. Wu, Y. Ma and S. J. Brodsky, Exclusive production of $J/\psi+\eta_c$ at the $B$ factories Belle and Babar using the principle of maximum conformality, Phys. Rev. 
D [**98**]{} (2018) no.9, 094001 doi:10.1103/PhysRevD.98.094001 J. X. Wang, Progress in FDC project, Nucl. Instrum. Meth. A [**534**]{}, 241 (2004). M. Tanabashi [*et al.*]{}, Review of Particle Physics, Phys. Rev. D [**98**]{} (2018) no.3, 030001. doi:10.1103/PhysRevD.98.030001. M. Acciarri [*et al.*]{} \[L3 Collaboration\], Heavy quarkonium production in $Z$ decays, Phys. Lett. B [**453**]{} (1999) 94. doi:10.1016/S0370-2693(99)00280-4.
null
minipile
NaturalLanguage
mit
null
In a week where BitTorrent already lost its largest torrent indexer Mininova, a well-known private tracker has also announced that it will cease its operations. Rumors that the shutdown is related to the bust of the topsite LOOP earlier this week remain unconfirmed. SceneTorrents (ScT) has been a respected and well-connected private BitTorrent tracker for more than four years. An invite for the tracker was hard to find, but the lucky few that did get in had little to complain about, until today that is. A few hours ago ScT put up a sad and unexpected announcement for its 20,000 members, as the site’s operators have decided to close the site for good tomorrow. Thus far the staff refuses to comment on the reason for the shutdown, which has resulted in widespread rumors among the site’s users. ScT announces that it will close the site tomorrow at 10PM GMT. Some rumors say that the end of ScT may be related to the raid of a topsite in The Netherlands earlier this week. According to the Dutch news site Tweakers, the ‘ranked’ topsite LOOP had its servers raided in Amsterdam, where 40 terabytes of data was stored. LOOP was (supposedly) one of SceneTorrent’s main content provider according to insiders. According to other rumors, the shutdown could be a planned operation instead of a response to the raided topsite. In the last weeks the site has encouraged its members to donate, offering double rewards for those who pay up, allegedly raking in as much as $10,000. Thus far both rumors remain unconfirmed, and the same is true for an eBay auction of the site that went up a few hours ago. Since the staff of the site is not talking, it will probably remain unknown why the site will close its doors now, or what their underlying motivation is. Update: FSF published a short chat with ScT owner ‘Feeling’ who confirmed that the shutdown is not a hoax. Update: A new staff message claims that the shutdown is due to legal issues. By now most of you already know that ScT will be going offline permanently. However, due to pending legal issues, we are not at liberty to speak freely about why we’ve chosen to take down the site. Members of our staff were arrested and will be undergoing the entire length of the judicial process. Obviously, in the case of criminal proceedings, it would be downright foolish to comment any further on the situation; Please bear this in mind and wish them the best of luck. There have been several theories as to where the donation money (of the recent months) has gone. We’d like to take this opportunity to put all skepticism to rest. The money was used to purchase new hardware that would ensure our spot as the fastest tracker on the net. You are free to perform whatever calculations you feel necessary, but in doing so it should become very clear that running a site of this stature costs money. We feel the overwhelming cynicism is just a product of bad timing compounded with general frustration caused by the current situation. We sincerely hope that you’ve enjoyed being a part of our wonderful community over the past 4 1/2 years. We’ve certainly enjoyed our members letting us be of service. We’ve always felt our user base played an equally important role in making SceneTorrents.org a model environment in the torrent world. The staff would also like to express their gratitude to fellow trackers for their support in such a chaotic time. Several well known communities have voluntarily opened their doors, and have offered our former users a new home. 
We appreciate the courtesy and acknowledge the steps being taken to move forward collectively as a community. Your assistance does not go unrecognized.
null
minipile
NaturalLanguage
mit
null
The Golden State Warriors are exploring trade opportunities for All-Star forward David Lee and the three years, $44 million left on his contract, league sources told Yahoo! Sports. Lee, 30, is a popular and well-respected player within the Golden State franchise but his contract is considerable, and moving him for a star – or a player on a shorter deal – makes financial sense. Golden State offered Lee in a package for Toronto's Andrea Bargnani, sources said, but the Raptors made a deal with the New York Knicks to shed the final two years, $22.5 million on Bargnani's contract. The Warriors tried to pry Portland's LaMarcus Aldridge as part of an offer that included Brandon Rush, league sources said. Portland has been engaging trade talks for Aldridge, but has pursued more robust offers than Golden State's proposal, sources said. Lee had one of his best NBA seasons, averaging 18.5 points and 11.2 rebounds and making the Western Conference All-Star team. He suffered a torn hip flexor on April 20 in the playoffs and eventually had surgery to repair it on May 30. Related coverage on Yahoo! Sports: • Knicks near trade for Andrea Bargnani • Teams line up to court FA Dwight Howard • Complete list of NBA free agents
null
minipile
NaturalLanguage
mit
null
--- abstract: 'We present a streaming model for large-scale classification (in the context of $\ell_2$-SVM) by leveraging connections between learning and computational geometry. The streaming model imposes the constraint that only a single pass over the data is allowed. The $\ell_2$-SVM is known to have an equivalent formulation in terms of the minimum enclosing ball (MEB) problem, and an efficient algorithm based on the idea of *core sets* exists (CVM) [@cvm]. CVM learns a $(1+\varepsilon)$-approximate MEB for a set of points and yields an approximate solution to corresponding SVM instance. However CVM works in batch mode requiring multiple passes over the data. This paper presents a single-pass SVM which is based on the minimum enclosing ball of streaming data. We show that the MEB updates for the streaming case can be easily adapted to learn the SVM weight vector in a way similar to using online stochastic gradient updates. Our algorithm performs polylogarithmic computation at each example, and requires very small and constant storage. Experimental results show that, even in such restrictive settings, we can learn efficiently in just one pass and get accuracies comparable to other state-of-the-art SVM solvers (batch and online). We also give an analysis of the algorithm, and discuss some open issues and possible extensions.' author: - | Piyush Rai, Hal Daumé III, Suresh Venkatasubramanian\ University of Utah, School of Computing\ `{piyush,hal,suresh}@cs.utah.edu` bibliography: - 'ijcai09.bib' title: 'Streamed Learning: One-Pass SVMs' --- Introduction {#intro} ============ Learning in a streaming model poses the restriction that we are constrained both in terms of time, as well as storage. Such scenarios are quite common, for example, in cases such as analyzing network traffic data, when the data arrives in a streamed fashion at a very high rate. Streaming model also applies to cases such as disk-resident large datasets which cannot be stored in memory. Unfortunately, standard learning algorithms do not scale well for such cases. To address such scenarios, we propose applying the *stream model* of computation [@datastreamsurvey] to supervised learning problems. In the stream model, we are allowed only one pass (or a small number of passes) over an ordered data set, and polylogarithmic storage and polylogarithmic computation per element. In spite of the severe limitations imposed by the streaming framework, streaming algorithms have been successfully employed in many different domains [@streamcluster]. Many of the problems in geometry can be adapted to the streaming setting and since many learning problems have equivalent geometric formulations, streaming algorithms naturally motivate the development of efficient techniques for solving (or approximating) large-scale batch learning problems. In this paper, we study the application of the stream model to the problem of maximum-margin classification, in the context of $\ell_2$-SVMs [@vapnik98:statlearnth; @crist00:introsvm]. Since the support vector machine is a widely used classification framework, we believe success here will encourage further research into other frameworks. SVMs are known to have a natural formulation in terms of the minimum enclosing ball problem in a high dimensional space [@cvm; @bvm]. This latter problem has been extensively studied in the computational geometry literature and admits natural streaming algorithms [@chanstream; @ahvstream]. 
We adapt these algorithms to the classification setting, provide some extensions, and outline some open issues. Our experiments show that we can learn efficiently in just one pass and get competitive classification accuracies on synthetic and real datasets.

Scaling up SVM Training {#sec:supp-vect-mach}
=======================

Support Vector Machines (SVM) are maximum-margin kernel-based linear classifiers [@crist00:introsvm] that are known to provide provably good generalization bounds [@vapnik98:statlearnth]. Traditional SVM training is formulated in terms of a quadratic program (QP) which is typically optimized by a numerical solver. For a training size of $N$ points, the typical time complexity is $O(N^3)$ and the storage required is $O(N^2)$; such requirements make SVMs prohibitively expensive for large scale applications. Typical approaches to large scale SVMs, such as chunking [@vapnik98:statlearnth], decomposition methods [@libsvm] and SMO [@smo], work by dividing the original problem into smaller subtasks or by scaling down the training data in some manner [@svmhierclust; @rsvm]. However, these approaches are typically heuristic in nature: they may converge very slowly and do not provide rigorous guarantees on training complexity [@cvm]. There has been a recent surge in interest in the online learning literature for SVMs due to the success of various gradient descent approaches such as stochastic gradient based methods [@sg] and stochastic sub-gradient based approaches [@pegasos]. These methods solve the SVM optimization problem iteratively in steps, are quite efficient, and have very small computational requirements. Another recent online algorithm, LASVM [@lasvm], combines online learning with active sampling and yields good performance with a single pass (or more passes) over the data. However, although fast and easy to train, most of the stochastic gradient based approaches cannot make do with a single pass over the data and usually require running for several iterations before converging to a reasonable solution.

Two-Class Soft Margin SVM as the MEB Problem {#sec:two-class-soft}
============================================

A minimum enclosing ball (MEB) instance is defined by a set of points ${\mathbf}x_1$, ..., ${\mathbf}x_N \in {\mathbb{R}}^D$ and a metric $d : {\mathbb{R}}^D \times {\mathbb{R}}^D \rightarrow {\mathbb{R}}^{\geq 0}$. The goal is to find a point (the *center*) ${\mathbf}c \in {\mathbb{R}}^D$ that minimizes the radius $R = \max_n d({\mathbf}x_n, {\mathbf}c)$. The 2-class $\ell_2$-SVM [@cvm] is defined by a hypothesis $f({\mathbf}{x}) = {\mathbf}{w}^T\varphi({\mathbf}{x})$, and a training set consisting of $N$ points $\{{\mathbf}{z}_n= ({\mathbf}{x}_n,y_n)\}_{n=1}^N$ with $y_n \in \{-1,1\}$ and ${\mathbf}x_n \in {\mathbb{R}}^D$. The primal of the two-class $\ell_2$-SVM (we consider the unbiased case; the extension is straightforward) can be written as $$\min_{{\mathbf}{w},\xi_i} ||{\mathbf}{w}||^2 + C\sum_{i=1}^{N}\xi_i^2$$ $$s.t. \ \ y_i({\mathbf}{w}'\varphi({\mathbf}{x}_i)) \geq 1 - \xi_i ,\ \ i=1,...,N$$ The only difference between the $\ell_2$-SVM and the standard SVM is that the penalty term has the form $(C\sum_n{{\xi_n}^2})$ rather than $(C\sum_n{\xi_n})$. We assume a kernel $K$ with associated nonlinear feature map $\varphi$. We further assume that $K$ has the property $K({\mathbf}{x},{\mathbf}{x}) = \kappa$, where $\kappa$ is a fixed constant [@cvm].
Most standard kernels such as the isotropic, dot product (normalized inputs), and normalized kernels satisfy this criterion. Suppose we replace the mapping $\varphi({\mathbf}{x}_n)$ on ${\mathbf}{x}_n$ by another nonlinear mapping $\tilde \varphi({\mathbf}{z}_n)$ on ${\mathbf}{z}_n$ such that (for the unbiased case) $$\tilde \varphi({\mathbf}{z}_n) = \left[ y_n \varphi({\mathbf}x_n) ; C^{-1/2} {\mathbf}e_n \right]{{}^\top}$$ The mapping is constructed in such a way that the label information $y_n$ is subsumed in the new feature map $\tilde \varphi$ (essentially, converting a supervised learning problem into an unsupervised one). The first term in the mapping corresponds to the feature term and the second term accounts for a regularization effect, where $C$ is the misclassification cost. ${\mathbf}{e}_n$ is a vector of dimension $N$ with all entries zero, except the $n^{\text{th}}$ entry, which is equal to one. It was shown in [@cvm] that the MEB instance $(\tilde\varphi({\mathbf}z_1), \tilde\varphi({\mathbf}z_2), \ldots \tilde\varphi({\mathbf}z_N))$, with the metric defined by the induced inner product, is dual to the corresponding $\ell_2$-SVM instance (1). The weight vector ${\mathbf}{w}$ of the maximum margin hypothesis can then be obtained from the center ${\mathbf}{c}$ of the MEB using the constraints induced by the Lagrangian [@bvm].

Approximate and Streaming MEBs {#sec:appr-stre-mebs}
==============================

The minimum enclosing ball problem has been extensively studied in the computational geometry literature. An instance of MEB, with a metric defined by an inner product, can be solved using quadratic programming [@boyd04:_convex_optim]. However, this becomes prohibitively expensive as the dimensionality and cardinality of the data increase; for an $N$-point SVM instance in $D$ dimensions, the resulting MEB instance consists of $N$ points in $N+D$ dimensions. Thus, attention has turned to efficient approximate solutions for the MEB. A $\delta$-approximate solution to the MEB ($\delta > 1$) is a point ${\mathbf}c$ such that $\max_n d({\mathbf}x_n,{\mathbf}c) \le \delta R^*$, where $R^*$ is the radius of the true MEB solution. For example, a $(1+\epsilon)$-approximation for the MEB can be obtained by extracting a very small subset (of size $O(1/\epsilon)$) of the input called a *core-set* [@AHV], and running an exact MEB algorithm on this set [@badoiuminball]. This is the method originally employed in the CVM [@cvm]. [@maxmargcore] take a more direct approach, constructing an explicit core set for the (approximate) maximum-margin hyperplane, without relying on the MEB formulation. Both these algorithms take linear training time and require very small storage. Note that a $\delta$-approximation for the MEB directly yields a $\delta$-approximation for the regularized cost function associated with the SVM problem. Unfortunately, the core-set approach cannot be adapted to a streaming setting, since it requires $O(1/\epsilon)$ passes over the training data. Two one-pass streaming algorithms for the MEB problem are known. The first [@ahvstream] finds a $(1+\epsilon)$ approximation using $O((1/\varepsilon)^{\lfloor D/2 \rfloor})$ storage and $O((1/\varepsilon)^{\lfloor D/2 \rfloor}N)$ time. Unfortunately, the exponential dependence on $D$ makes this algorithm impractical. At the other end of the space-approximation tradeoff, the second algorithm [@chanstream] stores only the center and the radius of the current ball, requiring $O(D)$ space.
This algorithm yields a 3/2-approximation to the optimal enclosing ball radius.

The StreamSVM Algorithm
-----------------------

We adapt the algorithm of [@chanstream] for computing an approximate maximum-margin classifier. The algorithm initializes with a single point (and therefore an MEB of radius zero). When a new point is read in off the stream, the algorithm checks whether or not the current MEB can enclose this point. If so, the point is discarded. If not, the point is used to suitably update the center and radius of the current MEB. All such selected points define a core set of the original point set. Let ${\mathbf}{p}_i$ be the input point causing an update to the MEB and ${\mathbf}{B}_i$ be the resulting ball after the update. From figure \[ballupdate\], it is easy to verify that the new center ${\mathbf}{c}_i$ lies on the line joining the old center ${\mathbf}{c}_{i-1}$ and the new point ${\mathbf}{p}_i$. The radius ${\mathbf}{r}_i$ and the center ${\mathbf}{c}_i$ of the resulting MEB can be defined by simple update equations. $$r_i = r_{i-1} + \delta_i$$ $$||{\mathbf}{c}_i - {\mathbf}{c}_{i-1}||=\delta_i$$ Here $2\delta_i = (||{\mathbf}{p}_i - {\mathbf}{c}_{i-1}|| - r_{i-1})$ is the closest distance of the new point ${\mathbf}{p}_i$ from the old ball ${\mathbf}{B}_{i-1}$. Using these, we can define a closed-form analytical update equation for the new ball ${\mathbf}{B}_i$: $${\mathbf}{c}_i = {\mathbf}{c}_{i-1} + \frac{\delta_i}{||{\mathbf}{p}_i - {\mathbf}{c}_{i-1}||}({\mathbf}{p}_i - {\mathbf}{c}_{i-1})$$ It can be shown that, for adversarially constructed data, the radius of the MEB computed by the algorithm has a lower-bound of $(1 + \sqrt2)/2$ and a worst-case upper-bound of $3/2$ [@chanstream]. We adapt these updates in a natural way in the augmented feature space $\tilde\varphi$ (see Algorithm \[alg:streamsvm1\]). Each selected point belongs to the *core set* for the MEB. The support vectors of the corresponding SVM instance come from this set. It is easy to verify that the update equations for the weight vector (${\bf w}$) and the margin ($R$) in StreamSVM correspond to the center and radius updates for the ball in equations 7 and 4, respectively. The $\xi^2$ term in the distance calculation is included to account for the fact that the distance computations are being done in the $D+N$ dimensional augmented feature space $\tilde\varphi$ which, for the linear kernel case, is given by: $$\tilde \varphi({\mathbf}{z}_n) = \left[ y_n{\mathbf}x_n ; C^{-1/2} {\mathbf}e_n \right]{{}^\top}.$$ Also note that, because we perform only a single pass over the data and the ${\mathbf}e_n$ components are all mutually orthogonal, we never need to explicitly store them. The number of updates to the weight vector is limited by the number of core vectors of the MEB, which we have experimentally found to be much smaller compared to other algorithms (such as Perceptron). The space complexity of StreamSVM is small since only the weight vector and the radius need be stored.
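To make the ball update concrete, the following is a minimal NumPy sketch of the one-pass enclosing-ball update described above (grow the radius by $\delta_i$ and move the center a distance $\delta_i$ toward the new point). It operates on raw points rather than on the augmented feature map $\tilde\varphi$, and the function and variable names are ours, not taken from the paper.

```python
import numpy as np

def streaming_meb(points):
    """One-pass MEB approximation: only the current center and radius are stored."""
    stream = iter(points)
    center = np.array(next(stream), dtype=float)  # initialize with the first point
    radius = 0.0
    for p in stream:
        p = np.asarray(p, dtype=float)
        dist = np.linalg.norm(p - center)
        if dist <= radius:
            continue                        # point already enclosed: discard it
        delta = (dist - radius) / 2.0       # half the distance from p to the old ball
        radius += delta                     # grow the radius by delta
        center += (delta / dist) * (p - center)  # move the center a distance delta toward p
    return center, radius

# toy usage: the returned radius is within a factor 3/2 of the optimal radius
rng = np.random.default_rng(0)
c, r = streaming_meb(rng.normal(size=(1000, 5)))
```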
Kernelized StreamSVM
--------------------

Although our main exposition and experiments are with linear kernels, it is straightforward to extend the algorithm to nonlinear kernels. In that case, algorithm 1, instead of storing the weight vector **w**, stores an $N$-dimensional vector of Lagrange coefficients $\mathbf{\alpha}$ initialized as $\left[y_1,\ldots,0\right]$. The distance computation in line 5 is replaced by $d^2 = \sum_{n,m}\alpha_n\alpha_m k({\mathbf}{x}_n,{\mathbf}{x}_m) + k({\mathbf}{x}_n,{\mathbf}{x}_n) - 2y_n\sum_{m}\alpha_m k({\mathbf}{x}_n,{\mathbf}{x}_m) + \xi^2 + 1/C$, and the weight vector updates in line 7 can be replaced by the Lagrange coefficient updates $\mathbf{\alpha}_{1:n-1} = \mathbf{\alpha}_{1:n-1}(1 - \frac 1 2 \left(1 - R/d\right))$, $\alpha_n = \frac 1 2 \left(1 - R/d\right)y_n$.

Algorithm 1 (StreamSVM):

-   Input: examples $({\mathbf}x_n,y_n)_{n \in 1\dots N}$, slack parameter $C$
-   Output: weights (${\mathbf}{w}$), radius ($R$), number of support vectors ($M$)
-   Initialize: $M=1;\ R=0;\ \xi^2=1;\ {\mathbf}{w} = y_1{\mathbf}{x}_1$
-   Compute distance to center: $d = \sqrt{\|{\mathbf}{w} - y_n{\mathbf}{x}_n\|^2 + \xi^2 + 1/C}$

Algorithm 2 (StreamSVM with lookahead):

-   Input: examples $({\mathbf}x_n,y_n)_{n \in 1\dots N}$, slack parameter $C$, lookahead parameter $L \geq 1$
-   Output: weights (${\mathbf}{w}$), radius ($R$), upper bound on number of support vectors ($M$)
-   Initialize: $M=1;\ R=0;\ \xi^2=1;\ {\mathbf}S = \emptyset;\ {\mathbf}{w} = y_1{\mathbf}{x}_1$
-   Compute distance to center: $d = \sqrt{\|{\mathbf}{w} - y_n{\mathbf}{x}_n\|^2 + \xi^2 + 1/C}$
-   Add example $n$ to the active set: ${\mathbf}S = {\mathbf}S \cup \{ y_n{\mathbf}x_n \}$
-   Update ${\mathbf}w,R,\xi^2$ to enclose the ball $({\mathbf}w,R,\xi^2)$ and all points in ${\mathbf}S$
-   Update ${\mathbf}w,R,\xi^2$ to enclose the ball $({\mathbf}w,R,\xi^2)$ and all points in ${\mathbf}S$

StreamSVM approximation bounds and extension to multiple balls
--------------------------------------------------------------

It was shown in [@chanstream] that any streaming MEB algorithm that uses only $O(D)$ storage obtains a lower-bound of $(1 + \sqrt2)/2$ and an upper-bound of 3/2 on the quality of the solution (i.e., the radius of the final MEB). Clearly, this is a conservative approximation and would affect the obtained margin of the resulting SVM classifier (and hence the classification performance). In order to do better in just a single pass, one possible conjecture could be that the algorithm must *remember* more. To this end, we extended Algorithm \[alg:streamsvm1\] to simultaneously store $L$ weight vectors (or "balls"). The space complexity of this algorithm is $L(D+1)$ floats and it still makes only a single pass over the data. In the MEB setting, our algorithm chooses, with each arriving datapoint (that is not already enclosed in any of the balls), how the current $L+1$ balls (the $L$ balls plus the new data point) should be merged, resulting again in a set of $L$ balls. At the end, the final set of $L$ balls is merged together to give the final MEB. A special variant of the $L$ balls case is when all but one of the $L$ balls are of zero radius. This amounts to storing a ball of non-zero radius and keeping a *buffer* of $L$ data points (we call this the *lookahead* algorithm, Algorithm \[alg:streamsvm2\]). Any incoming point, if not already enclosed in the current ball, is stored in the buffer. We solve the MEB problem (using a quadratic program of size $L$) whenever the buffer is full. Note that Algorithm \[alg:streamsvm1\] is a special case of Algorithm \[alg:streamsvm2\] with $L=1$, with the MEB updates available in a closed analytical form (rather than having to solve a QP). Algorithm \[alg:streamsvm1\] takes linear time in terms of the input size. Algorithm \[alg:streamsvm2\], which uses a lookahead of $L$, solves a quadratic program of size $L$ whenever the buffer gets full. This step takes $O(L^3)$ time. The number of such updates is $O(N/L)$ (in practice, it is considerably less than $N/L$) and thus the overall complexity for the lookahead case is $O(NL^2)$. For small lookaheads, this is roughly $O(N)$.
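Before turning to the experiments, here is a small NumPy sketch of the kernelized quantities from the Kernelized StreamSVM subsection above, as we read the index notation there: the distance of a new example from the implicitly represented center, and the rescaling of the Lagrange coefficients. The function names and the RBF kernel choice are ours; the radius and $\xi^2$ bookkeeping of the linear case is assumed to carry over and is not shown.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # any kernel with k(x, x) constant works; the RBF kernel is one such choice
    return np.exp(-gamma * np.linalg.norm(np.asarray(a) - np.asarray(b)) ** 2)

def kernel_distance(alpha, sv, x_new, y_new, xi_sq, C, kernel=rbf_kernel):
    """Distance from the current (implicit) center to the new augmented point,
    following the kernelized expression given in the text."""
    m = len(sv)
    k_ss = sum(alpha[i] * alpha[j] * kernel(sv[i], sv[j])
               for i in range(m) for j in range(m))
    k_xs = sum(alpha[j] * kernel(sv[j], x_new) for j in range(m))
    d_sq = k_ss + kernel(x_new, x_new) - 2.0 * y_new * k_xs + xi_sq + 1.0 / C
    return np.sqrt(d_sq)

def coefficient_update(alpha, R, d, y_new):
    """Rescale the old coefficients and append the new one, as in the text:
    alpha_old <- alpha_old * (1 - s), alpha_new = s * y_new, with s = (1 - R/d)/2.
    The radius update R <- R + (d - R)/2 of the linear case is assumed unchanged."""
    s = 0.5 * (1.0 - R / d)
    return [a * (1.0 - s) for a in alpha] + [s * y_new]
```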
Experiments {#exper}
===========

We evaluate our algorithm on several synthetic and real datasets and compare it against several state-of-the-art SVM solvers. We use three criteria for the evaluation: a) single-pass classification accuracies compared against a single pass of online SVM solvers such as the iterative sub-gradient solver Pegasos [@pegasos], LASVM [@lasvm], and Perceptron [@rosenb:perceptron]; b) comparison with CVM [@cvm], which is a batch SVM algorithm based on the MEB formulation; c) the effect of using lookahead in StreamSVM. For fairness, all the algorithms used a linear kernel.

Single-Pass Classification Accuracies
-------------------------------------

The single-pass classification accuracies of StreamSVM and other online SVM solvers are shown in Table \[tab:res1\], along with details of the datasets used. To get a sense of how good the single-pass approximation of our algorithm is, we also report the classification accuracies of the batch-mode (i.e., all data in memory, and multiple passes) libSVM solver with a linear kernel on all the datasets. The results suggest that our single-pass algorithm StreamSVM, using a small reasonable lookahead, performs comparably to the batch-mode libSVM, and does significantly better than a single pass of other online SVM solvers.

| Data Set | Dim | Train | Test | (batch) | | k = 1 | k = 20 | | Algo-1 | Algo-2 |
|---|---|---|---|---|---|---|---|---|---|---|
| Synthetic A | 2 | 20,000 | 200 | 96.5 | 95.5 | 83.8 | 89.9 | 96.5 | 95.5 | **97.0** |
| Synthetic B | 3 | 20,000 | 200 | 66.0 | 68.0 | 57.05 | 65.85 | 64.5 | 64.4 | **68.5** |
| Synthetic C | 5 | 20,000 | 200 | 93.2 | 77.0 | 55.0 | 73.2 | 68.0 | 73.1 | **87.5** |
| Waveform | 21 | 4000 | 1000 | 89.4 | 72.5 | 77.34 | 78.12 | 77.6 | 74.3 | **78.4** |
| MNIST (0vs1) | 784 | 12,665 | 2115 | 99.52 | 99.47 | 95.06 | 99.48 | 98.82 | 99.34 | **99.71** |
| MNIST (8vs9) | 784 | 11,800 | 1983 | 96.57 | **95.9** | 69.41 | 90.62 | 90.32 | 84.75 | 94.7 |
| IJCNN | 22 | 35,000 | 91,701 | 91.64 | 64.82 | 67.35 | 88.9 | 74.27 | 85.32 | **87.81** |
| w3a | 300 | 44,837 | 4912 | 98.29 | 89.27 | 57.36 | 87.28 | **96.95** | 88.56 | 89.06 |

Comparison with CVM
-------------------

We compared our algorithm with CVM which, like our algorithm, is based on an MEB formulation. CVM is highly efficient for large datasets but it operates in batch mode, making one pass through the data for each core vector. We are interested in knowing how many passes the CVM must make over the data before it achieves an accuracy comparable to our streaming algorithm. For that purpose, we compared the accuracy of our single-pass StreamSVM against two and more passes of CVM to see how long it takes CVM to beat StreamSVM (we note here that CVM requires at least two passes over the data to return a solution). We used a linear kernel for both.
Shown in Figure \[fig:cvm\] are the results on the MNIST 8vs9 data, and it turns out that it takes several hundred passes of CVM to beat the single-pass accuracy of StreamSVM. Similar results were obtained for other datasets but we do not report them here due to space limitations.

![[]{data-label="fig:cvm"}](mnist89.ps){width="3in"}

![[]{data-label="fig:varyL"}](mnisterr.ps){width="3in"}

Effect of Lookahead
-------------------

We also investigated the effect of using larger lookaheads on the data. For this, we varied $L$ (the lookahead parameter) and, for each $L$, tested Algorithm \[alg:streamsvm2\] on 100 random permutations of the data stream order, also recording the standard deviation of the classification accuracies with respect to the data-order permutations. Note that the algorithm still performs a single pass over the data. Figure \[fig:varyL\] shows the results on the MNIST 8vs9 data (similar results were obtained for other datasets but are not shown due to space limitations). In this figure, we see two effects. Firstly, as the lookahead increases, performance goes up. This is to be expected since in the limit, as the lookahead approaches the data set size, we will solve the exact MEB problem (albeit at a high computational cost). The important thing to note here is that even with a small lookahead of $10$, the performance converges. Secondly, we see that the standard deviation of the result decreases as the lookahead increases. This shows experimentally that higher lookaheads make the algorithm less susceptible to badly ordered data. This is interesting from an empirical perspective, given that we can show that in theory, any value of $L < N$ cannot improve upon the 3/2-approximation guaranteed for $L=1$.

Analysis, Open Problems, and Extensions {#openissues}
=======================================

There are several open problems that this work brings up:

1. Are the $(1+\sqrt{2})/2$ lower-bound and the $3/2$ upper-bound on MEB radius indeed the best achievable in a single pass over the data?

2. Is it possible to use a richer geometric structure instead of a ball and come up with streaming variants with provably good approximation bounds?

We discuss these in some more detail here.

Improving the Theoretical Bounds
--------------------------------

One might conjecture that storing more information (i.e., more points) would give better approximation guarantees in the streaming setting. Although the empirical results showed that such approaches do result in better classification accuracies, this is not theoretically true in many cases. For instance, in the adversarial stream setting, one can show that *neither* the lookahead algorithm *nor* its more general case (the multiple balls algorithm) improves the bounds given by the simple no-lookahead case (Algorithm \[alg:streamsvm1\]). In particular, one can prove an identical upper- and lower-bound for the lookahead algorithm as for the no-lookahead algorithm. To obtain the $3/2$ upper-bound result, one can use a nearly identical construction to that of [@chanstream], where $L-1$ points are packed in a small, carefully constructed cloud near the boundary of the true MEB. Alternatively, one can analyze these algorithms in the random stream setting. Here, the input points are chosen adversarially, but their *order* is permuted randomly.
The lookahead model is not strengthened in this setting either: we can show that both the lower bound for no-lookahead algorithms and the 3/2 upper bound for the specific no-lookahead algorithm described generalize. For the former, see Figure \[fig:advstream\]. We place $(N-1)/2$ points around $(0,1)$ and $(N-1)/2$ points around $(0,-1)$ and one point at $(1+\sqrt{2},0)$. The algorithm will only beat the $(1+\sqrt{2})/2$ lower bound if the singleton appears in the first $L$ points, where $L$ is the lookahead used. Assuming the lookahead is polylogarithmic in $N$ (which must be true for a streaming algorithm), this means that as $N \longrightarrow \infty$, the probability of a better bound tends toward zero. Note, however, that this applies only to the lookahead model, not to the more general multiple balls model, where it *may* be possible to obtain tighter bounds in the random stream setting.

![[]{data-label="fig:advstream"}](advstream.eps)

Ellipsoidal Balls
-----------------

Instead of using a minimum enclosing ball of points, an alternative could be to use a minimum volume ellipsoid (MVE) [@mve05]. An ellipsoid in $\mathbb{R}^D$ is defined as follows: $\{ {\mathbf}{x} : ({\mathbf}{x}-{\mathbf}{c})'{\mathbf}{A}({\mathbf}{x}-{\mathbf}{c}) \leq 1\}$ where ${\mathbf}{c} \in \mathbb{R}^D$, ${\mathbf}{A} \in \mathbb{R}^{D \times D}$, and ${\mathbf}{A} \succeq 0$ (positive semi-definite). Note that a ball, upon inclusion of a new point, expands equally in all dimensions, which may be unnecessary. On the other hand, an ellipsoid can have several axes and scales of variation (modulated by the covariance matrix ${\mathbf}{A}$). This allows the ellipsoid to expand only along the directions where needed. In addition, such an approach can also be seen along the lines of confidence weighted linear classifiers [@confweighted]. The confidence weighted (CW) method assumes a Gaussian distribution over the space of weight vectors and updates the mean and covariance parameters upon witnessing each incoming example. Just as CW maintains the model's uncertainty using a Gaussian, an ellipsoid generalization can model the uncertainty using the covariance matrix ${\mathbf}{A}$. Recent work has shown that there exist streaming possibilities for MVE [@streammve08]. The approximation guarantees, however, are very conservative. It would be interesting to come up with improved streaming algorithms for the MVE case and adapt them for classification settings.

Conclusion {#conclude}
==========

Within the streaming framework for learning, we have presented an efficient, single-pass $\ell_2$-SVM learning algorithm using a streaming algorithm for the minimum enclosing ball problem. We have also extended this algorithm to use a *lookahead* to increase robustness against poorly ordered data. Our algorithm, StreamSVM, satisfies a proven theoretical bound: it provides a $\left(\frac 3 2\right)$-approximation to the optimal solution. Despite this conservative bound, our algorithm is experimentally competitive with alternative techniques in terms of accuracy, and learns much simpler solutions. We believe that a careful study of stream-based learning would lead to high quality scalable solutions for other classification problems, possibly with alternative losses and with tighter approximation bounds.
null
minipile
NaturalLanguage
mit
null
Retired Gen. Stanley McChrystal discusses what it means for the Army that two female soldiers have graduated from the U.S. Army Ranger School. After speaking with the media, the former commander of the Joint Special Operations Command presented "L Robin [email protected] Retired Gen. Stanley McChrystal discusses what it means for the Army that two female soldiers have graduated from the U.S. Army Ranger School. After speaking with the media, the former commander of the Joint Special Operations Command presented "L Robin [email protected] Emotions run high at leadership forum, with taste of World Trade Center tragedy Day two of the Jim Blanchard Leadership Forum was tearfully emotional at times, comical on occasion, but chock full of advice for being innovative and leading people, often through extremely difficult moments. Taking place at the Columbus Convention and Trade Center Tuesday, the 10th annual event, a sellout, featured a former party boy who launched a charity for clean water in poor countries, a retired soldier and Army Ranger who led troops in Iraq and Afghanistan, and an investment bank executive who lost nearly 70 colleagues in the horrific Sept. 11, 2001, attacks on the World Trade Center in New York. The latter’s experience was perhaps the most poignant, with James “Jimmy” Dunne, senior managing principal of Sandler O’Neill+Partners, at a golf event when learning of the terrible attack that fateful day nearly 14 years ago. Choking back tears several times during the presentation, he recalled receiving a phone call about a star employee who had been in one of the 110-story towers as jets controlled by terrorists slammed into them, eventually toppling the structures. Help us deliver journalism that makes a difference in our community. Our journalism takes a lot of time, effort, and hard work to produce. If you read and enjoy our journalism, please consider subscribing today. “They found Kevin Williams (alive). It’s just not our Kevin Williams,” Dunne was told, explaining that’s when the levity of the tragedy hit him hard. Dunne said the days following the tragedy were filled with anguished families of deceased employees, working feverishly to help them while saving the company. All the while, emotions ran hot. “You were always a tough guy, an enforcer, pushing people. Now what are you going to do for my family?” he recalled one victim’s relative saying. The company, featured in a CBS piece, steered huge amounts of money to injured victims and families of the dead, with the firm ultimately bouncing back strong. “I don’t know if we did everything right or wrong. We did the best we could,” said Dunne with a thick New York accent. “You have to be willing to make tough, tough decisions.” Another speaker that made a huge impact was Scott Harrison, who worked a decade as an event promoter — essentially “getting paid to drink alcohol,” he joked — before searching his soul and deciding to volunteer as a photojournalist on a hospital ship off the coast of Liberia in Africa. That two-year experience of seeing scores of villagers drink putrid water from ponds and filthy streams — often becoming sick and some dying — led him to launch charity:water. The organization raises money to fund projects for drilling wells and other methods for obtaining clean, safe drinking and bathing water. “Our slogan is water changes everything,” said Harrison, whose organization has raised nearly $100 million to help supply safe water to people overseas. 
That’s with participation by celebrities, companies, churches and everyday citizens. “There are now 5.2 million people who now have access to clean water after years of work.” A gush of emotion in Harrison’s story came from one everyday little girl, Rachel Beckwith, 9, who wanted to raise $300 for charity:water. Her life was cut short after a car accident in 2011, with $80 remaining on her goal. A grassroots campaign went viral, with nearly $1.3 million being raised in her name and helping nearly 38,000 people have clean water. A video Tuesday showed villagers with big smiles on their faces and naming a park in Ethiopia after the young girl, her family visiting one year after her death. Other speakers Tuesday included: • Daniel Pink, a New York Times and Wall Street Journal best-selling author. Discussing sales and persuasion, he said the model has changed over the last decade. It used to be “buyer beware” of the seller, who had all of the information at hand. But now, he said, the buyer is armed with so much information online that the balance has tipped in their favor when it comes to buying cars, homes, just about anything. • John Maxwell, an author, pastor and speaker on leadership. His primary point was encouraging people — leaders and followers — to live an “intentional life” in which they grow and change every single day. “Be intentional in really making your life meaningful, and adding value to other people,” he said. • Simon Sinek, author, speaker, leadership expert and “visionary thinker.” He delved into the chemical elements that impact the brain — dopamine, serotonin and oxytocin — and how they influence everyone, leaders included. He also touched on trust in the workplace and the devastating impact of mass company layoffs. “Leadership is not an event. It’s a choice,” he said. “And not everyone is cut out for it. It’s hard and you don’t always get all the credit.” • Ken Blanchard, motivational speaker and author of “The One Minute Manager” and “Servant Leader.” Said Blanchard, often with a twist of humor, “I want to get rid of your stinkin’ thinkin’ about leadership.” He said fear and self doubt are enemies of leaders, and that humility trumps false pride. “Effective leadership starts on the inside with your heart,” said Blanchard, calling Jesus Christ the greatest role model of all time. • Retired U.S. Army Gen. Stanley McChrystal, former commander of U.S. and international forces in Afghanistan, who commanded the 75th Ranger Regiment at Fort Benning during his career. “Leadership today cannot be the same as it has in the past,” said the outspoken McChrystal, closing out the forum. He gave a history lesson of sorts, discussing efforts against terrorists in Iran a decade ago, the failed U.S. mission to rescue hostages in Iran in 1979, and the “efficiency movement” among the world’s armies since Roman times that ultimately led to less flexibility, which has become a necessity for today’s high-tech military. Coordination and constant communication among the various military branches also are a must, he said. “In a complex world, you’ve still got to have efficiency. But that’s not enough anymore,” McChrystal said. The featured speaker Monday night was George W. Bush, his wife, Laura, and their daughters, Barbara Bush and Jenna Bush Hager. It was the second appearance at the forum for both parents — each headlining the event on separate occasions. The forum, organized by Columbus State University and its Leadership Institute, was sponsored this year by Synovus, TSYS, AT&T and W.C. 
Bradley Co. Admission to the event was $529 per person or $4,200 for a table of eight.
null
minipile
NaturalLanguage
mit
null
Adenocarcinoma of the uterine cervix: incidence and the role of radiation therapy. Among previously untreated cases of invasive carcinoma of the cervix in intact uterus evaluated and treated at the authors' institution, primary adenocarcinoma accounted for 20% of the entire group. This incidence is higher than in previously reported series. It is felt that while the true frequency of this entity is increasing, careful fractional curettage, attention to morphologic characteristics, and special stains will distinguish adenocarcinoma from primary endometrial adenocarcinoma. Eight of 10 patients with Stage I adenocarcinoma were found to have sterilized hysterectomy specimens after preoperative irradiation. Radiation therapy alone may be adequate for Stage I adenocarcinoma.
null
minipile
NaturalLanguage
mit
null
State child health; implementing regulations for the State Children's Health Insurance Program. Health Care Financing Administration (HCFA), HHS. Final rule. Section 4901 of the Balanced Budget Act of 1997 (BBA) amended the Social Security Act (the Act) by adding a new title XXI, the State Children's Health Insurance Program (SCHIP). Title XXI provides funds to States to enable them to initiate and expand the provision of child health assistance to uninsured, low-income children in an effective and efficient manner. To be eligible for funds under this program, States must submit a State plan, which must be approved by the Secretary. This final rule implements provisions related to SCHIP including State plan requirements and plan administration, coverage and benefits, eligibility and enrollment, enrollee financial responsibility, strategic planning, substitution of coverage, program integrity, certain allowable waivers, and applicant and enrollee protections. This final rule also implements the provisions of sections 4911 and 4912 of the BBA, which amended title XIX of the Act to expand State options for coverage of children under the Medicaid program. In addition, this final rule makes technical corrections to subparts B, and F of part 457.
null
minipile
NaturalLanguage
mit
null
Speeding up and scaling up finance into clean energy and energy efficiency, greening infrastructure, sustaining ecosystems like forests and coral reefs and enabling countries and communities to adapt is essential to staying below 2C degrees. Check out what's happening and how to benefit. At the UN Climate Change Conference in Paris, the UN’s Secretary-General’s Special Envoy for Cities and Climate Change Michael Bloomberg announced a new industry-led disclosure task force on climate-related financial risks under his chairmanship. The “Task Force on Climate-related Financial Disclosures”, established by the Financial Stability Board, will develop voluntary, consistent climate-related financial risk disclosures for use by companies in providing information to lenders, insurers, investors and other stakeholders. The task force is being constituted at the request of G20 Finance Ministers and Central Bank Governors. Speaking about his role, Michael R. Bloomberg said: It’s critical that industries and investors understand the risks posed by climate change, but currently there is too little transparency about those risks. The Bank of England’s Governor Mark Carney was one of the initiators of the task force. In Paris, Mark Carney said that the national climate action plans (“Intended Nationally Determined Contributions” or "INDCs") submitted to the UN ahead of the Paris universal agreement would have major repercussions for businesses, and that a maximum of information was crucial for them to deal with the challenge of climate change and to capitalize on the opportunities of climate action. For the European Union alone, that means a 1.6% reduction of greenhouse gas emissions every year. Companies need to ask themselves – ‘what does that mean for me? If the strategy is to get to net zero emissions, what is my plan?’ Michael Bloomberg has already been raising awareness of businesses in the US for climate-related risk through the Risky Business initiative he co-founded. As the former mayor of New York, he also likes to draw attention to the many advantages of environmental regulation at municipal and national level: “For people who say climate action is bad for business, I say that New York City has the highest growth rate and the highest employment rate in the United States. And the net number of jobs in the United States because of environmental regulation has gone up, not down.” Mark Carney said that a vocal and involved public was key to accelerate the transition to low carbon and that investors will need to take into account rules, regulations and societal pressures. Mike Bloomberg added that the younger generation were now empowered to help shape the shift to low carbon and resilience, and that this had for example enabled the fossil fuel divestment movement to take hold on US campuses. Technology now allows people to talk two ways, and that is having an impact on the way decisions are being made.
null
minipile
NaturalLanguage
mit
null
Update a device.

USAGE:
  scw iot device update <device-id ...> [arg=value ...]

ARGS:
  device-id                                   Device ID
  [name]                                      Device name
  [allow-insecure]                            Allow plain and server-authenticated SSL connections in addition to mutually-authenticated ones
  [message-filters.publish.policy]            (unknown | accept | reject)
  [message-filters.publish.topics.{index}]
  [message-filters.subscribe.policy]          (unknown | accept | reject)
  [message-filters.subscribe.topics.{index}]
  [hub-id]                                    Change Hub for this device, additional fees may apply, see IoT Hub pricing
  [region=fr-par]                             Region to target. If none is passed will use default region from the config (fr-par)

FLAGS:
  -h, --help   help for update

GLOBAL FLAGS:
  -c, --config string    The path to the config file
  -D, --debug            Enable debug mode
  -o, --output string    Output format: json or human, see 'scw help output' for more info (default "human")
  -p, --profile string   The config profile to use
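For orientation, a concrete invocation might look like the line below. This is only a sketch: the device ID and name are hypothetical placeholders, and it uses just the positional device ID plus arg=value pairs listed under ARGS above.

  scw iot device update 11111111-1111-1111-1111-111111111111 name=my-sensor region=fr-par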
null
minipile
NaturalLanguage
mit
null
Create a Battery Core Icon in Photoshop In this tutorial you will be creating a semi-realistic, sci-fi icon that could be used as a battery or dock icon. Let’s get started! Step 1 Create a 256 x 256 document at 72 DPI. This is a very common image size for a dock icon. The background color can be set to either white or transparent. Step 2 Let's set up the canvas by creating a white background layer if you have not already. You will find this useful later on when you need to check background contrast by inverting the background layer Cmd/Ctrl + I. Display your Rulers and drag a new Guide to the center. Step 3 Using the Ellipse (U) tool, draw an oval at the top center of the canvas with this color #e1e1e1. You can center it by selecting the whole document Cmd/Ctrl + A, switching to the Move Tool (V), and clicking the icon on the top bar as shown in the diagram. Call this "Circle Top." Duplicate this layer fourteen times by holding down Alt + Down with the Move Tool (V). Merge the fourteen duplicated layers and call the result "Circle Body." Place this layer below "Circle Top." Step 4 Apply the following style to "Circle Top" to give it a metallic look. Inner Glow tends to give it the Fresnel effect while Gradient Overlay gives it dimension. Apply the following style to "Circle Body." You should have something that looks like this. Step 5 Create a new layer above "Circle Top" and call it "Edge." Cmd/Ctrl-click the "Circle Top" thumbnail for its selection. Fill it with white using the Paint Bucket Tool (G). Nudge the selection up by 1px and press Delete. Using the Eraser Tool (E) at about 60px in diameter and 0% hardness, click once on the left and right side of the circle. Create a new layer above and use the same method to get the selection of "Circle Top." Call this layer "Shine." Using a soft white brush at 60px diameter, click once on the bottom center of the circle. Bring the opacity down to 80%. This will give it a crisp and shiny look. Create another layer and call it "Reflection." Use the Rectangular Marquee Tool (M) to draw a rectangle on the side of the "Circle Body" and Fill (G) it with white. I inverted the background color so you can see it more clearly. Get the selection of "Circle Body" and Invert it Cmd/Ctrl + Shift + I. Now press Delete and set the Opacity to 10%. This might hardly look like any difference, but this whole step makes a very big impact on the design. Step 6 Create a new layer and call this "Bulb." Get the selection of "Circle Top" and Fill (G) it with #80fa96. You can use any color you wish, but for this tutorial, we will be going for a slightly yellowish green. Enter Free Transform Cmd/Ctrl + T and, while holding down Shift + Alt, drag one of the corners inward until you find that it's the size of the bulb you want. Now apply the following style for a glow effect. Step 7 You can group all the layers we were working on just now, except for the background, and give it a name. I called mine "Head." Create a new group called "Top Body." Create a new layer inside this group and call it "Top." Use the Pen Tool (P) and draw what you see on the diagram. Remember to keep perspective in mind when drawing 3D icons. Now Fill (G) the shape with #d7d7d7 by Right Click > Fill Path. Once you're done, apply the following style. Step 8 It's time to add shine and reflection. Now is a good time to make use of a black background. Select your background layer and Invert it Cmd/Ctrl + I. We'll start with the edges. Create a new layer and call it "Left Edge."
Get the selection of the "Top" layer and Fill (G) it with white. Nudge the selection up and right by 1px and press Delete. Get the Eraser Tool (E) at 70px diameter, 0% Hardness, and click once slightly away from the left corner. Check out the diagram to see what I mean. Using the same technique, draw a 1px edge at the bottom right on a new layer called "Right Edge." Erase both ends of the line with the Eraser Tool (E). We'll use the Pen Tool (P) to draw the reflection. Follow the diagram, then Fill (G) it with white and set Opacity to 20%. Keeping realism in mind, we'll add a reflection of the body itself. Get the selection of "Circle Body" and Fill (G) it with black. Nudge it down just below the body. Set its Opacity to 20%. I know that by rights we should be using the original image instead of a black one, but sometimes it's good to break the rules to get what you want. Step 9 Create a Group called "Body Face", and a new layer inside called "Face." Use the Pen Tool (P) and draw out the following. Remember to align the points with the top metal plate and keep the curve as symmetrical as you can. To draw a straight line, press Shift when creating a new point. Apply the following layer style. Step 10 It's time to add reflection and shine to this area like we did for the others. Using the selection method I taught you earlier, create a 1px, white edge on the right of "Face." Use a soft Eraser (E) of approximately 100px and brush the two ends of the line. Set Opacity to 70%. Create a new layer and call this "Dark." Get the selection of "Face", use a black Linear Gradient (G), and drag it from top to bottom as shown. Then Erase (E) the bottom with 0% Hardness and about 100px. Just go around the edge when brushing, keeping the brush far from the selection edge, so that it looks like the example. Then set Opacity to 10%. Create a new layer called "Brush Shine." Get the selection of "Face" and Brush (B) the top and bottom with white so that the area looks something like the example. Then set Opacity to 20%. After that, create a new layer called "Dark Reflection." Use the Pen Tool (P) to draw the shape shown and Fill (G) it with black. Delete the area outside the selection of "Face" as I've taught earlier. Set the Opacity to 4%. Repeating the process, create a white "Reflection" layer with an Opacity of 10% at the top. You should have something like below. Step 11 On to the next section: create a new Group called "Inside", then a new layer called "Frame." You can use either the Polygonal Lasso Tool (L) or the Pen Tool (P) to draw the frame of the inside. Use #777777 as the color. Try to keep a consistent width throughout. Apply the style to this layer. Now create a new layer called "Dark Edge" and create a 1px black edge on the left side of "Frame." Set Opacity to 20%. Then create a new layer called "Corner". We'll give this icon an inset look so that it doesn't look so flat. Again, you can use either the Polygonal Lasso Tool (L) or the Pen Tool (P) to draw this. Use #5c5c5c as the color. Apply the following layer style to "Corner". Let's add shadow to it. Create a new layer called "Darkness". Use a black Linear Gradient Tool (G) with transparency to paint in the selection of "Corner." Then set Opacity to 10%. Step 12 Now create a new layer called "Inner Body." Draw a rectangular area for the inside with #161616. A quick tip is to use Guides as strategic points to draw objects, and to use the selections of other layers to delete unwanted areas. I've inverted the background color so you can see it clearly. 
Apply this layer style and we are half way there! The dark Inner Shadow and Gradient Overlay help to give it strong depth. Step 13 It's time to prepare a change in lighting for the right side of the icon without needing to redo the whole part. Create a new Group called "For Right Side Changes." Create a new layer inside and call it whatever you want; I called it "Darken For Right Side." Make use of the previous layers and Fill (G) it with black, then set Opacity to 20%. These are the ones for each layer, along with the Opacity. Here's how it should look after setting the Opacity. Step 14 We need to duplicate a copy of this layer and use it for the right side. First hide the background layer and the "Head" group by clicking on the eye icon next to them. Create a new layer and Merge Visible Cmd/Ctrl + Shift + E. Enter Free Transform Cmd/Ctrl + T and Flip it Horizontally around the right Reference Point Location by Right Click > Flip Horizontal and dragging the center pivot to the right. Hide the group "For Right Side Changes" and unhide the previous groups and layers. Step 15 Create a new Group called "Right" and place the duplicated layer inside it. We have some lighting to add to it. Invert Cmd/Ctrl + I the background and create a new layer called "Floor Shadow." Draw the shadow like so. Delete the areas that are touching the icon and set Opacity to 30%. Add a Vector Mask by clicking the icon at the bottom of the layer panel; it looks like a rectangle with a circle in it. Use a roughly mid-gray Linear Gradient Tool (G) to fill in the other side of the shadow so that it fades out. Step 16 Create a new layer called "Side" and draw a black rectangle as shown with either the Rectangular Marquee Tool (M) or the Rectangle Tool (U). Apply a 5px Gaussian Blur, varying it if you want. Set Opacity to 30% and delete the areas which are not touching the right side of the icon. Step 17 Create a new layer called "Edge Stroke." Draw a white line along the inner edge of the metal ("Corner") using the 1px Line Tool (U). Then use a soft Eraser (E) of about 40px in diameter to erase both ends of the line. Now create a new layer called "Edge Blur." Using either the Rectangular Marquee Tool (M) or the Line Tool (U), draw a 1px white line in the center of the screen with the help of a Guide if needed. Then apply a Gaussian Blur of 1px and set Opacity to 20%. Step 18 Create a new Group called "Rings", another Group inside called "Glass", and a layer inside called "Glass." First we need to do some sketching to get an idea of what our 3D circular tube will look like. Here I'll show you how I go about the outline of the shape using circles, then a Pen Tool (P) to trace the left half of the tube. Fill the shape with white, then cut off the unwanted half of the shape. Duplicate it and flip it to the right, then merge it with the other side. Set Fill to 3% and apply these styles. Step 19 It's time to add reflection and shine, and this is usually the hard part, so I recommend looking at real glasses and tubes to see how their reflections and opacity work. First create a new layer called "Top Half Shine." Get the selection of "Glass" and Fill it with white. Nudge the selection down about 10px, or half way down the tube. Add a Vector Mask and Brush (B) the edges of the shine so it looks something like this. Use a brush of around 25px in diameter. Set Opacity to 30%. Tip: using a Vector Mask helps you experiment without destroying your image. Create a new layer called "Upper Top Shine." We'll repeat the process, but with a thinner shine. 
For this, Erase (E) the edges so that it blends in well and set Opacity to 30%. Create a new layer called "Thick Line." Use a Pen Tool and draw a short curve similar to the curve of the tube. Set your brush to 3px in diameter, 100% hardness, and white. Stroke the path by using the Pen Tool (P) Right-click > Stroke Path, choose Brush and check Simulate Pressure, then click OK. Set Opacity to 30%. You can use the Move Tool (V) to move your line if you didn't draw it accurately earlier on. Create a new layer called "Thin Line." Again use the same technique, but this time with a 1px Brush (B) and without Simulate Pressure. Then either Erase (E) or Mask the two ends of the line and set Opacity to 90%. Apply a 4px white Outer Glow with an opacity of 40% for the Layer Style. Step 20 We are half way through the reflection. Create a new layer called "Side Shine." Use the Pen Tool (P) to draw a curved shape following the shape of the tube. Then Right-click > Fill Path with white. Add a Vector Mask and start Brushing (B) with a soft brush around the sides of the shine so that it fades in toward the bottom and right. Set Opacity to 40%. Get the selection of "Glass" and Invert Selection Cmd/Ctrl + Shift + I, then press Delete. Now we'll create the rims/edges of the tube so it doesn't look so flat. Create a new layer and create a 2px, white edge on the left side of "Glass". Then do the same for the right side and merge the two layers together. Set Opacity to 40%. Apply Bevel and Emboss with a Size of 0px and 100% Opacity for Highlight and Shadow. Step 21 We need to prepare another important detail for the tube. Create a new layer called "Green Reflection." Use the selection of "Glass" and Fill (G) the bottom half of the area with #3bf75f. Then add a Vector Mask and Brush (B) the top and sides of the area so that it fades in slightly. Set Blending Mode to Linear Dodge (Add) and Opacity to 25%. This will add realism, as the glowing green rings below will create a reflection on the ring above. Step 22 Now it's time to prepare these images for duplication. We need one glass with the green reflection, so create a new layer in this group, then Alt-click the eye icon of the group. Merge Visible Cmd/Ctrl + Shift + E. Alt-click the eye again and hide the green reflection, then repeat the duplication process. Now do the same without the "Glass", too. So now you should have three types of duplicates: glass with the green reflection, glass without the green reflection, and just the reflection but not the green one. Step 23 Create a new layer called "One Ring", get the selection of "Glass", and Fill (G) it with #3cf760. Nudge it down by 80px with Shift + Down. You can place this in a new group and call it whatever you want. I called mine "1" since it's going to be level 1. Apply these styles for the glow and dimension. Remember the reflection layer we created earlier on? Duplicate it and place it right over the green ring. Later duplicate the one with the green reflection and place it in the center. Duplicate the white glass and you should have this. Step 24 Do the same thing for each level, placing them in groups or merging them if you want. Step 25 Create a new layer called "Shadow" at the bottom of the "Rings" group. Use the selection of "Glass" to Fill (G) it with black. Nudge (V) it down by about 15px. Select one half of the shadow and Skew it up so that it touches the side of the tube. In Free Transform Cmd/Ctrl + T mode, Right-click > Skew. 
You can isolate the selection better by nudging it out of place and then back, after selecting it with the Rectangular Marquee Tool (M). Do the same for the other half. Duplicate it for the three tubes, then set Opacity to 20%. Use the Polygonal Lasso Tool (L) to delete the areas that are not touching the icon. Step 26 We are almost done. We'll add some finishing details. First, unhide the background layer and the "Floor Shadow" layer in the "Right" group. Then create a new layer and Merge Visible Cmd/Ctrl + Shift + E. Unhide those layers that you just hid. Create a new layer right at the top and call it "Noise." Get the selection of the merged layer and Fill (G) it with 50% gray. Apply 10% Gaussian, Monochromatic Noise under Filter > Noise > Add Noise. Set Blending Mode to Overlay and Opacity to 10%. You can now delete the merged layer. You won't notice much difference, but it helps to create that imperfect look. Create a new layer just above the background layer and call it "Shadow." Use the Polygonal Lasso Tool (L) to draw something like the shape shown, roughly outlining the base of the icon. Apply a 6px Gaussian Blur and set Opacity to 60%. Finally, you are done! Final Image Now that you’ve finished your icon, simply export it at each different level by hiding and unhiding the necessary rings.
null
minipile
NaturalLanguage
mit
null
827 So.2d 73 (2002) BIRMINGHAM HOCKEY CLUB, INC., d/b/a Birmingham Bulls v. NATIONAL COUNCIL ON COMPENSATION INSURANCE, INC., et al. 1000658. Supreme Court of Alabama. February 15, 2002. *75 James H. McFerrin of Southeastern Legal Group, L.L.C., Birmingham; and Ken Hooks and Ralph Bohannon of Pittman, Hooks, Dutton & Hollis, Birmingham, for appellant. W. Percy Badham III of Maynard, Cooper & Gale, P.C., Birmingham; and John A. Karaczynski of Akin, Gump, Strauss, Hauer & Feld, L.L.P., Los Angeles, California, for appellee National Council on Compensation Insurance, Inc. Robert A. Huffaker of Rushton, Stakely, Johnston & Garrett, P.A., Montgomery; and Rowe W. Snider of Lord, Bissell & Brook, Chicago, Illinois, for appellee National Workers Compensation Reinsurance Pool. *76 Carol Ann Smith of Smith & Ely, L.L.P., Birmingham, for appellees Hartford Accident & Insurance Company, Employers Insurance of Wausau, and Travelers Indemnity Company. Joel A. Williams of Sadler Sullivan, P.C., Birmingham, for appellee Liberty Mutual Insurance Company. John J. Davis, assoc. counsel, Alabama Department of Insurance, for amicus curiae D. David Parsons, Commissioner of the Alabama Department of Insurance. BROWN, Justice. The Birmingham Hockey Club, Inc., d/b/a Birmingham Bulls ("BHC"), appeals from the dismissal of its claims against National Council on Compensation Insurance, Inc. ("NCCI"), National Workers Compensation Reinsurance Pool ("National Pool"), Hartford Accident & Insurance Company ("Hartford"), Employers Insurance of Wausau ("Wausau"), Travelers Indemnity Company ("Travelers"), and Liberty Mutual Insurance Company ("Liberty Mutual"). We affirm the judgment of dismissal in part, vacate the judgment in part, and remand the case for further proceedings. Alabama's Workers' Compensation System Because all of BHC's factual allegations concern Alabama's system for overseeing workers' compensation insurance in the State, a brief overview of Alabama's workers' compensation system is necessary to understand the context of BHC's claims. Employers in the State of Alabama are required by law to provide workers' compensation benefits for employees injured in the course of their employment. See § 25-5-8 and § 25-5-50 et seq., Ala.Code 1975. Generally, employers purchase workers' compensation insurance policies in the "voluntary market" from an insurer who voluntarily agrees to underwrite the employer's risk. However, when an employer is not able to obtain insurance in the voluntary market, the employer may obtain coverage in the "residual market."[1] In the residual market, an employer is assigned an individual insurer, or "servicing carrier," from which the employer purchases a workers' compensation policy. The servicing carriers form a pool, and they remit the premium payments they receive from employers to an administrator of the pool. The rates charged to employers in the residual market are set by the insurance commissioner, and the servicing carriers do not have the authority to deviate from those rates. When an employer makes a workers' compensation claim with a servicing carrier, the servicing carrier pays the claim and is then reimbursed by the pool administrator for the loss payments made to the insured employer. At the end of each year, any funds remaining in the pool are distributed equally among the servicing carriers forming the pool. 
Thus, by forming a pool and charging assigned rates, the servicing carriers share the losses incurred and the profits made each year in the residual market in Alabama. The pooling system prevents a servicing carrier from being solely responsible for paying the claim of an employer who incurs a large workers' compensation liability. Although the servicing carriers in Alabama issue policies, collect premiums, and pay losses, each servicing carrier issues the same type of policy to every employer who obtains insurance in the residual *77 market and may charge only the rates set by the insurance commissioner. The amount of the premium an employer must pay to the servicing carrier is determined by several variables. Under the formula used to calculate the premium, the amount of remuneration the employer pays its employees is multiplied by a number called the "experience-modification factor"; the resulting number is then multiplied by the employer's "classification-code rate." Remuneration is the total amount of pay the employer remits to all its employees combined. The experience-modification factor is determined, at least in part, by the dollar amount of workers' compensation claims actually made by an employer over a certain period. The classification-code rate, also known as the bureau-loss-cost rate, varies according to the job-risk classification. Factual Background When BHC was incorporated in 1992, it employed primarily hockey players. Because of the hockey players' relatively high risk for future workers' compensation claims, BHC was not able to purchase insurance on the voluntary market. Consequently, BHC sought insurance coverage in the residual market and was assigned a servicing carrier. From 1992 to 1994, BHC's workers' compensation insurance servicing carrier was Continental Casualty Insurance Company.[2] In 1994, BHC was assigned a new servicing carrier, Liberty Mutual. Liberty Mutual was a member of National Pool,[3] along with other service carriers Hartford, Wausau, and Travelers. NCCI is a licenced rating organization in Alabama and was the pool administrator for National Pool. NCCI is responsible for filing with the Alabama Department of Insurance on behalf of National Pool and its servicing carriers the proposed rates used to determine premiums. The Department of Insurance then either approves or rejects the proposed rates. Liberty Mutual quoted BHC an estimated annual premium of $78,754 to provide workers' compensation and employers' liability insurance for one year. It appears that BHC paid the estimated premium. At the end of the year, Liberty Mutual audited BHC's payroll expenses. Liberty Mutual's auditor determined that BHC had underreported the amount of remuneration it had paid its employees. Liberty Mutual contends that BHC neglected to report as remuneration payments made to hockey players in the form of per diem living allowances, travel expenses, and payments made by BHC directly to apartment complexes for apartments for its employees. Therefore, Liberty Mutual adjusted its premium to reflect the actual remuneration paid to the players. This adjustment caused the premium to increase by $85,220. BHC refused to pay the increase in the premium. On May 6, 1996, BHC sued Liberty Mutual, NCCI, and two individual insurance brokers and their employer, alleging fraud, deceit, suppression, and negligence and making various class-action averments. 
BHC claimed that the brokers and their employer had represented to BHC that BHC was purchasing a workers' compensation policy and an employers' liability *78 policy from Liberty Mutual. BHC claimed that the employers' liability policy was unnecessary because, it said, that policy provided no protection beyond what they received under the workers' compensation policy. In its class-action averments, BHC claimed that NCCI had arbitrarily increased rates for workers' compensation and employers' liability policies and that Liberty Mutual wrongly charged the increased rates. BHC made no attempts to bring these allegations before the insurance commissioner before it filed this action. Liberty Mutual and NCCI filed motions to dismiss, or, in the alternative, for a summary judgment, arguing that the insurance commissioner had primary and exclusive jurisdiction over BHC's claims. Additionally, Liberty Mutual and NCCI argued that because BHC's complaint merely alleged that BHC had suffered damage as the result of paying rates lawfully set by the insurance commissioner, BHC's claims were barred by the filed-rate doctrine.[4] The two independent insurance brokers filed a motion for a summary judgment based on evidence that demonstrated, according to the brokers, that the employers' liability insurance policy covered claims that were not covered by the workers' compensation insurance policy. On February 5, 1997, BHC filed an amended complaint, in which it made further allegations against NCCI. NCCI and Liberty Mutual answered the second complaint and renewed their motions for dismissal or for a summary judgment. On September 4, 1997, Liberty Mutual filed a counterclaim against BHC for the unpaid balance on the insurance premium. On May 22, 1998, BHC filed its second amended complaint, making class-action allegations against an additional 350 fictitiously named defendants. BHC contended, without explanation, that some of the defendants charged BHC and other class members rates that deviated from those approved by the insurance commissioner. The named defendants answered the complaint and renewed their motions for dismissal or for a summary judgment.[5] On December 15, 1999, BHC filed its third amended complaint, adding National Pool, Hartford, Wausau, Travelers, and 18 additional servicing-carriers as defendants. BHC alleged that the defendants had wrongly charged BHC rates that exceeded the rates approved by the insurance commissioner. Specifically, BHC contended that in 1993 and 1994, NCCI improperly increased its premium rates and altered the rating plan used to calculate premiums without securing approval from the insurance commissioner for the altered rating plan and the new rates. BHC sought reimbursement for the "false" or "unlawful" rates charged to it and other class members by the various servicing carriers. BHC made further allegations against the defendants, i.e., that they had engaged in tax evasion, that they had conspired to operate as unlicensed insurers, and that they had conspired to limit employers' access to the voluntary market. BHC alleged that NCCI was without authority to act as the administrator for Alabama's workers' compensation insurance plan and that National Pool was not "properly engaged *79 to play its role in Alabama's [workers' compensation insurance plan]." BHC also alleged that Alabama's entire workers' compensation plan had never been properly approved by the insurance commissioner and that it violated the Alabama Constitution. 
The defendants answered the complaint and filed motions for dismissal and for a summary judgment. On September 1, 2000, the trial court issued an order dismissing BHC's claims against all defendants except Liberty Mutual. The trial court, after noting that BHC's claims and allegations in this action had shifted numerous times, explained its rationale: "The last claimed actionable conduct in this case occurred in July of 1994. This lawsuit was originally filed on May 6, 1996. No fictitious parties were named therein. The first pleading naming fictitious parties was filed on February 4, 1997.[[6]] On August 25, 1998, the plaintiff undertook its first substitution for a fictitious party. On December 15, 1999, the plaintiff undertook to name by substitution all remaining insurance carriers who participated in the workers' compensation insurance industry in Alabama, regardless of their capacity. Inasmuch as the Statute of Limitations had expired before there was any claim [that] there might be fictitious parties in this lawsuit, it is quite clear there is nothing to which the complaint of December 15, 1999, can relate back. Accordingly, all claims for negligence and unjust enrichment are due to be and are hereby dismissed. "The motions to dismiss are treated as such and also as motions for summary judgment inasmuch as discovery [h]as been had and evidentiary submissions have been made in connection with the briefs and arguments. There is no claim that any named insurance carrier dealt directly with the plaintiff in this case. Further, the only carrier with which plaintiff had any contact was the defendant, Liberty Mutual Insurance Company. Accordingly, all claims for deceit and misrepresentation against all defendants except Liberty Mutual are due to be and are hereby dismissed. "The claims against defendants, NCCI and The National Workers' Compensation Reinsurance Pool, under the sixth and seventh causes of action [requests for declaratory relief against NCCI and National Pool] are due to be and are hereby dismissed for the reasons stated by the defendants in briefs. No claim is stated against any other defendant[;] therefore all other claims under the sixth and seventh causes of action are dismissed. "All claims against all parties added on August 25, 1998, and December 15, 1999, are therefore dismissed by this Order. The only claims [not] disposed of are those against National Council [on] Compensation Insurance, Inc. (NCCI), and Liberty Mutual Insurance Company, the original defendants in this action. The remaining claims against NCCI as originally stated and as amended are due to be and are hereby dismissed pursuant to the filed rate doctrine and exclusive jurisdiction doctrine as set forth in brief "The claims and counterclaims between plaintiff and defendant, Liberty Mutual Insurance Company, remain pending and are hereafter set for trial." *80 The trial court also dismissed all of BHC's class-action claims. On November 16, 2000, Liberty Mutual moved to dismiss its counterclaim against BHC, stating as its reason that BHC was insolvent and that it was no longer possible to collect the unpaid portion of the premium from BHC. On the same day, the trial court granted Liberty Mutual's motion to voluntarily dismiss its claim and then dismissed BHC's claims against Liberty Mutual. BHC now appeals the dismissal of its case. I. BHC contends that the trial court erred in dismissing its claim alleging unjust enrichment as being barred by the statute of limitations. 
It appears, however, that BHC has failed to preserve this argument for appellate review. In order to be considered on appeal, issues must be presented to the trial court and to the opposing parties at the trial level. "`The Oregon Court of Appeals has stated additional reasons for holding that an error not raised and preserved at the trial level cannot be considered on appeal: "`"[I]t is a necessary corollary of our adversary system in which issues are framed by the litigants and presented to a court; ... fairness to all parties requires a litigant to advance his contentions at a time when there is an opportunity to respond to them factually, if his opponent chooses to; ... the rule promotes efficient trial proceedings;... reversing for error not preserved permits the losing side to second-guess its tactical decisions after they do not produce the desired result; and ... there is something unseemly about telling a lower court it was wrong when it never was presented with the opportunity to be right. The principal rationale, however, is judicial economy. There are two components to judicial economy: (1) if the losing side can obtain an appellate reversal because of error not objected to, the parties and public are put to the expense of retrial that could have been avoided had an objection been made; and (2) if an issue had been raised in the trial court, it could have been resolved there, and the parties and public would be spared the expense of an appeal."'" Ex parte Elba Gen. Hosp., 828 So.2d 308, 314 (Ala.2001), quoting Cantu v. State, 660 So.2d 1026, 1031-32 (Ala.1995)(Maddox, J., concurring in part and dissenting in part), quoting in turn State v. Applegate, 39 Or. App. 17, 21, 591 P.2d 371, 373 (1979). BHC argues that the trial court erroneously applied a two-year statute of limitations to its claims rather than the six-year statute of limitations found in § 6-2-34, Ala.Code 1975. Although BHC argues this issue extensively in its briefs on appeal, it failed to argue this issue to the trial court, and it deprived the opposing parties of an opportunity to respond to it. This case was litigated for 4 years at the trial level and produced 13 volumes of record, along with 2 additional boxes of deposition testimony and other filings, yet an allusion to the applicability of a 6 year statute of limitations to BHC's unjust-enrichment claim appears in only one-half of a sentence in a postjudgment motion. In its "Motion to Reinstate and/or Motion to Clarify," filed on September 14, 2000, BHC argued to the trial court that its September 1, 2000, order was in error because, BHC said, it had properly brought its unjust-enrichment claim within the two-year limitations period. In furtherance of its argument, BHC stated on the last page of the three-page motion, "[a]ssuming there is a two year statute of *81 limitations for unjust enrichment and imposition of a constructive trust or an equitable lien, and not the six year statute specified in Alabama Code § 6-2-34(3), Plaintiff's claims against National Workers Compensation Reinsurance Pool were filed within two years of its last actionable conduct." This is the only reference appearing in the entire record to the applicability of a six-year statute of limitations, and BHC does not cite to this Court any other instance in which this argument was presented to the trial court. 
Instead, BHC argues extensively throughout the record that its claims were brought within a two-year limitations period.[7] It can hardly be said that BHC has presented this argument to the trial court and opposing parties so as to give them an opportunity to address this issue. Because BHC argues this issue for the first time on appeal, the trial court's dismissal of BHC's unjust-enrichment claim is due to be affirmed. II. BHC characterizes its primary issue on appeal as whether "a regulated entity may charge rates greater than the rates approved by the regulator."[8] BHC contends that NCCI wrongly increased insurance premium rates in 1993 and 1994 without proper approval from the insurance commissioner. Specifically, BHC argues that the increased rates in 1993 and 1994 were not approved by the insurance commissioner because, BHC says, the commissioner failed to comply with § 25-5-8(f)(2), Ala.Code 1975. Section 25-5-8(f)(2) states, in pertinent part: "The Commissioner of the Department of Insurance shall convene a public hearing with reasonable public notice for the purpose of considering public testimony and other evidence relevant to any filing prior to approval of any bureau-loss cost or rate filing related to workers' compensation insurance." BHC alleges that in 1993, NCCI altered one of the components of the experience-modification factor, the "Deductible Experience Rating Formula" ("DERF"), without prior approval from the insurance commissioner. BHC argues that this alteration resulted in increased premiums for employers. BHC further alleges that any purported approval by the insurance commissioner of the alteration in the DERF is void because the commissioner failed to hold a hearing on the increase as required by § 25-5-8(f)(2), Ala.Code 1975. In response, NCCI argues that BHC lacks standing to challenge any change to the DERF because the DERF affected only insureds who had "deductible" policies. NCCI contends, and BHC does not dispute, that BHC did not have a "deductible" policy; therefore, NCCI says, any change in the DERF had no effect on BHC.[9] In its amicus curiae brief, the Department of Insurance asserts that it did approve *82 the alteration of the DERF in 1993 and that it was not required by law to hold a hearing before it approved the change to the DERF. The Department of Insurance explains that under § 25-5-8(f)(2), Ala. Code 1975, it is required to hold public hearings only before approving an increase in "any bureau-loss cost" or "rate filing." According to the Department of Insurance, the DERF is neither a "bureau-loss cost" nor a "rate filing." Therefore, the Department of Insurance contends that it properly approved the 1993 alteration of the DERF without holding a public hearing and that NCCI properly increased the rates after that approval. BHC further alleges that in 1994 NCCI increased premium rates charged to employers in the residual market by 19%. BHC contends that the insurance commissioner denied NCCI's request for the 19% increase and that, therefore, NCCI wrongly charged BHC 19% over the rates actually approved by the insurance commissioner. NCCI responds that the insurance commissioner had, in fact, approved the increase. In support of its argument, NCCI points to documents in the record that tend to show that the Department of Insurance did approve the 19% increase in 1994. The Department of Insurance argues in its amicus curiae brief that it properly approved the 1994 increase in premium rates after a public hearing. 
The Department of Insurance strongly argues to this Court that the issues involved in this action come within its jurisdiction because of the technical questions raised and because expertise in insurance matters and in rate-setting is required to resolve these issues. We agree. The issues that have arisen in BHC's claim against NCCI should be addressed by the Department of Insurance under the doctrine of primary jurisdiction. "`The doctrine of primary jurisdiction, like the rule requiring exhaustion of administrative remedies, is concerned with promoting proper relationships between the courts and administrative agencies charged with particular regulatory duties. "Exhaustion" applies where a claim is cognizable in the first instance by an administrative agency alone; judicial interference is withheld until the administrative process has run its course. "Primary jurisdiction," on the other hand, applies where a claim is originally cognizable in the courts, and comes into play whenever enforcement of the claim requires the resolution of issues which, under a regulatory scheme, have been placed within the special competence of an administrative body; in such a case the judicial process is suspended pending referral of such issues to the administrative body for its views.' ". . . . "One of the aims of the doctrine is to insure uniformity and consistency in dealing with matters entrusted to an administrative body. Another factor which must be considered is whether referral to an agency is preferable because of its specialized knowledge or expertise in dealing with the matter in controversy. Still another is whether initial review of the controversy by the administrative body will either assist a court in its adjudicatory function or perhaps alleviate entirely the need for resort to judicial relief. This latter factor indicates it is preferable to obtain the views of the administrative body concerning the statutes or rules with which it must work and how those statutes or rules should be applied to the controversy at hand." Fraternal Order of Police, Strawberry Lodge # 40 v. Entrekin, 294 Ala. 201, 210, *83 314 So.2d 663, 671 (1975) (citations omitted). "The doctrine of primary jurisdiction implies that matters entrusted by the legislature to an administrative agency, ought first be considered by that agency. Thus when a controversy arises as to how an agency is conducting its affairs, a demand for corrective action first should be made to that agency...." Entrekin, 294 Ala. at 212, 314 So.2d at 673. Although the trial court dismissed BHC's claim against NCCI based in part on the doctrine of exclusive jurisdiction, BHC's claim was not cognizable in the first instance by an administrative agency alone, as Entrekin requires.[10] BHC's final amended complaint alleged that NCCI and the other defendants had charged rates in excess of the filed rate approved by the insurance commissioner. This Court has held that such a claim is cognizable in the first instance in the circuit court and that a party bringing such a claim is not required to first seek an administrative hearing. Ex parte Blue Cross & Blue Shield of Alabama, 582 So.2d 469 (Ala.1991). Therefore, because BHC's claim as stated in its complaint is cognizable in the first instance in the circuit court, the claim is barred by neither the doctrine of exclusive jurisdiction nor by the filed-rate doctrine.[11] See id. See also Allen v. State Farm Fire & Cas. 
Co., 59 F.Supp.2d 1217 (S.D.Ala.1999)(noting that although the plaintiffs state-law claim was barred by the doctrine of exclusive jurisdiction, by the doctrine of primary jurisdiction, and by the filed-rate doctrine, the outcome of the case may have been different if the plaintiff had claimed that the defendants had applied a rate in excess of that approved by the insurance commissioner); and Emperor Clock Co. v. AT & T Corp., 727 So.2d 41 (Ala.1998)(noting that because the plaintiff did not claim that the defendant had charged a rate in excess of the filed tariff, but instead claimed that the defendant had misrepresented the applicable rate to the plaintiff, the claim was barred by the filed-rate doctrine). Thus, although BHC's claim, as stated in its complaint, was cognizable in the first instance in the courts, the issues that have developed clearly come within the jurisdiction of the Department of Insurance. The issues whether BHC was affected by the altered DERF, whether the DERF is a bureau-loss cost or a rate filing, and whether the 19% increase was reflected in BHC's premium rates are questions that require specialized knowledge to answer. Furthermore, whether the insurance commissioner approved the 19% increase in premium rates after conducting a hearing is a matter concerning the operations of the Department of Insurance and should be addressed in the first instance by the commissioner. An administrative determination by the Department of Insurance of the issues now before us will ensure uniformity, will assist this Court, and may alleviate entirely the need for resort to judicial relief in this case. We agree with the trial court that the issues argued by BHC against NCCI are best addressed in the first instance by the commissioner, who has the expertise and *84 knowledge necessary to make findings concerning the operations of the Department of Insurance and to resolve the technical questions BHC raises. However, we do not agree with the trial court that the claims against NCCI are due to be dismissed on the basis of the filed-rate doctrine or the doctrine of exclusive jurisdiction. Because BHC's claims against NCCI were properly brought in the circuit court and because issues have arisen requiring resolution by the insurance commissioner, we apply the doctrine of primary jurisdiction. A court has several options for disposing of a case after invoking the doctrine of primary jurisdiction. "Primary jurisdiction may be invoked where a claim that is properly before the court nonetheless falls within the particular expertise of a government agency, such as the [Interstate Commerce Commission]. Under this doctrine, the court may retain jurisdiction, or it may dismiss the case without prejudice. Reiter v. Cooper, 507 U.S. 258, 268-69, 113 S.Ct. 1213, 1220, 122 L.Ed.2d 604 (1993). The court also has the option of staying the proceedings, retaining jurisdiction and referring the matter to the agency for an administrative ruling. Id. The option rests with the court, though; it need not refer the matter to the agency if it does not desire to do so. Id." Jones Truck Lines, Inc. v. Price Rubber Corp., 182 B.R. 901, 911 (M.D.Ala.1995). A court should not dismiss a case, however, when the dismissal would "unfairly disadvantag[e]" a party. Reiter v. Cooper, 507 U.S. 258, 268, 113 S.Ct. 1213, 122 L.Ed.2d 604 (1993). 
In Entrekin, supra, this Court noted that it is appropriate for the trial court to stay proceedings in a case and to retain jurisdiction pending agency review when the doctrine of primary jurisdiction is applicable. The Court further noted that a stay rather than a dismissal is especially appropriate where the claims of a party would otherwise be barred by a statute of limitations. Entrekin, 294 Ala. at 210, 314 So.2d at 671. Therefore, because the doctrine of primary jurisdiction applies to BHC's claims against NCCI and because a dismissal might unfairly disadvantage BHC, we vacate the trial court's judgment of dismissal as it pertains to BHC's claims that NCCI wrongly increased premium rates in 1993 and 1994 above those approved by the insurance commissioner. We remand this case to the trial court so that it may enter a stay pending the resolution of issues discussed in this portion of the opinion insofar as they pertain to NCCI. III. Finally, BHC contends that the trial court's dismissal of its claims against Liberty Mutual on November 16, 2000, was erroneous. Upon a closer reading of BHC's briefs on appeal, however, it appears that BHC is actually challenging the trial court's dismissal of Liberty Mutual's counterclaim against BHC. BHC's arguments against the dismissal are based on factual issues addressed exclusively in Liberty Mutual's counterclaim against BHC, and BHC attempts to contend in its reply brief that the trial court's order allowing Liberty Mutual's voluntary dismissal was defective. BHC cites this Court to no authority that would allow it to challenge the trial court's dismissal of Liberty Mutual's counterclaims against BHC. Liberty Mutual requested that the trial court dismiss its counterclaim against BHC because BHC was no longer a viable financial entity and because it would have been almost impossible for Liberty Mutual to collect the unpaid portion of the premium, even if it received a favorable ruling. The trial *85 court properly granted Liberty Mutual's motion to dismiss in accordance with Rule 41(a)(2) and Rule 41(c), Ala. R. Civ. P., and then dismissed BHC's action against Liberty Mutual. "`Where an appellant fails to cite any authority, we may affirm, for it is neither our duty nor [our] function to perform all of the legal research for an appellant.'" McLemore v. Fleming, 604 So.2d 353, 353 (Ala.1992), quoting Gibson v. Nix, 460 So.2d 1346, 1347 (Ala.Civ.App.1984). "Furthermore, we cannot, based on undelineated propositions, create legal arguments for the appellant." McLemore, 604 So.2d at 353. BHC has demonstrated no reason to disturb the trial court's dismissal of its claims against Liberty Mutual. Conclusion BHC failed to present its argument concerning a six-year statute of limitations to the trial court; that argument, therefore, was not preserved for appellate review. While we agree with the trial court that the Department of Insurance is the appropriate body to address the issues of BHC's claims against NCCI, we vacate the trial court's dismissal as to BHC's claims against NCCI alleging that NCCI illegally raised rates in 1993 and 1994 and direct the trial court to enter a stay pending resolution of those claims by the Department of Insurance. Finally, BHC has presented this Court with no reason to disturb the dismissal of its claims against Liberty Mutual. Accordingly, we affirm the trial court's judgment in part, we vacate the judgment in part, and we remand the case. AFFIRMED IN PART; VACATED IN PART; AND REMANDED. 
MOORE, C.J., and HOUSTON, LYONS, HARWOOD, WOODALL, and STUART, JJ., concur. JOHNSTONE, J., concurs in part and concurs specially in part. SEE, J., recuses himself. JOHNSTONE, Justice (concurring in part and concurring specially in part). I would not want for our rationale for our affirmance of the dismissal of the untimely unjust-enrichment claims to be misconstrued to mean that "only one-half of a sentence" citing a statute to a trial court cannot preserve error for review. The "one-half of a sentence" in this particular case is inadequate because the statute of limitations it invokes does not expressly apply to the unjust-enrichment claims. The statute cited by the plaintiff is § 6-2-34(3), Ala.Code 1975, which provides a six-year limitation period for filing "[a]ctions for the detention or conversion of personal property." How this statute of limitations would apply to unjust-enrichment claims would need some explaining. If the statute were expressly applicable and efficacious to the plaintiffs unjust-enrichment claims, we would not want to require a less concise explanation to preserve error. In all other respects, I concur in the main opinion. NOTES [1] The residual market is also known as "the assigned-risk market," "the involuntary market," or "the market of last resort." [2] It takes two years for an insurer to gain sufficient data on an employer's history of workers' compensation claims to establish an experience-modification factor. Therefore, BHC did not have an experience-modification-factor adjustment during the policy years when BHC's servicing carrier was Continental Casualty Insurance Company (1992-1994). [3] National Pool, despite its name, is a state-specific entity. The servicing carriers that make up National Pool share only in the pool results for Alabama. [4] The filed-rate doctrine provides that once a filed rate is approved by the appropriate governing regulatory agency, it is per se reasonable and is unassailable in judicial proceedings. Allen v. State Farm Fire & Cas. Co., 59 F.Supp.2d 1217, 1227 (S.D.Ala.1999). [5] The two independent insurance brokers and their employer settled with BHC; they are not parties to this appeal. [6] On February 4, 1997, BHC filed a motion with the trial court, seeking permission to add fictitiously named defendants. [7] Before the trial court's September 1, 2000, order, several of the defendants had submitted motions to dismiss, arguing that BHC's claims were barred by a two-year statute of limitations. BHC submitted a brief over 40 pages long in response. In that brief, BHC made several arguments as to why its claims were not barred by a two-year statute of limitations; however, none of BHC's arguments mentioned the applicability of a six-year statute of limitations or of § 6-2-34, Ala.Code 1975. [8] Although the bulk of the litigation after BHC's final amended complaint centered around the issue whether the Department of Insurance had properly approved Alabama's workers' compensation system, BHC does not argue this issue on appeal. [9] BHC alleged in its final amended complaint that the defendants failed to provide BHC with a policy containing "optional deductibles." [10] When an agency has exclusive jurisdiction over an issue, a plaintiff is required to exhaust administrative remedies with the agency before resorting to the courts. See South Cent. Bell Tel. Co. v. Holmes, 689 So.2d 786 (Ala. 1996); Talton Telecomm. Corp. v. Coleman, 665 So.2d 914 (Ala.1995); Mobile & Gulf R.R. v. Crocker, 455 So.2d 829 (Ala.1984). 
[11] Because the filed-rate doctrine prohibits collateral challenges to rates properly approved by the insurance commissioner, any such challenge raised in the courts is due to be dismissed. See Allen v. State Farm Fire & Cas. Co., 59 F.Supp.2d 1217, 1227-29 (S.D.Ala.1999).
null
minipile
NaturalLanguage
mit
null
MPs accused of breaking minimum wage law with interns Members of Parliament could be breaking minimum wage regulations by hiring young people to work as interns for nothing. The Inland Revenue is to crack down on the practice, saying parliamentarians risk a fine of more than £200 if they do not comply. The news will be an embarrassment to MPs who regularly hail the introduction of the national minimum wage for raising the income of people in their constituencies. Unpaid internships are offered by many MPs as a way for young people to get experience of political life. In return, the politicians get valuable assistance running their offices. But they may be breaking the law, even though the regulations surrounding who is entitled to the minimum wage are so complex they may not even realise they are doing so. A Revenue and Customs spokesman said new guidance would be circulated to MPs this month. Volunteers who provide their time and effort freely do not need to be paid the minimum wage, currently £5.35 for people aged 22 and over, £4.45 for 18- to 21-year-olds and £3.30 for 16- to 17-year-olds. But interns could be eligible if there is an obligation on them to do the work - for example to arrive at the Commons at a particular time each day - even if there is no written contract. A number of MPs have recently been advertising for interns, including Liberal Democrat environment spokesman Chris Huhne. Millionaire Mr Huhne wants an intern for "four or five days a week in the Westminster office and for a suggested minimum of three months." He emphasised he was following "best practice" guidance issued by the Lib-Dems' whips office and added: "I make it absolutely clear they (interns) are not expected to work regular hours."
null
minipile
NaturalLanguage
mit
null
Unless you are escaping to a desert island for the whole of December, Christmas shopping is pretty much unavoidable, but if your local shops just aren’t going to cut it this year (and you don’t want to rely on the post!), then why not use our guide to head out to the best shopping locations in the UK? Pre-Christmas sales are not uncommon now, so search around for some real bargains… After researching on the internet and speaking to a few people, La Rochelle sounded like just the sort of thing we were looking for. We found an article that recommended boat rides, gondolas (which weren’t really gondolas but the French equivalent!), Ile de Ré (Island of the King) and one particular restaurant by the name of Andrès. After reading the article I wanted to do everything; I knew that this really wasn’t possible, but still, it’s always nice to plan ahead… Hi, I'm Becky, a semi-nomadic traveller but otherwise the UK-based owner of Global Grasshopper – an award-winning blog and resource for independent travellers. I'm also joined by a team of self-confessed travel snobs and together we're embarking on a journey to unravel the secrets of the world's most unique, under-the-radar and beautiful places. Whether you are a backpacker, a flashpacker or just prefer to holiday away from the crowds, subscribe to our email post alerts for uplifting photography, guides & stories from our many collective journeys and inspiration for the road less travelled.
null
minipile
NaturalLanguage
mit
null
Calcium-mediated neurofilament protein degradation in rat optic nerve in vitro: activity and autolysis of calpain proenzyme. In this study, we examined calcium-mediated degradation of a neurofilament protein (NFP), and autolytic activation of calpain in Lewis rat optic nerve in vitro. After incubation with calcium, homogenized optic nerve samples were analysed by SDS-PAGE in association with ECL immunoblot techniques. 68 kD NFP, calpain, and calpastatin antibodies were used for identification of the respective proteins. The extent of calcium-mediated 68 kD NFP degradation compared to EGTA controls, served to quantify calpain activity, while the extent of calpain autolysis measured the activation of the enzyme. A progressive loss of 68 kD NFP was observed at 15 min (42.1%), 1 hr (52.7%) and 6 hr (73.4%) incubation periods compared to EGTA controls. The immunoreactive calpain bands showed progressive autolysis after 15 min (26.6%), 1 hr (31.4%) and 6 hr (43.4%) incubations. We also found degradation of low molecular weight isoforms of calpastatin (43 kD and 27 kD) in the presence of calcium compared to controls. These results indicate that calpain is present in optic nerve in its inactive form but when calcium is added, it undergoes autolysis and becomes active. Thus, active calpain is capable of degrading endogenous substrates (e.g. cytoskeletal and myelin proteins) and may promote the degeneration of optic nerve in optic neuritis.
null
minipile
NaturalLanguage
mit
null
Germline genetic variation in prostate susceptibility does not predict outcomes in the chemoprevention trials PCPT and SELECT. The development of prostate cancer can be influenced by genetic and environmental factors. Numerous germline SNPs influence prostate cancer susceptibility. The functional pathways in which these SNPs increase prostate cancer susceptibility are unknown. Finasteride is currently not being used routinely as a chemoprevention agent, but the long-term outcomes of the PCPT trial are awaited. The outcomes of the SELECT trial have not supported the use of chemoprevention in preventing prostate cancer. This study investigated whether germline risk SNPs could be used to predict outcomes in the PCPT and SELECT trials. Genotyping was performed in European men entered into the PCPT trial (n = 2434) and SELECT (n = 4885). Next generation genotyping was performed using Affymetrix® Eureka™ Genotyping protocols. Logistic regression models were used to test the association of risk scores and the outcomes in the PCPT and SELECT trials. Of the 100 SNPs, 98 were designed successfully, and genotyping was validated for samples genotyped on other platforms. A number of SNPs predicted for aggressive disease in both trials. Men with a higher polygenic score were more likely to develop prostate cancer in both trials, but the score did not predict for other outcomes in the trials. Men with a higher polygenic risk score are more likely to develop prostate cancer. There were no interactions between these germline risk SNPs and the chemoprevention agents in the SELECT and PCPT trials.
null
minipile
NaturalLanguage
mit
null
The Etowah County Sheriff's Office has a Fourth Amendment problem. About once a month, a marked sheriff's car shows up, unannounced and after dark, outside a family's home in Alabama. Uniformed officers walk to the family's door, in plain sight of every neighbor. They knock and demand to be let in. If the family refuses, the police threaten them with arrest. Once inside, the officers search the family's home – all without ever obtaining a warrant. These unannounced intrusions are an ongoing, regular practice of the Etowah County Sheriff's Office. That practice is the basis for a lawsuit filed today by the ACLU, the ACLU of Alabama, and the law firm Jaffe & Drennan challenging these random and suspicionless searches as unconstitutional – and unfounded – harassment. Why would law enforcement officers harass this family? When one member of the family was a child, he was found guilty of committing a sexual offense (an appeals court suggested the charges were bogus and that the child's lawyer was ineffective, but that's another story). Now, he must register with the state. Since his release, he has fulfilled every requirement under Alabama's Sex Offender Registration and Community Notification Act (RCNA). This includes registering in person at the Sheriff's office four times a year, which he has done without fail, and which he must continue to do for the rest of his life. The Department of Youth Services has found that this young man is at low risk of reoffending, and he has complied with every requirement placed upon him by the state, including court-mandated treatment. Yet, every month, the Sheriff's department goes far beyond Alabama law – not to mention the Constitution – with their frequent, random intrusions into the plaintiffs' home. Nothing in the RCNA or any other law gives the sheriff's department license to invade this family's home without a warrant, or to conduct these inspections without any reason to suspect that the plaintiffs have done something wrong. What's more, Alabama state law protects the anonymity of anyone registered under the RCNA, and yet conspicuous law enforcement intrusions clearly draw unwanted attention to this family. This is police abuse of power, plain and simple. And since the Sheriff's Department has announced on its website a policy of "random, monthly" visits to the home and work of everyone registered under the RCNA, we fear this may be happening over and over again to families throughout the county. The lawsuit we filed today seeks to stop the Sheriff's Department from continuing its unconstitutional searches of the plaintiffs' home and to end its broader policy of inspecting the homes of every registrant in the county without suspicion. It's time to call off the witch hunt. For more information about the case, click here .
null
minipile
NaturalLanguage
mit
null
Knowledge and understanding of disease process, risk factors and treatment modalities in patients with a recent TIA or minor ischemic stroke. Patients with acute stroke often have a striking lack of knowledge of causes, warning signs, and risk factors. Lack of knowledge may lead to inappropriate secondary prevention behavior. We investigated the knowledge of patients with a TIA or minor stroke about specific aspects of their disease 3 months after the event. Patients with a TIA or minor stroke who participated in a randomized controlled trial of the effect of health education by an individualized multimedia computer program (IMCP) were included. All patients received information about their disease from their treating neurologist and half of the patients received extra information through the IMCP. The patients' knowledge was tested after 3 months by means of a questionnaire that contained items on pathogenesis, warning signs, vascular diseases, risk factors, lifestyle and treatment. The highest possible score was 71 points. The 57 patients had a mean total score of 41.2 points (SD 10.4) of the maximum 71. Only 15 (26%) correctly identified the brain as the affected organ in stroke and TIA, and only 21 (37%) could give a correct description of a TIA or stroke. In contrast, 80-90% of the patients identified hypertension and/or obesity as vascular risk factors. Knowledge of various treatment modalities of hypertension, hypercholesterolemia and obesity was moderate to high (40-91% adequate responses). The vast majority of patients with TIA or stroke lack specific knowledge about their disease, but they do have a reasonable knowledge of general vascular risk factors and treatment. This suggests that counseling by neurologists of patients with a TIA or stroke can be improved.
null
minipile
NaturalLanguage
mit
null
Usefulness of Tc-99m-GSA scintigraphy for liver surgery. Postoperative mortality remains high after hepatectomy compared with other types of surgery in patients who have cirrhosis or chronic hepatitis. Although there are several useful perioperative indicators of liver dysfunction, no standard markers are available to predict postoperative liver failure in patients with hepatocellular carcinoma (HCC) undergoing hepatectomy. The best preoperative method for evaluating the hepatic functional reserve of patients with HCC remains unclear, but technetium-99m diethylenetriamine pentaacetic acid galactosyl human serum albumin ((99m)Tc-GSA) scintigraphy is a candidate. (99m)Tc-GSA is a liver scintigraphy agent that binds to the asialoglycoprotein receptor, and can be used to assess the functional hepatocyte mass and thus determine the hepatic functional reserve in various physiological and pathological states. The maximum removal rate of (99m) Tc-GSA (GSA-Rmax) calculated by using a radiopharmacokinetic model is correlated with the severity of liver disease. There is also a significant difference of GSA-Rmax between patients with chronic hepatitis and persons with normal liver function. Regeneration of the remnant liver and recurrence of hepatitis C virus infection in the donor organ after living donor liver transplantation have also been investigated by (99m)Tc-GSA scintigraphy. This review discusses the usefulness of (99m)Tc-GSA scintigraphy for liver surgery.
null
minipile
NaturalLanguage
mit
null
1. Field of the Invention This invention relates to an electro-optical device in an image display adapted to drive an electro-optic material layer which uses plasma to select pixels. 2. Description of the Related Art As the means for providing, for example, a liquid crystal display with high resolution and high contrast, there is generally provided active elements, such as transistors, etc. to drive every display pixel (which is referred to as an active matrix addressing system). In this case, however, since it is necessary to provide a large number of semiconductor elements such as thin film transistors, the problem of yield results particularly when the display area is enlarged, giving rise to the great problem that the cost is necessarily increased. Thus, as the means for solving this, Buzak et al. have proposed in the Japanese Laid Open Application No. 217396/89 publication a method utilizing discharge plasma in place of semiconductor elements such as MOS transistors or thin film transistors, etc. as an active element. The configuration of an image display device for driving a liquid crystal by making use of discharge plasma will be briefly described below. This image display device is called a Plasma Addressed Liquid Crystal display device (PALC). As shown in FIG. 6 of the present drawings, a liquid crystal layer 101 serving as an electro-optic material layer and plasma chambers 102 are adjacently arranged on an opposite side of a thin dielectric sheet 103 comprised of glass, etc. The plasma chambers 102 are constituted by forming a plurality of grooves 105 in parallel to each other in a glass substrate or base plate 104. These chambers are filled with an ionizable gas. Further, pairs of electrodes 106 and 107 are provided in the grooves 105 in parallel to each other. These electrodes 106 and 107 function as an anode and a cathode for ionizing the gas within the plasma chambers 102 to generate a discharge plasma. The liquid crystal portion of the display has the liquid crystal layer 101 held between the dielectric sheet 103 and a transparent base plate 108. On the surface of the transparent base plate 108 at the liquid crystal layer 101 side are formed transparent electrodes 109. These transparent electrodes 109 are perpendicular to the plasma chambers 102 constituted by the grooves 105. The locations where the transparent electrodes 109 and the plasma chambers 102 intersect with each other correspond to respective pixels. In the above-mentioned image display device, by switching and scanning the plasma chambers 102 in sequence where a plasma discharge is to be carried out, and applying signal voltages to the transparent electrodes 109 on the liquid crystal layer 101 side in synchronism with the switching scan operation, these signal voltages are held by respective pixels. The liquid crystal layer 101 is thus driven. Accordingly, the grooves 105, i.e., plasma chambers 102 respectively correspond to one scanning line, and the discharge region is divided every scanning unit. In image display devices utilizing discharge plasma as described above, an enlarged display area is more easily realized than larger areas utilizing semiconductor elements, but various problems arise in putting such a device into practice. For example, forming the grooves 105 which constitute the plasma chambers 102 on the transparent glass substrate 104 raises considerable manufacturing problems. In particular, it is extremely difficult to form such grooves at a high density. 
Further, the electrodes 106 and 107, which generate the discharge, must be formed in the grooves 105. However, the etching process that forms the electrodes is troublesome, and it is difficult to maintain the spacing between the electrodes 106 and 107 accurately.
null
minipile
NaturalLanguage
mit
null
Q: LinearLayout space distribution

I have a complex layout situation in which a horizontal LinearLayout holds two other LinearLayouts. The content for those layouts is dynamic, can be any kind of views, and is generated at runtime from different independent sources. I want to display both of them as long as there is enough space, and limit them to 50% of the available space each otherwise. So I want those child LinearLayouts to have layout_width="wrap_content" when there is enough space, and layout_weight="0.5" when there isn't. This means that the space distribution could be 10-90, 25-75, 60-40; it would only be 50-50 when there isn't enough space to show the entire content of both views. So far I haven't found a way to do this from XML, so I'm doing it from code. My question is: can I achieve what I want using only XML attributes? Will a different kind of layout be able to do it? Here is my current layout XML:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:gravity="center_vertical"
    android:orientation="horizontal"
    android:padding="2dp" >

    <LinearLayout
        android:id="@+id/title_frame"
        android:layout_width="wrap_content"
        android:layout_height="48dp"/>

    <LinearLayout
        android:id="@+id/options_frame"
        android:layout_width="wrap_content"
        android:layout_height="48dp"/>

</LinearLayout>

A: It appears this cannot be achieved using only XML attributes.
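For reference, here is a minimal sketch of the "from code" approach the question alludes to: measure both children at their wrap_content size and fall back to a 50/50 weight split only when the combined content no longer fits. The function name, the use of post {}, and the Kotlin language are assumptions of this example, not details from the original question.

```kotlin
import android.view.View
import android.widget.LinearLayout

// Sketch: cap the two dynamically-filled children at 50% of the parent's width
// only when their combined wrap_content widths exceed the space available.
fun applyFiftyFiftyCap(container: LinearLayout,
                       titleFrame: LinearLayout,
                       optionsFrame: LinearLayout) {
    // post {} runs after layout, so container.width is known at this point.
    container.post {
        val available = container.width - container.paddingLeft - container.paddingRight
        val unspecified = View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED)

        // Ask each child how wide it would like to be with wrap_content.
        titleFrame.measure(unspecified, unspecified)
        optionsFrame.measure(unspecified, unspecified)
        val needed = titleFrame.measuredWidth + optionsFrame.measuredWidth

        if (needed > available) {
            // Not enough room: width 0 plus weight 0.5 makes the LinearLayout
            // distribute the available space equally between the two children.
            for (child in listOf(titleFrame, optionsFrame)) {
                child.layoutParams = LinearLayout.LayoutParams(
                    0, child.layoutParams.height, 0.5f
                )
            }
            container.requestLayout()
        }
        // Otherwise both children keep wrap_content, so the split can stay
        // 10-90, 25-75, 60-40, etc., exactly as described in the question.
    }
}
```

If the children's content changes later, the check would have to be re-run (resetting the widths to wrap_content first), which is one reason an XML-only solution would have been preferable.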
null
minipile
NaturalLanguage
mit
null
In the Philippines: IFAD, Government and partners review on-going projects and prepare future loan and grant programmes

For the past 7 years the IFAD country team in the Philippines has conducted annual Country Programme Reviews, known as ACPoRs. These meetings bring together loans and grants and help to synergise the various initiatives and strengthen their impact. Quarterly meetings then monitor the progress of the action plan.

The International Fund for Agricultural Development-Philippines (IFAD-PH) is currently conducting its 7th Annual Country Programme Review (ACPoR), hosted by the Integrated Natural Resources and Environmental Management Project (INREMP) at Bohol Plaza Resort, Tagbilaran City, Bohol, Philippines. The activity involves six (6) grant, three (3) loan and two (2) upcoming Projects/Programmes (Converge and Fishcoral) in the Philippines. The session will last from 20 to 23 January 2015 with the theme “Leveraging and Scaling Up for Strategic Rural Transformation.”

The activity aims to report on the achievements of the Philippines Country Programme in 2014, the result of the 2014 Philippines Country Strategic Operations Programme (PH-COSOP) review, and the performance of both loan and grant projects in the country; assess how the Country Programme grant and loan projects have contributed to the achievement of the strategic objectives of the PH-COSOP and to the sectoral outcomes of the Philippines Development Plan (PDP); identify the distinct/comparative advantages of the IFAD Country Programme and projects for leveraging and for scaling up; identify the challenges, gaps and practical solutions in implementing the country programme activities and projects, both ongoing and upcoming; and prepare an action plan for both the country programme and the projects for implementation in 2015.

In his remarks, Mr. Benoit Thierry, the new Country Programme Manager for the Philippines, congratulated the teams for their dynamism and challenged the various programmes and projects to go beyond their borders in contributing to IFAD’s mandate of supporting government policies for poverty alleviation, and to revive the active yet decreasing Philippines portfolio. Impact on poverty, which remains an issue in rural areas of the country, and a focus on smallholders will remain the key drivers of the IFAD country programme.

The first day highlighted the comparative advantages of IFAD, which the projects and programmes appreciated: IFAD’s flexibility in terms of programming, co-financing with other financing institutions, strong knowledge management and knowledge sharing, commitment to providing capacity building and technical backstopping, and its multi-dimensional partnerships. Likewise, Mr. Thierry further emphasized the need for clear, achievable objectives for 2015 and stressed that IFAD is very interested in how the projects make an impact on rural families and contribute to the PH-COSOP.

On top of the two (2) new projects, already designed and to be negotiated before end-March for IFAD approval, a new project and a new COSOP will be conceptualised and designed by the end of 2015.
null
minipile
NaturalLanguage
mit
null
Buy Cheap Foreclosed Homes for Sale in Hall County, GA If you've ever wondered how to get the best deals on Hall County foreclosed homes, you've found the answer here. We have the most comprehensive listings of cheap Hall County foreclosure houses available, including apartments, condos, REO properties and all sort of real estate. Why pay more when you can have it all for less? Save Big today buying a foreclosed property in Hall County, GA. BUY YOUR NEXT HOME FOR LESS! FIND INCREDIBLE DEALS AND SAVE BIG TODAY!
null
minipile
NaturalLanguage
mit
null
Sperm mitochondrial DNA 15bp deletion of cytochrome c oxidase subunit III is significantly associated with human male infertility in Pakistan. To find the genetic association of a sperm mitochondrial deoxyribonucleic acid cytochrome c oxidase III subunit 15bp deletion with male infertility in Pakistan. The case-control study was conducted from July 2011 to December 2013, and comprised semen samples that were divided into two main groups; the control group had normozoospermic patients while the other group had infertile subjects. The infertile group was sub-divided into four groups on the basis of semen analysis. Deoxyribonucleic acid was extracted using a modified organic extraction method and amplified by polymerase chain reaction with cytochrome c oxidase III-specific primers. The fragments were separated by agarose gel electrophoresis: a 135bp wild fragment and a 120bp deleted one. Data were analysed using SPSS 22. Of the 194 samples, 44 (22.6%) were controls, and 150 (77.3%) were infertile. The infertile group sub-division was oligozoospermic 20 (13.3%), asthenozoospermic 36 (24%), oligoasthenoteratozoospermic 88 (58.6%) and necrozoospermic 6 (4%). Polymerase chain reaction amplification of the control group revealed wild 4 (9.09%), deleted 13 (29.55%) and hybrid 27 (61.36%). The findings in the four infertile sub-groups were: deleted 6 (30%) and hybrid 14 (70%) in oligozoospermic, deleted 12 (33.33%) and hybrid 24 (66.66%) in asthenozoospermic, wild 2 (2.27%), deleted 41 (46.59%) and hybrid 45 (51.14%) in oligoasthenoteratozoospermic, and wild 1 (16.66%) and hybrid 5 (83.33%) in the necrozoospermic group. There was a significant association of the cytochrome c oxidase III 15bp deletion with human male infertility (p=0.033). There was a higher frequency of mutations in the infertile groups compared to the control group.
null
minipile
NaturalLanguage
mit
null
Identification and nucleotide sequence of Rhizobium meliloti insertion sequence ISRm6, a small transposable element that belongs to the IS3 family. The insertion sequence ISRm6 is a small transposable element identified in Rhizobium meliloti strain GR4 by sequence analysis. Two copies of this IS element were found in strain GR4, one of them is linked to the nfe genes located on plasmid pRmeGR4b. ISRm6 seems to be widespread in R. meliloti. Data suggest that ISRm6 is active in transposition at an estimated frequency of 2 x 10(-5) per generation per cell in strain GR4. This 1269-bp element carries 27/26-bp terminal imperfect inverted repeats with six mismatches and a direct target site duplication of 4 bp. The IR terminate with the dinucleotide 5'-TG as all the members of the IS3 family. In addition, as other IS belonging to the IS3 family, ISRm6 carries two open reading frames (ORFA and ORFB) with a characteristic translational frame-shifting window in the overlapping region. Furthermore, ISRm6 putative transposase contains the triad of amino acids called DDE motif. Comparison of the ISRm6 DNA sequence and the putative proteins encoded with sequences derived from the EMBL, GenBank, PIR and Swissprot databases showed significant similarity to IS that belongs to the IS3 family with a highest homology to a subclass containing IS476 from Xanthomonas campestris, IS407 from Burkholderia cepacia, and ISR1 from Rhizobium lupini.
null
minipile
NaturalLanguage
mit
null
An exclusion from a failed bank’s D&O liability policy for any claim brought by the insured bank—or any receiver of the bank—barred coverage for a claim by the Federal Deposit Insurance Corporation (FDIC), as receiver of the bank, according to a summary judgment ruling by the United States District Court for the Eastern District of California. Hawker v. BancInsure, Inc., 2014 WL 1366201 (E.D. Cal. Apr. 7, 2014). The FDIC sued the bank’s former officers for alleged negligence and breach of fiduciary duty. The insurer denied coverage for the FDIC lawsuit under the policy’s Insured v. Insured exclusion, which barred coverage for any claim “by or on behalf of, or at the behest of, any other insured person, the company, or any successor, trustee, assignee or receiver of the company.” The FDIC made numerous arguments that this exclusion did not apply to its lawsuit. The FDIC first argued that “receiver” must refer to a court-appointed receiver, because some dictionary definitions of the term reference appointment by courts. The district court rejected that argument, finding that the Black’s Law Dictionary definition—which does not limit the term “receiver” to those appointed by courts—is representative of the “ordinary and popular” meaning of the term. The court concluded that the FDIC meets the definition of “receiver” as used in the exclusion. The FDIC also contended that the fact that the insurer offered a separate regulatory exclusion that specifically barred claims by the FDIC in any capacity meant that the Insured v. Insured exclusion should not apply to the FDIC, lest the regulatory exclusion be rendered meaningless. The court rejected this argument as well, finding that any overlap between the exclusions did not negate the reach of the Insured v. Insured exclusion to claims by the FDIC as a receiver where the broader regulatory exclusion would also exclude claims by the FDIC as a regulator. The FDIC further asserted that, based on the purpose of the Insured v. Insured exclusion to prevent collusive lawsuits, and the fact that the FDIC’s claims clearly are not collusive, the application of the exclusion defied “reasonable expectations.” The court rejected this argument, noting that it is flawed because it would apply to any receiver in any context and therefore cannot be conclusive if the term “receiver” is to be given meaning within the exclusion, as it must under California law. The court observed in addition that claims by the FDIC against insureds who are also creditors of the failed bank may well be collusive because “the directors and officers benefit by having their uninsured investment and loans paid from the insurance proceeds.” Finally, the court examined proffered extrinsic evidence concerning the alleged intent of the parties during the underwriting process for the policy, but concluded that the emails and deposition testimony did not render the exclusion reasonably susceptible to the FDIC’s interpretation. Because the FDIC acted as a receiver within the meaning of the Insured v. Insured exclusion, its claim against the former bank officers was excluded from coverage. SIGNAL Group (formerly McBee Strategic Consulting, LLC) is a wholly owned subsidiary of Wiley Rein. SIGNAL is a total solutions provider—advocacy, strategic communications, research, and digital media—for clients seeking to engage the federal government to achieve competitive advantage, influence public policy, establish new markets, and secure public capital.
null
minipile
NaturalLanguage
mit
null
---
author:
- 'F. Götze$^1$ and A. Yu. Zaitsev$^2$'
title: |
  Multidimensional Hungarian construction\
  for vectors with almost Gaussian\
  smooth distributions
---

[**Abstract:**]{} A multidimensional version of the results of Komlós, Major and Tusnády for sums of independent random vectors with finite exponential moments is obtained in the particular case where the summands have smooth distributions which are close to Gaussian ones. The bounds obtained reflect this closeness. Furthermore, the results provide sufficient conditions for the existence of i.i.d. vectors $X_1, X_2,\dots$ with given distributions and corresponding i.i.d. Gaussian vectors $Y_1, Y_2,\dots$ such that, for given small $\e$,
$${\P\Big\{\4{\limsup\limits_{n\to\infty} \ffrac1{\log n}\Bigl|\,\sum\limits_{j=1}^n X_j- \sum\limits_{j=1}^n Y_j\,\Bigr|}\le \e\Big\}=1}.$$

[**Keywords and phrases:**]{} Multidimensional invariance principle, strong approximation, sums of independent random vectors, Central Limit Theorem.

------------------------------------------------------------------------

Introduction {#s1}
============

The paper is devoted to an improvement of a multidimensional version of strong approximation results of Komlós, Major and Tusnády (KMT) for sums of independent random vectors with finite exponential moments and with smooth distributions which are close to Gaussian ones.

Let ${\cal F}_d$ be the set of all $d$-dimensional probability distributions defined on the $\si$-algebra $\Bd$ of Borel subsets of $\Rd$. By $\wh F(t)$, $t\in\Rd$, we denote the characteristic function of a distribution ${F\in\Fd}$. The product of measures is understood as their convolution, that is, ${F\4G=F*G}$.
The distribution and the corresponding covariance operator of a random vector $\xi$ will be denoted by $\L(\xi)$ and $\cov \xi$ (or $\cov F$, if ${F=\L(\xi)}$). The symbol ${\bf I}_d$ will be used for the identity operator in ${\bf R}^d$. For $b>0$ we denote $\log^*b=\max\,\bgl\{1,\,\log b\bgr\}$. Writing ${z\in\Rd}$ (resp. $\Cd)$, we shall use the representation ${z=(z_1,\dots,z_d)=z_1\4e_1+\dots+z_d\4e_d}$, where ${z_j\in{\bf R}^1}$ (resp. ${\bf C}^1)$ and the $e_j$ are the standard orthonormal vectors. The scalar product is denoted by ${\langle x,y\rangle=x_1\4\ov y_1+\dots+x_d\4\ov y_d}$. We shall use the Euclidean norm $\norm z=\langle z,z\rangle\ssqrt$ and the maximum norm ${|z|=\max\limits_{1\le j\le k}\,|z_j|}$. The symbols $c,c_1,c_2,\dots$ will be used for absolute positive constants. The letter $c$ may denote different constants when we do not need to fix their numerical values. The ends of proofs will be denoted by $\square$. Let us consider the definition and some useful properties of classes of distributions ${\cal A}_d(\tau)\subset\Fd$, $\tau\ge0$, introduced in Zaitsev (1986), see as well Zaitsev (1995, 1996, 1998a). The class ${\cal A}_d(\tau)$ (with a fixed $\tau\ge0$) consists of distributions $F\in\Fd$ for which the function $$\p(z)=\p(F,z)=\log\int_{{\bf R}^d}e^{\8z,x\9}F\{dx\}\qquad (\p(0)=0)$$ is defined and analytic for $\norm z \tau<1$, $z\in {{\bf C}\2}^d$, and $$\bgl|d_ud_v^{\22}\,\p(z)\bgr|\4\le\|u\|\4\tau\,\<{\bf D}\,v,v\> \qquad \hbox{for all}\ \,u,v\in {\bf R}^d \ \,\hbox{and} \ \,\norm z \tau<1,$$ where ${\bf D}=\cov F$, and the derivative $d_u\p$ is given by $$d_u\p(z)=\lim_{\be\to 0}\,\ffrac{\p(z+\be\4 u)-\p(z)}\be\,.$$ It is easy to see that $\t_1<\t_2$ implies ${\AT{\t_1}\subset\AT{\t_2}}$. Moreover, the class $\A$ is closed with respect to convolution: if $F_1,F_2\in\A$, then ${F_1\4F_2\in\A}$. The class $\AT0$ coincides with the class of all Gaussian distributions in $\Rd$. The following inequality can be considered as an estimate of the stability of this characterization: if ${F\in\A}$, $\t>0$, then $$\pi\bgl(F,\,\Phi(F)\bgr)\le c\4d^2\t\,\log^*(\t\me), \eqno(1.1)$$ where $\pi(\4\cdot\4,\4\cdot\4)$ is the Prokhorov distance and $\Phi(F)$ denotes the Gaussian distribution whose mean and covariance operator are the same as those of $F$. Moreover, for all $X\in\Bd$ and all $\la>0$, we have $$\begin{aligned} F\big\{X\big\}&\le&\hbox{\rlap{\hskip7.25cm(1.2)}} \Phi(F)\big\{X^\la\big\} +c\4d^2\exp\Big(-\ffrac\la{c\4d^2\4\t}\Big),\\ \Phi(F)\big\{X\big\}&\le&\hbox{\rlap{\hskip7.25cm(1.3)}} F\big\{X^\la\big\} +c\4d^2\exp\Big(-\ffrac\la{c\4d^2\4\t}\Big),\end{aligned}$$ where ${X^\la=\bgl\{y\in\Rd:\inf\limits_{x\in X}\,\nnnorm{x-y} <\la\bgr\}}$ is the $\la$-neighborhood of the set $X$, see Zaitsev (1986). The classes $\A$ are closely connected with other natural classes of multidimensional distributions. In particular, by the definition of $\A$, any distribution $\L(\xi)$ from $\A$ has finite exponential moments $\E e^{\8h,\xi\9}$, for $\htau<1$. This leads to exponential estimates for the tails of distributions (see, e.g., Lemma \[3.3\] below). On the other hand, if $\E e^{\8h,\xi\9}<\infty$, for ${h\in A\subset\Rd}$, where $A$ is a neighborhood of zero, then $F=\L(\xi)\in\AT{\t(F)}$ with some $\t(F)$ depending on $F$ only. Throughout we assume that $\t\ge0$ and $\xi_1,\xi_2,\dots$ are random vectors with given distributions ${\L(\xi_k)\in \A}$ such that ${\E\xi_k=0}$, $\cov\xi_k={\bf I}_d$, ${k=1,2,\dots}$. 
The problem is to construct, for a given $n$, $1\le n\le\infty$, on a probability space a sequence of independent random vectors $ X_1,\dots, X_n$ and a sequence of i.i.d.  Gaussian random vectors $ Y_1,\dots, Y_n$ with $\L(X_k)=\L(\xi_k)$, $ \E Y_k=0$, $\cov Y_k={\bf I}_d$, $k=1,\dots,n$, such that, with large probability, $$\DE(n)=\max_{1\le r\le n} \,\Bigl|\,\sum\limits_{k=1}^r X_k-\sum\limits_{k=1}^r Y_k\,\Bigr|$$ is as small as possible. The aim of the paper is to provide sufficient conditions for the following Assertion A: [**Assertion A.**]{} [*There exist absolute positive constants $c_1$, $c_2$ and $c_3$ such that, for ${\t\4d^{3/2}\le c_1}$, there exists a construction with $$\E\exp\Bigl(\ffrac{c_2\,\DE(n)} {d^{3/2}\4\t}\Bigr) \le \exp\bgl(c_3\4\log^*d \,\log^*n\bgr).\eqno(1.4)$$*]{} Using the exponential Chebyshev inequality, we see that  (1.4) implies $${}\P\bgl\{\4c_2\,\DE(n)\ge \t\4d^{3/2}\bgl(c_3\,\log^*d\,\log^*n +x\bgr)\4\bgr\} \le e^{-x},\qquad x\ge0. \eqno(1.5)$$ Therefore, Assertion A can be considered as a generalization of the classical result of KMT (1975, 1976). Assertion A provides a supplement to an improvement of a multidimensional KMT-type result of Einmahl (1989) presented by Zaitsev (1995, 1998a) which differs from Assertion A by the restriction $\t\ge1$ and by another explicit power-type dependence of the constants on the dimension $d$. In a particular case, when $d=1$ and all summands have a common variance, the result of Zaitsev is equivalent to the main result of Sakhanenko (1984), who extended the KMT construction to the case of non-identically distributed summands and stated the dependence of constants on the distributions of the summands belonging to a subclass of $\AD\t1$. The main difference between Assertion A and the aforementioned results consists in the fact that in Assertion A we consider “small” $\t$, $0\le\t\le c_1\4d^{-3/2}$. In previous results the constants are separated from zero by quantities which are larger than some absolute constants. In KMT (1975, 1976) the dependence of the constants on the distributions is not specified. From the conditions $( 1)$ and $( 4)$ in Sakhanenko (1984, Section 1), it follows that $\Var \xi_k\le\la^{-2}$ ($\la\me$ plays in Sakhanenko’s paper the role of $\t$) and, if $\Var \xi_k=1$, then $\la\me\ge1$. This corresponds to the restrictions $\a\me\ge2$ in Einmahl (1989, conditions (3.6) and (4.3)) and $\t\ge1$ in Zaitsev (1995, 1998a, Theorem 1). Note that in Assertion A we do not require that the distributions $\L(\xi_k)$ are identical but we assume that they have the same covariance operators, cf.  Einmahl (1989) and Zaitsev (1995, 1998a). A generalization of the results of Zaitsev (1995, 1998a) and of the present paper to the case of non-identical covariance operators appeared recently in the preprint Zaitsev (1998b). According to (1.1)–(1.3), the condition ${\L(\xi_k)\in \A}$ with small $\t$ means that $\L(\xi_k)$ are close to the corresponding Gaussian laws. It is easy to see that Assertion A becomes stronger for small $\t$ (see as well Theorem \[1.4\] below). Passing to the limit as $\t\to0$, we obtain a spectrum of statements with the trivial limiting case: if $\t=0$ (and, hence,  $\L(\xi_k)$ are Gaussian) we can take $X_k=Y_k$ and ${\DE(n)=0}$. We show that [*Assertion [A]{} is valid under some additional smoothness-type restrictions on $\L(\xi_k)$*]{}. The question about the necessity of these conditions remains open. 
The case $\t\ge1$ considered by Zaitsev (1995, 1998a, Theorem 1) does not need conditions of such kind. The formulation of our main result—Theorem \[2.1\]—includes some additional notation. In order to show that the conditions of Theorem \[2.1\] can be verified in some concrete simple situations, we consider at first three particular applications—Theorems \[1.1\], \[1.2\] and \[1.3\]. \[1.1\] Assume that the distributions  ${\L(\xi_k)\in\A}$ can be represented in the form $$\L(\xi_k)=H_k\4G,\qquad k=1,\dots,n,$$ where $G$ is a Gaussian distribution with covariance operator $\cov G=b^2\,{\bf I}_d$ with $b^2$ satisfying ${b^2\ge 2^{10}\,\t^2\4d^3\,\log^*\ffrac1\t}$. Then Assertion [A]{} is valid. The following example deals with a non-convolution family of distributions approximating a Gaussian distribution for small $\t$. \[1.2\] Let $\eta$ be a random vector with an absolutely continuous distribution and density $$p_\t(x)=\ffrac{\big(4+\t^2\nnnorm x^2\big) \,\exp\big(-\nnnorm x^2\!/2\big)} {(2\4\pi)^{d/2}\4(4+\t^2\4d)},\qquad x\in\Rd. \eqno(1.6)$$ Assume that  ${\L(\xi_k)=\L\big(\eta/\gamma\big)}$, $k=1,\dots,n$, where $$\gamma^2=\ffrac{\big(4+\t^2\4(d+2)\big)} {(4+\t^2\4d)},\qquad \gamma>0. \eqno(1.7)$$ Then Assertion [A]{} is valid. The proof of Theorem \[1.2\] can be apparently extended to the distributions with some more generale densities of type $P(\t^2\nnnorm x^2)\, \,\exp\big(-c\,\nnnorm x^2\big)$, where $P(\cdt)$ is a suitable polynomial. \[1.3\] Assume that a random vector $\zeta$ satisfies the relations $$\E\zeta=0,\qquad \P\bgl\{\nnnorm \zeta\le b_1\bgr\}=1, \qquad H\=\L(\zeta)\in\AT{b_2} \eqno(1.8)$$ and admits a differentiable density $p(\cdt)$ such that $$\sup_{x\in\Rd}\,\bgl|d_u\,p(x)\bgr|\le b_3\,\nnnorm u,\qquad \hbox{for all}\quad u\in\Rd, \eqno(1.9)$$ with some positive $b_1,\,b_2$ and $b_3$. Let $ \zeta_1,\zeta_2,\dots$ be independent copies of $\zeta$. Write $$\t=b_2\4m\msqrt, \eqno(1.10)$$ where $m$ is a positive integer. Assume that the distributions  ${\L(\xi_k)}$ can be represented in the form $$\L(\xi_k)=\YY L{}k\4P,\qquad k=1,\dots,n,\nopagebreak \eqno(1.11)$$ where $$\YY L{}k\in\A\quad\hbox{and}\quad P=\L\bgl(\bgl(\zeta_1+\dots+\zeta_m\bgr)\big/\sqrt m\bgr). \eqno(1.12)$$ Then there exist a positive $b_4$ depending on $H$ only and such that $m\ge b_4$ implies Assertion [A]{}. \[r1.1\] If all the distributions $\YY L{}k$ are concentrated at zero, then the statement of Theorem [\[1.3\]]{} $($for $\t=b\4m\msqrt$ with some $b=b(H))$ can be derived from the main results of [KMT]{} [(1975, 1976)]{} $($for $d=1)$ and of [Zaitsev]{} [(1995, 1998a)]{} $($for $d\ge1).$ A consequence of Assertion A is given in Theorem \[1.4\] below. \[1.4\] Assume that $\xi,\, \xi_1,\xi_2,\dots,$ are i.i.d. random vectors with a common distribution ${\L(\xi)\in \A}$. Let Assertion [A]{} be satisfied for  $\xi_1,\dots,\xi_{n}$ for all $n$ with some $c_1$, $c_2$ and $c_3$ independent of $n$. Suppose that $\t\4d^{3/2}\le c_1$. Then there exist a construction such that $${}\P\Big\{\4\limsup_{n\to\infty} \ffrac1{\log n}\Bigl|\,\sum\limits_{j=1}^n X_j- \sum\limits_{j=1}^n Y_j\,\Bigr|\le c_4\,\t\4d^{3/2}\log^*d\,\Big\}=1 \eqno(1.13)$$ with some constant $c_4=c_4(c_2,c_3)$. From a result of Bártfai (1966) it follows that the rate $O(\log n)$ in (1.13) is the best possible if  $\L(\xi)$ is non-Gaussian. In the case of distributions with finite exponential moments this rate was established by Zaitsev (1995, 1998a, Corollary 1). 
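Before proceeding, we record a routine check of the normalization in Theorem \[1.2\] (an elementary verification, sketched here for completeness; it uses only (1.6) and (1.7)). Writing $\varphi(x)=(2\pi)^{-d/2}\exp\bigl(-\|x\|^2/2\bigr)$ for the standard Gaussian density and using $\int\|x\|^2\varphi(x)\,dx=d$, we get
$$\int_{\Rd} p_\t(x)\,dx=\frac{4+\t^2 d}{4+\t^2 d}=1,$$
so (1.6) is indeed a probability density. Moreover, $\eta$ is symmetric, whence $\E\eta=0$ and $\cov\eta$ is diagonal, and, since $\int\|x\|^2x_1^2\,\varphi(x)\,dx=3+(d-1)=d+2$,
$$\E\eta_1^2=\frac{4+\t^2(d+2)}{4+\t^2 d}=\gamma^2,$$
so that $\xi_k=\eta/\gamma$ satisfies $\E\xi_k=0$ and $\cov\xi_k={\bf I}_d$, as assumed throughout.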
Theorems \[1.1\]–\[1.3\] and \[2.1\] provide examples of smooth distributions which are close to Gaussian ones and for which the constants corresponding to this rate are arbitrarily small. The existence of such examples has been already mentioned in the one-dimensional case, e.g., by Major (1978, p. 498). The paper is organized as follows. In Section \[s2\] we formulate Theorem \[2.1\]. To this end we define at first a class of distributions $\AV \t\rho d$ used in Theorem \[2.1\]. The definition of this class is given in terms of smoothness conditions on the so-called conjugate distributions. Then we describe a multidimensional version of the KMT dyadic scheme, cf.  Einmahl (1989). We prove Theorem \[2.1\] in Section \[s3\]. Section \[s4\] is devoted to the proofs of Theorems \[1.1\]–\[1.4\]. A preliminary version of the present paper appeared as the preprint Götze and Zaitsev (1997). [**Acknowledgment**]{} The authors would like to thank V. Bentkus for very useful discussions. The main result {#s2} =============== Let $F=\L(\xi)\in\A$, $\norm h\t<1$, $h\in {\bf R}^d$. The conjugate distribution $\ovln F=\ovln F(h)$ is defined by $$\ovln F\{dx\}= \bgl(\!\E e^{\8h,\xi\9}\!\bgr)^{-1}e^{\8h,x\9}F\{dx\}. \eqno(2.1)$$ Sometimes we shall write $F_h=\ovln F(h)$. It is clear that $\ovln F(0)=F$. Denote by ${\overline{\xi}}(h)$ a random vector with $\L\bbgl(\2\overline{\xi}(h)\bbgr)=\ovln F(h)$. From (2.1) it follows that $$\E f\big(\overline{\xi}(h)\big)=\bgl(\!\E e^{\8h,\xi\9}\!\bgr)^{-1} \E f(\xi)\,e^{\8h,\xi\9}, \eqno(2.2)$$ provided that $\E \bgl| f(\xi)\,e^{\8h,\xi\9}\bgr|<\infty$. It is easy to see that $$\hbox{if}\quad U_1,U_2\in\A,\quad U=U_1\4U_2,\quad \hbox{then}\quad\ovln U(h)=\ovln U_1(h)\,\ovln U_2(h). \eqno(2.3)$$ Below we shall also use the following subclasses of $\A$ containing distributions satisfying some special smoothness-type restrictions. Let $\t\ge0$, $\de>0$, ${\rho>0}$, $ h\in\Rd$. Consider the conditions: $$\int\limits_{\rho\2\norm t\2\t\2d\ge1}\bbgl|\wh F_h(t)\bbgr| \,dt\le\ffrac{(2\4\pi)^{d/2}\4\t\4d^{3/2}} {\si\,(\det{\bf D})^{1/2}},\nopagebreak \eqno(2.4)$$ $$\int\limits_{\rho\2\norm t\2\t\2d\ge1}\bbgl|\wh F_h(t)\bbgr| \,dt\le\ffrac{(2\4\pi)^{d/2}\4\t^2\4d^{2}}{\si^2\,(\det{\bf D})^{1/2}}, \nopagebreak \eqno(2.5)$$ $$\int\limits_{\rho\2\norm t\2\t\2d\ge1} \bbgl|\langle t,v\rangle\4\wh F_h(t)\bbgr| \,dt\le\ffrac{(2\4\pi)^{d/2}\4\<{\bf D}\me v,v\>^{\!1/2}} {\de\,(\det{\bf D})^{1/2}}, \qquad\hbox{for all}\quad v\in\Rd,\nopagebreak \eqno(2.6)$$ where $F_h=\ovln F(h)$ and $\si^2=\si^2(F)>0$ is the minimal eigenvalue of ${\bf D}=\cov F$. Denote by $\AV \t\rho d$ (resp.  $\AQZ$) the class of distributions $F\in\A$ such that the condition (2.4) (resp.  (2.5) and (2.6)) is satisfied for $ h\in\Rd$, $\norm h\t<1$. It is easy to see that $$\AQZ\subset\AV \t\rho d, \qquad\hbox{if}\quad \ffrac{\t\4d^{1/2}}{\si}\le1. \eqno(2.7)$$ In this paper the class $\AV \t\rho d$ plays the role of the class $\AQZ$ which was used by Zaitsev (1995, 1998a), see also Sakhanenko (1984, inequality (49),  p. 9) or Einmahl (1989, inequality (1.5)). Note that (2.2) implies $$\wh F_h(t)=\E e^{\8it,\4\smash{\ov\xi(h)}\9} =\bgl(\!\E e^{\8h,\4\xi\9}\!\bgr)^{-1} \E e^{\8h+it,\4\xi\9}. \eqno(2.8)$$ [**The dyadic scheme.**]{} Let $N$ be a positive integer and $\bgl\{\xi_1,\dots,\xi_{2^N}\bbgr\}$ a collection of $d$-dimensional independent random vectors. 
Denote $$\wt S_0=0;\qquad \wt S_k=\sum_{l=1}^k\xi_l,\quad 1\le k\le2^N; \eqno(2.9)$$ $$U_{m,k}^*=\wt S_{(k+1)\cdot2^m}-\wt S_{k\cdot2^m}, \qquad 0\le k<2^{N-m},\quad0\le m\le N. \eqno(2.10)$$ In particular, $U_{0,k}^*=\xi_{k+1}$, $U_{N,0}^*=\wt S_{2^N}=\xi_{1}+\dots+\xi_{2^N}$. In the sequel we call [*block of summands*]{} a collection of summands with indices of the form ${k\cdot2^m+1,\dots,(k+1)\cdot2^m}$, where $0\le k<2^{N-m}$, $0\le m\le N$. Thus, $U_{m,k}^*$ is the sum over a block containing $2^m$ summands. Put $$\wt U_{n,k}^* =U_{n-1,2k}^*-U_{n-1,2k+1}^*, \qquad 0\le k<2^{N-n},\quad1\le n\le N. \eqno(2.11)$$ Note that $$U_{n-1,2k}^*+U_{n-1,2k+1}^*=U_{n,k}^*, \qquad 0\le k<2^{N-n},\quad1\le n\le N. \eqno(2.12)$$ Introduce the vectors $$\wt{\bf U}_{n,k}^* =\bgl(U_{n-1,2k}^*,\,U_{n-1,2k+1}^*\big)\in{\bf R}^{2d}, \qquad 0\le k<2^{N-n},\quad1\le n\le N, \eqno(2.13)$$ with the first $d$ coordinates coinciding with those of the vectors $U_{n-1,2k}^*$ and with the last $d$ coordinates coinciding with those of the vectors $U_{n-1,2k+1}^*$. Similarly, denote $${\bf U}_{n,k}^* =\bgl(U_{n,k}^*,\,\wt U_{n,k}^*\big)\in{\bf R}^{2d}, \qquad 0\le k<2^{N-n},\quad1\le n\le N. \eqno(2.14)$$ Introduce now the projectors ${\bf P}_{\!i}:{\bf R}^s\to{\bf R}^1$ and $\ov{\bf P}_j:{\bf R}^s\to{\bf R}^j$, for $i,\,j=1,\dots,s$, by the relations ${\bf P}_{\!i}\4 x=x_i$, $\ov{\bf P}_j\4 x=(x_ 1,\dots,x_ j)$, where ${x=(x_1,\dots,x_s)\in{\bf R}^s}$ (we shall use this notation for $s=d$ or $s=2\4d$). It is easy to see that, according to (2.11)–(2.14), $${\bf U}_{n,k}^*={\bf A}\,\wt{\bf U}_{n,k}^*\in{\bf R}^{2d}, \qquad 0\le k<2^{N-n},\quad1\le n\le N, \eqno(2.15)$$ where ${\bf A}:{\bf R}^{2d}\to {\bf R}^{2d}$ is a linear operator defined, for $x=(x_1,\dots,x_{2d})\in{\bf R}^{2d}$, as follows: $$ ------------------------------- --- ------------------------- ${\bf P}_{\!j}\,{\bf A}\,x\5$ = $\5x_{j}+x_{d+j},\qquad \jd,$ ${\bf P}_{\!j}\,{\bf A}\,x\5$ = $\5x_{j}-x_{d+j},\qquad j=d+1,\dots,2\4d.$ ------------------------------- --- ------------------------- (2.16) $$Denote$$ -------------------------- --- --------------------------------------------------------- ${\bf U}_{n,k}^{*(j)}\5$ = $\5{\bf P}_j\,{\bf U}_{n,k}^*,$ ${\bf U}_{n,k}^{*j}\5$ = $\5\bgl({\bf U}_{n,k}^{*( 1)},\dots, {\bf U}_{n,k}^{*(j)}\bgr)=\ov{\bf P}_j\,{\bf U}_{n,k}^* \in{\bf R}^{j},$ -------------------------- --- --------------------------------------------------------- j=1,…,2d. (2.17) $$ Now we can formulate the main result of the paper. \[2.1\] Let the conditions described in [(2.9)–(2.17)]{} be satisfied, $\tau\ge0$ and $\E\xi_k=0$, $\cov \xi_k={\bf I}_d$, $k=1,\dots,2^N$. Assume that $${\cal L}\big({\bf U}_{n,k}^{*j}\big)\in \AV\t4j \for 0\le k<2^{N-n},\quad1\le n\le N,\quad d\le j\le 2\4d, \eqno(2.18)$$ and $${\cal L}\big({\bf U}_{N,0}^{*j}\big)\in \AV\t4j \for 1\le j\le 2\4d. \eqno(2.19)$$ Then there exist absolute positive constants $c_5$, $c_6$ and $c_7$ such that, for ${\t\4d^{3/2}\le c_5}$, one can construct on a probability space sequences of independent random vectors $ X_1,\dots, X_{2^N}$ and i.i.d.  Gaussian random vectors $ Y_1,\dots, Y_{2^N}$ so that $$\L(X_k)=\L(\xi_k), \quad \E Y_k=0, \quad\cov Y_k={\bf I}_d, \qquad k=1,\dots,2^N, \eqno(2.20)$$ and $$\E\exp\Bigl(\ffrac{c_6\,\DE(2^N)} {d^{3/2}\4\t}\Bigr) \le \exp\bgl(c_7\4N\,\log^*d\bgr), \eqno(2.21)$$ where $\DE(2^N)=\max\limits_{1\le r\le 2^N} \,\Bigl|\,\sum\limits_{k=1}^r X_k-\sum\limits_{k=1}^r Y_k\,\Bigr|$. 
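To fix the notation entering (2.18) and (2.19), it may help to write out the smallest nontrivial case $N=2$ (an illustrative remark only). By (2.9)–(2.14),
$${\bf U}_{1,0}^*=\bigl(\xi_1+\xi_2,\ \xi_1-\xi_2\bigr),\qquad {\bf U}_{1,1}^*=\bigl(\xi_3+\xi_4,\ \xi_3-\xi_4\bigr),$$
$${\bf U}_{2,0}^*=\bigl(\xi_1+\xi_2+\xi_3+\xi_4,\ (\xi_1+\xi_2)-(\xi_3+\xi_4)\bigr),$$
and conditions (2.18) and (2.19) then require that the laws of the vectors ${\bf U}_{n,k}^{*j}$ of the first $j$ coordinates of these $2d$-dimensional vectors belong to the corresponding classes: $d\le j\le 2\4d$ in (2.18) for every block, and $1\le j\le 2\4d$ in (2.19) for ${\bf U}_{2,0}^{*}$.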
Theorem \[2.1\] says that the conditions (2.18) and (2.19) suffice for Assertion A. However, these conditions require that the number of summands is $2^N$. For an arbitrary number of summands, one should consider additional (for simplicity, Gaussian) summands in order to apply Theorem \[2.1\]. Below we shall prove Theorem \[2.1\]. Suppose that its conditions are satisfied. At first, we describe a procedure of constructing the random vectors $\big\{U_{n,k}\big\}$ with distributions $\L\bgl(\big\{U_{n,k}\big\}\bgr) =\L\bgl(\big\{U_{n,k}^*\big\}\bgr)$, provided that the vectors  $Y_1,\dots,Y_{2^N}$ are already constructed (then we shall define $X_k=U_{0,k-1}$, $k=1,\dots,2^N$). This procedure is an extension of the KMT (1975, 1976) dyadic scheme to the multivariate case due to Einmahl (1989). For this purpose we shall use the so-called Rosenblatt quantile transformation (see Rosenblatt (1952) and Einmahl (1989)). Denote by $F_{N,0}^{( 1)}(x_1) =\P\bgl\{{\bf P}_1\,U_{N,0}^*<x_1\bgr\}$, $x_1\in{\bf R}^1$, the distribution function of the first coordinate of the vector $U_{N,0}^*$. Introduce the conditional distributions, denoting by $ F_{N,0}^{(j)}\bgl(\,\cdot\,\bgm x_1,\dots,x_{j-1}\bgr)$, $2\le j\le d$, the regular conditional distribution function (r.c.d.f.) of ${\bf P}_j\,U_{N,0}^*$, given $\ov{\bf P}_{j-1}\,U_{N,0}^*=(x_1,\dots,x_{j-1})$. Denote by $\wt F_{n,k}^{(j)}\bgl(\,\cdot\,\bgm x_1,\dots,x_{j-1}\bgr)$ the r.c.d.f. of  ${\bf P}_{\!j}\,{\bf U}_{n,k}^*$, given $\ov{\bf P}_{j-1}\4{\bf U}_{n,k}^* =(x_1,\dots,x_{j-1})$, for  ${0\le k<2^{N-n}}$, $1\le n\le N$, $d+1\le j\le 2\4d$. Put $$T_k=\sum_{l=1}^kY_l,\qquad 1\le k\le2^N;\nopagebreak \eqno(2.22)$$ $$ ---------------------- --- ------------------------------------------------------------------ $V_{m,k}\5$ = $\5\bgl(V_{m,k}^{( 1)},\dots,V_{m,k}^{(d)}\bgr) =T_{(k+1)\cdot2^m}-T_{k\cdot2^m},$ $0\le k<2^{N-m}\!,\quad0\le m\le N;$ $\wt{\bf V}_{n,k}\5$ = $\5\bgl(V_{n-1,2k},\,V_{n-1,2k+1}\big) =\bgl(\wt{\bf V}_{n,k}^{( 1)},\dots,\wt{\bf V}_{n,k}^{(2d)}\bgr) \in{\bf R}^{2d},$ $0\le k<2^{N-n},\quad1\le n\le N;$ ---------------------- --- ------------------------------------------------------------------ (2.23) $$and$$ [**V**]{}\_[n,k]{}=([**V**]{}\_[n,k]{}\^[( 1)]{},…, [**V**]{}\_[n,k]{}\^[(2d)]{})=[**A**]{}\_[n,k]{}\^[2d]{}, 0k&lt;2\^[N-n]{},1nN. (2.24) $$ According to the definition of the operator ${\bf A}$, we have (see (2.11)–(2.16) and (2.22)–(2.24)) $${\bf V}_{n,k} =\bgl(V_{n,k},\,\wt V_{n,k}\big)\in{\bf R}^{2d}, \qquad 0\le k<2^{N-n},\quad1\le n\le N, \eqno(2.25)$$ where $$ ----------------- --- ------------------------------ $V_{n,k}\5$ = $\5V_{n-1,2k}+V_{n-1,2k+1},$ $\wt V_{n,k}\5$ = $\5V_{n-1,2k}-V_{n-1,2k+1},$ ----------------- --- ------------------------------ 0k&lt;2\^[N-n]{},1nN, (2.26) $$and$$ V\_[N,0]{}=Y\_1+…+Y\_[2\^N]{}. (2.27) $$ Thus, the vectors $V_{m,k},\wt{\bf V}_{n,k}$ and ${\bf V}_{n,k}$ can be constructed from the vectors $\4Y_1,\dots,Y_{2^N}$ by the same linear procedure which was used for constructing the vectors  $U_{m,k}^*,\wt{\bf U}_{n,k}^*$ and ${\bf U}_{n,k}^*$ from the vectors $\xi_1,\dots,\xi_{2^N}$. 
It is obvious that, for fixed $n$ and $k$, $$\cov {\bf U}_{n,k}^*=\cov {\bf V}_{n,k}=2^n\,{\bf I}_{2d} \eqno(2.28)$$ and, hence, the coordinates of the Gaussian vector ${\bf V}_{n,k}$ are independent with the same distribution function  $\Phi_{2^{n/2}}(\cdt)$ (here and below $$\Phi_{\si}(x)= \int\limits_{-\infty}^{x}\ffrac1{\sqrt{2\4\pi}\,\si} \,\exp\Bigl(-\ffrac{y^2}{2\4\si^2}\Bigr)\,dy, \qquad x\in{\bf R}^1,\quad\si>0,$$ is the distribution function of the normal law with mean zero and variance $\si^2$). Denote now the new collection of random vectors $X_k$ as follows. At first we define $$ -------------------- --- -------------------------------------------------------- $U_{N,0}^{( 1)}\5$ = $\5\bgl(\!F_{N,0}^{( 1)}\bgr)^{\!-1} \bgl(\Phi_{2^{N/2}}\big(V_{N,0}^{( 1)}\big)\bgr)\qquad \hbox{and, \quad for} \quad 2\le j\le d,$ $U_{N,0}^{(j)}\5$ = $\5\bgl(\! F_{N,0}^{(j)}\bgr)^{\!-1} \bgl( \Phi_{2^{N/2}}\big(V_{N,0}^{(j)}\big)\bgr| \,U_{N,0}^{( 1)},\dots,U_{N,0}^{(j-1)}\bgr)$ -------------------- --- -------------------------------------------------------- (2.29) $$ (here $\bgl(\!F_{N,0}^{( 1)}\bgr)^{\!-1}(t)= \sup\,\bgl\{x:F_{N,0}^{( 1)}(x)\le t\bgr\}$, $0<t<1$, and so on). Taking into account that the distributions of the random vectors $\xi_1,\dots,\xi_{2^N}$ are absolutely continuous, we see that this formula can be rewritten in a more natural form, cf. Sakhanenko (1984, p. 30–31): $$ ---------------------------------------------- --- -------------------------------------------------------------- $F_{N,0}^{( 1)}\big(U_{N,0}^{( 1)}\big)\5$ = $\5\Phi_{2^{N/2}}\big(V_{N,0}^{( 1)}\big),$ $F_{N,0}^{(j)}\big(U_{N,0}^{(j)}\bgm = $\5 \Phi_{2^{N/2}}\big(V_{N,0}^{(j)}\big),\for 2\le j\le d.$ U_{N,0}^{( 1)},\dots,U_{N,0}^{(j-1)}\big)\5$ ---------------------------------------------- --- -------------------------------------------------------------- (2.30) $$Suppose that the random vectors$$ U\_[n,k]{}=(U\_[n,k]{}\^[( 1)]{},…,U\_[n,k]{}\^[(d)]{}),0k&lt;2\^[N-n]{}, (2.31) $$ corresponding to blocks containing each $2^{n}$ summands with fixed $n$, ${1\le n\le N}$, are already constructed. Now our aim is to construct the blocks containing each $2^{n-1}$ summands. To this end we define $${\bf U}_{n,k}^{(j)}={\bf P}_{\!j}\,U_{n,k}= U_{n,k}^{(j)}, \quad 1\le j\le d, \eqno(2.32)$$ and, for $d+1\le j\le 2\4d$, $${\bf U}_{n,k}^{(j)}=\bgl(\!\wt F_{n,k}^{(j)}\bgr)^{\!-1} \bgl(\Phi_{2^{n/2}}\big({\bf V}_{n,k}^{(j)} \big)\bgr|\4 {\bf U}_{n,k}^{( 1)},\dots, {\bf U}_{n,k}^{(j-1)}\bgr) . \eqno(2.33)$$ It is clear that (2.33) can be rewritten in a form similar to (2.30). Then we put $$ ----------------------- --- ------------------------------------------------------ ${\bf U}_{n,k}\5$ = $\5\bgl({\bf U}_{n,k}^{( 1)},\dots, {\bf U}_{n,k}^{(2d)}\bgr)\in{\bf R}^{2d},$ ${\bf U}_{n,k}^j\5$ = $\5\bgl({\bf U}_{n,k}^{( 1)},\dots, {\bf U}_{n,k}^{(j)}\bgr)=\ov{\bf P}_j\,{\bf U}_{n,k} \in{\bf R}^{j},\qquad j=1,\dots,2\4d,$ $\wt U_{n,k}^{(j)}\5$ = $\5{\bf U}_{n,k}^{(j+d)}, \qquad j=1,\dots,d,$ $\wt U_{n,k}\5$ = $\5\bgl(\wt U_{n,k}^{( 1)},\dots, \wt U_{n,k}^{(d)}\bgr)\in{\bf R}^{d}$ ----------------------- --- ------------------------------------------------------ (2.34) $$and$$ ------------------ --- ------------------------------------- $U_{n-1,2k}\5$ = $ \5\bgl(U_{n,k}+\wt U_{n,k}\bgr)/2,$ $U_{n-1,2k+1}\5$ = $\displaystyle \5\bgl(U_{n,k}-\wt U_{n,k}\bgr)/2.$ ------------------ --- ------------------------------------- (2.35) $$ Thus, we have constructed the random vectors $U_{n-1,k}$, $0\le k<2^{N-n+1}$. 
After $N$ steps we obtain the random vectors $U_{0,k}$, $0\le k<2^{N}$. Now we set $$X_k=U_{0,k-1},\qquad S_0=0,\quad S_k=\sum_{l=1}^kX_l,\qquad1\le k\le2^{N}. \eqno(2.36)$$ \[l2.1\] [(Einmahl (1989))]{} The joint distribution of the vectors $U_{n,k}$ and ${\bf U}_{n,k}$ coincides with that of the vectors $U_{n,k}^*$ and ${\bf U}_{n,k}^*$. In particular, $X_k$, $k=1,\dots,2^N$, are independent and $\L(X_k)=\L(\xi_k)$. Moreover, according to (2.11) and (2.12), we have $$ ----------------- --- ------------------------------------ $\wt U_{n,k}\5$ = $\5U_{n-1,2k}-U_{n-1,2k+1},$ $U_{n,k}\5$ = $\5U_{n-1,2k}+U_{n-1,2k+1} =S_{(k+1)\cdot2^n}-S_{k\cdot2^n},$ ----------------- --- ------------------------------------ (2.37) $$ for $ 0\le k<2^{N-n}$, $1\le n\le N$ (it is clear that (2.37) follows from (2.35)). Furthermore, putting $$\wt{\bf U}_{n,k} =\bgl(U_{n-1,2k},\,U_{n-1,2k+1}\big)\in{\bf R}^{2d}, \qquad 0\le k<2^{N-n},\quad1\le n\le N,\nopagebreak \eqno(2.38)$$ we have (see (2.13) and (2.15)) $${\bf U}_{n,k}={\bf A}\,\wt{\bf U}_{n,k}\in{\bf R}^{2d}, \qquad 0\le k<2^{N-n},\quad1\le n\le N. \eqno(2.39)$$ Note that it is not difficult to verify that, according to (2.16), $$\nnnorm{\bf A}=\ffrac1{\nnnorm{{\bf A}\me}}= \nnnorm{{\bf A}^*}=\ffrac1{\nnnorm{({\bf A}^*)\me}}= \sqrt2, \eqno(2.40)$$ where the asterisk is used to denote the adjoint operator ${\bf A}^*$ for the operator ${\bf A}$. \[r2.1\] The conditions of Theorem [\[2.1\]]{} imply the coincidence of the corresponding first and second moments of the vectors ${\bf U}=\bgl\{U_{n,k},\,\wt{\bf U}_{n,k},\,{\bf U}_{n,k}\bgr\}$ and ${\bf V}=\bgl\{V_{n,k},\,\wt{\bf V}_{n,k},\,{\bf V}_{n,k}\bgr\}$ since the vectors ${\bf U}$ can be restored from vectors  $X_1,\dots,X_{2^N}$ by the same linear procedure which is used for reconstruction of the vectors ${\bf V}$ from $Y_1,\dots,Y_{2^N}$. In particular, $\E{\bf U}=\E{\bf V}=0$. \[2.2\] [(Einmahl 1989, Lemma 5, p. 55)]{} Let $1\le m=(2\4s+1)\cdot2^r\le2^{N}$, where $s,r$ are non-negative integers. Then $$S_m=\ffrac m{2^N}S_{2^N}+\sum_{n=r+1}^N\g_n\,\wt U_{n,l_{n,m}}, \eqno(2.41)$$ where $\g_n=\g_n(m)\in[\40,\half\4]$ and the integers  $l_{n,m}$ are defined by $$l_{n,m}\cdot2^n<m\le \big(l_{n,m}+1\big)\cdot2^n. \eqno(2.42)$$ The shortest proof of Lemma \[2.2\] can be obtained with the help of a geometrical approach due to Massart (1989, p. 275). \[r2.2\] The inequalities $(2.42)$ give a formal definition of $l_{n,m}$. To understand better the mechanism of the dyadic scheme, one should remember another characterization of these numbers$:$ $U_{n,l_{n,m}}$ is the sum over the block of $2^n$ summands which contains $X_m$, the last summand in the sum $S_m$. \[c2.1\] Under the conditions of Lemma [\[2.2\]]{} $$\bgl|S_m-T_m\bgr|\le \bgl| U_{N,0}- V_{N,0}\bgr| +\ffrac12\sum_{n=r+1}^N \bgl|\wt U_{n,l_{n,m}}-\wt V_{n,l_{n,m}}\bgr|,\qquad m=1,\dots,2^N.$$ This statement evidently follows from Lemmas \[l2.1\] and \[2.2\] and from the relations (2.9)–(2.12), (2.22) and (2.23). Proof of Theorem \[2.1\] {#s3} ======================== In the proof of Theorem \[2.1\] we shall use the following auxiliary Lemmas \[3.1\]–\[3.4\] (Zaitsev 1995, 1996, 1998a). \[3.1\] Suppose that $\L(\xi)\in\A$, $y\in{\bf R}^m$, $\a\in{\bf R}^1$. Let ${\bf M}:\Rd\to{\bf R}^m$ be a linear operator and $\wt\xi\in{\bf R}^k$ be the vector consisting of a subset of coordinates of the vector $\xi$. 
Then $$\begin{aligned} \L({\bf M}\,\xi+y)\in{\cal A}_m\big(\norm {\bf M}\t\big), &&\qquad\hbox{where} \quad\norm {\bf M}=\sup_{\norm x\le1}\norm{{\bf M}\4x},\\ \L(\a\4\xi)\in\AT{|\a|\4\t},&&\qquad\L(\wt\xi)\in\AD\t k.\end{aligned}$$ \[3.2\] Suppose that independent random vectors $\xi^{(k)}$, $k=1,2$, satisfy the condition  $\L\big(\xi^{(k)}\big)\in\AD\t{d_k}$. Let $\xi=\bgl(\xi^{( 1)},\,\xi^{( 2)}\bgr)\in{\bf R}^{d_1+d_2}$ be the vector with the first $d_1$ coordinates coinciding with those of $\xi^{( 1)}$ and with the last $d_2$ coordinates coinciding with those of $\xi^{( 2)}$. Then ${\L(\xi)\in\AD\t{d_1+d_2}}$. \[3.3\] [(Bernstein-type inequality)]{} Suppose that $\L(\xi)\in\AD\t1$, ${\E\xi=0}$ and $\E\xi^2=\si^2$. Then $$\P\bgl\{|\2\xi\2| \ge x\bgr\}\le2\,\max\bgl\{\exp\bgl(-\pfrac{x^2\!}{4 \4\si^2}\bgr), \,\exp\bgl(-\pfrac{x}{4\4\t}\bgr)\bgr\},\qquad x\ge0.$$ \[3.4\] Let the distribution of a random vector  $\xi\in\Rd$ with $\E\xi=0$ satisfy the condition $\L(\xi)\in\AV\t4d$, $\t\ge0$. Assume that the variance ${\si^2=\E\xi_d^2>0}$ of the last coordinate $\xi_d$ of the vector $\xi$ is the minimal eigenvalue of $\cov \xi$. Then there exist absolute positive constants $c_8,\dots,c_{12}$ such that the following assertions hold$:$ [a)]{} Let $d\ge2$. Assume that $\xi_d$ is not correlated with previous coordinates $\xi_1,\dots,\xi_{d-1}$ of the vector $\xi$. Define ${\bf B}=\cov \,\ov{\bf P}_{d-1}\4\xi$ and denote by $F(z\gm x)$, $z\in{\bf R}^1$, the r.c.d.f. of $\xi_d$ for a given value of ${\ov{\bf P}_{d-1}\4\xi=x\in{\bf R}^{d-1}}$. Let $\L(\ov{\bf P}_{d-1}\4\xi)\in\AV\t4{d-1}$. Then there exists $y\in{\bf R}^1$ such that $$|y|\le c_8\,\t \,\bgl\|{\bf B}\msqrt x\bgr\|^2 \le c_8\,\t\2\ffrac{\norm{x}^2}{\si^2}, \eqno(3.1)$$ and $$\Phi_\si\big(\2z-\g(z)\2\big)<F(z+y\gm x) <\Phi_\si\big(\2z+\g(z)\2\big), \eqno(3.2)$$ for $\ftdsi\le c_9$, ${\bgl|{\bf B}\msqrt x\bgr| \le \ffrac{c_{10}\4\si}{d^{3/2}\t}}$, $|z|\le\ffrac{c_{11}\4\si^2}{d\4\t}$, where $$\g(z)=c_{12}\4\t\,\biggl(d^{3/2} +d\4\de \,\Bigl(1+\ffrac{|z|}{\si}\Bigr) +\ffrac{z^2}{\si^2}\biggr),\qquad \de=\bgl\|{\bf B}\msqrt x\bgr\|. \eqno(3.3)$$ [b)]{} The assertion [a)]{} remains valid for $d=1$ with $F(z\gm x)=\P\bgl\{\xi_1<z\bgr\}$ and $y=\de=0$ without any restrictions on ${\bf B}$, $\ov{\bf P}_{d-1}\4\xi$ and $x$. \[r3.1\] In [Zaitsev]{} $(1995, 1996)$ the formulation of Lemma [\[3.4\]]{} is in some sense weaker, see [Zaitsev (1995, 1996, Lemmas $6.1$ and $8.1)$]{}. In particular, instead of the conditions $$\L(\xi)\in\AV\t4d \qquad\hbox{and} \qquad \L(\ov{\bf P}_{d-1}\4\xi)\in\AV\t4{d-1} \eqno(3.4)$$ the stronger conditions $$\L(\xi)\in\ATDRZ\t44\qquad\hbox{and} \qquad \L(\ov{\bf P}_{d-1}\4\xi)\in\ADRZ\t44{d-1} \eqno(3.5)$$ are used. However, in the proof of $(3.1)$ and $(3.2)$ only the conditions $(3.4)$ are applied. The conditions $(3.5)$ are necessary for the investigation of quantiles of conditional distributions corresponding to random vectors having coinciding moments up to third order which has been done in [Zaitsev]{} $(1995, 1996)$ simultaneously with the proof of $(3.1)$ and $(3.2)$. \[3.5\] Let $S_k=X_1+\dots+X_k$, $k=1,\dots,n$, be sums of independent random vectors ${X_j\in\Rd}$ and let $q(\cdt)$ be a semi-norm in $\Rd$. Then $$\P\bgl\{\max_{1\le k\le n}\,q(S_k)>3\4t\bgr\} \le3\,\max_{1\le k\le n}\P\bgl\{q(S_k)>t\bgr\},\qquad t\ge0. \eqno(3.6)$$ Lemma \[3.5\] is a version of the Ottaviani inequality, see Dudley (1989, p. 251) or Hoffmann-Jørgensen (1994, p. 472). 
In the form (3.6) this inequality can be found in Etemadi (1985) with 4 instead of 3 (twice). The proof of Lemma \[3.5\] repeats those from the references above and is therefore omitted. \[3.6\] Let the conditions of Theorem [\[2.1\]]{} be satisfied and assume that the vectors $X_k$, $k=1,\dots,2^N$, are constructed by the dyadic procedure described in [(2.22)–(2.36).]{} Then there exist absolute positive constants $c_{13},\dots,c_{17}$ such that [a)]{} If $\t\4d^{3/2}\!\big/2^{N/2}\le c_9$, then $$\bgl| U_{N,0}- V_{N,0}\bgr|\le c_{13}\4d^{3/2}\4\t \,\bgl(1+2^{-N}\4\bgl|U_{N,0}\bgr|^2\bgr) \eqno(3.7)$$ provided that $\bgl|U_{N,0}\bgr|\le\ffrac{c_{14}\cdot2^N}{d^{3/2}\4\t};$ [b)]{} If $1\le n\le N$, $0\le k<2^{N-n}$, $\t\4d^{3/2}\!\big/2^{n/2}\le c_{15}$, then $$\bgl|\wt U_{n,k}-\wt V_{n,k}\bgr|\le c_{16}\4d^{3/2}\4\t \,\bgl(1+2^{-n}\4\bgl|{\bf U}_{n,k}\bgr|^2\bgr) \eqno(3.8)$$ provided that $\bgl|{\bf U}_{n,k}\bgr| \le\ffrac{c_{17}\cdot2^n}{d^{3/2}\4\t}.$ In the proof of Lemma \[3.6\] we need the following auxiliary Lemma \[3.7\] which is useful for the application of Lemma \[3.4\] to the conditional distributions involved in the dyadic scheme. \[3.7\] Let $F(\cdt)$ denote a continuous distribution function and $G(\cdt)$ an arbitrary distribution function satisfying for $z\in B\in{\cal B}_1$ the inequality $$G\bgl(z-f(z)\big)<F(z+w)<G\bgl(z+f(z)\big)$$ with some $f:B\to{\bf R}^1$ and $w\in{\bf R}^1$. Let  $\eta\in{\bf R}^1$, $0<G(\eta)<1$ and ${\xi=F\me\big(G(\eta)\big)}$, where ${F\me(x)= \sup\,\bgl\{u:F(u)\le x\bgr\}}$, $0<x<1$. Then $$|\4\xi-\eta\4|< f(\xi-w)+|\4w\4|,\qquad \hbox{if} \quad \xi-w\in B.$$ [*Proof*]{}  Put $\zeta=\xi-w$. The continuity of $F$ implies that $F\big(F\me(x)\big)\equiv x$, for ${0<x<1}$. Therefore, $$\zeta\in B\Rightarrow G\bgl(\zeta-f(\zeta)\big)<F(\xi)=G(\eta)\Rightarrow \zeta-f(\zeta)<\eta\Rightarrow \xi-\eta< f(\zeta)+w$$ and $$\zeta\in B\Rightarrow G(\eta)= F(\xi)<G\bgl(\zeta+f(\zeta)\big) \Rightarrow\eta< \zeta+f(\zeta)\Rightarrow \eta-\xi< f(\zeta)-w.$$ This completes the proof of the lemma. $\square$ [*Proof of Lemma [\[3.6\]]{}*]{}  At first we note that the conditions of Theorem \[2.1\] imply that $$\cov {\bf U}_{n,k} =2^n\4{\bf I}_{2d},\for1\le n\le N, \quad0\le k<2^{N-n},$$ and, hence (see (2.28)), $$\cov {\bf U}_{n,k}^j =2^n\4{\bf I}_j,\for1\le j\le 2\4d. \eqno(3.9)$$ Let us prove the assertion a). Introduce the vectors $$U_{N,0}^j=\bgl(U_{N,0}^{( 1)}\4,\dots,U_{N,0}^{(j)}\bgr),\qquad V_{N,0}^j=\bgl(V_{N,0}^{( 1)}\4,\dots,V_{N,0}^{(j)}\bgr) \eqno(3.10)$$ consisting of the first $j$ coordinates of the vectors $U_{N,0},\,V_{N,0}$ respectively. By (3.9), (2.32) and (2.34), $$U_{N,0} =\ov{\bf P}_d\,{\bf U}_{N,0} \eqno(3.11)$$ and $$U_{N,0}^j ={\bf U}_{N,0}^j,\qquad\cov U_{N,0}^j =2^n\4{\bf I}_j,\for1\le j\le d. \eqno(3.12)$$ Moreover, according to Lemma \[l2.1\], Remark \[r2.1\], (3.12) and (2.19), the distributions $\L(U_{N,0}^j)$, $\jd$, satisfy in the $j$-dimensional case the conditions of Lemma \[3.4\] with $\si^2=2^N$ and ${\bf B}=\cov U_{N,0}^{j-1}=2^N\4{\bf I}_{j-1}$ (the last equality for ${j\ge2}$). Taking into account (2.29) and applying Lemmas \[3.4\] and \[3.7\], we obtain that $$\bgl|U_{N,0}^{( 1)}- V_{N,0}^{( 1)}\bgr| \le c_{12}\4\t\,\Bigl(1+ \ffrac{\bgl|U_{N,0}^{( 1)}\bgr|^2}{2^N}\Bigr), \eqno(3.13)$$ if $\ffrac\t{2^{N/2}}\le c_9$, $\bgl|U_{N,0}^{( 1)}\bgr|\le\ffrac{c_{11}\cdot2^N}{\t}$. 
Furthermore, $$\bgl|U_{N,0}^{(j)}- V_{N,0}^{(j)}\bgr| \le c_{12}\4\t\,\biggl(j^{3/2} +j^{3/2}\4\ffrac{\bgl|U_{N,0}^{j-1}\bgr|}{2^{N/2}} \,\Bigl(1+\ffrac{\bgl|U_{N,0}^{(j)}-y_j\bgr|}{2^{N/2}}\Bigr)$$ $$\kern6cm +\ffrac{\bgl|U_{N,0}^{(j)}-y_j\bgr|^2}{2^N} \biggr)+|\4y_j\4|, \eqno(3.14)$$ if $$\ffrac{\t\4j^{3/2}}{2^{N/2}}\le c_9, \quad \ffrac{\bgl|U_{N,0}^{j-1}\bgr|}{2^{N/2}} \le \ffrac{c_{10}\cdot2^{N/2}}{j^{3/2}\t}, \quad \bgl|U_{N,0}^{(j)}-y_j\bgr|\le\ffrac{c_{11}\cdot2^N}{j\4\t}, \qquad 2\le j\le d, \eqno(3.15)$$ where $$|\4y_j\4|\le c_8\4\t\4j\4 \ffrac{\bgl|U_{N,0}^{j-1}\bgr|^2}{2^N}, \qquad2\le j\le d. \eqno(3.16)$$ Obviously, $$\bgl|U_{N,0}^{( 1)}\bgr|\le \max\bgl\{\bgl|U_{N,0}^{j-1}\bgr|,\,\bgl|U_{N,0}^{(j)}\bgr|\bgr\} =\bgl|U_{N,0}^j\bgr|\le\bgl|U_{N,0}\bgr|, \qquad 2\le j\le d, \eqno(3.17)$$ see (2.31) and (3.10). Using (3.13), (3.14), (3.16) and (3.17), we see that one can choose $c_{13}$ to be so large and $c_{14}$ to be so small that $$\bgl|U_{N,0}^{(j)}- V_{N,0}^{(j)}\bgr| \le c_{13}\4d^{3/2}\4\t \,\bgl(1+2^{-N}\4\bgl|U_{N,0}\bgr|^2\bgr), \eqno(3.18)$$ if $ \ffrac{\t\4d^{3/2}}{2^{N/2}}\le c_9$, $ \bgl|U_{N,0}\bgr| \le \ffrac{c_{14}\cdot2^N}{d^{3/2}\t}$, $1\le j\le d$. The inequality (3.7) immediately follows from (3.18), (2.23) and (2.31). Now we shall prove item b). According to Lemma \[l2.1\], Remark \[r2.1\], (2.18), (2.31) and (3.9), the distributions $\L({\bf U}_{n,k}^j)$, $j=d+1,\dots,2\4d$, satisfy in the $j$-dimensional case the conditions of Lemma \[3.4\] with $\si^2=2^n$, ${{\bf B}=\cov {\bf U}_{n,k}^{j-1}=2^n\4{\bf I}_{j-1}}$. Using (2.33) and applying Lemmas \[3.4\] and \[3.7\], we obtain that $$\bgl|{\bf U}_{n,k}^{(j)}-{\bf V}_{n,k}^{(j)}\bgr| \le c_{12}\4\t\,\biggl(j^{3/2} +j^{3/2}\4\ffrac{\bgl|{\bf U}_{n,k}^{j-1}\bgr|}{2^{n/2}} \,\Bigl(1+\ffrac{\bgl|{\bf U}_{n,k}^{(j)}-y_j\bgr|}{2^{n/2}}\Bigr)$$ $$\kern6cm +\ffrac{\bgl|{\bf U}_{n,k}^{(j)}-y_j\bgr|^2}{2^n} \biggr)+|\4y_j\4|, \eqno(3.19)$$ if $$\ffrac{\t\4j^{3/2}}{2^{n/2}}\le c_9, \quad \ffrac{\bgl|{\bf U}_{n,k}^{j-1}\bgr|}{2^{n/2}} \le \ffrac{c_{10}\cdot2^{n/2}}{j^{3/2}\t}, \quad \bgl|{\bf U}_{n,k}^{(j)}-y_j\bgr|\le\ffrac{c_{11}\cdot2^n}{j\4\t}, \eqno(3.20)$$ where $$|\4y_j\4|\le c_8\4\t\4j\4 \ffrac{\bgl|{\bf U}_{n,k}^{j-1}\bgr|^2}{2^n}, \qquad d+1\le j\le 2\4d. \eqno(3.21)$$ Obviously, $$\max\bgl\{\bgl|{\bf U}_{n,k}^{j-1}\bgr|, \,\bgl|{\bf U}_{n,k}^{(j)}\bgr|\bgr\} =\bgl|{\bf U}_{n,k}^j\bgr|\le\bgl|{\bf U}_{n,k}\bgr|, \eqno(3.22)$$ see (2.34). Using (3.19), (3.21) and (3.22), we see that one can choose $c_{15}$ and $c_{17}$ to be so small and $c_{16}$ to be so large that $$\bgl|{\bf U}_{n,k}^{(j)}-{\bf V}_{n,k}^{(j)}\bgr| \le c_{16}\4d^{3/2}\4\t \,\bgl(1+2^{-n}\4\bgl|{\bf U}_{n,k}\bgr|^2\bgr)\nopagebreak \eqno(3.23)$$ if $ \ffrac{\t\4d^{3/2}}{2^{n/2}}\le c_{15}$, $ \bgl|{\bf U}_{n,k}\bgr| \le \ffrac{c_{17}\cdot2^n}{d^{3/2}\t}$, $ d+1\le j\le 2\4d$. The inequality (3.8) immediately follows from (3.23), (2.24), (2.25) and (2.34). $\square$ [*Proof of Theorem [\[2.1\]]{}*]{} Let $X_k$, $k=1,\dots,2^N$, denote the vectors constructed by the dyadic procedure described in (2.22)–(2.36). Denote $$\DE=\DE(2^N)=\max_{1\le k\le2^N}\,\bgl|S_k-T_k\bgr|, \eqno(3.24)$$ $$c_5 =\min\,\bgl\{c_9,\,c_{15}\bgr\}, \quad c_{18} =\min\,\bgl\{c_{14},\,c_{17},\,1\bgr\}, \quad y\=\ffrac{c_{18}}{d^{3/2}\4\t}\le\ffrac1\t, \eqno(3.25)$$ fix some $x>0$ and choose the integer $M$ such that $$x<4\4y\cdot2^M\le2\4x. \eqno(3.26)$$ We shall estimate $\P\bgl\{\DE\ge x\bgr\}$. Consider separately two possible cases: $M\ge N$ and $M< N$. 
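Let us indicate, for orientation, how Lemmas \[3.1\], \[3.3\] and \[3.5\] will be combined in the estimates below; here $|\cdot|$ is read coordinatewise, as in (3.17) and (3.22), and the variance of each coordinate of $S_k$ does not exceed $2^N$ (cf. (3.9)). By Lemma \[3.1\], each coordinate of $S_k$ has distribution in $\AD\t1$, so Lemma \[3.3\] and summation over the $d$ coordinates give $$\P\bgl\{\bgl|S_k\bgr|\ge u\bgr\}\le 2\4d\,\exp\Big(-\min\Big\{\ffrac{u^2}{4\cdot2^N}, \ffrac{u}{4\4\t}\Big\}\Big),\qquad u\ge0,\quad 1\le k\le2^N,$$ and an application of (3.6) with $3\4t=x/2$ turns this bound into the first two estimates in (3.29); the same scheme, applied to increments over blocks of length $2^L$, yields (3.40) below.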
Let, at first, $M\ge N$. Denote $$\DE_1=\max_{1\le k\le2^N}\,\bgl|S_k\bgr|,\qquad \DE_2=\max_{1\le k\le2^N}\,\bgl|T_k\bgr|. \eqno(3.27)$$ It is easy to see that $ \DE\le\DE_1+\DE_2 $ and, hence, $$\P\bgl\{\DE\ge x\bgr\}\le\P\bgl\{\DE_1\ge x/2\bgr\} +\P\bgl\{\DE_2\ge x/2\bgr\}. \eqno(3.28)$$ Taking into account the completeness of classes $\A$ with respect to convolution, applying Lemmas \[3.5\], \[3.1\] and \[3.3\] and using (3.25) and (3.26), we obtain that $2^N\le 2^M\le x/2\4y$ and $$\begin{aligned} \vspace{5pt} \P\bgl\{\DE_1\ge x/2\bgr\}\5 &\le& \5 3\,\max\limits_{1\le k\le2^N}\,\P\bgl\{\bgl|S_k\bgr|\ge x/6\bgr\}\5\\ &\le& \5 6\4d\,\exp\Big(-\min\Big\{\ffrac{x^2}{144\cdot2^N}, \ffrac{x}{24\4\t}\Big\}\Big)\\ &\le& \5 6\4d \,\exp\Big(-\ffrac{c_{19}\,x}{d^{3/2}\4\t}\Big). \hbox{\rlap{\hskip3.9cm(3.29)}}\end{aligned}$$ Since all $d$-dimensional Gaussian distributions belong to all classes $\A$, ${\t\ge0}$, we automatically obtain that $$\P\bgl\{\DE_2\ge x/2\bgr\} \le6\4d \,\exp\Big(-\ffrac{c_{19}\,x}{d^{3/2}\4\t}\Big). \eqno(3.30)$$ From (3.28)–(3.30) it follows in the case $M\ge N$ that $$\P\bgl\{\DE\ge x\bgr\} \le 12\4d\,\exp\Big(-\ffrac{c_{19}\,x}{d^{3/2}\4\t}\Big). \eqno(3.31)$$ Let now $M< N$. Denote $$L=\max\bgl\{0,\,M\bgr\} \eqno(3.32)$$ and $$\DE_3=\max\limits_{0\le k<2^{N-L}}\,\max\limits_{1\le l\le2^L} \,\bgl|S_{k\cdot2^L+l}-S_{k\cdot2^L}\bgr|, \eqno(3.33)$$ $$\DE_4=\max\limits_{0\le k<2^{N-L}}\,\max\limits_{1\le l\le2^L} \,\bgl|T_{k\cdot2^L+l}-T_{k\cdot2^L}\bgr|, \eqno(3.34)$$ $$\DE_5=\max\limits_{1\le k\le2^{N-L}} \,\bgl|S_{k\cdot2^L}-T_{k\cdot2^L}\bgr|. \eqno(3.35)$$ Introduce the event $$A=\bgl\{\om:\bgl|U_{L,k}\bbgr|<y\cdot2^L \ \hbox{for all}\ 0\le k<2^{N-L}\bgr\} \eqno(3.36)$$ (we assume that all considered random vectors are measurable mappings of ${\om\in\Omega}$). For the complementary event we use the notation $\ovln A=\Omega\setminus A$. We consider separately two possible cases: $L=M$ and $L=0$. Let ${L=M}$. It is evident that in this case $$\DE\le\DE_3+\DE_4+\DE_5. \eqno(3.37)$$ Moreover, by virtue of (3.37), (3.26), (3.33) and (3.36), we have $$\ovln A\subset\bgl\{\om:\DE_3\ge x/4\bgr\}. \eqno(3.38)$$ From (3.37) and (3.38) it follows that $${}\P\bgl\{\DE\ge x\bgr\}\le\P\bgl\{\DE_3\ge x/4\bgr\} +\P\bgl\{\DE_4\ge x/4\bgr\}+\P\bgl\{\DE_5\ge x/2, \,A\bgr\}. \eqno(3.39)$$ Using Lemmas \[3.5\], \[3.1\] and \[3.3\], the completeness of classes $\A$ with respect to convolution and the relations (3.25) and (3.26), we obtain, for $0\le k<2^{N-L}$, that ${2^L= 2^M\le x/2\4y}$ and $$\begin{aligned} {}\P\bgl\{\max_{1\le l\le2^L} \,\bgl|S_{k\cdot2^L+l}-S_{k\cdot2^L}\bgr|\ge x/4\bgr\}\5 &\le&\5 3\,\max_{1\le l\le2^L}\,\P\bgl\{ \bgl|S_{k\cdot2^L+l}-S_{k\cdot2^L}\bgr|\ge x/12\bgr\} \\ &\le&\5 6\4d\,\exp\Big(-\min\Big\{\ffrac{x^2}{576\cdot2^L}, \ffrac{x}{48\4\t}\Big\}\Big) \\ &\le&\5{6\4d \,\exp\Big(-\ffrac{c_{20}\4x}{d^{3/2}\4\t}\Big). \hskip2.8cm(3.40)}\end{aligned}$$ Since all $d$-dimensional Gaussian distributions belong to classes $\A$ for all $\t\ge0$, we immediately obtain that $${}\P\bgl\{\max_{1\le l\le2^L} \,\bgl|\4T_{k\cdot2^L+l}-T_{k\cdot2^L}\bgr|\ge x/4\bgr\} \le6\4d \,\exp\Big(-\ffrac{c_{20}\4x}{d^{3/2}\4\t}\Big). \eqno(3.41)$$ From (3.33), (3.34), (3.40) and (3.41) it follows that $${}\P\bgl\{\DE_3\ge x/4\bgr\} +\P\bgl\{\DE_4\ge x/4\bgr\} \le2^N\cdot12\4d \,\exp\Big(-\ffrac{c_{20}\4x}{d^{3/2}\4\t}\Big). 
\eqno(3.42)$$ Assume that $L=0$. Then, according to (3.24) and (3.35), $\DE=\DE_5$ and, hence, we have the rough bound $${}\P\bgl\{\DE\ge x\bgr\}\le\P\bgl\{\ovln A\bgr\} +\P\bgl\{\DE_5\ge x/2, \,A\bgr\}. \eqno(3.43)$$ In this case $U_{L,k}=X_{k+1}$, $2^L=1\ge2^M$, $y> x/4$ (see (3.25), (3.26) and (3.32)). Therefore, by (3.36) and by Lemmas \[3.1\] and \[3.3\], $$\begin{aligned} {}\P\bgl\{\ovln A\bgr\}&\le&\sum\limits_{k=0}^{2^N-1} \P\bgl\{\bgl|U_{L,k}\bbgr| \ge y\cdot2^L\bgr\}= \sum\limits_{k=1}^{2^N} \P\bgl\{\bgl|X_{k}\bbgr| \ge{y}\bgr\} \\ &\le&2^{N+1}\4d\,\exp\Big(-\min\Big\{\ffrac{y^2}{4}, \ffrac{y}{4\,\t}\Big\}\Big)\\ &\le&2^{N+1}\4d\,\exp\Big(-\min\Big\{\ffrac{x\4y}{16}, \ffrac{x}{16\,\t}\Big\}\Big)\\ &\le&2^{N+1}\4d \,\exp\Big(-\ffrac{c_{21}\4x}{d^{3/2}\4\t}\Big). \hbox{\rlap{\hskip4.37cm(3.44)}}\end{aligned}$$ It remains to estimate $\P\bgl\{\DE_5\ge x/2, \,A\bgr\}$ in both cases: $L=M$ and ${L=0}$ (see (3.39) and (3.42)–(3.44)). Let $L$ defined by (3.32) be arbitrary. Fix an integer $k$ satisfying ${1\le k\le2^{N-L}}$ and denote for simplicity $$j=j(k)\=k\cdot2^{L}. \eqno(3.45)$$ By Corollary \[c2.1\], we have $$\bgl|S_{k\cdot2^{L}}-T_{k\cdot2^{L}}\bbgr|= \bgl|S_{j}-T_{j}\bgr|\le \bgl| U_{N,0}- V_{N,0}\bgr| +\ffrac12\sum_{n=L+1}^N \bgl|\wt U_{n,l_{n,j}}-\wt V_{n,l_{n,j}}\bgr|, \eqno(3.46)$$ where $l_{n,j}$ are integers, defined by $l_{n,j}\cdot2^n<j\le \big(l_{n,j}+1\big)\cdot2^n\,$ (see (2.42)). By virtue of (3.25) and (3.36), for $\om\in A$ we have $$\bgl|U_{L,l}\bbgr| <y\cdot2^L =\ffrac{c_{18}\cdot2^L} {d^{3/2}\4\t}\le\ffrac{\min\{c_{14},\,c_{17}\}\cdot2^L} {d^{3/2}\4\t},\qquad 0\le l<2^{N-L}, \eqno(3.47)$$ and, by (2.35)–(3.37), $U_{L,l}$ are sums over blocks consisting of $2^L$ summands. Moreover, $ U_{n,l}$ (resp. $\wt U_{n,l}$), $L+1\le n\le{N}$, $0\le l<2^{N-n}$, are sums (resp. differences) of two sums over blocks containing each $2^{n-1}$ summands. These sums and differences can be represented as linear combinations (with coefficients $\pm1$) of $2^{n-L}$ sums over blocks containing each $2^L$ summands and satisfying (3.47). Therefore, for $\om\in A$, $L+1\le n\le{N}$, $0\le l<2^{N-n}$ we have (see (2.32) and (2.34)) $$\bgl|{\bf U}_{n,l}\bbgr|= \max\bgl\{\bgl|U_{n,l}\bbgr|, \,\bgl|\wt U_{n,l}\bbgr|\bgr\} \le2^{n-L}\4y\cdot2^L =y\cdot2^n\le\ffrac{\min\{c_{14},\,c_{17}\}\cdot2^n} {d^{3/2}\4\t}. \eqno(3.48)$$ Using (3.48), we see that if $\om\in A$, the conditions of Lemma \[3.6\] are satisfied for $\t$, $ U_{N,0}$ and ${\bf U}_{n,l}$, if $L+1\le n\le{N}$, $0\le l<2^{N-n}$. By (3.46), (3.48) and by Lemma \[3.6\], for $\om\in A$ we have $$\bgl|S_{j}-T_{j}\bbgr|\le c_{13}\4d^{3/2}\4\t \,\bgl(1+2^{-N}\4\bgl|U_{N,0}\bgr|^2\bgr)\hskip6cm{}$$ $$+\sum_{n=L+1}^N c_{16}\4d^{3/2}\4\t \,\Big(\,1+2^{-n}\4\max\bgl\{\bgl|U_{n,l_{n,j}}\bgr|^2, \,\bgl|\wt U_{n,l_{n,j}}\bgr|^2\bgr\}\Big)$$ $$\le c\4d^{3/2}\4\t\,\biggl( \,N+1+2^{-N}\4\bgl|U_{N,0}\bgr|^2 +\sum_{n=L}^{N-1} 2^{-n}\4\bgl(\bgl|\YY U{}n\bgr|^2+ \bgl|U_{(n)}\bgr|^2\bgr)\biggr), \eqno(3.49)$$ where $$\YY U{}n=U_{n,l_{n,j}},\qquad U_{(n)}=U_{n,\wt l_{n,j}}, \eqno(3.50)$$ and $$\wt l_{n-1,j}=\begin{cases}2\4l_{n,j}, &\hbox{if}\quad l_{n-1,j}=2\4l_{n,j}+1,\\ 2\4l_{n,j}+1, &\hbox{if}\quad l_{n-1,j}=2\4l_{n,j},\end{cases}\qquad L<n\le N \eqno(3.51)$$ (it is easy to see that $l_{n-1,j}$ can be equal either to $2\4l_{n,j}$ or to $2\4l_{n,j}+1$, for given $l_{n,j}$). 
In other words, $\YY U{}n$, $L\le n\le{N}$, is the sum over the block of $2^n$ summands which contains $X_{j}$. The sum $U_{(n)}$ does not contain $X_{j}$ and $$\YY U{}{n+1}=\YY U{}n+U_{(n)}, \qquad L\le n<{N}\nopagebreak \eqno(3.52)$$ (see (3.37)). The equality (3.52) implies $$\YY U{}n= \YY U{}{L}+ \sum_{s=0}^{n-L-1}U_{(L+s)},\qquad L\le n\le{N}. \eqno(3.53)$$ It is important that all summands in the right-hand side of (3.53) are the sums of disjoint blocks of independent summands. Therefore, they are independent. Put $\be=1/\sqrt2$. Then, using (3.53) and the Hölder inequality, one can easily derive that, for $ L\le n\le{N}$, $$\bgl|\YY U{}n\bbgr|^2\le c_{22}\,\bigg(\,\be^{-(n-L)}\4 \bgl|\YY U{}{L}\bbgr|^2+ \sum_{s=0}^{n-L-1}\be^{-(n-L-1)+s}\4 \bgl|U_{(L+s)}\bbgr|^2\,\bigg), \eqno(3.54)$$ with $c_{22}=\sum\limits_{j=0}^{\infty}\be^{j}=\ffrac{\sqrt2}{\sqrt2-1}$. It is easy to see that $$\sum_{n=L}^{N}2^{-n} \,\be^{-(n-L)}\4 \bgl|\YY U{}{L}\bbgr|^2\le c_{22}\cdot2^{-L}\4 \bgl|\YY U{}{L}\bbgr|^2. \eqno(3.55)$$ Moreover, $$\sum_{n=L+1}^{N}\sum_{s=0}^{n-L-1} 2^{-n}\4\be^{-(n-L-1)+s}\4 \bgl|U_{(L+s)}\bbgr|^2\hskip3cm{}$$ $$=\sum_{s=0}^{N-L-1}\sum_{n=L+1+s}^{N} 2^{-n}\4\be^{-(n-L-1)+s}\4 \bgl|U_{(L+s)}\bbgr|^2$$ $${}\hskip3cm\le c_{22}\sum_{s=0}^{N-L-1}2^{-(L+1+s)}\4 \bgl|U_{(L+s)}\bbgr|^2.\qquad \eqno(3.56)$$ It is clear that the inequalities (3.54)–(3.56) imply $$2^{-N}\4\bgl|U_{N,0}\bgr|^2+\sum_{n=L}^{N-1} 2^{-n}\4\bgl(\bgl|\YY U{}n\bgr|^2+ \bgl|U_{(n)}\bgr|^2\bgr)\hskip3cm{}$$ $$\le c_{22}\,\bigg(\ffrac{ \bgl|\YY U{}{L}\bbgr|^2}{2^{L}} +\sum_{s=0}^{N-L-1}\ffrac{ \bgl|U_{(L+s)}\bbgr|^2}{2^{L+1+s}}\bigg) +\sum_{n=L}^{N-1}\ffrac{\bgl|U_{(n)}\bgr|^2}{2^{n}}$$ $$\hskip3cm\le c\,\bigg(\ffrac{ \bgl|\YY U{}{L}\bbgr|^2}{2^{L}} +\sum_{n=L}^{N-1}\ffrac{\bgl|U_{(n)}\bgr|^2}{2^{n}}\bigg). \eqno(3.57)$$ From (3.49) and (3.57) it follows that for $\om\in A$ we have $$\bgl|S_{j}-T_{j}\bbgr|\le c_{23}\4d^{3/2}\4\t\,\biggl(\,N+1+ \ffrac{ \bgl|\YY U{}{L}\bbgr|^2}{2^{L}} +\sum_{n=L}^{N-1}\ffrac{\bgl|U_{(n)}\bgr|^2}{2^{n}}\bigg). \eqno(3.58)$$ Denote (for $0\le n\le N$, $0\le l<2^{N-n}$) $$W_{n,l}=\begin{cases}2^{-n}\4\bgl|U_{n,l}\bgr|^2, &\hbox{if}\quad\bgl|U_{n,l}\bgr|\le y\cdot2^n,\\ 0, &\hbox{otherwise}.\end{cases} \eqno(3.59)$$ Let us show that $$\E\exp\bgl(t\4W_{n,l}\bgr)\le2\4d+1,\qquad 0\le t\le1/8. \eqno(3.60)$$ Indeed, integrating by parts, we obtain $$\E\exp\bgl(t\4W_{n,l}\bgr)=1+\int_0^{y^2\cdot2^n} t\,\exp(t\4u) \,\P\bgl\{W_{n,l}\ge u\bgr\}\,du\hskip2.45cm{}$$ $$\hskip2.45cm\le1+\ffrac18\int_0^{y^2\cdot2^n} \exp(u/8) \,\P\bgl\{\bgl|U_{n,l}\bgr|\ge2^{n/2}\sqrt u\bgr\}\,du. \eqno(3.61)$$ Taking into account (3.37), (3.25) and using Lemmas \[3.1\] and \[3.3\], we obtain that $$\P\bgl\{\bgl|U_{n,l}\bgr|\ge2^{n/2}\sqrt u\bgr\} \le2\4d\,\exp\Big(-\min\Big\{\ffrac u4, \ffrac{2^{n/2}\sqrt u}{4\4\t}\Big\}\Big)$$ $$\le2\4d\,\exp\Big(-\min\Big\{\ffrac u4, \ffrac u{4\4\t\4y}\Big\}\Big) =2\4d\,\exp(-u/4), \eqno(3.62)$$ if $0\le u\le y^2\cdot2^n$. The relation (3.60) immediately follows from (3.61) and (3.62). The relations (3.47), (3.48) and (3.59) imply that, for $ L\le n\le{N}$, $0\le l<2^{N-n}$, $\om\in A$, $$2^{-n}\4\bgl|U_{n,l}\bgr|^2=W_{n,l}. 
\eqno(3.63)$$ Thus, according to (3.50), we can rewrite (3.58) in the form $$\bgl|S_{j}-T_{j}\bgr|\le c_{23}\4d^{3/2}\4\t\,\biggl(\,N+1+\YY W{}L +\sum_{n=L}^{N-1}W_{(n)}\bigg),\qquad \om\in A,\nopagebreak \eqno(3.64)$$ where $$\YY W{}L=W_{L,l_{L,j}},\qquad W_{(n)}=W_{n,\wt l_{n,j}}, \eqno(3.65)$$ Putting now $t^*=(8\,c_{23}\4d^{3/2}\4\t)\me$ and ${t=t^*\cdot c_{23}\4d^{3/2}\4\t=1/8}$, taking into account that the random variables $\YY W{}L$, $W_{(L)}$, …, $W_{(N-1)}$ are independent and applying  (3.60), (3.64) and (3.65), we obtain $$\begin{aligned} \P\Bigl\{\bgl\{&&\hskip-.7cm\om: \bgl|S_{j}-T_{j}\bbgr| \ge x/2\bgr\}\cap A\Bigr\}\\ &&\le\P\Bigl\{ \,c_{23}\4d^{3/2}\4\t\,\Bigl(\,N+1+\YY W{}L +{\sum\limits_{n=L}^{N-1}}W_{(n)}\,\Big) \ge x/2\,\Bigr\}\\ &&\le\P\Bigl\{t\,\Bigl(\,\YY W{}L +{\sum\limits_{n=L}^{N-1}} W_{(n)}\,\Big)\ge t^*x/2-t\4(N+1)\Bigr\}\\ &&\le\E\exp\biggl(t\,\Bigl(\,\YY W{}L +{\sum\limits_{n=L}^{N-1}}W_{(n)}\,\Big)\bigg) \Big/\exp\bgl( t^*x/2-t\4(N+1)\bgr) \\ &&=\E\exp\bgl(t\4\YY W{}L\bgr) {\prod\limits_{n=L}^{N-1}} \E\exp\bgl(t\4W_{(n)}\bgr) \Big/\exp\bgl( t^*x/2-t\4(N+1)\bgr)\\ &&\le\,(3\4d)^{N+1}\,\exp\Big(\ffrac {N+1}{8} -\ffrac x{16\,c_{23}\4d^{3/2}\4\t}\Big). \hskip4.05cm(3.66)\end{aligned}$$ From (3.35), (3.45) and (3.66) it follows that $${}\P\bgl\{\DE_5\ge x/2, \,A\bgr\} \le2^N\cdot(3\4d)^{N+1}\,\exp\Big(\ffrac {N+1}{8} -\ffrac x{16\,c_{23}\4d^{3/2}\4\t}\Big). \eqno(3.67)$$ Using (3.31), (3.39), (3.42)–(3.44) and (3.67), we obtain that $${}\P\bgl\{\DE\ge x\bgr\}\le(19\4d)^{N+1} \,\exp\Big(-\ffrac x{c_{24}\4d^{3/2}\4\t}\Big), \qquad x\ge0, \eqno(3.68)$$ where we can take $ c_{24}=\max \,\bgl\{16\4c_{23},\, c_{19}\me,\, c_{20}\me,\, c_{21}\me,\, 2\bgr\} $. Let the quantities $\e, \,x_0>0$ be defined by the relations $$\e=\ffrac 1{2\4c_{24}\4d^{3/2}\4\t}\le \ffrac 1{4\4\t}, \qquad e^{\e x_0}=(19\4d)^{N+1}. \eqno(3.69)$$ Integrating by parts and using (3.68) and (3.69), we obtain $$\E e^{\e\DE}=\int _0^\infty \e\4e^{\e x} \,\P\bgl\{\DE\ge x\bgr\}\,dx+1,$$ $$\int _0^{x_0} \e\4e^{\e x} \,\P\bgl\{\DE\ge x\bgr\}\,dx\le \int _0^{x_0} \e\4e^{\e x}\,dx=e^{\e x_0}-1 =(19\4d)^{N+1}-1,$$ $$\int _{x_0}^\infty \e\4e^{\e x} \,\P\bgl\{\DE\ge x\bgr\}\,dx\le \int _{x_0}^\infty \e\4e^{-\e (x-x_0)}\,dx=1,$$ and, hence, $$\E e^{\e\DE}\le (19\4d)^{N+1}+1\le (20\4d)^{N+1}.$$ Together with (3.24) and (3.69), this completes the proof of Theorem \[2.1\]. $\square$ Proofs of Theorems \[1.1\]–\[1.4\] {#s4} ================================== We start the proofs of Theorems \[1.1\]–\[1.3\] with the following common part. [*Beginning of the proofs of Theorems  [\[1.1\]]{}, [\[1.2\]]{} and [\[1.3\]]{}*]{} At first we shall verify that under the conditions of Theorems \[1.2\] or \[1.3\] we have ${\L(\xi_k)\in \A}$. For Theorem \[1.3\] this relation is an immediate consequence of Lemma \[3.1\], of the completeness of classes $\A$ with respect to convolution and of the conditions (1.8) and (1.10)–(1.12). In the case of Theorem \[1.2\] we denote $K=\L(\eta)$. One can easily verify that ${\bf B}=\cov K=\gamma^2\,{\bf I}_d$, where $\gamma^2$ is defined by (1.7) and, hence, $$1\le\gamma^2\le3. \eqno(4.1)$$ Moreover, $$\p(K,z)=\log\E e^{\8z,\eta\9}= \log\ffrac{\bgl(4+\t^2\4(d+\8z,\ov z\9)\bgr) \,\exp\big(\8z,\ov z\9\!/2\big)} {(4+\t^2\4d)},\qquad z\in\Cd. 
\eqno(4.2)$$ Using (4.1) and (4.2), we obtain $$\bgl|d_u d_v^2\4\p(K,z)\bgr| =\bgl|d_u d_v^2\,\log \big(4+\t^2\4(d+\8z,\ov z\9)\big)\bgr|\le c\4\t^3\nnnorm u\nnnorm v^2\le\nnnorm u\t\<{\bf B} \,v,v\>, \eqno(4.3)$$ for $\nnnorm z\t\le1$, provided that $c_1$ (involved in Assertion A) is sufficiently small. This means that ${K=\L(\eta)\in\A}$. The relation $\L(\xi_k)=\L\big(\eta/\gamma\big)\in\A$, ${k=1,\dots,n}$, follows from (4.1) and from Lemma \[3.1\]. The text below is related to Theorems \[1.1\], \[1.2\] and \[1.3\] simultaneously. Without loss of generality we assume that the amount of summands is equal to $2^N$ with some positive integer $N$. It suffices to show that the dyadic scheme related to the vectors $\xi_1,\dots,\xi_{2^N}$ satisfies the conditions of Theorem \[2.1\] with $\t^*=\sqrt2\,\t$ instead of $\t$. According to Lemma \[l2.1\], we can verify the conditions (2.18) and (2.19) for the vectors ${\bf U}_{n,k}^j$ and ${\bf U}_{N,0}^j$ instead of ${\bf U}_{n,k}^{*j}$ and ${\bf U}_{N,0}^{*j}$. To this end we shall show that $${\cal L}\big({\bf U}_{n,k}^j\big)\in \AV{\sqrt 2\,\t}4j \for 0\le k<2^{N-n},\quad1\le n\le N,\quad 1\le j\le 2\4d. \eqno(4.4)$$ Recall that ${\bf U}_{n,k}={\bf A}\4\wt{\bf U}_{n,k}$, where ${\bf A}$ is the linear operator defined by (2.16) and satisfying (2.40). Furthermore, $\wt{\bf U}_{n,k} =\bgl(U_{n-1,2k},\,U_{n-1,2k+1})\in{\bf R}^{2d}$, where the $d$-dimensional vectors $U_{n-1,2k}$ and $U_{n-1,2k+1}$ are independent. The relation ${\L({\bf U}_{n,k})\in\AD{\sqrt 2\,\t}{2d}}$ can be therefore easily derived from the conditions of Theorems \[1.1\], \[1.2\] and \[1.3\] with the help of Lemmas \[l2.1\], \[3.1\] and \[3.2\] (see (2.40)) if we take into account the completeness of classes $\A$ with respect to convolution and their monotonicity with respect to $\t$. It is easy to see that ${{\bf U}_{n,k}^j=\ov{\bf P}_j\,{\bf U}_{n,k}}$, where the projector ${\ov{\bf P}_j:{\bf R}^{2d}\to{\bf R}^j}$ can be considered as a linear operator with $\nnnorm{\ov{\bf P}_j}=1$ (see (2.34)). Applying Lemma \[3.1\] again, we obtain the relations $\L({\bf U}_{n,k}^j)\in\AD{\sqrt 2\,\t}j$, $1\le j\le 2\4d$. It remains to verify that, for $ h\in{\bf R}^j$, ${\norm h\sqrt 2\,\t<1}$, the following inequality hold: $$\int\limits_{T} \,\bbgl|\wh F_h(t)\bbgr| \,dt\le\ffrac{(2\4\pi)^{j/2}\,\sqrt2\,\t\4j^{3/2}} {\si\,(\det{\bf D})\ssqrt},\nopagebreak \eqno(4.5)$$ $$T=\bgl\{t\in{\bf R}^j:4\4\norm t\4\sqrt 2\,\t\4j\ge1\bgr\}, \eqno(4.6)$$ where $F=\L \big({\bf U}_{n,k}^j\big)$, and $\si^2$ is the minimal eigenvalue of  ${{\bf D}=\cov {\bf U}_{n,k}^j}$. Note that, according to (3.9), we have $${\bf D}=2^n\4{\bf I}_j,\quad \si^2=2^n,\quad \det{\bf D}=2^{nj}. \eqno(4.7)$$ Introduce $2^{n-1}$ random vectors $${\bf X}_r=\big(X_r,\,X_{2^{n-1}+r}\big)\in{\bf R}^{2d},\qquad r=2^{n-1}\cdot 2\4k+1,\dots,2^{n-1}\4(2\4 k+1). \eqno(4.8)$$ Obviously, these vectors are independent. According to (2.36), (4.37) and (4.8), $$\wt{\bf U}_{n,k} =\bgl(U_{n-1,2k},\,U_{n-1,2k+1})= \sum_{r=2^{n-1}\cdot 2k+1}^{2^{n-1}(2k+1)}\3 {\bf X}_r. \eqno(4.9)$$ Denote now ${\YY R{h}s}=\ovln{\L(X_s)}(h)$, for $s=1,\dots, 2^N$, ${h\in\Rd}$, and ${\YY M{h}r}\=\ovln{\L({\bf X}_r)}(h)$, ${{\YY Q{h}r}\=\ovln{\L({\bf A}\4{\bf X}_r)}(h)}$, for $r=2^{n-1}\cdot 2\4k+1,\dots,2^{n-1}\4(2\4 k+1)$, $h\in{\bf R}^{2d}$. As usually, we consider only such $h$ for which these distributions exist. 
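Observe, for later use, that by (4.7) we have $\si\,(\det{\bf D})\ssqrt=2^{n/2}\cdot2^{n j/2}$, so that the required estimate (4.5) may be rewritten as $$\int\limits_{T} \,\bbgl|\wh F_h(t)\bbgr| \,dt\le\ffrac{(2\4\pi)^{j/2}\,\sqrt2\,\t\4j^{3/2}}{2^{n/2}\cdot2^{n j/2}}\,;$$ this is the quantity which the integrals estimated in (4.18) and (4.40) below must not exceed.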
Using (2.8), we see that, for all $t\in{\bf R}^{2d}$, $$\begin{aligned} {\wh Q_h^{(r)}(t)} =\ffrac{\E \exp\bgl(\<h+i\4t,\4{\bf A} {\bf X}_r\>\bgr)} {\E \exp\bgl(\<h,{\bf A} {\bf X}_r\>\bgr)} \hskip-.6cm&&=\ffrac{\E \exp\bgl(\<{\bf A}^* h+i\4{\bf A}^* t, {\bf X}_r\>\bgr)} {\E \exp\bgl(\<{\bf A}^* h, {\bf X}_r\>\bgr)}\\ &&=\,\wh M_{{\bf A}^*\! h}^{(r)}({\bf A}^* t). \hbox{\rlap{\hskip3.14cm(4.10)}}\end{aligned}$$ By (2.3) and (4.9), we have (for $j=2\4d$) $$\bbgl|\wh F_h(t)\bbgr| = \prod_{r=2^{n-1}\cdot 2k+1}^{2^{n-1}(2k+1)}\3\ \bbgl|\wh Q_h^{(r)}(t)\bbgr|. \eqno(4.11)$$ Split $t=\big(t_1,\dots,t_{2d}\big)\in{\bf R}^{2d}$ as $t=\big(\YY t{}1,\,\YY t{}2\big)$, where we denote $\YY t{}1=\big(t_1,\dots,t_d\big)$ and ${\YY t{}2=\big(t_{d+1}},\dots,t_{2d}\big)\in\Rd$. Using formulae (2.8) and (4.8) and introducing a similar notation for  $h\in{\bf R}^{2d}$, it is easy to check that $$\wh M_{h}^{(r)}(t)=\wh R_{\YY h{}1}^{(r)}\big(\YY t{}1\big) \,\wh R_{\YY h{}2}^{(2^{n-1}+r)}\big(\YY t{}2\big). \eqno(4.12)$$ Note that $$\nnnorm {\2t\2}^2= \nnnorm {\4t^{( 1)}}^2+ \nnnorm {\4t^{( 2)}}^2. \eqno(4.13)$$ [*End of the proof of Theorem [\[1.1\]]{}*]{} Let now the distributions $\L(\xi_s)$ satisfy the conditions of Theorem \[1.1\]. In this case, according to (2.3), we have ${\YY R{h}s= \ovln H_s(h)\,\ovln G(h)}$. It is well-known that the conjugate distributions $\ovln G(h)$ of the Gaussian distribution $G$ are also Gaussian with covariance operator $\cov \ovln G(h)=\cov G=b^2\,{\bf I}_d$. Therefore, $$\bbgl|\wh R_h^{(s)}(t)\bbgr| \le \exp\big(-b^2\nnnorm t^2\!/2\big),\qquad t,h\in\Rd,\ \,\norm h\t<1. \eqno(4.14)$$ Using (4.12)–(4.14), we get, for $ t,h\in{\bf R}^{2d}$, $\norm h\t<1$: $$\bbgl|\wh M_h^{(s)}(t)\bbgr|\le \prod_{\mu=1}^2\exp\big(-b^2\nnnorm {\4t^{(\mu)}}^2\!/2\big) =\exp\big(-b^2\nnnorm {\2t\2}^2\!/2\big). \eqno(4.15)$$ Applying (2.40), (4.10) and (4.15) with $t={\bf A}^*\4u$ and $h={\bf A}^*\4\gamma$, we see that $$\bbgl|\wh Q_\gamma^{(s)}(u)\bbgr|\le \exp\bgl(-b^2\4\nnorm {{\bf A}^*u}^2\!/2\bgr) \le \exp\big(-b^2\nnnorm {u}^2\big), \eqno(4.16)$$ for $u, \gamma\in{\bf R}^{2d}$, $\norm \gamma\sqrt 2\,\t<1$. The relations (4.11) and (4.16) imply that $$\bbgl|\wh F_h(t)\bbgr| \le \exp\bgl(-b^2\nnnorm t^2\cdot2^{n-2}\bgr),\qquad t,h\in{\bf R}^j,\ \,\norm h\sqrt 2\,\t<1. \eqno(4.17)$$ It is clear that it suffices to verify (4.17) for $j=2\4d$ (for ${1\le j<2\4d}$ one should apply (4.17) for $j=2\4d$ and for $t, h\in{\bf R}^{2d}$, with $h_m=t_m=0$, ${m=j+1,\dots,2\4d}$). Using (4.6), (4.7) and (4.17), we see that $$\int\limits_{T} \,\bbgl|\wh F_h(t)\bbgr|\,dt\le \,\exp\Bigl(-\ffrac{b^2\cdot2^{n-3}}{32\,\t^2\4j^{2}}\Bigr) \int\limits_{{\bf R}^j} \,\exp\bgl(-b^2\4\nnnorm t^2\cdot2^{n-3}\bgr)\,dt\$$ $${}\hskip-.4cm = \ffrac {(2\4\pi)^{j/2}}{(b^2\cdot2^{n-2})^{j/2}} \,\exp\Bigl(-\ffrac{b^2\cdot2^n}{2^8\,\t^2\4j^{2}}\Bigr)$$ $${}\hskip.2cm\le\ffrac{(2\4\pi)^{j/2}\,\t^{4j\cdot2^n}} {(\det{\bf D})\ssqrt\,\t^{2j}} \le\ffrac{(2\4\pi)^{j/2}\,\t} {2^{n/2}\, (\det{\bf D})\ssqrt}, \eqno(4.18)$$ if $c_1$ is small enough. The relations (4.7) and (4.18) imply (4.5). It remains to apply Theorem \[2.1\] to complete the proof of Theorem \[1.1\]. $\square$ [*End of the proof of Theorem [\[1.2\]]{}*]{} Let now the distributions $\L(\xi_s)$ satisfy the conditions of Theorem \[1.2\]. 
In this case, according to (2.8) and (4.2), we have $$\begin{aligned} \bbgl|\wh R_h^{(s)}(t)\bbgr| \hskip-.5cm&& = \,\biggl| {\ffrac{\bgl(4+\t^2\4(d+\nnnorm h^2+2\4 i\8h, t\9-\nnnorm t^2)\bgr) \,\exp\big((\nnnorm h^2+2\4 i\8h, t\9-\nnnorm t^2)/2\big)} {\bgl(4+\t^2\4(d+\nnnorm h^2)\bgr) \,\exp\big(\nnnorm h^2\!/2\big)}}\biggr| \\ &&\le \,\bgl(2+\nnnorm t^2\big)\,\exp\big(-\nnnorm t^2\!/2\big) \\ &&\le \,c_{25}\,\exp\big(-\nnnorm t^2\!/4\big),\qquad\qquad \norm h\t<1. \hbox{\rlap{\hskip5.48cm(4.19)}}\end{aligned}$$ The rest of the proof is omitted. It is similar to that of Theorem \[1.1\] with ${b^2=\half}$. The presence of $c_{25}$ in the right-hand side of (4.19) can be easily compensated by choosing $c_1$ to be sufficiently small. [*End of the proof of Theorem [\[1.3\]]{}*]{} Consider the dyadic scheme with $$\L(\xi_s)=\L(X_s)=\YY L{}s\4P, \qquad s=1,\dots, 2^N. \eqno(4.20)$$ Putting $H\=\L(\zeta)$, ${\psi_h(x)=e^{\8h, x\9}\,p(x)}$, $h,x\in{\bf R}^d$, and integrating by parts, we see that (for $t\in{\bf R}^d$, $t\ne0$) $$\begin{aligned} \wh H_h(t)&=&\bgl(\!\E e^{\8h,\zeta\9}\bgr)^{-1} \int\limits_{\nnnorm x\le b_1} e^{i\2\8t,x\9}\,\psi_h(x)\,dx\\ &=&-\bgl(\!\E e^{\8h,\zeta\9}\bgr)^{-1} \int\limits_{\nnnorm x\le b_1} \ffrac{e^{i\2\8t,x\9}} {i\4\nnnorm t^2}d_t \4\psi_h(x)\,dx, \hbox{\rlap{\hskip2.6cm(4.21)}}\end{aligned}$$ where $H_h=\ovln H(h)$. Besides, using (1.9), we see that $$\sup_{\nnnorm x\le b_1}\,\sup_{\nnnorm h \4b_2\le1} \,\bgl|d_t \4\psi_h(x)\bgr|\le b_5\4\nnnorm t. \eqno(4.22)$$ As in the formulation of Theorem \[1.3\] we denote by $b_m$ different positive quantities depending on $H$. Note that the quantities depending on the dimension $d$ can be considered as depending on $H$ only as well. From (4.21) and  (4.22) it follows that $$\sup_{\nnnorm h \4b_2\le1}\,\bbgl|\wh H_h(t)\bbgr| \le b_6\4\nnnorm t^{-1} \eqno(4.23)$$ (note that, by the Jensen inequality, $\E e^{\8h,\zeta\9}\ge e^{\E\8h,\zeta\9}=1$). The inequality (4.23) implies that $$\sup_{\nnnorm h \4b_2\le1} \,\bbgl|\wh H_h(t)\bbgr|\le \Big(\,1+\ffrac{\nnnorm t}{b_7}\Big)^{\!-1} \qquad\hbox{for}\quad \nnnorm t\ge b_7=2\4b_6 \nopagebreak \eqno(4.24)$$ and $$\sup_{\nnnorm h \4b_2\le1}\,\sup_{\nnnorm t\ge b_7} \,\bbgl|\wh H_h(t)\bbgr|\le \half.\nopagebreak \eqno(4.25)$$ Since the distributions $H_h$ are absolutely continuous, the relation $\bbgl|\wh H_h(t)\bbgr|=1$ can be valid for $t=0$ only. Furthermore, the function $\bbgl|\wh H_h(t)\bbgr|$ considered as a function of two variables $h$ and $t$ is continuous for all $h,t\in{\bf R}^d$. Therefore, $$\sup_{\nnnorm h \4b_2\le1} \,\sup_{b_8\le\4\nnnorm t\4\le\4 b_7} \,\bbgl|\wh H_h(t)\bbgr|\le b_9<1,\nopagebreak \eqno(4.26)$$ where $$b_8=\big(4\sqrt2\,b_2\4d\big)\me\quad\hbox{and} \quad b_9\ge\half. \eqno(4.27)$$ The inequalities (4.25) and (4.26) imply that $$\sup_{\nnnorm h \4b_2\le1} \,\sup_{\nnnorm t\4\ge\4 b_8} \,\bbgl|\wh H_h(t)\bbgr|\le b_9\=e^{-b_{10}}<1. \eqno(4.28)$$ Denoting $\YY Lhs=\ovln L{}^{(s)}(h)$, $h\in \Rd$, $s=1,\dots,2^N$, and using (1.11), (1.12), (2.3) and (2.8), it is easy to see that $$\wh R{}^{(s)}_h(t)=\bgl(\wh H_{h/\sqrt m}\big(t/\sqrt m\big)\bgr)^{\!m} \,\wh L{}^{(s)}_h(t). \eqno(4.29)$$ The relations (1.10), (4.24), (4.28) and (4.29) imply that $$\sup_{\nnnorm h \4\t\le1} \,\bbgl|\wh R{}^{(s)}_h(t)\bbgr|\le \Big(\,1+\ffrac{\nnnorm t}{b_7\,\sqrt m}\Big)^{\!-m}\for \nnnorm t\ge b_7\,\sqrt m \eqno(4.30)$$ and $$\sup_{\nnnorm h \4\t\le1} \,\sup_{\nnnorm t\4\ge\4 b_8\sqrt m} \,\bbgl|\wh R{}^{(s)}_h(t)\bbgr|\le e^{-mb_{10}}. 
\eqno(4.31)$$ Using (4.12), (4.13), (4.20) and (4.30), we get, for $r=2^{n-1}\cdot 2\4k+1,\dots,2^{n-1}\4(2\4 k+1)$, $\nnnorm t\ge b_7\,\sqrt {2\4m}$, $t\in{\bf R}^{2d}$, $$\sup_{\htau\le 1} \,\bbgl|\wh M_h^{(r)}(t)\bbgr|\le \min_{\mu=1,2} \Big(\,1+\ffrac{\nnnorm {t^{(\mu)}}}{b_7\,\sqrt m}\Big)^{\!-m} \le \Big(\,1+\ffrac{\nnnorm t}{b_7\,\sqrt{2\4m}}\Big)^{\!-m}. \eqno(4.32)$$ Moreover, $$\sup_{\nnnorm h \4\t\le1} \,\sup_{\nnnorm t\4\ge\4 b_8\sqrt {2\4m}} \,\bbgl|\wh M_h^{(r)}(t)\bbgr|\le e^{-mb_{10}}. \eqno(4.33)$$ Using (2.40), (4.10), (4.32) and (4.33), we see that, for the same $r$ and for $t\in{\bf R}^{2d}$, $\nnnorm t\ge b_7\,\sqrt m$, $$\sup_{\htau\sqrt2\le 1} \,\bbgl|\wh Q_h^{(r)}(t)\bbgr|\le \Big(\,1+\ffrac{\nnnorm t}{b_7\,\sqrt m}\Big)^{\!-m} \eqno(4.34)$$ and $$\sup_{\htau\sqrt2\le 1} \,\sup_{\nnnorm t\4\ge\4 b_8\sqrt m} \,\bbgl|\wh Q_h^{(r)}(t)\bbgr|\le e^{-mb_{10}}. \eqno(4.35)$$ It is easy to see that the relations (4.11), (4.34) and (4.35) imply that, for $ h\in{\bf R}^j$, ${\norm h\sqrt 2\,\t<1}$, and for $t\in{\bf R}^j$, $\nnnorm t\ge b_7\,\sqrt m$, $$\bbgl|\wh F_h(t)\bbgr|\le \Big(\,1+\ffrac{\nnnorm t}{b_7\,\sqrt m}\Big)^{\!-m\cdot2^{n-1}} \eqno(4.36)$$ and $$\sup_{\nnnorm t\4\ge\4 b_8\sqrt m} \,\bbgl|\wh F_h(t)\bbgr|\le e^{-mb_{10}\cdot2^{n-1}}. \eqno(4.37)$$ It suffices to prove (4.36) and (4.37) for $j=2\4d$ (for $1\le j<2\4d$ one should apply (4.36) and (4.37) for $j=2\4d$ and for $ h\in{\bf R}^{2d}$, ${\norm h\sqrt 2\,\t<1}$, $t\in{\bf R}^{2d}$ with $h_m=t_m=0$, $m=j+1,\dots,2\4d$). Note now that the set $T$ defined in (4.6) satisfies the relation $$T\subset\bgl\{t\in{\bf R}^j:\nnnorm t\4\ge\4 b_8\4\sqrt m\bgr\} \eqno(4.38)$$ (see (1.10) and (4.27)). Below (in the proof of (4.5)) we assume that ${\norm h\sqrt 2\,\t<1}$. According to (4.37) and (4.38), for $t\in T$ we have $$\bbgl|\wh F_h(t)\bbgr|\ssqrt\le e^{-m b_{10}\cdot2^{n-2}}. \eqno(4.39)$$ Taking into account that $\bbgl|\wh F_h(t)\bbgr|\le1$, and $m\ge b_4$, choosing $b_4$ to be sufficiently large and using (1.10), (4.7), (4.36) and (4.39), we obtain $$\begin{aligned} \int\limits_{T} \,\bbgl|\wh F_h(t)\bbgr|\,dt \,\5&\le&\5 \,\exp\big(-m\4 b_{10}\cdot2^{n-2}\big) \biggl(\,\int\limits_{{\bf R}^j} \,\Big(\,1+\ffrac{\nnnorm t}{b_7\,\sqrt m}\Big)^{\!-m\cdot2^{n-2}} dt+b_{11}\4m^{d/2}\biggr)\\ \vspace{.5pt} \5&\le&\5 \,b_{12}\,m^{d/2} \,\exp\big(-m\4 b_{10}\cdot2^{n-2}\big)\\ \vspace{1\jot} \5&\le&\5\ffrac {(2\4\pi)^{j/2}\sqrt2\,b_2\4j^{3/2}} {m\ssqrt\cdot2^{n/2}\cdot2^{n j/2}}= \ffrac{(2\4\pi)^{j/2}\,\sqrt2\,\t\4j^{3/2}} {\si\,(\det{\bf D})\ssqrt}. \hbox{\rlap{\hskip4.03cm(4.40)}}\end{aligned}$$ The inequality (4.5) follows from (4.40) immediately. It remains to apply Theorem \[2.1\].  $\square$ [*Proof of Theorem [\[1.4\]]{}*]{} Define $m_0,m_1,m_2,\dots$ and $n_1,n_2,\dots$ by $$m_0=0,\quad m_s=2^{2^s}, \qquad n_s=m_s-m_{s-1},\qquad s=1,2,\dots. \eqno(4.41)$$ It is easy to see that $$\log n_s\le\log m_s=2^s\4\log2,\qquad s=1,2,\dots. \eqno(4.42)$$ By Assertion A (see (1.5)), for any $s=1,2,\dots$ one can construct on a probability space a sequence of i.i.d.  $\YY X1s,\dots,\YY X {n_s}s$ and a sequence of i.i.d.  Gaussian $\YY Y1s,\dots,\YY Y {n_s}s$ so that $\L(\YY Xks)=\L(\xi)$, $ \E \YY Yks=0$, $\cov \YY Yks={\bf I}_d$, and $${}\P\bgl\{\4c_2\,\DE_s\ge \t\4d^{3/2}\bgl( c_3\,\log^*d\,\log n_s +x\bgr)\4\bgr\} \le e^{-x},\qquad x\ge0, \eqno(4.43)$$ where $$\DE_s=\max_{1\le r\le n_s} \,\Bigl|\,\sum\limits_{k=1}^r X_k^{(s)} -\sum\limits_{k=1}^r Y_k^{(s)}\,\Bigr|. 
\eqno(4.44)$$ It is clear that we can define all the vectors mentioned above on the same probability space so that the collections $\Xi_s=\bgl\{\YY X1s,\dots,\YY X {n_s}s; \,\YY Y1s,\dots,\YY Y {n_s}s\bgr\}$, $s=1,2,\dots$ are jointly independent. Then we define $X_1,X_2,\dots$ and $Y_1,Y_2,\dots$ by $$X_{m_{s-1}+k}=X_k^{(s)},\qquad Y_{m_{s-1}+k}=Y_k^{(s)},\qquad k=1,\dots,n_s,\quad s=1,2,\dots. \eqno(4.45)$$ In order to show that these sequences satisfy the assertion of Theorem \[1.4\], it remains to verify the equality (1.13). Put $$c_{25}=\ffrac{1+c_3\log2}{c_2},\qquad c_{26}=c_{25}\sum_{l=0}^{\infty}2^{-l/2}=\ffrac{\sqrt2\,c_{25}}{\sqrt2-1}, \eqno(4.46)$$ and introduce the events $$A_l=\bgl\{\om:\DE^{(l)}\ge2^l\4c_{26}\4\t\4d^{3/2}\log^*d\bgr\},\qquad l=1,2,\dots, \eqno(4.47)$$ where $$\DE^{(l)}=\max_{1\le r\le m_l}\,\Bigl|\,\sum\limits_{j=1}^r X_j -\sum\limits_{j=1}^r Y_j\,\Bigr|. \eqno(4.48)$$ According to (4.44), (4.45) and (4.48), we have $$\DE^{(l)}\le\DE_1+\dots+\DE_l. \eqno(4.49)$$ Taking into account the relations (4.42), (4.46), (4.47), (4.49) and applying the inequality (4.43) with $x=2^{(s+l)/2}$, we get $$\begin{aligned} {}\P\bgl\{A_l\bgr\}&\le &\sum_{s=1}^l \P\bgl\{\4\DE_s\ge2^{(s+l)/2}\,c_{25}\,\t\4d^{3/2}\log^*d\4\bgr\}\\ &\le& \sum_{s=1}^l\exp\bgl(-2^{(s+l)/2}\bgr) \le c\,\exp\bgl(-2^{l/2}\bgr). \hbox{\rlap{\hskip2.3cm(4.50)}}\end{aligned}$$ The inequality (4.50) implies that $\sum\limits_{l=1}^\infty \P\bgl\{A_l\bgr\}<\infty$. Hence, by the Borel–Cantelli lemma, with probability one only a finite number of the events $A_l$ occur. This implies the equality (1.13) with $c_4=2\4c_{26}\big/\log2$ (see (4.41), (4.47) and (4.48)). $\square$ References {#references .unnumbered} ========== 1. Bártfai, P. (1966). Die Bestimmung der zu einem wiederkehrenden Prozess gehörenden Verteilungsfunktion aus den mit Fehlern behafteten Daten einer einzigen Realisation, [*Studia Sci. Math. Hungar.*]{}, [**1**]{}, 161–168. 2. Dudley, R. M. (1989). [*Real analysis and probability*]{}, Pacific Grove, California: Wadsworth & Brooks/Cole. 3. Einmahl, U. (1989). Extensions of results of Komlós, Major and Tusnády to the multivariate case, [*J. Multivar. Anal.*]{}, [**28**]{}, 20–68. 4. Etemadi, N. (1985). On some classical results in probability theory, [*Sankhy$\bar{\hbox{a}}$, Ser. A*]{}, [**47**]{}, 2, 215–221. 5. Hoffmann-Jørgensen, J. (1994). [*Probability with a view toward statistics*]{}, I, New York: Chapman & Hall. 6. Götze, F. and Zaitsev, A. Yu. (1997). Multidimensional Hungarian construction for vectors with almost Gaussian smooth distributions, [*Preprint 97-071 SFB 343*]{}, Universität Bielefeld. 7. Komlós, J., Major, P., Tusnády, G. (1975-76). An approximation of partial sums of independent RV’-s and the sample DF. [I; II]{}, [*Z. Wahrscheinlichkeitstheor. verw. Geb.*]{}, [**32**]{}, 111–131; [**34**]{}, 34–58. 8. Major, P. (1978). On the invariance principle for sums of independent identically distributed random variables, [*J. Multivar. Anal.*]{}, [**8**]{}, 487–517. 9. Massart, P. (1989). Strong approximation for multivariate empirical and related processes, via KMT construction, [*Ann. Probab.*]{}, [**17**]{}, 1, 266–291. 10. Rosenblatt, M. (1952). Remarks on a multivariate transformation, [*Ann. Math. Statist.*]{}, [**23**]{}, 470–472. 11. Sakhanenko, A. I. (1984). Rate of convergence in the invariance principles for variables with exponential moments that are not identically distributed, In: [*Trudy Inst. Mat. SO AN SSSR*]{}, [**3**]{}, pp. 4–49, Novosibirsk: Nauka (in Russian). 12. Zaitsev, A. Yu. (1986). 
Estimates of the Lévy–Prokhorov distance in the multivariate central limit theorem for random variables with finite exponential moments, [*Theor. Probab. Appl.*]{}, [**31**]{}, 2, 203–220. 13. Zaitsev, A. Yu. (1995). Multidimensional version of the results of Komlós, Major and Tusnády for vectors with finite exponential moments, [*Preprint 95-055 SFB 343*]{}, Universität Bielefeld. 14. Zaitsev, A. Yu. (1996). Estimates for quantiles of smooth conditional distributions and multidimensional invariance principle, [*Siberian Math. J.*]{}, [**37**]{}, 4, 807–831 (in Russian). 15. Zaitsev, A. Yu. (1998a). Multidimensional version of the results of Komlós, Major and Tusnády for vectors with finite exponential moments, [*ESAIM : Probability and Statistics*]{}, [**2**]{}, 41–108. 16. Zaitsev, A. Yu. (1998b). Multidimensional version of the results of Sakhanenko in the invariance principle for vectors with finite exponential moments, [*Preprint 98-045 SFB 343*]{}, Universität Bielefeld.
null
minipile
NaturalLanguage
mit
null
SOLAPUR: Sumana has the appearance of a 10-year-old though she’s a teenager 11 months shy of 18. Four feet tall and with body weight of 30 kilos, she is at least 25kg underweight and significantly short for her age. Her mental ability is of a kid half her age. She is also hearing impaired.At Solapur’s Palawi-Prabha Hira Pratishthan, one of Maharashtra’s oldest institutions caring for children living with HIV , cases like Sumana’s are not an exception. At least 60% of the 110 children here suffer from varying degrees of hearing loss. The prevalence of height and weight stunting is more than 70%, while over a third have stymied mental development. Besides physical, the prevalence of mental health problems such as depression is alarmingly high.AIDS-related deaths may have dropped 54% and new infections 66% in India over a decade, but children like Sumana are testimony to the gaping deficit in the national programme that focuses on drugs instead of a comprehensive medical, nutritional and psychosocial model that a complex disease like HIV warrants.An otherwise spirited child, Sumant becomes drowsy and irritable after taking medicines. “I get severe stomach-ache, dizziness, headache and diarrhoea,” he says, describing his plight after taking the daily dose of antiretroviral therapy (ART) medicines. The six-year-old is among the 17 children at the Palawi home who have forsaken ART even though their survival depends on it. Born with the virus, these children need to take 2-6 capsules daily. For additional infections such as tuberculosis, the pill count could run into double digits.“Children go through hell in spite of ART. There is no denying the dramatic shift in HIV care today, but what about those who got the disease at the turn of the century and have spent a lifetime taking medicines? We know little or nothing about what the virus or ART does to each child,” said Palawi’s 65-year-old founder Mangaltai Shah. She says most children in Palawi grapple with everyday health problems ranging from skin rashes, oral and vaginal yeast infections, pus accumulation in ears, hearing and vision problems to life-threatening infections like TB. “So when medicines meant to heal start giving trouble, they find it easier to discard.” Moreover, she adds, institutions like Palawi neither have adequate funds nor support from government hospitals to tackle the unique problems of each child.Sumant is supposed to receive intensive counselling from the local ART centre till his adherence improves, but Shah says the three doctors and four counsellors that cater to 300 HIV patients daily at Solapur Civil Hospital barely have the time.For nearly 3 lakh registered HIV patients in the state, there are less than 1,000 counsellors. Palawi was compelled to start its own school in 2014 as children are denied admission in regular ones the moment Palawi is mentioned. But it’s a blessing in disguise for these children who struggle to sit through more than one or two classes without dozing off.Palawi, which has been home to 250 HIV-positive children since 2001, has lost 40 to the disease.
null
minipile
NaturalLanguage
mit
null
For indispensable reporting on the coronavirus crisis, the election, and more, subscribe to the Mother Jones Daily newsletter. Stymied at the national level, Republicans have spent the past couple of years focusing a lot of their energy at the state level. And they’ve had considerable success. Hundreds of abortion restrictions have been passed. Voter ID laws were enacted all over the country. Just recently half a dozen Republican-controlled states have started efforts to game the Electoral College in preparation for the 2016 election. So what’s next? Apparently state sales taxes. CBPP’s Elizabeth McNichol reports: In an alarming trend, governors in Louisiana, Nebraska, and North Carolina have proposed eliminating their state’s personal and corporate income taxes and raising the sales tax to offset the lost revenue….Proponents claim that eliminating income taxes and expanding the sales tax would make tax systems simpler, fairer, and more business-friendly, with no net revenue loss. In reality, they would tilt state taxes against middle- and lower-income households and likely undercut the state’s ability to maintain public services. Specifically they would: Raise taxes on the middle class. Require huge sales tax hikes. Levy those new, higher rates on a much larger number of transactions. Create an unsustainable spiral of rising rates and widening exemptions. Fail to boost state economies. Make state revenues much less stable. There’s more detail at the link. But the bottom line is pretty simple: This is a transparent effort to reduce taxes on the rich and increase taxes on the poor and the middle class. No matter how flowery their speech, Republicans remain hellbent on cutting taxes on the rich no matter what the consequences. Given how well the rich have done recently and how poorly the middle class is doing, this is nothing less than jaw dropping.
null
minipile
NaturalLanguage
mit
null
IN THE COURT OF APPEALS OF IOWA No. 3-1165 / 13-0491 Filed March 12, 2014 JERRY WESTCOTT and DARLENE WESTCOTT, Plaintiffs-Counterclaim Defendants-Appellees, vs. ROGER MALLI, Defendant-Counterclaimant-Appellant. ________________________________________________________________ Appeal from the Iowa District Court for Winneshiek County, Richard D. Stochl, Judge. Roger Malli appeals the district court’s finding that Jerry Westcott and Darlene Westcott are the legal title holders to 2.9 acres of disputed land. AFFIRMED. Kevin E. Schoeberl of Story & Schoeberl Law Firm, Cresco, for appellant. Erik W. Fern, Decorah, for appellees. Heard by Vogel, P.J., and Tabor and McDonald, JJ. 2 VOGEL, P.J. Roger Malli appeals the district court’s finding that Jerry Westcott and Darlene Westcott are the legal title holders to 2.9 acres of disputed land. Malli argues the Westcotts failed to prove by clear and convincing evidence they adversely possessed the property, and consequently, the district court erred in dismissing Malli’s counterclaims of trespass and conversion. Malli further argues the district court erred in admitting testimony of a statement made by a deceased realtor. Finally, Malli claims the court should have awarded him attorney fees. Because we conclude the Westcotts proved their adverse possession claim, the district court properly admitted the realtor’s statement, as well as properly denied Malli attorney fees, we affirm. I. Factual and Procedural Background On November 5, 1988, Jerry and Darlene Westcott entered into a real estate contract with Malli to buy “80 acres, more or less, and buildings on land legally described as: The South One-half (S ½) Southwest Quarter (SW ¼) of Section Thirty (30), Township 100 North, Range Nine (9), West of the 5th P.M., Winneshiek County, Iowa.” At the time of the contract, Malli owned a 2.9 acre parcel of land described as: “Lot 1 of the Northwest Quarter of the Southwest Quarter of Section 30, Township 100 North, Range 9 West of the 5th P.M., in Winneshiek County, Iowa.” The eighty plus acres was listed by Malli with the real estate company of Erickson-Prohaska, and Dick Cummings was the real estate agent. Cummings advised the Westcotts the property encompassed everything within the fence line, which included the 2.9 acres. No survey was ever done, though the 3 Westcotts received a plat map from Cummings, which was highlighted to include the 2.9 acre parcel. The Westcotts testified they believed they purchased the disputed parcel along with the eighty acres. Consequently, they made improvements on the parcel, such as replacing and repairing the fencing, constructing new gates, grading an unimproved road and putting gravel on its surface. They have also used the land for grazing their cattle and horses. They have cut down trees, removed a dilapidated shed, and mowed and sprayed the grass on the property. Additionally, between 1989 and 2010, the Westcotts have leased out their land—including this parcel—and the tenants have used the parcel to access other pastures on the property as well as graze their livestock. A pole barn, constructed by Malli in 1978 and sold as part of the Westcott purchase, sits on the eighty acres with approximately forty-six inches sitting across the property line of the 2.9 acres. This encroachment was not described in the original deed. A corral is also located on the 2.9 acres, north of the barn. The Westcotts replaced the corral’s fencing. 
Both the barn and the corral for the cattle and horses have been used by the Westcotts since they purchased the property from Malli. The Westcotts believed they were paying taxes on the disputed land because of the irregular shape of the property, as well as the fact their tax statement indicated they were paying taxes on 82.3 acres. To correct a prior deed, Malli received a quit claim deed to the 2.9 acres from Michael and Carolyn Junk in 1993.1 The deed was recorded on February 1 In 1988 Herb and Naomi Gossman sold the property to Michael Junk and Caroline Junk, who sold to Richard Janechek and Dennis Janechek in 1993. Upon selling a 150 acre tract of land to the Janacheks, the Junks learned from Herb Gossman that the 2.9 acre parcel had been conveyed by the Gossmans in the mid 1980’s to Malli. As there 4 24, 1994. Since 1988, Malli has only been on the parcel two to three times2 and has never interfered with the Westcotts’ use of the property. However, Malli has paid the property taxes for the parcel since 1993. In July of 2011, the Westcotts were informed by the Farm Service Agency that they did not have legal title to the 2.9 acres. Consequently, they filed suit to obtain title through adverse possession. Malli resisted, filing counterclaims of trespass and conversion. Trial was held on February 27, 2013. On February 28, the district court issued an order finding the Westcotts had proven the elements of adverse possession, such that they had established legal title to the property. Malli appeals. II. Standard of Review We review this action brought in equity de novo. Rubes v. Mega Life & Health Ins. Co., 642 N.W.2d 263, 266 (Iowa 2002). We are not bound by the district court’s factual findings but we may give them weight, particularly with regard to the credibility of witnesses. Id. III. Statement by Cummings We begin by addressing an evidentiary issue. Malli asserts the district court erred in admitting the statement of Cummings—now deceased—to the Westcotts that the land they were about to purchase included the 2.9 acre parcel. Malli argues the statute of frauds, see Iowa Code section 622.32 (2013), was no dispute over the ownership of the parcel, the Junks issued a quit claim deed to Malli. 2 There is some dispute as to how often Malli visited the property. Malli asserts he visited the property on numerous occasions, though the Westcotts claim Malli has only been on the parcel once, after the suit was filed. In its findings of fact, the district court stated: “Malli has been on the 2.9 acre parcel twice since 1988. Each occurred after this action was filed. He did not step foot on the land once in over ten years and only did so when he faced a claim adverse to his.” 5 prevents the use of parol evidence in interpreting the parties’ real estate contract. Additionally, the fact the real estate contract was a fully integrated document precludes the admission of any parol evidence in interpreting the contract. Malli also claims the statement was inadmissible based on relevance and hearsay. We review the admissibility of evidence for an abuse of discretion and hearsay evidence for correction of errors at law. State v. Dullard, 668 N.W.2d 585, 589 (Iowa 2003). Hearsay must be excluded as evidence unless admitted as an exception or exclusion under the hearsay rule or some other provision. Id. 
The district court admitted the testimony of Jerry Westcott, who stated: “We come back down to the north fence, and Dick Cummings said that everything that you see inside of the fences is the property.” 3 In admitting the statement, the following exchange occurred: The Court: But the question is: Are you offering this testimony to prove the matter asserted, that, in fact, this 2.9 acres is included within the 80 acres, not based on adverse possession but that he was correct in his assertion that the 2.9 acres is included? If the 2.9 acres, in fact, was not a part of the 80 acres, you’re not offering his testimony to prove that, in fact, it was. Counsel: Oh, correct, Your Honor. The Court: Then you’re not offering it to prove the matter asserted and it, therefore, does not become hearsay. Counsel: That is correct, Your Honor, yes. The Court: So all you’re offering it for is they heard him say that and they believed it. Is that why you’re offering the evidence? Counsel: Yes, and to show their—basically show their belief and their occupancy. The Court: Based on that clarification, the objection is overruled. 3 We note this statement is corroborated both by the aerial map showing that the property included the 2.9 acres as well as the testimony of Keith Hansen, who stated: “[Cummings] said that the north corner was from here pretty much straight back . . . but it was from that corner post to—on the road and then it—of course, it comes out into the center of the road which the county maintains.” 6 We agree with the district court’s interpretation that this statement was not admitted for the truth of the matter asserted. Rather, it was offered to show the Westcotts’ understanding they owned the 2.9 acre parcel because they believed it was sold as part of the “eighty acres more or less,” as reflected on their contract and deed in satisfaction of the contract. See Iowa R. Evid. 5.802. Consequently, the statement is not hearsay, and because it is also relevant, see Iowa R. Evid. 5.402, the district court properly admitted the statement. Moreover, as demonstrated by the record, this evidence was not admitted to interpret the real estate contract. Therefore, the statute of frauds and the parol evidence rule do not apply. See Garland v. Branstad, 648 N.W.2d 65, 69 (Iowa 2002) (stating the parol evidence rule forbids the use of extrinsic evidence to vary, add to, or subtract from a written agreement). Consequently, Malli’s arguments in this regard are without merit. IV. Adverse Possession Claim A. Adverse Possession “A party claiming title by adverse possession must establish hostile, actual, open, exclusive and continuous possession, under claim of right or color of title for at least ten years.” Garrett v. Huster, 684 N.W.2d 250, 253 (Iowa 2004). This doctrine is strictly construed. Id. “Although ‘mere use’ is insufficient to establish hostility or claim of right, certain acts, including substantial maintenance and improvement of the land, can support a claim of ownership and hostility to the true owner.” Louisa Cnty. Conservation Bd. v. Malone, 778 N.W.2d 204, 208 (Iowa Ct. App. 2009). Here, the Westcotts maintained the land by improving and maintaining the pole barn 7 and fencing, grading the road, constructing new gates, removing a shed, and mowing and spraying the grass. Malli, in contrast, never used nor maintained the land and, as the district court found, did not even venture onto the property during the Westcotts’ use of the disputed parcel until after this action was filed. 
Furthermore, Malli did not describe the pole barn’s forty-six inch encroachment onto the 2.9 acres in the deed to the Westcotts, indicating Malli believed he was selling all of the property within the fence line. Malli claims, however, that he gave the Westcotts permission to use the land in this manner, thereby negating the hostile element. In assessing Malli’s credibility, the district court stated: “Malli claims he informed the Westcotts of his ownership and allowed them free use of the property. The Westcotts deny any such conversation occurred. The court finds the Westcotts far more credible and concludes no such conversation occurred.” We lend significant weight to the district court’s determination of credibility because the court is in the best position to observe the witnesses and establish the veracity of their testimony. See Rubes, 642 N.W.2d at 266. Therefore, we rely on the district court’s conclusion that Malli did not in fact give permission to the Westcotts to use the parcel. Furthermore, no evidence corroborates Malli’s claim he gave the Wescotts permission to use the land. Malli did not pay taxes on to the property until 1993, which indicates he did not believe he owned the property. Without knowledge of ownership, no permission would have been granted. Therefore, given the Westcotts’ substantial maintenance and improvement of the land, as well as the fact they did not have Malli’s permission to use the land in such a manner, the hostile element is satisfied. 8 The Westcotts must also establish their possession was under claim of right or color of title. The claim of right element may be satisfied when the plaintiff takes and maintains the property in the manner of an owner, that is, the plaintiff’s conduct must evidence ownership. Louisa Cnty. Conservation Bd., 778 N.W.2d at 208. Since 1988, the Westcotts have made many improvements as well as maintained the disputed parcel. They have leased out their land, which included the 2.9 acres, between 1989 and 2010, and the tenants used the parcel in a manner consistent with the Westcotts’ exclusive ownership of the land. Additionally, the Westcotts have used the parcel to graze their own livestock. This use, which is consistent with the ownership of the parcel, is sufficient to establish the claim of right element. The Westcotts have also been able to satisfy the remaining elements of adverse possession. Their use of the land has been continuous since 1988, satisfying the ten-year requirement. As evidenced by the manner in which they used and maintained the land, their possession has also been actual, open, and exclusive. Other than the tenants and the Westcotts, no one else has used the disputed parcel, and had Malli ever ventured onto the property, he would have had notice of the Westcotts’ open use of the land. See Lawese v. Glaha, 114 N.W.2d 900, 904 (Iowa 1962) (“If possession is originally acquired in subordination to the title of the true owner, there must be a disclaimer of the title from him, an actual hostile possession of which he has notice or which is so open and notorious as to raise a presumption of notice.”). Therefore, the Westcotts proved by clear and convincing evidence all the elements necessary to establish 9 their claim of adverse possession, and the district court properly found title of the 2.9 acres was with the Westcotts. B. 
Easement by Prescription Claim Malli asserts the Westcotts’ easement by prescription claim, pled in the alternative, suffers from the same defects as their adverse possession claim. However, as discussed above, the Westcotts proved by clear and convincing evidence they adversely possessed the 2.9 acre parcel. Therefore, the easement by prescription claim is moot, and we decline to address the merits of Malli’s alternative argument. C. Malli’s Counterclaims Malli further argues the district court erred in dismissing his counterclaims of trespass and conversion. However, the district court correctly concluded the Westcotts established their adverse possession claim, thus obviating Malli’s counterclaims. Consequently, the district court properly dismissed the counterclaims of trespass and conversion, and we affirm. V. Attorney Fees Malli’s final argument asserts the district court erred in declining to award him attorney fees because the Westcotts breached the real estate contract and engaged in trespass and conversion. Malli also requests he be awarded attorney fees and costs associated with this appeal. We review the decision of whether or not to award attorney fees for an abuse of discretion. Boyle v. Alum-Line, Inc., 773 N.W.2d 829, 832 (Iowa 2009). Reversal is warranted only when the court rests its ruling on grounds that are clearly unreasonable or untenable. Id. 10 Malli has no right to attorney fees based either on statutory or contractual grounds. NevadaCare, Inc. v. Dep’t of Human Servs., 783 N.W.2d 459, 469 (Iowa 2010) (“As a general rule, unless authorized by statute or contract, an award of attorney fees is not allowed.”). Therefore, the district court did not abuse its discretion in not awarding attorney fees, and we decline to award attorney fees on appeal. Having considered all arguments presented by Malli, we affirm the decision of the district court. Costs of this appeal are assessed to Malli. AFFIRMED.
null
minipile
NaturalLanguage
mit
null
BARSTOW, Calif. (KSNV MyNews3) — Former Nevada Assemblyman Steven Brooks will return to a southern California courtroom May 7 after his court-appointed lawyer requested the time to review the evidence. Brooks had faced an evidence hearing Tuesday afternoon in San Bernardino County Superior Court to determine whether he'll go to trial on charges including resisting an officer, felony evading and assault on a police animal. Brooks pleaded not guilty a week ago. He could face more than five years in prison if he's convicted. The March 28 arrest on Interstate 15 near Victorville was Brooks' third this year. It came just hours after the North Las Vegas Democrat became the first Nevada lawmaker ever ousted from the Legislature.
null
minipile
NaturalLanguage
mit
null
Modest Prom Dresses – Look Beautiful, Flowing, and Elegant Modest Prom Dresses – Look Beautiful, Flowing, and Elegant. In the event that you are searching for a rich prom dress that doesn’t make them flaunt everything god gave you it could be testing particularly nowadays yet not inconceivable. Looking provocative and being unobtrusive can go together you simply need to work at it somewhat more than you would need to have done once upon a time. Some unassuming dresses that can be found have charming minimal topped sleeves so regardless you get the fantasy of being sleeveless yet the shoulders are to some degree secured. Neck areas are somewhat higher to keep cleavage from being seen. The backs of these dresses go no lower than the shoulder bones. Fabric is picked that won’t stick to the body exorbitantly however fall smoothly to the floor. They utilize rouching to give the dress style and mold and shroud any knots and knocks in the body. Domain waist styles are extraordinary in light of the fact that they can’t be low profile and the fabric just falls straight to the ground as opposed to sticking. In the most recent 10 years or so makers have seen that there was a genuine requirement for prom dresses that fit the age of the young lady wearing them as opposed to imitating a more established big name. It is getting progressively less demanding to locate a dress that does not indicate everything. They come in all the most loved and well known hues and styles. These new humble prom dresses are made to look like well known styles without so much skin appearing. The dresses don’t look antiquated and unattractive yet the exact inverse. They look wonderful, streaming, and exquisite. Here we have 14 great photos about Modest Prom Dresses – Look Beautiful, Flowing, and Elegant. We hope you enjoyed it and if you want to download the pictures in high quality, simply just click the image and you will be redirected to the download page of Modest Prom Dresses – Look Beautiful, Flowing, and Elegant.
null
minipile
NaturalLanguage
mit
null
Q: Concatenate dataset arrays in MATLAB Hi, I have many arrays of different lengths, and I want to create ONE long array (1D) out of all of them. Counterintuitively, vertcat gives me a dimension error, even though I do not see why the dimensions of my arrays should need to match. Am I using vertcat wrong? A: Your vectors are probably column vectors of different lengths (or matrices). Suppose A to D are the matrices you want to create a 1-D vector from. Try "flattening" them out using (:), and then vertcat, like this: long_1D_vector = [A(:); B(:); C(:); D(:)]; This gives a column vector; you may transpose it if you want a row vector instead: long_1D_vector = [A(:); B(:); C(:); D(:)].';
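Added note (not part of the original thread): the flatten-then-concatenate trick carries over to other array libraries. The following is a minimal Python/NumPy sketch of the same idea — the array names are invented for illustration, and ravel(order="F") is used so the flattening is column-major like MATLAB's A(:).

import numpy as np

# Arrays of different shapes, standing in for A..D in the answer above.
a = np.array([[1.0, 2.0], [3.0, 4.0]])  # 2x2 matrix
b = np.array([5.0, 6.0, 7.0])           # length-3 vector
c = np.array([[8.0], [9.0]])            # 2x1 column vector

# Once everything is flattened to 1-D, concatenation no longer needs matching dimensions.
long_1d = np.concatenate([a.ravel(order="F"), b.ravel(order="F"), c.ravel(order="F")])

print(long_1d)        # [1. 3. 2. 4. 5. 6. 7. 8. 9.]
print(long_1d.shape)  # (9,)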
null
minipile
NaturalLanguage
mit
null
763 So.2d 552 (2000) Joseph B. CAMMARATA and Judith A. Cammarata, Petitioners, v. Anna Marie JONES, as Trustee of the Michael F. Cammarata and Jennie M. Cammarata Living Trust, Respondent. No. 4D00-1543. District Court of Appeal of Florida, Fourth District. August 2, 2000. James B. Boone, Sunrise, for petitioner. William R. Black of William R. Black, P.A., Fort Lauderdale, for respondent. PER CURIAM. During a hearing on the Respondent's motion for leave to amend complaint, the trial court denied the Respondent's motion, but suggested the following: [W]hy don't we talk about the possibility of conforming—of a motion to, or entertaining a motion to conform the pleadings to the evidence and see if you can do it that way, because it's too late now to add an indispensable party and amend your pleadings a month before trial. . . . Well, the second thing you can do, you can take a voluntary dismissal and refile because it's always dismissed without prejudice. The third thing is you can move to conform the pleadings with regard to a piercing of the corporate veil situation. The Petitioners' counsel objected to the suggestions on the ground that it was improper for the judge to offer advice to opposing counsel. Ten days later, the Cammaratas filed their verified motion for disqualification, alleging that the judge's advice to opposing counsel demonstrated bias leading them to believe they would not get a fair trial. The motion to disqualify was denied, and this petition for writ or prohibition followed. With regard to the legal sufficiency of the motion for disqualification, Hayslip v. Douglas, 400 So.2d 553, 555-56 (Fla. 4th DCA 1981) and Fischer [v. Knuck], 497 So.2d [240] at 242 [(Fla.1986)], establish that the standard is whether the party has a "well-founded fear" of prejudice on the part of the trial judge. Michaud-Berger v. Hurley, 607 So.2d 441, 446 (Fla. 4th DCA 1992). Although the *553 trial judge in this case is known to be an extremely fair and impartial jurist and we have no doubt the Petitioners would receive a fair trial from her, the law requires us to consider this petition from the Cammaratas' perspective. See id. (stating, "We wish to make it clear that we have the utmost confidence in Judge Hurley's commitment to be fair and impartial and we have no doubt that petitioner would receive a fair trial from him. However, the law requires us to consider this petition from petitioner's perspective."). The judge in this case erred in suggesting to the Respondent's counsel alternatives on how to proceed strategically. See Crescent Heights XLVI, Inc. v. Sea-Air Towers Condo. Ass'n, 729 So.2d 420 (Fla. 4th DCA 1999); Shore Mariner Condo. Ass'n v. Antonious, 722 So.2d 247 (Fla. 2d DCA 1998). "Obviously, the trial judge serves as the neutral arbiter in the proceedings and must not enter the fray by giving `tips' to either side." Chastine v. Broome, 629 So.2d 293, 295 (Fla. 4th DCA 1993). Thus, we conclude the trial judge's suggestions to the Respondent's counsel caused the Petitioners to have a well-founded fear that they would not receive a fair trial. We grant the petition, but withhold issuance of the writ since we are confident that the trial judge will act in a manner consistent with this opinion. PETITION GRANTED. GUNTHER and POLEN, JJ., concur. STONE, J., dissents with opinion. STONE, J., dissenting. I would deny the petition. 
In my judgment, neither the trial court's comments to Respondent's counsel, nor the case law, mandate that the trial court grant Petitioners' motion to disqualify. The context for the court's comments was a hearing, held one month before trial, on Respondent's motion to add a new party defendant. The trial court's ruling on the motion was adverse to Respondent from the beginning of the hearing. However, in the course of the hearing, the court, while explaining to Respondent why it would not grant leave to amend at that late date, recognized that Respondent still had some options; such as, seeking to amend to conform to the evidence at trial, or entering a voluntary dismissal and refiling. In the course of the discussion, the court qualified its comments indicating that it was not necessarily favorably disposed to the first option, and acknowledged that reliance on that possibility would be risky. The other alternative did not require court approval. In response to Petitioners' objection that the court was giving advice to opposing counsel, the trial court responded, "I don't know, I'm just giving him his options. I have no idea what I'll do or what I'll determine." I would deem the authorities cited by the majority distinguishable. In Crescent Heights, the trial court, in its written order granting a temporary injunction to the plaintiff, directed the plaintiff to amend to add a count on an additional cause of action. In Shore Mariner, the trial court was seen as unfairly assisting one side by instructing the defendant to amend its answer to include certain defenses, the obvious indication being that leave would then be granted (which subsequently turned out to be the case). However, the opinions in Crescent Heights and Shore Mariner do not disclose the context in which the suggestions were made. Here, the entire thrust of the court's discussion with counsel was in response to counsel's argument, and explained where the court's adverse ruling left them. In Chastine, a first-degree murder case, the written suggestion to the prosecutor was conveyed exparte and essentially gave advice to the prosecutor, to the advantage of the state, against further cross-examination of the witness. The court was not only concerned with the content of the note, but the context in which the incident occurred. In any event, to the extent that decisions are be deemed indistinguishable from this *554 case, I would either note conflict or recede in part to acknowledge such distinctions. I cannot conclude on this record that Petitioners have a rational basis for concluding that they would not receive a fair trial by this judge.
null
minipile
NaturalLanguage
mit
null
Earlier today one of our readers (thanks, Alice) noticed that there was a lot more activity related to one of her servers, which was running phpMyAdmin. Upon further investigation it appears that her server had been compromised by exploitation of the vulnerability detailed in PMASA-2009-4. The attacker uploaded a lot of the same old types of tools, such as a misnamed EnergyMech IRC bot, a Perl-based UDP flooding tool, and an automated tool that attempts to exploit phpMyAdmin. It is now past time to update to phpMyAdmin 3.1.3.2 and/or to update firewall rules to prevent the public Internet from touching this web application.
Updated: Monday 06/22/2009 22:30 UTC I have heard more reports locally about activity which seems to point to phpMyAdmin scanning and exploitation. I haven't seen a copy of the exploit tool as of yet. If you happen to get a copy of the tool, or get packet captures of it at work, please feel free to send it to us.
null
minipile
NaturalLanguage
mit
null
Professor Who Helped Expose Crisis in Flint Says Public Science Is Broken - danso http://chronicle.com/article/The-Water-Next-Time-Professor/235136 ====== kafkaesq My favorite part: _Q: How exactly does one teach heroism to college students?_ _A. We teach aspirational ethics. What I teach my students is, You’re born heroic. I go into these animal studies, and heroism is actually in our nature. What you have to do is make sure that the system doesn’t change you, that our educational system doesn’t teach you to be willfully blind and to forget your aspirations, because that’s the default position. ... The main thing is, Do not let our educational institutions make you into something that you will be ashamed of._ _Q. And you sort of warn them that you’re preparing them for a life of possible sadness and alienation?_ _A. Well, yeah. There’s a price to be paid._ ~~~ mgregory22 What's sad about being alienated from fools? ~~~ bpchaps Probably something along the lines of, "If you call them fools, you probably deserve the alienation." Edit: Read that wrong, but I'll change my answer: I feel that a large amount of the issues in science and "exclusive knowledge groups" stem from alienation at its root. Exclusivity, obfuscation, lack of publishing, "no true Scotsman", "othering", etc. By calling them fools, you're only contributing to the sorts of alienation that causes these issues to exist. ~~~ progressive_dad There are so many issues here I hardly know where to begin. I've made similar arguments myself in the past and you have to be very careful to compartmentalize what you hope to say and accomplish when debating someone directly affected by these issues. What do you want to accomplish? Do you believe that all the world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly being digitized and locked up by a handful of private corporations? Do you believe that the complexity of modern science and the scale of the problems inherently requires increasing specialization and an inherent detachment for modern scientists from the impact of the work in a cultural context? Does that lead us toward a specialization in being a spokesperson for science? Should large institutions feel obligated to create such positions? Because if you're simply pining for the days of natural philosophers who were statesmen, lawyers, political figures, and leading minds embodied in one person you're at a dead end. Whatever you do, don't bring up ethics. A required college course wraps up any and all debate on that front! ------ progressive_dad Science is interesting and if you don't agree you can fuck off. [https://www.youtube.com/watch?v=-Fh_liyhIH8](https://www.youtube.com/watch?v=-Fh_liyhIH8) ~~~ dang Please don't do this here. ~~~ progressive_dad I was quoting Richard Dawkins quoting an editor of New Scientist magazine. I even linked the debate. Its not a new issue, its a very divisive issue, and I think that sums up the opposition quite succinctly. For further context this discussion was between Richard Dawkins and Neil Degrasse Tyson at the "Beyond Belief" panel discussion: [https://www.youtube.com/watch?v=-_2xGIwQfik](https://www.youtube.com/watch?v=-_2xGIwQfik) ~~~ dang I know, but quoting the most inflammatory thing an inflammatory figure has said is unconducive to substantive discussion. The last thing we want on HN is binary ragewars between people who identify one way and people who identify the other. 
We detached this subthread from [https://news.ycombinator.com/item?id=11031472](https://news.ycombinator.com/item?id=11031472) and marked it off-topic.
null
minipile
NaturalLanguage
mit
null
switch case statement keeps going straight to default option Posted 19 April 2013 - 12:10 AM Hi everyone, I've written this code, which reads in a temperature in degrees Kelvin and then gives you the option to convert it to either Celsius or Fahrenheit. However, my switch case statement for the convert function keeps going straight to the default option and I can't figure out why. I've tried searching around but I can't seem to find an answer specific enough. Can someone please help me? Here is the code:
Re: switch case statement keeps going straight to default option Posted 19 April 2013 - 11:18 AM Thanks everyone for the help! It seems my problem was solved by adding in the %*c buffer in the first scanf line, but why do I need to do this? Could someone please explain why this is necessary so I don't make the same mistake again? Thank you!
Re: switch case statement keeps going straight to default option Posted 19 April 2013 - 11:38 AM Quote Thanks everyone for the help! It seems my problem was solved by adding in the %*c buffer in the first scanf line, but why do I need to do this? Could someone please explain why this is necessary so I don't make the same mistake again? Thank you! This is because the scanf() call leaves the end-of-line character (the enter key) in the input buffer, so you must extract this character before your next character entry. This is only necessary when dealing with character input; numeric input skips leading whitespace by default. Also note that another way of removing this character is to have your character input skip the leading whitespace as well. You do this by putting a space in front of the character entry:
Re: switch case statement keeps going straight to default option Posted 19 April 2013 - 11:46 AM It is absolutely NOT necessary. You have to be aware of what's going on. There are more characters there than you expect, mostly because of hitting enter. You should allow for that by checking for valid characters.
Re: switch case statement keeps going straight to default option Posted 19 April 2013 - 12:08 PM Thanks baavgai, I understand what you're saying; it's just that I wrote the %*c after scanning a character in the second scanf() and left out the %*c for the first scanf() because it was only reading an integer and I didn't think this was necessary. But in doing so this caused my switch case statement to keep going to default until I went back to the first scanf() and put %*c after the %d. I just want to understand why that happened.
null
minipile
NaturalLanguage
mit
null
A putative calcium-ATPase of the secretory pathway family may regulate calcium/manganese levels in the Golgi apparatus of Entamoeba histolytica. Calcium regulates many cellular processes in protozoa, including growth, differentiation, programmed cell death, exocytosis, endocytosis, phagocytosis, fusion of the endosomes of distinct stages with phagosomes, fusion of phagosomes with lysosomes, and recycling the membrane. In Entamoeba histolytica, the protozoa responsible for human amoebiasis, calcium ions are essential for signaling pathways that lead to growth and development. In addition, calcium is crucial in the modulation of gene expression in this microorganism. However, there is scant information about the proteins responsible for regulating calcium levels in this parasite. In this work, we characterized a protein of E. histolytica that shows a close phylogenetic relationship with Ca2+ pumps that belong to the family of secretory pathway calcium ATPases (SPCA), which for several organisms are located in the Golgi apparatus. The amoeba protein analyzed herein has several amino acid residues that are characteristic of SPCA members. By an immunofluorescent technique using specific antibodies and immunoelectron microscopy, the protein was detected on the membrane of some cytoplasmic vacuoles. Moreover, this putative calcium-ATPase was located in vacuoles stained with NBD C6-ceramide, a Golgi marker. Overall, the current findings support the hypothesis that the presently analyzed protein corresponds to the SPCA of E. histolytica.
null
minipile
NaturalLanguage
mit
null
Q: How do I assign aggregate values to multiple variables on the same column in SQL? In SQL, I have an OrderTable with columns for dates, sale prices, and product ids. I am trying to get average prices over a date interval based on product id. For example, given the following table: SaleDate ProductId Price -------- --------- ----- 1/1/2020 1 1.00 1/2/2020 1 2.00 1/2/2020 2 1.00 1/2/2020 2 3.00 1/3/2020 2 2.00 1/3/2020 1 1.00 1/3/2020 3 2.00 I want the equivalent of the following: SELECT @t1 = AVG(Price) FROM OrderTable WHERE ProductId = 1 SELECT @t2 = AVG(Price) FROM OrderTable WHERE ProductId = 2 SELECT @t3 = AVG(Price) FROM OrderTable WHERE ProductId = 3 I know I can group them by ProductId, like so: SELECT ProductId, AVG(Price) FROM OrderTable GROUP BY ProductId And get the average price for each ProductId, but how do I assign those to multiple variables? I want to do something like this (this doesn't work): SELECT @t1 = AVG(Price) WHERE ProductId = 1, @t2 = AVG(Price) WHERE ProductId = 2, @t3 = AVG(Price) WHERE ProductId = 3 FROM OrderTable A: Use conditional aggregation to put them all in one row: select @t1 = avg(case when productid = 1 then price end), @t2 = avg(case when productid = 2 then price end), @t3 = avg(case when productid = 3 then price end) from OrderTable; I assume the syntax @t1 = is what your database uses for assigning variables.
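A quick way to convince yourself that the conditional-aggregation query behaves like the three separate assignments is to run it against the sample table. The following is a self-contained sketch using Python's built-in sqlite3; the names t1–t3 simply mirror @t1–@t3 in the question (SQLite has no T-SQL-style variable assignment, so the values are unpacked from the single returned row instead).

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE OrderTable (SaleDate TEXT, ProductId INTEGER, Price REAL)")
conn.executemany(
    "INSERT INTO OrderTable VALUES (?, ?, ?)",
    [
        ("1/1/2020", 1, 1.00), ("1/2/2020", 1, 2.00),
        ("1/2/2020", 2, 1.00), ("1/2/2020", 2, 3.00),
        ("1/3/2020", 2, 2.00), ("1/3/2020", 1, 1.00),
        ("1/3/2020", 3, 2.00),
    ],
)

# Each AVG() ignores the rows where its CASE expression is NULL,
# so all three per-product averages come back in a single row.
t1, t2, t3 = conn.execute("""
    SELECT AVG(CASE WHEN ProductId = 1 THEN Price END),
           AVG(CASE WHEN ProductId = 2 THEN Price END),
           AVG(CASE WHEN ProductId = 3 THEN Price END)
    FROM OrderTable
""").fetchone()

print(t1, t2, t3)  # roughly 1.33, 2.0, 2.0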
null
minipile
NaturalLanguage
mit
null
Q: Who are these Lego superheroes? Recently my wife bought me this amazing gift: it's a frame with a set of Lego superheroes/villains from the Marvel and DC universes. Image below. Now, I know most of them, but I'm not able to place a few, here's how they sit (the ones with * next to them are the unknowns): [Left to right] Top Row Captain America Spiderman Wolverine The Hulk Unknown 1* - Falcon? Unknown 2* Second Row from Top The Flash The Joker Batman Me (IT Nerd Guy) Superman Iron Man Second Row from Bottom Nick Fury HawkEye* Deadpool* Loki Unknown 3* - Antman? Unknown 4* Bottom Row Wonder Woman Batman* Aqua Man Unknown 5* Catwoman Thor Who are these Lego Superheroes and Villains? A: First Row Captain America I'm pretty sure this is The Avengers version of the Cap minifig, although it's hard to tell. I'm basing this judgment off his belt, mainly, which in the question image appears to be very thick red; Avengers cap is the only one who has thick red bands on his belt in those locations. This variant is only available with the Captain America's Avenging Cycle set, released as a tie-in for the 2012 Avengers movie. Spider-man Based on the defined musculature on his chest, this is the "Ultimate Spider-man" variant of the Spidey minifig. Ultimate Spidey is available in almost every Spidey set released between 2012 and 2016 The rope in his hands is, maybe obviously, meant to represent one of Spidey's web-ropes Wolverine This is the "Astonishing Wolverine" variant (you can tell because of the blue vertical stripes on either side of his chest. This is a rare variant (although Wolvie is a pretty rare minifig to begin with), and is only available in the Wolverine's Chopper Showdown set Hulk This is notably the Hulk minifigure, which was given away as a promotional item with LEGO Store purchases in May 2012. The Hulks that you can get in the regular sets are more...hulking. Falcon Based on the guns, I think this one might be an off-brand, not the official Lego figure. I obviously can't be certain, but the picture I've included above is of an off-brand, and it's the only one I could find of Falcon carrying those guns. In any case, classic Falcon is only available in the Hulk Lab Smash set (which also doesn't have guns for Falcon) Cyborg Cyborg is a newcomer; his minifig was only released in September of 2015. You can get him as a Dimensions Set, or from the Darkseid Invasion set, which is also the only place you can get Hawkman Second Row The Flash This is the "New 52" Flash variant, available in Batman: The Riddler Chase and Gorilla Grodd Goes Bananas. The rods he's holding are presumably meant to represent the Speed Force, and are available in lots of sets (the part started out as a lightsaber blade); as far as I know, they're not actually distributed with Flash. Joker Based on the green checkered waistcoat, this is the 2012 version of the Joker minifig. He's available in a few of the non-movie tie-in Batman sets Batman This is actually a special Batman figure; you can only get the black costume with wings from the Arkham Asylum Breakout set Computer Programmer (not, strictly speaking, a superhero) This happy fellow isn't in any sets; you can only get him by buying a Series 7 minifigure bag Superman This is the comic variant Superman minifig. He debuted as a Comic-Con exclusive in 2011, but has since appeared in a few sets, such as Darkseid Invasion and Brainiac Attack Iron Man Mk. 6 You can tell it's the Mark 6 because of the triangular Arc Reactor light on his chest. The Mk. 
6 is only available in two sets: Loki's Cosmic Cube Escape and Iron Man vs. Fighting Drone Third Row Nick Fury This is the "Ultimate Spider-man" Nick Fury minifigure; he's only available in the Spider-Man: Spider-Cycle Chase set. Hawkeye Unlike the rest, this one doesn't appear to be an official Lego minifigure; the official one has red accents, not purple. This purple variant seems to be a custom job, from this Etsy store Carnage So far, Carnage has only appeared in a single set: Carnage's SHIELD Sky Attack Loki It looks like you have the Avengers variant of Loki, which appears in a few of the tie-in sets for the 2012 Avengers film (such as Loki's Cosmic Cube Escape); however, the standard Loki minifigure comes with a bright green cape, while the one you have appears to be wearing a black cape (although that may just be a trick of the light; it's hard to tell). Also note that Loki's Scepter comes in two pieces (one of which was later re-used in Ninjago); in most official images these two pieces are together, but in yours he's holding one piece in each hand. Ant-Man This seems to be another custom job, available from this Etsy store; the design is based on the classic Ant-Man look, while official products tend to follow the design from the films. Brainiac Brainiac is only available in the Brainiac Attack set. Fourth Row Wonder Woman This is the comics variant of Diana, available in Superman vs. Power Armour Lex and a Dimensions set. She also apparently appears in an exclusive The Lego Movie tie-in set distributed to members of the press. Bizarro-Batman By most accounts, Batzarro is a very rare minifigure; as near as I can tell, the only way to get him is with the home video release of the Justice League vs. Bizarro League Lego movie. I wouldn't be entirely surprised if this one was off-brand. Aquaman This is the comics Aquaman, rather than the guyliner-tastic Justice League variant, available in a few Aquaman sets release pre-2017 Hawkman As I mentioned above, Hawkman is only available in the Darkseid Invasion set. Note that the set comes with two different sets of wings: one extended (for flying) and one folded up (for going through narrow doorways). Whoever put together your product put both sets on him, hence why he looks like he has butterfly wings Catwoman Catwoman is a bit unusual because no two sets seem to have quite the same version of her minifig; I'm pretty sure, however, that you have the variant appearing in Catwoman Catcyle Chase (which also contains a diamond and that brown whip) Thor That looks like the Avengers version of the Thor minifig, which appears in a few of the sets that tie into the 2012 Avengers film (such as Hulk's Helicarrier Breakout) A: Links are to the Brickipedia pages for the heroes/villains: Unknown 1 Is indeed Falcon Unknown 2 is Cyborg Unknown 3 is indeed Antman Unknown 4 is Braniac Unknown 5 is Hawkman A: Bottom row "Unknown 5" is Hawkman. https://en.wikipedia.org/wiki/Hawkman Not sure about the rest.
null
minipile
NaturalLanguage
mit
null
Most first-timers to Jordan make a beeline for Petra, then after spending a day or two exploring the site – an ancient metropolis carved from stone that’s like nothing else on earth – head off to float in the Dead Sea. I prefer my travels a little more off the beaten path, which is why I was so excited to learn about the just-launched Jordan Trail, a 400-mile hiking route that runs the length of the country from the ruins of Umm Qais up north all the way to the Red Sea in the south. As I trudged along an old shepherd’s path toward a Jordanian eco-lodge that shimmered like a mirage in the heat one recent May afternoon, however, I wondered what I’d signed on for.
Kim Brown Seely is a contributing editor at Virtuoso Life magazine and 2016 Lowell Thomas Travel Journalist of the Year. She has written about far-flung places for National Geographic Adventure, Travel & Leisure, Town & Country, Outside, Coastal Living, and Sunset. When she’s not on the road she divides her time between Seattle, Washington, and Hailey, Idaho, and can be found at kimbrownseely.com.
null
minipile
NaturalLanguage
mit
null
Evelyn Nakano Glenn Evelyn Nakano Glenn is a Professor of the Graduate School at the University of California, Berkeley. In addition to her teaching and research responsibilities, she served as Founding Director of the University's Center for Race and Gender (CRG). The CRG is a leading U.S. academic center for the study of intersectionality among gender, race and class social groups and institutions. In June 2008, Professor Glenn was elected President of the 15,000-member American Sociological Association. She served as President-elect during the 2008–2009 academic year, assumed her presidency at the annual ASA national convention in San Francisco in August 2009, served as President of the Association during the 2009–2010 year, and continued to serve on the ASA Governing Council as Past-president until August 2011. Her Presidential Address, given at the 2010 meetings in Atlanta, was entitled "Constructing Citizenship: Exclusion, Subordination, and Resistance," and was printed as the lead article in the American Sociological Review. Professor Glenn's scholarly work focuses on the dynamics of race, gender, and class in processes of inequality and exclusion. Her early research documented the work and family lives of heretofore neglected women of color in domestic service and women in clerical occupations. This work drew her into historical research on the race and gender structure of local labor markets and the consequences of labor market position on workers, including the forms of resistance available to them. Most recently she has engaged in comparative analysis of race and gender in the construction of labor and citizenship across different regions of the United States. Evelyn Nakano Glenn is author of Issei, Nisei, War Bride (Temple University Press), Unequal Freedom (Harvard University Press, 2002), "From Servitude to Service Work" (Signs: Journal of Women in Culture and Society), and Forced to Care: Coercion and Caregiving in America (Harvard University Press, 2010). She is also editor of Mothering (Routledge), and Shades of Difference: Why Skin Color Matters (Stanford University Press, 2009). Additionally, Professor Glenn is the author of many journal articles, reviews, and commentaries. A review of her most recent book, Forced to Care stated, "Glenn's prose is concise and elegantly crafted, and despite the complexity of the subject matter, the reader is swept along with the force of the narrative structure." Biography Born in 1940 in California to Nisei (second-generation) parents in California, Glenn (a Sansei) was incarcerated from 1942 to 1945, along with more than 120,000 other Japanese Americans, in concentration camps. Glenn's family was first assigned to live in the horse stables at a race track in Turlock, California, and thereafter was sent to the Gila River camp in the Arizona desert, and then to the Heart Mountain camp in the high country of Wyoming. When her family was released in 1945 they moved to Chicago, where Glenn was raised until the age of 16. Professor Glenn received her BA from the University of California, Berkeley, and her PhD from Harvard University. Her first academic position was as Assistant Professor of Sociology at Boston University; she has also taught at Florida State University, Binghamton University, and was a Visiting Professor at the University of Hawaii. She has been at the University of California, Berkeley since 1990. 
Teaching Glenn has taught a variety of courses having to do with research methods and theory in the social sciences, women and work, the Asian American family, comparative gender systems, race and social structures in the United States, and graduate seminars in gender, race, and class. Associations American Sociological Association, 1972–present (elected President in June 2008) Society for the Study of Social Problems, 1976–present (served as President, 1998–1999) Sociologists for Women in Society, 1983–present (served as Feminist Lecturer for Outstanding Sociology, 2008) Pacific Sociological Association Council on Contemporary Families Massachusetts Sociological Association (President, 1979–80) Association for Asian American Studies Awards 2015 Asian American & Asian Disapora Studies Award, UC Berkeley 2013 KQED Asian American Local Heroes Award 2012 Lee Founders Award for Life Achievement("in recognition of significant achievements over a distinguished career, that have demonstrated a long-time devotion to the ideals ... and especially to the humanistic tradition of sociology ...") awarded by the Society for the Study of Social Problems 2011 at its national convention, August 2012. 2011 C.Wright Mills Award (finalist), for her book "Forced to Care," awarded by Society for the Study of Social Problems. 2007 Sociologists for Women in Society, Feminist Lecturer for Outstanding Feminist Sociology. 2005 Jessie Bernard Award, American Sociological Association "in recognition of outstanding scholarship that has enlarged the horizons of sociology to encompass fully the role of women in society". 2004 Outstanding Book Award, American Sociological Association Section on Asia and Asian Americans, for her book "Unequal Freedom". 2004 Distinguished Contribution to Scholarship Award, Pacific Sociological Association, for her book "Unequal Freedom". 2003 Oliver Cromwell Cox Award, American Sociological Association, Section on Racial and Ethnic Minorities, for "Unequal Freedom". 2003 Outstanding Achievement in Scholarship Award, American Sociological Association Section on Race, Gender, and Class, for "Unequal Freedom". 2001 Visiting Scholar, The Havens Center, University of Wisconsin, Madison, WI. 1994 Outstanding Alumna Award, Japanese Women Alumnae of the University of California. 1994 Nikei of the Biennium Award for Outstanding Contributions to Education, Japanese American Citizens League (awarded at National Convention, Salt Lake City, UT). 1993 Association of Black Women Historians, Leititia Woods Brown Memorial Article Prize for "From Servitude to Service Work: Historical Continuities in the Racial Division of Paid Reproductive Labor." Selected publications Books Forced to Care: Coercion and Caregiving in America, Harvard University Press, 2010. Shades of Difference: Why Skin Color Matters (ed.) Stanford University Press, 2009. Unequal Freedom: How Race and Gender Shaped American Citizenship and Labor, Cambridge: Harvard University Press, 2002. Mothering: Ideology, Experience and Agency, Evelyn N. Glenn, Grace Chang, and Linda Forcey (eds), New York: Routledge, 1994. Issei, Nisei, Warbride: Three Generations of Japanese American Women in Domestic Service, Philadelphia: Temple University Press, 1986. Hidden Aspects of Women's Work, Christine Bose, Roslyn Feldberg, and Natalie Sokoloff, with the Women and Work Research Group (eds), New York: Praeger, 1987. Recent articles "Caring and Inequality" in Sharon Harley et al. 
(eds), Women's Labor in the Global Economy: Speaking in Multiple Voices, Rutgers University Press, 2007. "Whose Public Sociology? The Subaltern Speaks, But Who Is Listening?" in Dan Clawson, Robert Zussman, Joya Misra, Naomi Gerstel, Randall Stokes, Douglas L. Anderton and Michael Burawoy (eds), Public Sociology: Fifteen Eminent Sociologists Debate Politics and the Profession in the Twenty-first Century, Berkeley: University of California Press, 2007. "Race, Labor, and Citizenship in Hawai'i," in Donna Gabaccia and Vicki Ruiz (eds.) American Dreaming, Global Realities: Rethinking U.S. Immigration History, Urbana and Chicago: University of Illinois Press, 2006. "Race, Labor and Citizenship in Hawai'i," in Donna R. Gabaccia and Vicki L. Ruiz (eds.) American Dreaming, Global Realities: Rethinking U.S. Immigration History (Urbana & Chicago: University of Illinois Press, 2006). "Citizenship and Inequality," in Elizabeth Higginbotham and Margaret L. Anderson (eds.), Race and Ethnic in Society: The Changing Landscape (Wadsworth, 2006). "Gender, Race and Citizenship," om Judith Lorber (ed) Gender Inequality: Feminist Theory and Politics (Roxbury, 2005). "Citizenship and Inequality: Historical and Global Perspectives" in A. Kathryn Stout, Richard A. Dellobuono, William Cambliss (eds), Social Problems, Law, and Society (Rowman and Littlefield, 2004). "From Servitude to Service Work: Historical Continuities in the Racial Division of Paid Reproductive Labor," Signs: Journal of Women in Culture and Society, Fall 1992). (Reprinted in 12 separate anthologies and collections). References External links American Sociological Association Center for Race and Gender Category:1940 births Category:Living people Category:American social sciences writers Category:University of California, Berkeley faculty Category:Women's studies academics Category:Harvard University alumni Category:American women of Japanese descent Category:Japanese-American internees Category:American academics of Japanese descent Category:Place of birth missing (living people) Category:Binghamton University faculty Category:University of California, Berkeley alumni
null
minipile
NaturalLanguage
mit
null
In order to detect the molecular mechanisms whereby human immunodeficiency virus (HIV-1) functionally alters or kills CD4+ T lymphocytes, Jurkat cell lines transfected to express different HIV-1 proteins were established. Cells constitutively expressing functional gp120 and gp41 showed no direct alterations in their cell growth but did not spontaneously fuse. In contrast, HIV-infected cells or HIV-envelope transfected cells could be induced to form syncytia and die upon co-culture with naive cells. Such infected or transfected cells undergoing cell fusion and cell death displayed dramatic alterations in their intracellular signalling pathways, as evaluated by changes in tyrosine phosphorylation of intracellular substrates. These phosphorylations include the induction of tyrosine phosphate on substrates of 95 and 30 kilodaltons (kd), the latter event displaying kinetic correlation with syncytium formation and cell death. Constructs for the inducible or constitutive T cell expression of HIV-1 nef, vpu, rev, and tat were prepared to permit the functional evaluation of each of these HIV-1 genes both in vitro and in the scid/hu mouse model. Peripheral blood CD4+ T lymphocytes infected in vitro with HIV-1 were found to give rise to CD4-/CD8- gamma/delta T lymphocytes that did not express interleukin-2 (IL-2) following stimulation. In studies on the peripheral blood mononuclear cells of patients with HIV infection, an increased proportion of cells expressing the gamma/delta T cell receptor was identified. The heavy and light chain antibody genes derived from two human anti-HIV envelope gp41 monoclonal cell lines were cloned, sequenced, and functionally expressed in recipient B cell lines. The genes from one of the two monoclonal lines conferred anti-gp41 specificity to transfected cell lines. The heavy and light chain variable region genes from this antibody (98-6) were genetically linked to the T cell receptor (TCR) constant alpha and beta regions, respectively, to generate chimeric antibody/TCR genes capable of expression in T lymphocytes.
null
minipile
NaturalLanguage
mit
null
The Opening Ceremony of the 2018 Winter Olympic Games takes place Friday in South Korea, yet these games already have a winner: the Olympic spirit itself. North and South Korean athletes will march under the same flag during the ceremony, signaling that at least for a fortnight or so the ancient tradition of the Olympic Truce can have a modern manifestation. The symbolic gesture has been accompanied by a slight thaw in the conflict between the Koreas that has also iced more productive relations between the U.S. and China. None of this means the intractable issues isolating the North from South — let alone the rest of the world — are solved. Most notably, the threat from the North’s nuclear weapons and ballistic missile development still looms, and the heinous nature of Kim Jong Un’s regime has not changed in any fundamental way. But the Olympics do provide a diplomatic opening, and at a minimum the North’s participation means the regime won’t menace the games taking place south of the Demilitarized Zone. It’s not the first time the Koreas have marched together during a sporting event, but it will be the first time they have fielded a unified hockey team. That squad will also feature a Minnesotan, Marissa Brandt, who was adopted from her native South Korea when she was just months old. Marissa’s sister, Hannah, is one of eight players with state ties to make the U.S. women’s hockey team. Overall there are 20 Minnesotans (and one alternate) on Team USA, the third-most after Colorado (31) and California (21). The Brandt sisters’ feel-good story will certainly interest NBC, and the network should draw more viewers because of its decision to finally air prime-time coverage live across the country, reflecting the reality of real-time results instantly posted on internet news sites and social-media feeds. Of course, every story won’t be a feel-good tale, and in fact there are already several unfortunate Olympic developments, including the NHL’s shortsighted decision not to participate. That’s a loss for the Olympics but also for the league, which gave up an opportunity to showcase its increasingly international talent base. Far more significant are the growing calls for a congressional investigation of the roles played by the U.S. Olympic Committee and USA Gymnastics in the sexual abuse of athletes by former sports doctor Larry Nassar. There’s also the sordid story of Russian athletes disqualified for doping. The rot went all the way to the top of Russia’s Olympic Committee and was so egregious that the International Olympic Committee banned Russia’s team from marching under its flag. And yet Russia will still send about 170 participants competing as “Olympic athletes from Russia,” not too far below the 232 who competed — and in many cases, cheated — in Sochi four years ago. The disconnect sends a mixed message about just how tough the IOC plans to be on doping. A worried world needs the Olympic spirit more than ever. Yes, geopolitics and scandals loom large, but that’s always been part of the narrative. But so too has hope for peace through sport, and at least so far the Pyeongchang Games have delivered.
null
minipile
NaturalLanguage
mit
null
May, 2017 Dear Uncle Colin, I've been given $u = (2\sqrt{3} – 2\i)^6$ and been told to express it in polar form. I've got as far as $u=54 -2\i^6$, but don't know where to take it from there! – Not A Problem I'm Expecting to Resolve Hello, NAPIER, and thanks for your Someone recently asked me where I get enough ideas for blog posts that I can keep up such a 'prolific' schedule. (Two posts a week? Prolific? If you say so.) The answer is straightforward: Twitter Reddit One reliable source of interesting stuff is @WWMGT – What Would Martin Gardner Tweet? Dear Uncle Colin, I'm told that $z=i$ is a solution to the complex quadratic $z^2 + wz + (1+i)=0$, and need to find $w$. I've tried the quadratic formula and completing the square, but neither of those seem to work! How do I solve it? – Don't Even Start Contemplating It turns out I was wrong: there is something worse than spurious pseudocontext. It's pseudocontext so creepy it made me throw up a little bit: This is from 1779: a time when puzzles were written in poetry, solutions were assumed to be integers and answers could be a bit creepy… Dear Uncle Colin, I recently had to decompose $\frac{3+4p}{9p^2 – 16}$ into partial fractions, and ended up with $\frac{\frac{25}{8}}{p-\frac{4}{3}} + \frac{\frac{7}{8}}{p-\frac{4}{3}}$. Apparently, that's wrong, but I don't see why! — Drat! Everything Came Out Messy. Perhaps Other Solution Essential. Hi, there, DECOMPOSE, and thanks for your message – and your In this month's episode of Wrong, But Useful, @reflectivemaths1 and I are joined by consultant and lapsed mathematician @freezingsheep2. We discuss: Mel's career trajectory into 'maths-enabled type things that are not actually maths', although she gets to wave her hands a lot. What you can do with a maths degree, There is a danger, when your book comes plastered in praise from people like Art Benjamin and Ron Graham, that reviewers will hold it to a higher standard than a book that doesn't. That would be unfair, and I'll try to avoid that. What it does well This is a Dear Uncle Colin, In an answer sheet, they've made a leap from $\arctan\left(\frac{\cos(x)+\sin(x)}{\cos(x)-\sin(x)}\right)$ to $x + \frac{\pi}{4}$ and I don't understand where it's come from. Can you help? — Awful Ratio Converted To A Number Hello, ARCTAN, and thank you for your message! There's a principle I want to introduce Last week, I wrote about the volume and outer surface area of a spherical cap using different methods, both of which gave the volume as $V = \frac{\pi}{3}R^3 (1-\cos(\alpha))^2(2-\cos(\alpha))$ and the surface area as $A_o = 2\pi R^2 (1-\cos(\alpha))$. All very nice; however, one of my most beloved heuristics fails Dear Uncle Colin, One of my students recently attempted the following question: "At time $t=0$ particle is projected upwards with a speed of 10.5m/s from a point 10m above the ground. It hits the ground with a speed of 17.5m/s at time $T$. Find $T$." They used the equation $s
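As a worked sketch of the last (truncated) mechanics question above — this is just one route, taking upward as positive and assuming $g = 9.8$ m/s², and the full post may well proceed differently: the particle lands moving downwards at 17.5 m/s, so $v = -17.5$, $u = 10.5$ and $a = -9.8$ in $v = u + aT$. Then $-17.5 = 10.5 - 9.8T$, which gives $9.8T = 28$ and $T = \frac{28}{9.8} = \frac{20}{7} \approx 2.86$ seconds.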
null
minipile
NaturalLanguage
mit
null
The 20 Best Offensive Players From Week 3
AJ Green went nuts against the Ravens on Sunday. Was he this week's best offensive player?
Week 3 games aren't over yet, so maybe the title of this article is a little misleading. It should read something like The 20 Best Offensive Players From the Thursday and Sunday Games in Week 3. That just doesn't flow all that well. As you know, we like math here at numberFire. Our algorithms help tell a better story about sports -- they're able to dig through the nonsense, helping us look at things that matter on the court, field or rink. With football, we love our Net Expected Points (NEP) metric, which measures the number of points a player adds (or loses) to his team versus what he's expected to add. Rather than counting statistics like yards, touchdowns and receptions, NEP looks at down-and-distance situations and field position and relates these instances to history. When a player outperforms what's happened in the past, he sees a positive expected points value on the play. When he doesn't, his expected points gained on the play is negative. All of these little instances add up, then, to be a player's Net Expected Points total. Using a formula that compares individual single-game performance to history, the numberFire Live platform takes this Net Expected Points formula and assigns a rating to a player's performance. Each week, that's what we'll show here -- the 20 best ratings from the Thursday and Sunday games. Here are Week 3's results:
Rank  Player            Position  Rating
1     AJ Green          WR        100
2     Steve Smith       WR        100
3     Julio Jones       WR        100
4     Devonta Freeman   RB        99
5     Rueben Randle     WR        97
6     Larry Fitzgerald  WR        97
7     Rishard Matthews  WR        97
8     Tyrod Taylor      QB        97
9     Tom Brady         QB        96
10    Gary Barnidge     TE        95
11    Karlos Williams   RB        94
12    Chris Johnson     RB        94
13    Chris Thompson    RB        93
14    Frank Gore        RB        91
15    Keenan Allen      WR        89
16    Adrian Peterson   RB        89
17    Matt Ryan         QB        89
18    Jimmy Graham      TE        88
19    Carson Palmer     QB        87
20    Cam Newton        QB        87
- AJ Green was this week's top player, though Steve Smith and Julio Jones both had perfect scores as well. Green took advantage of what looks to be a very average-at-best Baltimore secondary, and he was able to add over 20.00 expected points for the Bengals on Sunday on his 10 receptions and 227 yards.
- How about Devonta Freeman? As one of the least efficient backs in the NFL last year, per our numbers, Freeman was a monster without Tevin Coleman yesterday, adding over 8.00 expected points on the ground alone. His per-rush NEP was 0.28, which is roughly 0.30 points better than what an average running back in the NFL saw last year. And it's the complete opposite of his average from last year (-0.29) on 65 carries.
- Larry Fitzgerald just keeps performing. Through three weeks, he has the third best Reception NEP total, behind only Julio Jones and Antonio Brown.
- Tyrod Taylor entered the week ranked 16th in Passing NEP per drop back, but his Week 3 performance will more than likely shoot him close to the top 10. The Bills look like they have their quarterback.
- Gary Barnidge was the most efficient and effective tight end this week. And, not surprisingly, he faced the Raiders. So far this year, the Raiders have allowed a tight end to hit this list in all three weeks of the season.
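To make the expected-points bookkeeping described above concrete, here is a deliberately toy sketch in Python. The expected-points function and the three plays are invented for illustration — numberFire's actual model is fitted from historical play-by-play data and is not public — but the accounting step (points scored plus the change in expected points on every play, summed over a game) is the idea the article describes.

# Toy stand-in for an expected-points model: EP as a function of the
# situation (down, yards to go, yard line). The coefficients are made up.
def expected_points(down, to_go, yard_line):
    return 0.06 * yard_line - 0.8 * (down - 1) - 0.05 * to_go

# Each play records the situation before the snap, the situation afterwards
# (None if the drive ended), and any points scored on the play.
plays = [
    {"before": (1, 10, 25), "after": (1, 10, 47), "points": 0},  # 22-yard gain
    {"before": (1, 10, 47), "after": (2, 12, 45), "points": 0},  # 2-yard loss
    {"before": (2, 12, 45), "after": None,        "points": 7},  # touchdown
]

def net_expected_points(plays):
    total = 0.0
    for p in plays:
        before = expected_points(*p["before"])
        after = expected_points(*p["after"]) if p["after"] else 0.0
        total += p["points"] + after - before  # value added relative to expectation
    return total

print(round(net_expected_points(plays), 2))  # 6.0 expected points added over this toy drive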
null
minipile
NaturalLanguage
mit
null
A few days before the October 21 federal election, Vancouver Mayor Kennedy Stewart issued a public statement urging voters to vote for anybody but the Conservative Party. The former NDP MP stated that a Conservative government would be “a disaster for the city.” To support this unusual intervention, Stewart asserted that a Conservative government would provide less help (i.e., federal money) for affordable housing and urban transit and would be less open to helping with the opioid crisis through measures such as providing a “safe” supply of drugs to addicts at government expense. Kennedy was elated with the outcome, a Liberal minority government supported by the NDP. Stewart wasn’t the only happy mayor. Jonathan Coté is mayor of nearby New Westminster and chair of the mayors’ council at the regional transportation authority, TransLink. He said he was encouraged that the Liberal and NDP platforms included “some pretty significant commitments for investing in public transit.” Coté added, “I think we’re cautiously optimistic that it looks like a coalition government will partner with major cities.” Before the election, the Big City Mayors’ Caucus, a collection of 22 cities that’s part of the Federation of Canadian Municipalities (FCM), issued a call for all federal party leaders to commit to “permanent” funding for transit. The mayors demanded the Trudeau government’s 10-year federal transit plan be perpetuated. This followed an earlier FCM statement calling on all federal parties to commit to making low-income housing more accessible. Toronto Mayor John Tory supported the call by the Big City Mayors’ Caucus and praised the current federal transit plan, which has helped pay for improvements in Quebec City, Edmonton, Toronto, and other cities. The ironically named mayor called the Trudeau government “good partners” and added, “We need them to make solid commitments to us for the long term.” Going into the election the Trudeau government had, in fact, committed to spending more than $1 billion to improve public transit in Toronto. Canadian brokerage politics on full display. Making good on the promise Trudeau’s Liberals won almost all of their seats in the cities. The leader hardly campaigned anywhere else. Of the 65 electoral districts with the densest populations of at least 2,500 people per square kilometre, the Liberals won 54, the NDP took most of the rest, and the Conservatives won none. It seems obvious Trudeau will give the mayors what they’re asking for. The federal government has no way to magically create money. Trudeau’s government will get the money by taxing residents of cities and then giving the money to city governments. There seems to be an unnecessary step there. But that is not the whole story. Some of the money will also come from taxing residents of rural areas, where most people voted Conservative. These residents will do their part to help the cities with their pressing problems.
null
minipile
NaturalLanguage
mit
null
Post by Sylvia on Jul 14, 2017 23:39:54 GMT -6 Chris and I "met" through Nancy's site many years ago. She and I have become good friends, although we have never actually met, we chat daily, she is now part of my Family. Bob is now 85ish and underwent Major heart surgery in February this year, he rallied well, but has gone downhill this week. Can you please send your good wishes, prayers in their direction for a recovery? xx I want to encourage youngsters (male or female) to get into the kitchen and enjoy cooking. TASTE it!
null
minipile
NaturalLanguage
mit
null
They are playable but you have to use the old master file with the old scenarios, which would be just like the old game. The old ones weren't made to use resources. the nato counters and graphic mods work fine. The scenario bank says which engine the mod was made for. budd _____________________________ Enjoy when you can, and endure when you must. ~Johann Wolfgang von Goethe "Be Yourself; Everyone else is already taken" ~Oscar Wilde *I'm in the Wargamer middle ground* I don't buy all the wargames I want, I just buy more than I need. Go to the ATG community site for all the downloads, you have to sign up to download. In the game just click install zip file and find the file wherever you downloaded it too and click open. After that it should be an option when you start the game, a menu should pop up. Hope this helps. budd _____________________________ Enjoy when you can, and endure when you must. ~Johann Wolfgang von Goethe "Be Yourself; Everyone else is already taken" ~Oscar Wilde *I'm in the Wargamer middle ground* I don't buy all the wargames I want, I just buy more than I need.
null
minipile
NaturalLanguage
mit
null
Latest on Bute House Hotel windows petition More than three hundred people, at the time of writing, had signed a petition urging a rethink on plans for UPVC windows at the Bute House Hotel after the proposals were rejected by both Argyll and Bute Council and the Scottish Government. Craig Borland More than three hundred people have added their names to a petition urging Argyll and Bute Council to look again at its decision to refuse planning permission for uPVC windows at a Rothesay town centre hotel. The petition, which was launched on January 29 and can be signed online and at 20 business premises in the town, calls on the authority to rethink its decision to turn down the application for the Bute House Hotel on the corner of West Princes Street and Watergate. At the time of writing this article, we’ve collected completed sheets bearing the names of 273 people, while 55 names had been added online - and that’s not counting the names added to as-yet-incomplete pages of the hard-copy version. The petition, which has a closing date of Friday, February 21, asks the council to “reconsider its decision and to work without delay towards a solution which will address the urgent need for quality hotel accommodation on Bute, and be of benefit to the economy of the island”. The hotel’s owners, Harry and Hazel Greene, have had two applications for UPVC windows refused by the council - the first by planning department officials and the second by the authority’s planning, protective services and licensing committee - on the grounds that the material would damage the character of the Rothesay conservation area. The Greenes appealed against refusal of the second application, but a reporter from the Scottish Government’s directorate for planning and environmental appeals (DPEA) dismissed the appeal in January, in a decision which attracted almost universal condemnation from local residents and businesses. Click here to sign the online version of the petition - or alternatively, you can add your name to a hard copy at the following Rothesay businesses: The Buteman, Victoria Street; Card & Gift Shop, Victoria Street; Toffo’s newsagent, Montague Street; Rothesay post office, Watergate; Taverna Bar, Watergate/West Princes Street; Black Bull, Albert Place/West Princes Street; Oakenshield’s Shoe Repair, West Princes Street; Bute Dental Surgery, West Princes Street; Jessmay’s greengrocer, Bishop Street; Glen’s electricals, East Princes Street; The Bike Shed, East Princes Street; Lloyds Pharmacy, Victoria Street; Glen’s shoe shop, Victoria Street; Caroline’s Hair Salon, Victoria Street; West End Cafe, Gallowgate; Diane’s Hair & Body Salon, Montague Street; Knox’s Menswear, Montague Street; J.S. Slaven’s fishmonger, Montague Street; Sheena’s Shop, Montague Street; Bute Tools, Montague Street; Dil’s Newsagent, High Street. Argyll and Bute Council has promised to send us a statement outlining the authority’s approach to such planning applications, and we’ll publish it as soon as it’s received.
null
minipile
NaturalLanguage
mit
null
Company Petronet LNG Limited, one of the fastest growing world-class companies in the Indian energy sector, has set up the country's first LNG receiving and regasification terminal at Dahej, Gujarat, and another terminal at Kochi, Kerala. While the Dahej terminal has a nominal capacity of 15 MMTPA, the Kochi terminal has a capacity of 5 MMTPA.
Natural Gas Natural gas consists mainly of methane and small amounts of ethane, propane and butane. It is transported through pipelines but is extremely bulky. A high-pressure gas pipeline can transport in a day only about one-fifth of the energy that can be transported through an oil pipeline.
Terminals The Company has set up South East Asia's first LNG Receiving and Regasification Terminal with an original nameplate capacity of 5 MMTPA at Dahej, Gujarat. The infrastructure was developed in the shortest possible time and at a benchmark cost. The capacity of the terminal has been expanded to 10 MMTPA, and the expanded capacity was commissioned in June 2009. The expansion involved construction of 2 additional LNG storage tanks and other vaporization facilities. The terminal is meeting around 20% of the total gas demand of the country.
CSR Petronet LNG, as a responsible corporate, community and government citizen, undertakes a Socio-Economic Development Programme to supplement efforts to meet the priority needs of the community, with the aim of helping them become self-reliant. These efforts are generally around our work centres, mostly in the areas of Education, Civil Infrastructure, Healthcare, Sports & Culture, and Entrepreneurship in the Community. Petronet LNG also supports Water Management and Disaster Relief in the country, thereby helping to bolster its image with key stakeholders.
Media Centre Four of the top public sector companies of the country's Hydrocarbon Sector, viz. Oil and Natural Gas Corporation Limited (ONGC), Indian Oil Corporation Limited (IOCL), Bharat Petroleum Corporation Limited (BPCL) and GAIL (India) Limited (GAIL), have invested in Petronet LNG. Each has a 12.5% equity share, leading to a total of 50% for the four.
Concerned that the country would not have gas supply to the tune of over 450 million standard cubic metres per day (mmscmd) required for its Greenfield and Brownfield power projects in the foreseeable future, the Empowered Group of Ministers (EGoM) is likely to discuss ways of prioritizing the applicants through a three-way plan. "Since the total demand for gas for the proposed Greenfield and Brownfield power projects exceeds 450 mmscmd and production of domestic gas is unlikely to be achieved in the near future, there is a need for prioritization of applicants," according to a note prepared by the Ministry of Petroleum and Natural Gas for the EGoM headed by Finance Minister Pranab Mukherjee. One way to do so would be to allot gas on priority to central and state public sector utilities, for which several requests were pending, "as they would supply the entire quantity of gas to the grid at regulated rates," the note says.
"Another alternative can be by inviting bids on fixed costs and variable costs of generation on lines similar to the Ultra Mega Power Projects (UMPPs), while the third alternative could be to allot gas to all entities in the public and private sector on the basis of preparation and the likely commencement dates with the stipulation that at least 80 per cent of power would be sold to SEBs at rates approved by the power regulator," according to the ministry. The power ministry has prepared a list of 78 plants, which have sought gas allocation totalling 488.48 mmscmd, excluding those already under construction. Of these, 90.46 mmscmd is for expansion projects, while the remaining 398.02 mmscmd is for Greenfield projects. In view of the paucity of domestic gas against the huge demand registered, a policy decision needed to be taken whether any new power plant would be given gas only to the extent of 50 per cent of its capacity requirement on a particular day, the note said. "The power project would source its remaining requirement from LNG for which sufficient capacities exist. The project may be given additional domestic gas on a fallback basis if there are no takers," the ministry said. Initially the availability of gas from the KG D6 fields was expected to increase substantially, and it was even reported in some quarters that gas production would increase to 120 mmscmd within a year of commencement of production and to higher levels thereafter. However, the reality of production since 2009 has been different: it has not gone beyond 60 mmscmd and is unlikely to increase in the near future. State-run behemoth ONGC is likely to commence production of natural gas from its C series on the western coast at a price of $5.25 per mmbtu till March 2014. Out of the allocations made for KG D6 gsa, 7.88 mmscmd on a firm basis and 0.548 mmscmd on a fallback basis have been made to customers in the Uran region.
null
minipile
NaturalLanguage
mit
null
Data Summary {#s1-2} ============ This work made use of sequencing data obtained from the pubMLST database ([https://pubmlst.org/neisseria](https://pubmlst.org/neisseria/)), the NCTC 3000 project (<https://www.sanger.ac.uk/resources/downloads/bacteria/nctc/>) and GenBank (<https://www.ncbi.nlm.nih.gov/genbank/index.html>). A comprehensive list of strain IDs, metadata and accession numbers can be found on FigShare at <https://doi.org/10.6084/m9.figshare.6016112.v1>. Annotated pubMLST isolates can be searched in the pubMLST isolate database, and allele sequences retrieved using the NEIS numbers from the pubMLST sequence database. ###### Impact Statement *Neisseria meningitidis*, whilst normally a harmless commensal of the human nasopharynx, can cause invasive meningococcal disease (IMD), comprising meningitis and/or septicaemia. Expression of a polysaccharide capsule is essential for IMD, but must also be involved in asymptomatic colonization. The capsule has been considered a virulence factor unique to *N. meningitidis*; however, here a full complement of homologous putative capsule genes was identified in non-pathogenic *Neisseria* (NPN) species. NPN species are important members of the human nasopharyngeal microbiota, as well as coexisting with the meningococcus in the nasopharynx. The results inform debate about the acquisition of capsule by the meningococcus, an important step in the emergence of pathogenic potential. Introduction {#s1-4} ============ The genus *Neisseria* is a diverse group of Gram-negative bacteria, many of which are asymptomatic colonizers of the mucosal surfaces of animals and man \[[@R1]\]. In humans, they have been isolated from the mouth, nose, throat and urogenital tract, but whilst many *Neisseria* species belong to the human oral microbiota, research has focused on those associated with disease: *Neisseria gonorrhoeae* and *Neisseria meningitidis*. In common with many other *Neisseria* species, *N. meningitidis* usually colonizes the nasopharynx asymptomatically; however, it occasionally invades the bloodstream, leading to life-threatening invasive meningococcal disease (IMD), comprising meningitis and/or septicaemia \[[@R2]\]. In contrast, there are very few case reports of other *Neisseria* species causing invasive disease. Compared to asymptomatic colonization, IMD is an extremely rare transmission-terminating event, associated with particular meningococcal genotypes that normally express a polysaccharide capsule \[[@R3]\]. Capsules are associated with virulence in several human pathogens, including *Escherichia coli*, *Haemophilus influenzae* and *Klebsiella pneumoniae* \[[@R4]\]. A number of successful vaccines have been developed that target capsular antigens, for example the polysaccharides forming the capsules of the meningococcal serogroups A, C, W and Y \[[@R7]\]. Capsules can aid evasion of immune responses, including the complement system and phagocytosis by macrophages, facilitating persistence in the bloodstream \[[@R8]\]; however, capsules have also been identified in free-living bacteria and symbionts \[[@R10]\]. In both *N. meningitidis* and other species, association with disease is often restricted to a subset of capsular groups or types \[[@R2]\], indicative that in general the capsule confers benefits during transmission or protects the bacteria from local inflammation in the nasopharynx \[[@R12]\]. In *N. 
meningitidis,* the capsule is produced via ABC transporter-dependent polymerization, whereby synthesis and polymerization of the polysaccharide take place at the bacterial inner membrane, prior to transport across the membrane and translocation to the cell surface \[[@R14]\]. These processes are encoded by genes located in the *cps* locus \[[@R15]\], which is functionally divided into several contiguous regions. Region A contains genes involved in capsule synthesis, in particular glycosyltransferases and capsule polymerases, but also other proteins involved in additional capsule modifications, and sometimes insertion sequences are present \[[@R15]\]. This region is highly variable, with 12 known variants corresponding to the 12 meningococcal serogroups. Of these, only six (A, B, C, W, X and Y) are associated with disease. Regions B and C are composed of the genes *ctrEF* and *ctrABCD*, respectively, and are required for capsule translocation and transport. These regions are well conserved throughout *N. meningitidis*, unlike region A \[[@R15]\]. Region D of the *cps* contains the genes *rfbABC* and *galE*, which are thought to play a role in LPS synthesis \[[@R17]\]. A duplication of region D, containing a truncated *galE,* is designated region D' \[[@R18]\]. Finally, region E contains the gene *tex* and two pseudo cytosine methyltransferases of unknown function. Although regions D, D' and E are not directly involved in capsule synthesis or transport, they are generally considered part of *cps* due to their location within the locus. Isolates that do not contain regions A, B and C are described as capsule null, and instead possess a distinct 113--118 bp sequence located between regions D and E, the capsule null locus (*cnl*) \[[@R19]\]. The *cnl* locus has also been identified in *N. gonorrhoeae* and the non-pathogenic *Neisseria* (NPN) species *Neisseria lactamica* \[[@R19]\], and no encapsulated isolates from these species have been described. Whilst a number of putative virulence genes have been found in NPN species \[[@R20]\], the capsule has been considered to be unique to the meningococcus \[[@R21]\], possibly acquired in a horizontal genetic transfer (HGT) event that gave rise to the potentially pathogenic variants of *N. meningitidis* \[[@R3]\]. In this study, capsule genes have been identified and characterized in NPN species, which typically do not cause disease. These results have implications for understanding the acquisition of capsule in *N. meningitidis*. Methods {#s1-5} ======= Isolate collection and species definitions {#s2-5-1} ------------------------------------------ Whole-genome sequence (WGS) data from *Neisseria* isolates were obtained from pubmlst.org/neisseria, which is hosted on the Bacterial Isolate Genome Sequence Database (BIGSdb) genomics platform \[[@R23]\]. At the time of writing, the database contained WGS data from \>13 000 *Neisseria* isolates, 235 of which were from NPN species. The pubMLST sequence database contains defined *Neisseria* loci and allele sequences, with each locus assigned a unique NEIS number. Isolates can be annotated with NEIS loci automatically or manually through a [blast]{.smallcaps}-based process, and new alleles are assigned an arbitrary allele number. Most of the WGSs in pubMLST are high-quality draft genomes, with sequencing reads assembled into approximately 100--300 individual contigs. 
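For readers who want to reproduce this kind of data retrieval, the sketch below pulls a single isolate record through the public BIGSdb REST interface. The base URL, endpoint layout and response fields are assumptions about the service rather than part of the analysis described here, and should be checked against the current API documentation.

```python
# Hedged sketch: fetch one isolate record from the pubMLST Neisseria isolate
# database via the BIGSdb REST interface. URL and field names are assumptions.
import requests

BASE = "https://rest.pubmlst.org/db/pubmlst_neisseria_isolates"

def fetch_isolate(isolate_id):
    """Return the JSON record for a single isolate, or None if not found."""
    resp = requests.get(f"{BASE}/isolates/{isolate_id}", timeout=30)
    return resp.json() if resp.status_code == 200 else None

# pubMLST id 30 is one of the records cited in the data bibliography.
record = fetch_isolate(30)
if record is not None:
    print(record.get("provenance", record))
```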
A total of 20 species had been defined in the pubmlst.org/neisseria database using the universal and high-resolution ribosomal multi-locus sequence typing (rMLST) approach \[[@R24]\]. These species included the human nasopharyngeal commensals *Neisseria polysaccharea* (16 isolates), '*Neisseria bergeri'* (1 isolate), *N. lactamica* (140 isolates), *Neisseria cinerea* (15 isolates), *Neisseria subflava* (19 isolates), *Neisseria oralis* (4 isolates), *Neisseria mucosa* (10 isolates), *Neisseria elongata* subsp. (4 isolates) and *Neisseria bacilliformis* (4 isolates), and the pathogens *N. meningitidis* and *N. gonorrhoeae*, along with several species isolated from animals or animal-bite wounds. *N. meningitidis* region A genes were sourced from isolate sequences for 0106/93 and 0084/93 published in pubMLST, and 29 043 published in GenBank (accession no. HF562984.1). All NPN isolates in pubMLST were surveyed for the presence of *cps* genes that putatively encoded a polysaccharide capsule. All isolates found to contain a *cps* were further characterized. Isolates from additional species were included in phylogenetic analyses. Capsule sequence data from other genera were obtained from GenBank. Reference genomes were obtained from either GenBank (<http://www.ncbi.nlm.nih.gov/genbank/index.html>) or the NCTC 3000 project (<http://www.sanger.ac.uk/resources/downloads/bacteria/nctc/>). Annotation of *cps* in NPN isolates {#s2-5-2} ----------------------------------- The BIGSdb software enables [blast]{.smallcaps} searches of protein or nucleotide sequences against genomes contained within pubMLST. Region B and C *cps* genes *ctrABCDEF*, which are involved in capsule transport in the meningococcus, had previously been defined in the pubMLST.org/neisseria sequence database as NEIS0055, NEIS0056, NEIS0057, NEIS0058, NEIS0066 and NEIS0067, respectively. For each gene, the amino acid sequence of allele 1 was used as a pBLAST query against all available NPN isolates within the database for which WGSs were available. Candidate genes were annotated in Artemis \[[@R25]\] and G+C content determined in [mega]{.smallcaps} 7 \[[@R26]\]. The same approach was also used to annotate region D and E genes, and any other relevant genes, where necessary. Annotations were uploaded as novel alleles in pubMLST. The proposed *cps* regions of NPN isolates were further annotated in Artemis \[[@R25]\]. ORFs adjacent to proposed region C genes were queried against the National Center for Biotechnology Information (NCBI) RefSeq protein database using pBLAST and the Pfam database \[[@R27]\], as well as the pubMLST sequence database. Support for putative region A genes was based on homology to capsule synthesis genes from *N. meningitidis* and/or other bacterial species, or at least for gene products consistent with a function in capsule synthesis, such as glycosyltransferases. Additional guidance was based on comparisons of synteny with *N. meningitidis* and between NPN isolates and species. In this way, ORFs that did not contain significant homology to previously described capsule synthesis genes, or previously described capsule synthesis-like genes, could only be included in a putative region A if they were flanked by ORFs that did have significant homology. Region A candidates were also queried against the non-redundant sequences of the CAZy database \[[@R28]\], which contains data on carbohydrate-active enzyme families, using the CAZymes Analysis Toolkit \[[@R29]\]. 
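The screening steps described above can be approximated outside BIGSdb. The sketch below is not the pipeline used in this study, which relied on BIGSdb's built-in [blast]{.smallcaps} searches and [mega]{.smallcaps} 7, but illustrates the same two operations, a protein query against a draft assembly and a G+C calculation, using local NCBI BLAST+ and Biopython; all file names are placeholders.

```python
# Hedged reimplementation of two steps described above: (i) a CtrA-like protein
# query against a draft assembly with tblastn, and (ii) G+C content (mol%) of
# annotated candidate genes. Requires NCBI BLAST+ and Biopython >= 1.80.
import subprocess
from Bio import SeqIO
from Bio.SeqUtils import gc_fraction

def screen_assembly(query_faa, assembly_fna, out_tsv):
    """Search one protein query (e.g. CtrA allele 1) against one assembly."""
    subprocess.run(
        ["tblastn", "-query", query_faa, "-subject", assembly_fna,
         "-outfmt", "6 sseqid sstart send pident evalue", "-out", out_tsv],
        check=True,
    )

def report_gc(candidate_genes_fna):
    """Print G+C content (mol%) for each candidate gene sequence."""
    for rec in SeqIO.parse(candidate_genes_fna, "fasta"):
        print(f"{rec.id}\t{100 * gc_fraction(rec.seq):.1f}")
```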
CAZy families were predicted using both sequence similarity and Pfam rule-based annotations, with an *E* value threshold of 1×10^−10^ and bit score threshold of 55. Annotations were uploaded as novel NEIS loci and alleles in pubMLST. The organization of the *cps* found in NPN isolates was compared with that of the meningococcus and visualized using genoPlotR \[[@R30]\]. Identification of homologous candidate region A genes {#s2-5-3} ----------------------------------------------------- Potentially homologous genes shared by isolates from the same species were identified based on gene order, sequence length and predicted function. The nucleotide sequences for each gene were aligned in Clustal Omega \[[@R31]\], and paired identity matrices (PIMs) were generated using Clustal 2.1. Suspected homology between different species was identified in the same way and investigated using pairwise comparisons of amino acid sequences generated by Clustal, and pairwise tBLASTx comparisons between the proposed region A of each species were made using the Artemis Comparison Tool \[[@R32]\] and visualized using genoPlotR. Phylogenetic analyses {#s2-5-4} --------------------- A recombination-corrected phylogenetic tree of *Neisseria* isolates, with *Moraxella catarrhalis* as an outgroup, was generated based on rMLST loci \[[@R24]\]. The nucleotide sequences of 51 of the 53 genes that constitute the protein subunits of the ribosome (excluding *rpmE* and *rpmJ*, as they are paralogous in some *Neisseria*), were extracted from each isolate using the BIGSdb genome comparator, and aligned using [mafft]{.smallcaps} \[[@R33]\]. A maximum-likelihood (ML) tree was generated in PhyML v3.1 \[[@R34]\] using the GTR+I+G substitution model, determined to be the best-fit model by jModelTest v2.1.10 \[[@R35]\], with 100 bootstrap replicates. The tree was corrected for recombination in ClonalFrameML \[[@R36]\], and rendered and annotated using the [ete]{.smallcaps} 3 toolkit \[[@R37]\]. The phylogeny included all isolates belonging to those species in which capsule genes were identified, and representatives of *N. cinerea, N. lactamica, 'N. bergeri', N. polysaccharea* and *N. gonorrhoeae,* in none of which were capsule genes identified, and representatives of *N. meningitidis*. Where present, region B and C genes were also extracted from isolates in the dataset. Additionally, homologous capsule genes, as determined by NCBI [blast]{.smallcaps} queries, were extracted from isolates of *Mannheimia haemolytica*, *Actinobacillus pleuropneumoniae*, *Actinobacillus suis*, *Bibersteinia trehalosi*, *H. influenzae*, *Pasteurella multocida* and *Kingella kingae*. Amino acid sequences from all species were concatenated, aligned with [muscle]{.smallcaps} \[[@R38]\] and trimmed in trimAL v1.2 \[[@R39]\] to remove columns with gaps in more than 20 % of sequences or with a similarity score lower than 0.001. An unrooted ML tree was generated in PhyML using the LG+I+G substitution model, selected to be the best fit by ProtTest v3.4.2 \[[@R40]\]. The tree was rendered in FigTree v1.4.3 (<http://tree.bio.ed.ac.uk/software/figtree/>). Results {#s1-6} ======= Identification of *cps* in NPN species {#s2-6-1} -------------------------------------- Capsule gene homologues were identified in isolates from a total of 13 *Neisseria* species contained within the pubmlst.org/neisseria database \[[@R23]\], including *N. bacilliformis* (4 of 4isolates); *N. elongata* subsp. 
(2 of 4 isolates); *Neisseria musculi* (1 isolate); *Neisseria dentiae* (1 isolate); *Neisseria animaloris* (1 isolate); *Neisseria zoodegmatis* (1 isolate); *Neisseria weaveri* (1 isolate); *Neisseria canis* (1 isolate); *Neisseria wadsworthii* (1 isolate); *Neisseria animalis* (1 isolate); *N. oralis* (4 of 4 isolates); *N. mucosa* (1 of 10 isolates); and *N. subflava* (17 of 19 isolates). Regions A, B and C were annotated in isolates of all of these species ([Fig. 1](#F1){ref-type="fig"}), with the exception of *N. wadsworthii* and *N. animaloris*, in which region A was interrupted in the genome assembly. ![Arrangement of *cps* across the genome in *Neisseria* species. Region A is involved in capsule synthesis, region B in capsule transport and region C in capsule translocation. *N. animaloris* and *N. wadsworthii* were excluded, since region A was interrupted in the genome assembly for the isolates from these species. Two diagonal lines represent \>5 kb between genes. cp refers to capsule phosphotransferase, as seen in some W, I and K isolates of *N. meningitidis* \[[@R15]\].](mgen-4-208-g001){#F1} With the exception of 11 *N. subflava* isolates, isolates in which capsule genes were identified were found to contain homologues for all six of the *N. meningitidis* region B and C genes (Table S1, available in the online version of this article), sharing 52--99 % amino acid sequence identity with the relevant query. Alignment length covered at least 92 % of the relevant query, except the homologues of *ctrF* in CCUG 50858 (85 %) and NJ9703 (71 %). This reduced query coverage was due to an incomplete gene located at the end of a contig in WGS data for isolate CCUG 50858, and a frameshift mutation in the sequences for isolate NJ9703. Annotation using Artemis \[[@R25]\] showed that *ctrABCD* genes of region C were contiguous and in the same order as those found in *N. meningitidis*. The *ctrE* and *ctrF* genes were contiguous in all isolates except those from *N. canis* and *N. bacilliformis*, although in these two species *ctrE* was adjacent to region C. The remaining 11 *N. subflava* isolates were found to contain homologues for one or both of *ctrE* and *ctrF*, but no region C genes were identified (Table S1). Annotation of putative novel region A genes in NPN species {#s2-6-2} ---------------------------------------------------------- With the exception of *N. bacilliformis*, isolates that contained a complete set of region B and C genes possessed a region adjacent to region C that had not been defined in the pubMLST.org/neisseria database at the time of analysis. A total of 59 ORFs were annotated as putative region A genes. Of these, 33 were homologous with capsule genes described in both *N. meningitidis* and in other non-*Neisseria* species, including *A. pleuropneumoniae* and *Mannheimia haemolytica*, and a further 16 were homologous with genes commonly involved in capsule synthesis, including glycosyltransferases and acetyltransferases ([Table 1](#T1){ref-type="table"}). Remaining genes were included as part of a putative region A based on synteny. In many cases, genes had been annotated previously in the relevant species in RefSeq, albeit not directly attributed to a NPN *cps*, and so [blast]{.smallcaps} hits were almost identical to the [blast]{.smallcaps} query. A total of 40 of the region A candidates belonged to a CAZy \[[@R28]\] glycosyltransferase family, based on either sequence similarity or Pfam rule-based annotation ([Table 1](#T1){ref-type="table"}). 
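For orientation only, a Pfam-based family assignment of the kind underlying those predictions might be sketched as follows. The study itself used the CAZymes Analysis Toolkit, so the hmmscan call, database file and output handling here are stand-ins rather than the published workflow.

```python
# Stand-in for the Pfam rule-based part of the CAZy family assignment. The
# E-value cut-off mirrors the one stated in the Methods; hmmscan and Pfam-A
# are assumed to be installed locally, and paths are placeholders.
import subprocess

def pfam_scan(candidate_faa, pfam_hmm="Pfam-A.hmm", out_tbl="pfam_hits.tbl"):
    """Scan candidate region A proteins against Pfam-A domain models."""
    subprocess.run(
        ["hmmscan", "--tblout", out_tbl, "-E", "1e-10", pfam_hmm, candidate_faa],
        check=True,
    )
    # Hits would then be filtered on bit score (>= 55) and mapped to CAZy
    # glycosyltransferase families before being accepted.
    return out_tbl
```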
With the exception of GT61 (to which a gene in *N. weaveri* belonged), these were all glycosyltransferase families to which *N. meningitidis* region A genes belong. ###### Highest pBLAST hits of candidate region A genes against RefSeq ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- **Species** **NEIS locus** **Product of highest non-hypothetical pBLAST hit/CAZy family** **Identity (%)** ***E* value** **G+C\ (mol%)** ---------------------------------------------- ---------------- --------------------------------------------------------------------------- ------------------ --------------- ---------- *N. animalis* (CCUG 808) 2173 Glycerol-3-phosphate cytidylytransferase GT9\* 99 0 43.6 2947 Phosphotransferase 98 0 43.8 2939 CDP-glycerol--glycerophosphate glycerophosphotransferase GT4 41 6^−31^ 44.2 2940 CDP-glycerol--glycerophosphate glycerophosphotransferase GT2 33 2^−37^ 40.6 2966 Galactosyl transferase GT32 33 3^−33^ 41.1 2967 α-/β-Hydrolase 31 5^−44^ 45.8 *N. bacilliformis* (CCUG 38158) 2173 Glycerol-3-phosphate cytidylytransferase GT9\* 100 1^−94^ 37.6 2938 Phosphotransferase 47 7^−118^ 36.7 2939 CDP-glycerol--glycerophosphate glycerophosphotransferase GT4 41 3^−24^ 37.9 2940 CDP-glycerol---glycerophosphate glycerophosphotransferase GT2 36 2^−36^ 39.4 *N. canis* (CCUG 56775 T) 2157 UDP-*N*-acetylglucosamine 2-epimerase GT28\* 74 0 40.3 2737 UDP-*N*-acetyl-[d]{.smallcaps}-mannosamine dehydrogenase GT2\* 87 0 43.4 2738 Glycosyl transferase GT2 40 1^−92^ 37.1 2739 Glycosyl transferase GT4 45 0 38.9 2740 Riboflavin synthase subunit β 28 4^−13^ 31.3 2968 α-/β-Hydrolase 34 3^−35^ 30.2 2741 *N*-Acetyltransferase 31 6^−40^ 34.2 2742 Spore coat protein GT4 81 0 38.5 *N. dentiae* (CCUG 53898) 2736 UDP-*N*-acetylglucosamine 2-epimerase GT28\* 78 0 50.2 2737 UDP-*N*-acetyl-[d]{.smallcaps}-mannosamine dehydrogenase GT2\* 91 0 49.9 2738 Glycosyl transferase GT2 59 6^−92^ 35.4 2739 Glycosyl transferase GT4 48 0 38.5 2950 Asparagine synthase 22 1^−22^ 31.6 2741 *N*-Acetyltransferase 34 3^−37^ 35.1 2742 Spore coat protein GT4 90 0 46.4 *N. elongata* subsp. *elongata* (CCUG 2043T) 2965 Capsule biosynthesis protein GT4 100 0 31.4 2941 Glycosyl transferase GT2 100 0 36.1 2942 Acetyltransferase 100 0 37.9 2943 α-/β-Hydrolase 38 1^−15^ 34.6 2974 [d]{.smallcaps}-Alanine--[d]{.smallcaps}-alanine ligase GT2 100 0 37.7 *N. elongata* subsp. *nitroreducens*\ 2173 Glycerol-3-phosphate cytidylytransferase GT9\* 99 1^−89^ 39.2 (CCUG 30802T) 2948 Phosphotransferase 51 1^−116^ 33.7 2939 CDP-glycerol--glycerophosphate glycerophosphotransferase GT4 31 1^−22^ 36.5 2940 CDP-glycerol--glycerophosphate glycerophosphotransferase GT2 35 9^−37^ 37.4 *N. mucosa* (CCUG 431) 2972 Spore coat protein GT2 59 0 29.9 2973 Acetyltransferase 45 8^−58^ 33.5 *N. musculi* (AP2031) 2736 UDP-*N*-acetylglucosamine 2-epimerase GT28\* 76 0 46.1 2737 UDP-*N*-acetyl-[d]{.smallcaps}-mannosamine dehydrogenase GT2\* 91 0 49.5 2738 Glycosyl transferase GT2 38 4^−92^ 39.6 2739 Glycosyl transferase GT4 47 0 37.1 2740 Riboflavin synthase subunit β 24 2^−10^ 30.1 2741 *N*-Acetyltransferase 35 6^−37^ 40.8 2742 Spore coat protein GT4 87 0 46 *N. oralis* (F0314) 2941 Glycosyl transferase GT2 100 0 39.6 2943 α-/β-Hydrolase (variant in CCUG 804) 36 9^−14^ 36.3 2974 [d]{.smallcaps}-Alanine--[d]{.smallcaps}-alanine ligase GT2 100 0 37.98 *N. 
subflava* (NJ9703) 2184 Capsule biosynthesis protein GT4 100 0 43 2941 Glycosyl transferase GT2 (variant in C102, C6A, CCUG 7826 and CCUG 24918) 100 0 39.7 2942 Acetyltransferase (missing in C6A, CCUG 7826 and CCUG 24918) 100 0 37.2 2943 α-/β-Hydrolase 38 6^−15^ 36.1 2974 [d]{.smallcaps}-Alanine--[d]{.smallcaps}-alanine ligase GT2 100 0 40.55 *N. weaveri* (CCUG 4007 T) 2944 Glycosyl transferase GT4 100 0 30.2 2945 Capsule biosynthesis protein 99 0 27.5 2946 Glycosyl transferase GT61\* 29 2^−17^ 28.6 *N. zoodegmatis* (NCTC 12230 T) 2736 UDP-*N*-acetylglucosamine 2-epimerase GT28\* 77 0 40.7 2737 UDP-*N*-acetyl-[d]{.smallcaps}-mannosamine dehydrogenase G2\* 85 0 42.3 2949 Glycosyl transferase GT4 34 3^−126^ 37.6 2971 Hypothetical 48 0 34.7 2742 Spore coat protein GT4 81 0 37.8 ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \*Indicates CAZy family predictions determined using Pfam rule-based annotations rather than sequence similarity. In all but four species, the proposed region A was flanked on both sides by region B, region C, region D or some other gene with an unrelated function. In *N. animalis*, between the last putative capsule synthesis gene and *galE* was an ORF predicted to belong to the DUF1016 family; since this family is predicted to code for nuclease genes, it was considered unlikely to have a role in capsule synthesis. In *N. elongata* subsp. *elongata*, an IS*565* insertion sequence was identified between region A and *galE*, but no evidence was found to suggest that this was interrupting a putative capsule synthesis gene. In *N. canis*, the proposed region A was preceded by an IS*481* insertion sequence, but again no evidence was found showing that a putative capsule synthesis gene had been interrupted; ORFs adjacent to the insertion sequence had been previously annotated. [blast]{.smallcaps} querying the *N. bacilliformis* isolates with all novel region A candidates identified putative region A genes homologous to those found in *N. elongata* subsp. *nitroreducens*. In *N. bacilliformis*, these four region A candidates were contiguous, but were located on different contigs from regions B and C. Region A candidates were not identified in *N. subflava* isolates that contained region B, but not region C. The G+C content of region A candidates was found to be lower than those typical for *Neisseria* genomes (49--54 %, but 60 % for *N. bacilliformis*) ([Table 1](#T1){ref-type="table"}). tBLASTx queries additionally identified homologues of meningococcal serogroup B/C/W/Y sialic acid synthesis genes *cssABC* in *N. weaveri* isolate CCUG 4007 T. These three genes were in a separate region of the genome of CCUG 4007 T and were distinct to the other candidate region A genes identified flanking regions B and C. Arrangement of *cps* in *N. meningitidis* and NPN species {#s2-6-3} --------------------------------------------------------- None of the NPN *cps* were syntenic with the gene order seen in *N. meningitidis* ([Fig. 1](#F1){ref-type="fig"}): all NPN lacked the duplicated region D\', and only contained *tex* from region E, with the pseudo cytosine methyltransferases not identified during [blast]{.smallcaps} searches. *N. subflava* was the only NPN species in which the putative regions A, B, C and D were contiguous, although the putative regions A, B and C were contiguous or nearly contiguous in all species, apart from *N. oralis*, *N. 
animalis, N. canis* and *N. bacilliformis.* In the case of *N. canis*, *N. animalis, N. elongata* subsp. *elongata* and *N. bacilliformis*, the different regions were not located on a single contig, but the separation of these regions being an artefact was rejected based on comparisons to closed genome sequences of these species. The gene encoding *galE* from region D was not present in all NPN found to contain a *cps*, and in *N. animalis*, *N. oralis* and both *N. elongata* isolates, it was found to be near or adjacent to region A, rather than contiguously with the other region D genes as is the case in *N. meningitidis*. Homologous region A genes among species {#s2-6-4} --------------------------------------- In some instances, several isolates from each species possessed a *cps: N. bacilliformis* (four isolates); *N. subflava* (six isolates); and *N. oralis* (four isolates). Each gene found in *N. bacilliformis* shared \>98 % nucleotide identity with the corresponding gene in all other isolates, indicating that all four isolates shared a highly similar *cps*. In *N. subflava*, four of the five candidate genes shared \>97 % nucleotide identity among isolates, although three isolates were missing a predicted acetyltransferase. The identity scores for the other gene, a predicted glycosyltransferase, were either 71--73 or 97--100 %, which indicated that there were two versions of this gene ([Fig. 2b](#F2){ref-type="fig"}). *N. oralis* had three region A candidates, two of which shared \>98 % nucleotide identity among isolates. The third only had identity scores of \>98 % among three of the four isolates, with the version in CCUG 804 only sharing 81 % identity with the others ([Fig. 2b](#F2){ref-type="fig"}). ![Pairwise tBLASTx comparisons of candidate region A genes between representative *Neisseria* species proposed to contain region A genes. Isolates within each group, 1 (a), 2 (b) or 3 (c) share homologous region A genes. Red and blue indicate shared amino acid identity between isolates, with higher intensity indicating higher sequence similarity. Blue indicates an inversion. Between species variation in nucleotide sequence identity was \>97 %; hence, one representative is provided for each species, with three exceptions: two *N. elongata* isolates belong to totally different groups \[1 (a)/ 3 (b)\]; one of the four *N. oralis* isolates had a variant of NEIS2943 that shared only 81 % nucleotide identity with other *N. oralis* isolates, and two of the six *N. subflava* isolates had a variant of NEIS2941 that shared only 71--73 % nucleotide identity with other *N. subflava* isolates (b).](mgen-4-208-g002){#F2} Based on similarities in *cps* organization, and the results of [blast]{.smallcaps} searches during region A annotation, it also became clear that there was shared region A homology between NPN species. Homologous genes were consistently grouped, such that if a group of species shared one gene, they were likely to share another gene, with three homology groups identified in total ([Fig. 2](#F2){ref-type="fig"}). Pairwise comparisons of amino acid sequences generated by Clustal were used to analyse these groups further. Group 1 contained putative region As from *N. animalis, N. elongata* subsp. *nitroreducens* and *N. bacilliformis*, which shared four homologous genes with 78--83, 42--47, 56--58 and 61--64 % aa identity between species ([Fig. 2a](#F2){ref-type="fig"}). 
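These pairwise figures can be recomputed from any protein alignment of a homology group. A minimal sketch, assuming an existing FASTA alignment (for example from Clustal Omega) and counting identities only over columns where neither sequence is gapped, is shown below; gap-handling conventions differ between tools, so the values will only approximate the Clustal-derived percentages quoted here.

```python
# Minimal sketch: pairwise percent identity from an existing protein alignment.
# The alignment file name is a placeholder.
from itertools import combinations
from Bio import AlignIO

def percent_identity(a, b):
    """Identity over columns where neither sequence has a gap."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100 * sum(x == y for x, y in pairs) / len(pairs) if pairs else 0.0

aln = AlignIO.read("group1_region_A.aln.fasta", "fasta")
for r1, r2 in combinations(aln, 2):
    pid = percent_identity(str(r1.seq), str(r2.seq))
    print(f"{r1.id}\t{r2.id}\t{pid:.1f}")
```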
The first two genes also shared 73--81 and 42--47 % aa identity with *cszA/cshA* and *cszB/cshB*, respectively, which are found at the beginning of the region A of serogroups H and Z *N. meningitidis.* Group 2 contained putative region A from *N. elongata* subsp. *elongata*, *N. subflava* and *N. oralis*, which shared up to five homologous genes with 58, 60--95 , 97 , 86--95  and 75--\>99 % aa identity between species, although *N. oralis* was missing the first and third genes, and three isolates of *N. subflava* lacked the third gene ([Fig. 2b](#F2){ref-type="fig"}). The first two genes also shared 61--72 and 32--33 % aa identity with *cslA* and *cslB*, respectively, which are found at the beginning of the region A of serogroup L *N. meningitidis*, although the *cslB* homologue was 40 % longer. *N mucosa* also possessed a gene with 59 % aa identity with *cslB*, differing in length by only 3 bp, as well as a homologue with 56 % aa identity to *cslC*. Group 3 contained putative region As from *N. dentiae*, *N. musculi*, *N. canis*, *N. zoodegmatis* and *N. animaloris*, although *N. animaloris* was not annotated further due to its incomplete assembly. These isolates shared up to seven homologous genes with 56--96, 86--94 , 70--82 , 73--85 , 65 , 68--83  and 79--90 % aa identity between species ([Fig. 2c](#F2){ref-type="fig"}). *N. zoodegmatis* only contained the first two and last one of these genes, whilst *N. dentiae* lacked the fifth gene. The first two genes and last gene were also 71--74, 83--89 and 76--86 % homologous to *csiA*, *csiB* and *csiE* from serogroup I *N. meningitidis*, respectively, with the exception of the first gene in *N. canis,* which had only 56 % aa identity with *csiE* and 68 % aa identity with *csaA* from serogroup A *N. meningitidis*. Distribution of *cps* homologues among *Neisseria* species {#s2-6-5} ---------------------------------------------------------- Mapping the presence of *cps* onto the phylogeny of *Neisseria* species reconstructed from rMLST \[[@R24]\] sequences indicated that *cps* genes were common and widely distributed among *Neisseria* ([Fig. 3](#F3){ref-type="fig"}). Species sharing homologous region A genes did not necessarily belong to a monophyletic group. *N. cinerea*, *N. lactamica*, *N. polysaccharea* and *N. gonorrhoeae* all belonged to a monophyletic group with *N. meningitidis*, and, with the exception of *N. meningitidis*, no isolates from these species were found to possess a complete *cps* locus. The closest encapsulated relative of *N. meningitidis* was *N. subflava*. Absences of *cps* genes in other species were more sporadic, with 2 of 4 *N. elongata* subsp. isolates, *Neisseria shayeganii* (1 isolate), 9 of 10 *N. mucosa* and 13 of 19 *N. subflava* lacking *cps*, although 11 of the *N. subflava* isolates contained remnants of region B genes. ![An ML phylogeny generated from an alignment of concatenated rMLST nucleotide sequences from *Neisseria*, with *Moraxella catarrhalis* as an outgroup. Branch supports are based on 100 bootstrap replicates. Corrected for recombination in ClonalFrameML. Coloured circles indicate the presence of capsule genes. Isolates sharing homology in region A, as shown in [Fig. 2](#F2){ref-type="fig"}, are filled with the same colour. Other isolates are black. Grey circles indicate partial region B genes identified only.](mgen-4-208-g003){#F3} NCBI pBLAST searches indicated that *Kingella kingae*, *B. trehalosi*, *Actinobacillus* sp., *Mannheimia haemolytica* and *H. 
influenzae* possessed homologues of *N. meningitidis* region B and C genes. An unrooted ML phylogeny generated from aligned amino acid sequences of region B and C homologues in these species, *P. multocida* and *Neisseria* indicated that genes in *N. meningitidis* were more closely related to those from *N. subflava* and most other *Neisseria* than any other genera ([Fig. 4](#F4){ref-type="fig"}). ![An unrooted ML phylogeny with 100 bootstraps, generated from an alignment of concatenated amino acid sequences of *ctrABCDEF* (or their equivalent homologues) from *Neisseria* species, and other members of the *Neisseriaceae* and *Pasteurellaceae*.](mgen-4-208-g004){#F4} Discussion {#s1-7} ========== Among the *Neisseria,* the polysaccharide capsule has been considered to be a virulence factor unique to *N. meningitidis.* Although region B and C genes had been identified previously in an isolate of *N. subflava,* in the absence of further evidence at the time, this was attributed to an isolated HGT event facilitated by a DNA uptake sequence in *ctrA* \[[@R21]\]. In this study, homologues of all the conserved region B and C genes in multiple NPN species from across the genus have been identified, with accompanying putative capsule synthesis loci. On the balance of evidence from genomic data, including the comparable synteny between most NPN species and the meningococcal *cps* ([Fig. 1](#F1){ref-type="fig"}), and the high homology of several putative capsule synthesis genes to those of *N. meningitidis* and other species ([Fig. 2](#F2){ref-type="fig"}, [Table 1](#T1){ref-type="table"}), the candidate region A genes identified were most likely to function in capsule synthesis. The discovery of capsule genes in non-pathogenic bacteria is not unprecedented, with a similar finding in the *Streptococcus mitis* group streptococci overturning the assumption that capsule production was unique to the pathogenic *Streptococcus pneumoniae* \[[@R41]\]. In common with many virulence factors \[[@R42]\], including the type IV pilus \[[@R20]\], the capsule might be better described as a 'host adaptation factor' \[[@R21]\], with effects on pathogenic potential being incidental. Region A annotations were consistent with the potential for more than one capsular group within *N. elongata*, and possibly *N. subflava* and *N. oralis* ([Fig. 2](#F2){ref-type="fig"}). Differences observed among *N. subflava* isolates were comparable to the divergence between the polysialyltransferase-encoding *csb* and *csc*, which give rise to the structural differences between meningococcal serogroups B and C, respectively \[[@R15]\]. The presence of multiple groups or types is commonplace among Gram-negative bacteria, including *E. coli*, *H. influenzae* and *Mannheimia haemolytica* \[[@R4]\]. In *E. coli*, over 80 structurally different capsules exist, some of which are associated with specific pathologies, or are only expressed at certain temperatures \[[@R4]\]. The range of niches exploited by *E. coli*, including different hosts and tissues, as well as free-living environments, may be responsible for this diversity \[[@R44]\]. *Neisseria* do not demonstrate such a wide exploitation of niches, but it has been demonstrated that different species have tropisms for specific nasopharyngeal sites \[[@R45]\]. In *N. meningitidis*, isolates expressing a capsule from serogroups A, B, C, W, X or Y are associated with IMD \[[@R2]\], leading to an interest in the evolutionary history of the capsule. 
Models presented previously, hypothesizing that *N. meningitidis* must have acquired a capsule by HGT \[[@R3]\], can be re-examined in light of the data presented here. The identification of capsules in 13 NPN species does not preclude the acquisition of capsule genes in *N. meningitidis* by HGT. Notably, capsule genes have still not been identified in any isolates belonging to the monophyletic group that contains *N. cinerea*, *N. lactamica*, '*N. bergeri'*, *N. polysaccharea*, *N. gonorrhoeae* and *N. meningitidis*, with the exception of *N. meningitidis* itself ([Fig. 3](#F3){ref-type="fig"}). This uneven distribution of capsule genes must be explained by acquisition and/or loss \[[@R46]\]. Given the predicted common ancestor of these species, for the capsule to be present only in certain *N. meningitidis* isolates, the capsule genes must either have been lost independently as many as six times, or lost once in a common ancestor of the monophyly and re-acquired in *N. meningitidis.* The latter is the most evolutionarily parsimonious explanation, although a scenario between these two extremes is also possible. Further support for an HGT event comes from the duplication of region D genes within the *N. meningitidis cps*, which is attributed to illegitimate recombination in the *galE* gene in a model proposed by Bartley *et al.* \[[@R22]\]. A satisfactory alternative explanation for the organization of the meningococcal *cps* has not been proposed to date. It has been posited that the donor of capsule genes to *N. meningitidis* may have been a member of the *Pasteurellaceae*, based on *cps* organization and sequence similarity between regions B and C of *N. meningitidis* and equivalent genes in *P. multocida* \[[@R3]\]. This is not consistent with the phylogenetic data presented here, which show that regions B and C of the *N. meningitidis cps* more closely resembled homologues of *N. subflava* than any other genus ([Fig. 4](#F4){ref-type="fig"}). Therefore, if the *N. meningitidis cps* was acquired by HGT, regions B and C at least were more likely to have been acquired from another *Neisseria* species. Recombination between closely related species is more probable, due to higher similarity of flanking sequences and the increased potential for a compatible DNA uptake sequence. HGT between *Neisseria* species has been described previously, including cross-species exchanges of *pilE,* another gene with links to virulence \[[@R21]\]. Acquisition of capsule genes in *H. influenzae* has also been proposed to be a result of HGT from a commensal species of the same genus \[[@R49]\]. Interestingly, the suggested organization of the donor ancestral *cps* island described in the model by Bartley *et al.* \[[@R22]\] matches the organization of the *N. subflava cps* described in the present study; however, a potential donor could alternatively be a close relative that has either not been previously isolated or since become extinct. The origin of region A genes in *N. meningitidis*, responsible for the differences in capsule serogroups, is less clear, since none of the isolates annotated here possessed a full complement of region A loci with a close resemblance to *N. meningitidis* serogroups. Based on sequence similarity among capsule synthesis genes of *Haemophilus*, *Actinobacillus*, *Mannheimia* and *Neisseria*, a case for horizontal exchange of capsule synthesis genes across genera and the formation of mosaic complements of genes has been made \[[@R50]\]. 
The lower G+C content of region A (as low as 25--45 %) compared to the rest of the genome (\~50 %), a phenomenon also seen in some *E. coli* capsular types, has also been cited as evidence for cross-genus horizontal acquisition of capsule synthesis genes \[[@R15]\]. The exact nature of this potentially complex evolutionary history, and the degree of exchange in recent evolutionary time, remain unclear. The discovery of capsule genes in NPN highlights the polysaccharide capsule's role in asymptomatic colonization and transmission, an important stage in meningococcal epidemiology; however, the acquisition of capsule by some genotypes of *N. meningitidis* has had an important impact on their behaviour, increasing their propensity to cause disease. Sequence similarity between NPN capsule genes and *N. meningitidis* sheds some light on the complicated evolutionary processes in these highly transformable organisms. Further sequencing of NPN, as well as other oral and nasopharyngeal commensals, may provide additional insights into the emergence of pathogenic serogroups in this important pathogen. Data bibliography {#s1-8} ================= 1. Schoen C, Blom J, Claus H, Schramm-Gluck A, Brandt P *et al*. pubMLST id 30 (2008). 2. Tettelin H, Saunders NJ, Heidelberg J, Jeffries AC, Nelson KE *et al*. pubMLST id 240 (2000). 3. Parkhill J, Achtman M, James KD, Bentley SD, Churcher C *et al*. pubMLST id 613 (2000). 4. Bentley SD, Vernikos GS, Snyder LA, Churcher C, Arrowsmith C *et al*. pubMLST id 698 (2007). 5. Lewis LA, Gillapsy AF, McLaughlin RE, Gipson M, Ducey TF *et al*. pubMLST id2855 (2015). 6. Bennet JS, Jolley KA, Earle SG, Corton C, Bentley SD *et al*. pubMLST ids 2863, 19077, 19091, 49339-49342, 49345-49349, 49351-49353 and 49358-49368 (2012). 7. Marri PR, Paniscus M, Weyand NJ, Rendon MA, Calton CM *et al*. pubMLST ids 3565, 1473903 and 14740 (2010). 8. Bennet JS, Jolley KA, Maiden MC. pubMLST ids 5197, 5354, 8778, 8837, 19940, 20515, 20516, 21038-21043 and 21045-21047 (2013). 9. Fulton L, Clifton S, Chinwalla AT, Mitreva M, Sodergren E *et al*. pubMLST id 5544 (2013). 10. Bennet JS, Bentley SD, Vernikos GS, Quail MA, Cherevach I *et al*. pubMLST id 8851 (2010). 11. Peng J, Yang L, Yang F, Yang J, Yan Y *et al*. pubMLST id 12672 (2008). 12. Chung GT, Yoo JS, Oh HB, Lee YS, Cha SH *et al*. pubMLST id 13685 (2008). 13. HMP Consortium. pubMLST ids 21044, 21048, 21049 and 21060 (2013). 14. Harrison OB, Bennet JS, Derrick JP, Maiden MC, Bayliss CD. pubMLST ids 21063 and 21064; GenBank accession no. HF562984.1 (2013). 15. Irish Meningococcus Genome Library. pubMLST id 26870 (2013). 16. Weyand NJ, Ma M, Phifer-Rixey M, Taku NA, Rendon MA *et al*. pubMLST id 29520 (2016). 17. Wolfgang W. pubMLST id 30325 (2016). 18. Liu G. pubMLST id 36317 (2015). 19. Abrams AJ, Trees DL, Nicholas RA. pubMLST ids 46275 and 46276 (2015). 20. Chong TM, Ng KT, Chan KG. pubMLST id 46753 (2014). 21. Craig Venter Institute. pubMLST id 49373 (2016). 22. Parkhill J. NCTC 3000 samples ERS950465, ERS1058919 and ERS980032 (2017). 23. Pathogen Informatics (Sanger). GenBank accession no. LT571436.1 (2016). 24. Muzny D, Qin X, Deng J, Jiang H, Liu Y *et al*. GenBank accession no. GL878494.1 (2013). 25. Calcutt MJ, Foecking MF, Mhlanga-Mutangadura T, Reilly TJ. GenBank accession no. CP009159.1 (2014). 26. MacInnes J, Kropinski AM, Nash JHE. GenBank accession no. CP003875.1 (2017). 27. May BJ, Zhang Q, Li LL, Paustian ML, Whittam TS *et al*. GenBank accession no. AE004439.1 (2014). 28. 
Harhay GP, McVey DS, Koren S, Phillippy AM, Bono J *et al*. GenBank accession no. CP006954.1 (2014). 29. Harhay GP, McVey S, Clawson ML, Bono J, Heaton MP *et al*. GenBank accession no. CP006955.1 (2014). 30. Harhay GP, McVey S, Clawson ML, Bono J, Heaton MP *et al*. GenBank accession no. CP006956.1 (2014). 31. Harhay GP, Koren S, Phillippy A, McVey DS, Kuszak J *et al*. GenBank accession no. CP003745.1 (2013). 32. Buettner F, Martinez-Arias R, Goesmann A, Baltes N, Tegetmeyer H *et al*. GenBank accession no. CP001091.1 (2014). 33. Xu Z, Zhou Y, Li L, Zhou R, Xiao S *et al*. GenBank accession no. CP000687.1 (2014). 34. Foote SJ, Bosse JT, Bouevitch AB, Langford PR, Young NM *et al*. GenBank accession no. CP000569.1 (2014). 35. Su Y-C, Horhold F, Singh B, Riesbeck K. GenBank accession no. CP005967.1 (2014). 36. Bidet P. GenBank accession no. LN869922.1 (2015). 37. Harhay GP, McVey S, Clawson ML, Bono J, Heaton MP *et al*. GenBank accession no. CP006957.1 (2015). 38. Hauglund MJ, Tatum FM, Bayles DO, Maheswaran SK, Briggs RE. GenBank accession no. CP006573.1 (2015). 39. Iskander M, Hayden K, Van Domselaar G, Tsang R. GenBank accession no. CP017811.1 (2016) 40. Heaton MP, Harhay GP, Smith TP, Bono JL, Chitko-McKown CG. GenBank accession no. CP011099.1 (2015). 41. Koren S, Harhay GP, Smith TPL, Bono JL, Harhay DM *et al*. GenBank accession no. CP006619.1 (2013). 42. Haugland MJ, Tatum FM, Bayles DO, Chriswell BO, Maheswaran SK *et al*. GenBank accession no. CP005972.1 (2013). 43. Eidam C, Poehlein A, Brenner Michael G, Kadlec K, Liesegang H *et al*. GenBank accession no. CP005383.1 (2015). 44. Harhay GP, Koren S, Phillippy AM, McVey DS, Kuszak J *et al*. GenBank accession no. CP004752.1 (2017). 45. Hauglund MJ, Tatum FM, Bayles DO, Chriswell BO, Maheswaran SK *et al*. GenBank accession no. CP006574.1 (2015). 46. Zomer A, de Vries SP, Riesbeck K, Meinke AL, Hermans PW *et al*. GenBank accession no. AMSO00000000.1 (2014). 47. Jolley KA, Kalmusova, Feil EJ, Gupta S, Musilek M *et al*. pubMLST ids 985 and 1575 (2000). Supplementary Data ================== ###### Supplementary File 1 ###### Click here for additional data file. Abbreviations: HGT, horizontal genetic transfer; IMD, invasive meningococcal disease; ML, maximum likelihood; NCBI, National Center for Biotechnology Information ; NPN, non-pathogenic Neisseria; rMLST, ribosomal multi-locus sequence typing; WGS, whole-genome sequence. One supplementary table is available with the online version of this article. The work was funded by the Wellcome Trust (109025/Z/15/A to M. C. J. M.; 104992/Z/14/Z to M. E. A. C.; O. B. H. was supported by the Wellcome Trust Institutional Strategic Support Fund). This publication made use of the *Neisseria* Multi Locus Sequence Typing website (<https://pubmlst.org/neisseria/>), developed by Keith Jolley and Martin Maiden \[[@R23]\] and sited at the University of Oxford, UK. The development of this site has been funded by the Wellcome Trust and the European Union. The authors are grateful to James Bray for assembly of genomes and implementation of rMLST-based speciation classification within pubMLST. The authors declare that there are no conflicts of interest.
null
minipile
NaturalLanguage
mit
null
Katy Branch Library pilots coding club for girls this month
A fifth-grader works to code his Lego vehicle at school, which received a grant to purchase Legos to help students learn about coding. Photo: Blaine McCartney, MBO
Students code their Legos vehicle at a school in Cheyenne, Wyo. A teacher received a grant to purchase the Legos to help students learn about coding. Photo: Blaine McCartney, MBO
The Katy Branch Library is one of only four branches of the Harris County Public Library that will host Playbots Coding Clubs in January, and Harris County is the only library in Texas to receive the national grant. Mandy Carrico, HCPL adult services librarian of programs, partnerships and outreach, applied for the grant, intended to reach out to those underrepresented in computer science careers such as girls, minorities and people with disabilities. "The grant is providing funding for coding programs for underserved groups," said Angel Hill, branch manager of the Katy library. "The group that we decided to do it for is junior high girls. A lot of girls come in the library." Katy and the Lone Star College-Tomball Community Library are offering a girls edition of Playbots to promote their interest in science and technology. The Tomball library had filled its slots by Dec. 19. Openings still remained, however, at the Jacinto City Branch Library and North Channel Branch Library for Playbots clubs. The clubs will focus on coding basics and concepts, applied coding, film work and community connections, according to the HCPL. North Channel and Jacinto City branches' customers and communities are high in the audiences for which we were looking, said Carrico. At the other two branches, "we're celebrating women in these careers," she said. The eight-week course is open to girls ages 11-14 at the Katy Branch Library from Jan. 8-Feb. 19. The girls will create short videos about their communities starring Lego Mindstorms robots that they will build and code. "For the Katy branch," Hill said, "No matter what sort of activity we have for the teens, we seem to get the junior high girls. The kids already are coming into the library." She noted that Katy Junior High School is down the road from the library, which is at 5414 Franz Road, which may be a contributing factor. Hill added, "We're really glad they let us aim at the ones coming into the library." The grant provides funding for three staff members of the Katy branch to go for training this month to lead the club, said Hill. Carrico said they will learn coding, filming and editing. A party for family and friends to view the films created by the club members is set for 5 p.m. Feb. 28, Hill said. Carrico's division helps to provide resources and find grants to assist branches with programming, Hill said. "They're great at getting outside resources that a local branch may not be aware of or recognize as something that we can do. They find outside groups and businesses to partner with to fund library programs." 
Playbots is a coding club made possible by a grant from the American Library Association and Google. In October, the HCPL reported it was the only library in Texas to receive the ALA/Google Libraries Ready to Code grant. Carrico said the HCPL grant is under $25,000. The ALA announced more than $500,000 in grants for 28 libraries in 21 states plus the District of Columbia to design and implement coding programs for young people. Grantees were selected from a pool of more than 400 public and school libraries and officials said it is the first time the association has dedicated funding for computer science programs in libraries. "The Libraries Ready to Code grants are a landmark investment in America's young people and in our future," said ALA President Jim Neal in a news release. "As centers of innovation in every corner of the country, libraries are the place for youth – especially those underrepresented in tech jobs – to get the CS skills they need to succeed in the information age. These new resources will help cultivate problem-solving skills, in addition to coding, that are at the heart of libraries' mission to foster critical thinking." Carrico said HCPL and the other grant recipients will build a toolkit for other libraries to use. The grant will allow libraries to learn what works and doesn't and to fine-tune the curriculum and turn around and share it and adapt it to other libraries, she said. Developed by U.S. libraries, the toolkit will be released in conjunction with National Library Week in April 2018, according to a news release. "We plan to continue after the pilot," said Carrico, "and to use the notes and roll it out to more branches." Part of the grant pays for equipment that can be used again and again, she said, and the county can apply for other grants and combine them with county funds.
null
minipile
NaturalLanguage
mit
null
Bryn Williams-Jones Bryn Williams-Jones (born 1972) is a Canadian bioethicist who since 2010 has directed the Bioethics Program at the School of Public Health, Université de Montréal, and is professor in the Department of Social and Preventive Medicine. He is co-founder and editor-in-chief of the Canadian Journal of Bioethics/Revue canadienne de bioéthique, the first open access bilingual bioethics journal in Canada (formerly called BioéthiqueOnline, 2012–17), and is a member of the Public Health Research Institute of the Université de Montréal (IRSPUM) and the Centre for Ethics Research (CRÉ). Education An interdisciplinary scholar, Williams-Jones completed a bachelor's degree in Philosophy and then a Masters in Religious Studies (bioethics specialization) at McGill University, before pursuing his PhD in Interdisciplinary studies (bioethics) at the W. Maurice Young Centre for Applied Ethics at the University of British Columbia, where he focused on issues of genetics and ethics. He then did a post-doctoral fellowship at the Centre for Family Research, University of Cambridge, and was a junior research fellow at Homerton College. Before taking up his current position at the Université de Montréal, he worked for a year as a research ethicist at the Cardiff Institute of Society, Health and Ethics, Cardiff University, Wales. Research Williams-Jones is interested in the socio-ethical and policy implications of health innovations in diverse contexts. His work examines the conflicts that arise in academic research and professional practice with a view to developing practical ethical tools to manage these conflicts when they cannot be avoided. He has published more than 100 articles, commentaries, book chapters and case studies, on topics related to public health policy, regulation and science and technology innovation on subjects including genetics, pharmaceutical development, direct-to-consumer advertising, nanotechnology, and pharmacogenomics. He has also published on the responsible conduct of research (i.e., research integrity, research ethics), with a focus on the management of conflicts of interest. Academic service Williams-Jones is active in developing innovative pedagogical approaches in professional ethics, public health ethics, and research integrity, and has served on university committees in the Faculty of Graduate Studies and in the School of Public Health, to develop governance initiatives to encourage the responsible conduct of research and prevent misconduct (such as plagiarism and conflicts of interest). He is a member specialized in ethics of the top-level University Research Ethics Committee (CUER), and has served on expert advisory committees for the Canadian Institutes of Health Research, the Social Sciences and Humanities Research Council of Canada, Genome Canada, and the Quebec National Institute for Excellence in Health and Social Services (INESSS). Media Williams-Jones has been interviewed by LaPresse, CBC, Le Devoir, Toronto Star, National Post, and appeared on radio and television shows such as Tout le monde en parle, ICI Radio-Canada, and CBC Newsworld. References Category:Living people Category:1972 births Category:McGill University alumni Category:University of British Columbia alumni Category:Université de Montréal faculty Category:Canadian ethicists Category:Bioethicists
null
minipile
NaturalLanguage
mit
null
Sunday, July 3, 2016 Fireworks: Leave the crackle, bang, boom to professionals COLUMBUS - It wouldn't be the Fourth of July without the crackle, bang and boom of fireworks, but state safety officials are encouraging Ohioans to be smart when celebrating. Not only are bottle rockets, roman candles and most other fireworks not allowed to be discharged in Ohio without a license, they also are quite dangerous, says Chief Frank Conway, Ohio Division of State Fire Marshal's Fire Prevention Bureau. Fireworks were the cause of more than 10,000 injuries treated in the nation's emergency rooms in 2014, which is why Conway recommends leaving fireworks to the professionals. "It's the time of year we like to celebrate our independence and appreciate what has been done for us as Americans," he says. "And to do that, I think we just need to participate in the local events that are conducted in a safe manner by trained individuals." Trick and novelty fireworks are permitted under Ohio state law without a permit. Those are the smokes, snaps, snakes and sparklers sold in local drug stores and supermarkets. An Ohio State Bar Association summary of Ohio's fireworks laws is here. According to the organization Prevent Blindness, 40 percent of fireworks injuries in 2014 involved children younger than 15, and 1,400 injuries were caused by sparklers. Conway notes that sparklers burn at more than 2,000 degrees Fahrenheit. "Just be aware of the fact of how hot they are and the potential for burns," he says. "You know, you wouldn't allow your child to run through the yard with a cutting torch in their hand. So, safety needs to be on your mind with these devices." If someone is caught violating fireworks laws, Conway says not only will the fireworks be taken away, but first-time offenders could face a $1,000 fine or six months in jail.
null
minipile
NaturalLanguage
mit
null
Q: R Sweave not interpreted correctly due to writing style? This case won't get correctly interpreted. I doubt it must be to do something with my writing style but I cannot notice what it is. You can run it with R CMD Sweave Test.Rnw; pdflatex Test.tex produces the Test.tex file and then compiling to Test.pdf. What is causing the running to result into gibberish? Why are the Sweave blocks not properly interpreted? Test.Rnw causing gibberish \documentclass{article} \usepackage{subcaption} \usepackage{graphicx} \usepackage{pdfpages} \usepackage{Sweave} \begin{document} . \begin{figure} \centering \begin{subfigure}[b]{0.55\textwidth} <<>>= all.lab <- c("Arr./A", "Sin", "Digox") all.dat <- c(2274959, 1531001, 2406739) barplot(all.dat, names.arg=all.lab, col="darkblue", ylab="Average byte size", xlab="Groups") grid() @ %\includepdf{fig1.pdf} \caption{a} \label{fig:Ng1} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} <<>>= age.lab <- c("Arr. +65", "Arr. [45,65]", "Arr", "Sin", "Sin +33") age.dat <- c(2274959, 1481397, 773624, 874208, 1087411) barplot(age.dat, names.arg=age.lab, col="darkblue", ylab="Average byte size", xlab="Groups") grid() @ %\includepdf{fig2.pdf} \caption{b} \label{fig:Ng2} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} %\includepdf{fig3.pdf} <<>>= gender.lab <- c("Arr. f", "A m", "Sin f", "Sinu", "Digox f", "Digon m") gender.dat <- c(1416043, 2448017, 1421385, 537783, 1256545, 1181350) barplot(gender.dat, names.arg=gender.lab, col="darkblue", ylab="Average byte size", xlab="Groups") grid() @ \caption{c} \label{fig:Ng2} \end{subfigure} \caption{MAIN CAPTION} \end{figure} \end{document} A: The error is that you have indented the Sweave block, you must unindent the block. Sweave seems to be sensitive to indentation. \documentclass{article} \usepackage{subcaption} \usepackage{graphicx} \usepackage{pdfpages} \usepackage{Sweave} \begin{document} \begin{figure} \centering \begin{subfigure}[b]{0.55\textwidth} <<fig=TRUE>>= all.lab <- c("Arr./A", "Sin", "Digox") all.dat <- c(2274959, 1531001, 2406739) barplot(all.dat, names.arg=all.lab, col="darkblue", ylab="Average byte size", xlab="Groups") grid() @ \caption{a} \label{fig:Ng1} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} <<fig=TRUE>>= age.lab <- c("Arr. +65", "Arr. [45,65]", "Arr", "Sin", "Sin +33") age.dat <- c(2274959, 1481397, 773624, 874208, 1087411) barplot(age.dat, names.arg=age.lab, col="darkblue", ylab="Average byte size", xlab="Groups") grid() @ \caption{b} \label{fig:Ng2} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} <<fig=TRUE>>= gender.lab <- c("Arr. f", "A m", "Sin f", "Sinu", "Digox f", "Digon m") gender.dat <- c(1416043, 2448017, 1421385, 537783, 1256545, 1181350) barplot(gender.dat, names.arg=gender.lab, col="darkblue", ylab="Average byte size", xlab="Groups") grid() @ \caption{c} \label{fig:Ng2} \end{subfigure} \caption{MAIN CAPTION} \end{figure} \end{document} Minimal Working Examples The way I solved the issue was to encapsulate the issue, I noticed that there seems to be some conflicting character, block or space in the code. I provide below small working examples that made me realise that the error must be in the indentation. Example 1. OP's subcase working with Sweave \documentclass{article} \usepackage{graphicx} \usepackage{Sweave} \begin{document} \begin{figure} <<fig=TRUE>>= library(ggplot2) all.lab <- c('Arr./A', 'Sin', 'Digox') all.dat <- c(2274959, 1531001, 2406739) ggplot() + geom_bar(aes(x=all.lab, y=all.dat), stat='identity') @ \end{figure} \end{document} Example 2. 
Ggplot working with Sweave \documentclass{article} \usepackage{graphicx} \usepackage{Sweave} \begin{document} \SweaveOpts{concordance=TRUE} \begin{figure} \centering <<fig=TRUE>>= library(ggplot2) p <- ggplot(diamonds, aes(x=carat)) p <- p + geom_histogram() print(p) @ \end{figure} \end{document} Example 3. Base commands working with Sweave \documentclass{article} \usepackage{graphicx} \usepackage{Sweave} \begin{document} \begin{figure} \centering <<fig=TRUE>>= plot(1:10) @ \end{figure} \end{document} Run R CMD Sweave test.Rnw pdflatex test.tex open test.pdf
null
minipile
NaturalLanguage
mit
null
Q: json format to list of floats How can I change the type of my string list to float in Python? This is my data: {"time":20180124,"data1m":"[[1516752000,11590.6,11616.9,11590.4,11616.9,0.25202387],[1516752060,11622.4,11651.7,11622.4,11644.6,1.03977764]]"} This list is assigning every element an index like: a[0] = [, a[1] = [ and a[2] = 1, I want [1516752000,11590.6,11616.9,11590.4,11616.9,0.25202387] to be stored in one index and then a[0][0] = 1516752000 and so on. How can I do this? A: You can use the json module to convert the string value of data1m into a list-of-lists, and then convert the elements of sub-lists into floats like this: import json from pprint import pprint data = {"time":20180124, "data1m":"[[1516752000,11590.6,11616.9,11590.4,11616.9,0.25202387]," "[1516752060,11622.4,11651.7,11622.4,11644.6,1.03977764]]"} a = json.loads(data["data1m"]) for index, row in enumerate(a): a[index] = list(map(float, row)) pprint(a) Output: [[1516752000.0, 11590.6, 11616.9, 11590.4, 11616.9, 0.25202387], [1516752060.0, 11622.4, 11651.7, 11622.4, 11644.6, 1.03977764]]
null
minipile
NaturalLanguage
mit
null
"You've got to stay in shape, and you've got to pay attention to what's going on around you," the Roush Fenway Racing driver said. "I'm in better shape now than I was five years ago -- I can certainly tell you that." So what has been the biggest key to "The Biff" in improving his physical fitness in the past few years? "Got my ass in the gym and started paying attention to what I eat," Biffle said. "You get comfortable, right? You're winning races and doing whatever. Our sport, I think, is a lot different than a lot of sports, and the endurance and shape of the driver at some point has something to do with it. But the mental sharpness and reflex skill and knowledge a lot of times are more important than the physique or physical skill of a guy. "You look at Carl [Edwards] and look at Kyle [Busch] or look at Tony Stewart. Just because Carl's in better shape or lifts weights or does whatever doesn't mean that he's a better driver or better behind the wheel than Kyle is, or Tony Stewart or anybody else. But you've got to at least be at some level of conditioning in order to do what we do."
null
minipile
NaturalLanguage
mit
null
Voltage signals have an upper limit and a lower limit and a voltage swing therebetween. Circuits may be designed to work with high or low voltages, may be designed for high or low swings, and may be designed to work near a saturation region or in the saturation region. The lower the swing, the faster processing can occur. Often the swing of a signal is sufficient but the upper or lower parameters of the signal need to be adjusted. For example, the signal may need to be shifted up or down so that a transistor receiving the signal operates in the saturation region. Shifting a signal entails maintaining the swing (absolute voltage drop) of the signal while moving the upper and lower limits of the signal. Devices for shifting the signal may be complex or may be based on the current drawn by a load connected to the shifting device. Relying on the current drawn by the load requires excess power consumption. If the load is modified then the current drawn may be modified and the voltage shift may change accordingly.
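As a simple worked illustration (the numbers here are arbitrary and chosen only for explanation, not drawn from the text above): a signal that swings between 0.2 V and 1.0 V has a swing of 0.8 V. Shifting that signal up by 0.5 V moves its limits to 0.7 V and 1.5 V while leaving the 0.8 V swing unchanged; in other words, the output simply tracks the input plus a constant offset, Vout = Vin + Vshift.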
null
minipile
NaturalLanguage
mit
null
RAN translation Repeat Associated Non-AUG translation, or RAN translation, is an irregular mode of mRNA translation that can occur in eukaryotic cells. Mechanism For the majority of eukaryotic messenger RNAs (mRNAs), translation initiates from a methionine-encoding AUG start codon following the molecular processes of 'cap-binding' and 'scanning' by ribosomal pre-initiation complexes (PICs). In rare exceptions, such as translation by viral IRES-containing mRNAs, 'cap-binding' and/or 'scanning' are not required for initiation, although AUG is still typically used as the first codon. RAN translation is an exception to the canonical rules as it uses variable start site selection and initiates from a non-AUG codon, but may still depend on 'cap-binding' and 'scanning'. Disease RAN translation produces a variety of dipeptide repeat proteins by translation of expanded hexanucleotide repeats present in an intron of the C9orf72 gene. The expansion of these hexanucleotide repeats, and thus the accumulation of dipeptide repeat proteins, is thought to cause cellular toxicity that leads to neurodegeneration in ALS. See also Trinucleotide repeat disorder Eukaryotic translation C9orf72 References External links Category:Biology Category:Molecular biology Category:Biochemistry Category:Protein biosynthesis Category:RNA Category:Proteins Category:Neurodegenerative disorders
null
minipile
NaturalLanguage
mit
null
Angiotensin receptor sites in renal vasculature of rats developing genetic hypertension. The purpose of the present study was to characterize angiotensin II (ANG II) receptors in renal resistance vessels of young spontaneously hypertensive rats (SHR). ANG II receptor subtypes were evaluated in biochemical and functional terms using nonpeptide ANG II antagonists of the types AT1 (Dup-753 and Dup-532) and AT2 (PD-123319 and CGP-42112). In vitro radiolabeled ligand binding studies were performed on preglomerular resistance vessels freshly isolated from kidneys of SHR and Wistar-Kyoto (WKY) rats. The method of isolation and purification of renal microvessels was based on iron oxide infusion into the kidneys and separation of the vessels with the aid of a magnetic field followed by successive passages through various sized sieves. Physiological receptor expression was evaluated in vivo by measuring renal blood flow responses to ANG II injected alone and in a mixture with a receptor antagonist into the renal artery of indomethacin-treated rats. Our results indicate the existence of at least two functional (vasoconstriction mediating) subtypes of ANG II receptor sites in the renal microcirculation. Eighty percent of the ANG II receptor sites displayed high affinity to Dup-753 and Dup-532 and low affinity to PD-123319 and CGP-42112, whereas the remaining 20% of sites showed low affinity to Dup-753 and Dup-532 and CGP-42112 and intermediate affinity to PD-123319. In addition, the renal vasculature of young SHR and WKY displays similar ANG II receptor characteristics and identical blood flow responses to ANG II and to mixtures of ANG II and its antagonists.
null
minipile
NaturalLanguage
mit
null
London Tube trains regularly operate on a line that never appears on any Tube map. As one of the few examples of Underground car cascading, retired Tube trains have been operating on the Isle of Wight’s Island Line since 1967. This special relationship has lasted for decades, with two generations of retired Underground trains migrating south, like people, to retire by the seaside. But this time round as replacements are sought once again, the apparently simple solution of buying in a third generation of time expired Tube stock looks less likely, as a host of other problems have come to the fore. The problem is working out what comes next. The current generation Class 483s at Ryde St John’s Road in 2017 (author) When the Isle of Wight’s railway was electrified in 1967 by British Rail (BR)’s Southern Region, they imported ex-London Underground Standard Stock dating from the 1920s to comprise the train fleet. The Island Line’s Standard Stock, in turn, was replaced in 1989-90 by 1938 Tube stock, brought in by Network SouthEast (NSE). Now, nearly 30 years on, it will soon be time for the 1938 stock’s replacement as well. The Island Line was electrified at the same time as, and had economies of scale with, the Bournemouth electrification project. Whilst the former had been specified to operate 7 car trains up to every 12 minutes, it has degraded significantly in recent years. What is Island Line? Isle of Wight railway map (DfT) Island Line runs between Ryde Pier Head station, half a mile out to sea (at high tide) on the north-east coast of the Isle of Wight, and Shanklin on the Island’s south-east coast. There are six intermediate stations, and the line is all that remains of a once island-wide railway network. Whilst the remainder closed to regular passenger service, the Ryde Pier Head to Shanklin section was electrified. It is single track south of Smallbrook Junction station, where connections can be made with the preserved Isle of Wight Steam Railway, and there is a passing loop at Sandown. The down line between Ryde Pier Head and Ryde Esplanade is currently rusting and unused, with that section now effectively operated as a single track railway. The third dimension Ryde Tunnel (Wiki Commons) The reason for the use of Tube stock, and by extension the unusual association between railway operations under London and on the Isle of Wight, is the restricted clearances in Ryde Tunnel and under Bridge 12 near Smallbrook Junction. The maximum height of rolling stock on the line was limited to 11’ 8” (3.56m) even in steam days, some 25cm less than on the mainland. When the line was electrified, BR Southern Region took the opportunity to raise the track bed in Ryde Tunnel by another 25cm, in an only-partially successful attempt to address recurrent flooding problems there. Being close to the sea, and the track bed being lower than sea level, flooding risks have been ever present. Ryde Esplanade flooding (Wiki Commons) The raising of Ryde Tunnel’s track bed in 1967 made the height restriction even more severe. The headline figure for the current maximum height of stock is approximately 3.3m, though much depends on the interaction between the curved roof of the tunnel and the roof profile of trains. A train which is just under 3.3m in height might fit at its centre line, but not at its outside edges. At the time of electrification, the only easily available trains small enough to fit through Ryde Tunnel were the Tube’s Standard Stock, at 2.88m in height. 
Nomenclature The lack of the definite article in “Island Line” is on purpose, being the brand created by Network SouthEast in the 1980s and which has become ingrained, in the same way as people refer to “Thameslink” rather than “the Thameslink”. To its credit, the current train operating company (TOC) has kept the moniker. What’s the Problem(s)? The existing fleet of Class 483s (as the 1938 stock was classified on the BR ‘TOPS’ locomotives and rolling stock numbering system) is 80 years old and can’t go on for much longer. The Class 483 fleet was nine strong when it arrived, with a tenth unit for spares, but due to ageing and seaside corrosion it has steadily been contracting ever since. Ex-GNER Chief Executive Christopher Garnett undertook a study into Island Line in 2016 for Isle of Wight Council during the South Western refranchising process, of which Island Line is currently (a minute) part. At that time he found that just five of its trains remained serviceable. Dino 483 at Ryde Pier Head in 2005 (author) A sixth, unit 002 ‘Raptor’, is out of service and makes for a forlorn sight, slowly being cannibalised outside Ryde St Johns Road depot. The current peak timetable requirement is for just two trains, so a fleet of five trains sounds sufficient, but in fact the ageing trains are getting increasingly unreliable. One of them suffered a minor fire at Ryde Pier Head last November. The saline environment in which they operate takes its toll on the Class 483s (as the state of their roofs often shows) and with scheduled maintenance requirements, train availability can be tight. Despite that, Island Line maintains impressive punctuality and reliability. 99.3% of services arrived within five minutes of schedule in the 12 months to January 2018, and 99.6% of services were operated. Although the Class 483s are two-car trains, platforms are long enough to accommodate four-car formations. These operate during the summer months, when the additional capacity is helpful in dealing with the large groups of tourists arriving from Wightlink’s high-speed catamaran ferry once or twice hourly, which sails between Portsmouth Harbour and Ryde Pier Head stations. Additional ferry passengers join and leave trains at Ryde Esplanade upon transferring with Hovertravel’s hovercraft service to Southsea. Ryde Esplanade station and hovercraft, Ryde Pier Head in distance (author) The Wightlink catamaran ferry (Portsmouth Harbour to Ryde Pier Head) operates twice an hour Mon-Fri peak hours (not quite every 30 minutes, the headways are 25/35 minutes), spring/summer Saturdays and occasional summer weekdays. Otherwise the service is hourly. But only one Island Line train per hour connects with them in a timely fashion, particularly in the southbound direction. The fourth dimension – Time Twice, ex-London Underground Tube stock has been brought over to the Island to meet Island Line’s unusual rolling stock requirements, so why not thrice? The first reason is that it looks unlikely that any Tube stock will become available within an acceptable time frame. Despite Transport for London’s decision to sell and lease back Elizabeth line trains to release funding for a replacement fleet of Piccadilly line trains, 2023 would seem to be the earliest that the existing Piccadilly line 1973 stock might become available. By that time the Class 483s will be 85 years old, assuming they survive that long. As we’ll see later, there is no certainty that will be the case.
Furthermore Ryde tunnel is not only low, but has a tight reverse curve with a combination of single and double track bores. Yet the 2016 Garnett report didn’t mention the tunnel curvature specifically, but instead noted that Piccadilly line 1973 stock wouldn’t be suitable for transfer to Island Line because of the curvature at Ryde Esplanade station, which is also quite severe. There are actually two related issues, getting the longer 1973 stock carriages round the curve without fouling the platform, and the distance from the platform of the doors at the carriage centres when stopped at the station. This would be a particular problem at Ryde Esplanade station, which is tightly curved. Vivarail’s refurbished ex-D78 trains have often been suggested as a potential Island Line rolling stock solution. Being approximately the same height as mainline rolling stock, they would require the trackbed in Ryde Tunnel to be re-lowered, but even then they might not be suitable. The 1973 stock driving cars are 17.47m long. The D78 driving cars are 18.37m long. So if a 1973 stock carriage is too long for Ryde Esplanade station, a D78 is even longer. Until someone performs a proper gauging exercise, it’s impossible to rule replacement vehicles in or out. Ex-Bakerloo line 1972 stock was another possibility considered by the Garnett report, but on current plans it would not be available until after the Piccadilly’s 1973 stock is retired, and the date when the Deep Tube Upgrade might allow its release seems to be receding. Conversion of 1972 stock to two-car operation on the Island would be more complex and expensive than it was for the 1938 stock, which had its traction equipment located in the driving trailers. The 1972 stock, on the other hand, has some of its traction equipment spread under trailer cars, and this would need to be transplanted to the driving trailers. The Standard Stock was 40 years old when it arrived on the Island, and the 1938 stock was just over 50 years old. The 1973 stock would be 50 years old on arrival even if it were deemed suitable and made available, whilst 1972 stock would probably be nearly 60 years old by the time it arrived. In a world in which Northern’s Pacers are finally being replaced by modern trains, many Islanders are getting fed up of having decades-old London Underground cast-offs dumped on them. The second, and probably more urgent, reason that a third generation of ex-Tube stock is unlikely on the Island is that the Class 483s are coming to the end of their natural lives at the same time that every other part of Island Line, signalling, power supply and track, seems to be doing exactly the same. There is a perfect storm of simultaneous technological demise taking place. Whilst we’re at it, what else is wrong with Island Line? As mentioned, four-car formations of Class 483s are helpful in dealing with tourist loadings on Island Line during the summer months, but it is clear when such operations are undertaken that something is amiss. Passengers never see both timetabled trains in four-car formations, even in the height of summer. One runs as a four-car formation, and the other as a two-car. 483 interior in 2017 (author) On summer Saturdays shortly after electrification (the changeover day for weekly holidays on the Island), the timetable saw six 7-car trains providing a 12-minute headway service along the line. This was the maximum level of service, using nearly all available carriages. 
However, Garnett’s study found that today, Island Line’s third rail power supply is no longer robust enough to allow two four-car trains to run at once. The voltage drop along the line is apparently so severe that at Shanklin, the third rail is only supplying some 350V out of the normal 630V, which explains the leisurely get-aways passengers experience at that end of the line. Two trains per hour appears to be sufficient to handle current peak patronage levels, but the current timetable is inconvenient with headways between trains of 20 minutes and 40 minutes. The uneven spacing of services is a legacy of the decision to retain a passing loop at Sandown rather than Brading so that a 20/20/20 minute headway train service could be operated (although such a timetable hasn’t operated since 2007, when it did so on summer Saturdays). Train strip map in 2018 (author) Whilst an evenly-spaced 30 minute headway would make journey planning along the line generally easier, the drawback of the current 20/40 minute headway timetable is most noticeable for its impact on ferry connections at Ryde Pier Head. As the track condition has deteriorated over the years and is now poor, so too has the ride quality of old trains suffered. Maintenance arrangements are unusual at Island Line. It is vertically integrated, with the operator leasing the infrastructure from Network Rail – but only down to 450mm below the rails. The Ryde Pier Head station itself looks in need of refurbishment, while the piers themselves impose severe maintenance workloads due to the hostile saline environment in which they are located. Luckily for Island Line’s operators, the Pier itself is not part of the vertical franchise and remains the sole responsibility of Network Rail – but given Network Rail’s other pressures on the mainland, Ryde Pier appears not to be a high priority. Adding to these obvious issues is the fact that Island Line runs at a considerable loss. Although exact figures aren’t easy to come by, Garnett quoted annual revenue of approximately £1m against costs of £4.5m. Assuming that station entry/exits are a reasonable proxy for passenger journeys and that most trips are within the island, the most recent ORR statistics show 628,446 journeys (1,256,892 entries plus exits) when Island Line’s eight stations are taken together. That works out as a subsidy of over £4 per passenger trip. [These figures have been revised] The daily ridership average is almost 7,000 journeys a day, but in reality is significantly higher in the summer and correspondingly lower in the winter. None of the problems Island Line faces have cheap solutions, and it is clear that revenues are not a source of funding given that the line doesn’t come close to covering its running costs. Although a micro-franchise has been suggested in the past to focus on improving the line’s financial position (indeed, at the first round of rail privatisation Island Line was a separate franchise) it has never been clear who would fund its subsidy requirement apart from central government. Garnett noted that the Isle of Wight Council “would not have either the financial resources or skills to be able to operate the Island Line franchise.” Being part of the wider, profitable, South Western franchise means that Island Line’s losses are covered by profits made elsewhere, but the line has received scant attention from franchise owners since its incorporation into the South Western franchise.
Integrated fares Visitors can already take advantage of co-fares with Island Line and other transport: Day Rover passes (£16) allow unlimited travel on Island Line and the Isle of Wight Steam Railway. Return tickets from the mainland cover ferry/hovercraft travel, Island Line from Pier Head to Smallbrook Junction, then Steam Railway. The ferry and hovercraft serving Ryde are passenger only, so Island Line is ideally situated to cater for their custom. The car ferries to/from the mainland alight at other Isle of Wight ports, the closest being a few miles away. New franchise, new consultation The new South Western Railway (SWR) franchise, operated by First/MTR, began in August 2017, although the new company hasn’t yet got round to replacing the signage at Ryde St Johns Road which states that previous franchise operator Stagecoach is still in charge. As part of the new franchise, First/MTR committed to a “key relationship with the Isle of Wight Council and other stakeholders to develop a more sustainable future for Island Line. South Western Railway will now start the consultation phase of the process to deliver improvements for Island Line. This will include setting out a range of options to stakeholders for rolling stock and infrastructure, before submitting ideas to the Government next year.” A formal consultation document was issued in late October 2017, but the exercise wasn’t a full public consultation, remaining unpublished on SWR’s website, and being conducted instead with local stakeholder groups. Independent watchdog Pressure group Railfuture published the consultation document anyway, and it made for interesting reading. Not only did it recognise the obvious problems detailed earlier, but it revealed that Island Line’s situation was even worse than had previously been assumed. For a start, SWR revealed that only three Class 483 trains are currently serviceable; in other words 100% availability is necessary to run the summer two-car and four-car train service. Island Line is just one serious technical failure away from having too few trains to run its summer timetable at full capacity. And because the Class 483s are owned by the franchise, rather than leased, any conventionally-leased replacement trains will add to Island Line’s losses. Confirming that operating costs are around four times higher than revenue, without putting exact amounts to them, the consultation document summed up Island Line’s current challenges by reiterating some of the well-known issues and highlighting others as well: Class 483s do not have a modern on-board customer experience nor meet customer expectations. Given modern rail passenger expectations for features like Wi-Fi, power sockets, on-board digital information displays and the like, it is questionable whether a practicable conversion programme could ever meet them. The 40-minute / 20-minute headway does not serve customer needs. Revenue protection is challenging: guards cannot move between carriages except at stations, and fare evasion is a factor in Island Line’s revenue shortfall. The fare evasion is not always deliberate however – there are no ticket machines at most stations and passengers may be unable to pay because the guard is in the other carriage. Issues around power supply, signalling and infrastructure. The third rail also needs replacing and substations are in poor condition. 
Leasing costs for infrastructure from Network Rail to the franchisee are arranged so that costs increase towards the end of each lease period, adding to Island Line’s costs; the next period ends in 2019. Flooding remains a problem in Ryde Tunnel. Stations require modernisation to provide an appropriate, efficient, and pleasant retail and transport interchange experience. Ryde Esplanade in particular could see its connections with Southern Vectis buses and Hovertravel hovercraft much improved (plans for a substantial rebuild of the station and its interchange arrangements were abandoned in 2009 after costs rose). Taken together, SWR’s analysis suggests that what Island Line needs is the old Network SouthEast approach: a total route modernisation. The trains, track, power supply and signalling all need to be replaced, and the stations modernised, all at the same time. Interestingly, no mention was made of any need for the Class 483s to meet the Rail Vehicle Accessibility Regulations by 2020. Presumably SWR is expecting a dispensation to be applied allowing for their continued use. However, any new trains will have to comply. What is Island Line actually for? Part of the problem in defining the future of Island Line is that there is confusion as to what its main purpose actually is. Recent operators have taken the view that it is primarily a tourist attraction, rather than a public service railway. Network SouthEast treated Island Line as a conventional part of its network, despite its diminutive trains, with NSE branding applied to trains and stations (red lamp-posts etc) alike. But at privatisation, the first Island Line franchise saw trains painted in a tourist-friendly dinosaur livery, relating to the Isle of Wight’s rich fossil heritage and key tourist attraction Dinosaur Isle at Sandown. A recent op-ed in the Island’s newspaper, the Isle of Wight County Press, criticised calls for the modernisation of Island Line, praising instead its current tourist-focussed operation. “Not only do you board a ‘step-back-in-time’ quirky old railway carriage but you bounce along the track with everyone springing up and down in their seats,” it said. “Island Line is one of our Unique Selling Points,” the piece concluded, “let’s make a feature of it.” The Isle of Wight’s official tourism website meanwhile sells the former London Underground trains as giving Island Line, “its very own unique identity and appeal.” Yet Isle of Wight Council says what it actually wants is a “modern and extended Island Line that meets the needs of residents and cuts traffic congestion”. Isle of Wight bus map (Southern Vectis) The Isle of Wight has an extensive and high quality bus network, run by Go-Ahead subsidiary Southern Vectis. However its Routes 2 and 3 parallel Island Line over virtually its whole length, with the two routes combining to provide four buses per hour. But although the quality of the bus on-board environment is superior to that of the Class 483 in almost every way (most Southern Vectis buses have USB charging, many have Wi-Fi and the buses are getting next-stop displays/announcements), a comparison between bus and train journey times shows where Island Line’s unique competitive advantage lies. On a weekday morning in peak time, the Route 2 bus takes 51 minutes from Shanklin station to Ryde Esplanade. The train does it in just 22 minutes. Both buses and private cars have to use the Island’s narrow, twisty, and (in holiday periods) heavily congested roads.
The present 20/40 minute train headways are hardly competitive, as the waiting time could be longer than the full bus journey over certain sections. If the train service were more frequent, or at least more regular, it would surely help generate more traffic. Buses also cannot run along the weight-restricted Ryde Pier to meet the ferry, so the train has a further advantage. For travellers using the ferry, or trying to get between the main towns on the east of the Island, the train is faster and more reliable than any other option including cars, despite the less-than-contemporary passenger environment on board. So there is an ‘express’ travel niche that Island Line could usefully exploit, but this contradicts the actual impression given by the line’s current appearance as a tourist attraction and/or vintage travel experience. Yet Island Line does carry regular commuter flows both within the Island and to/from the mainland (this writer uses it to get from Shanklin to his day job in Ryde), as well as schools traffic. Such passengers are unlikely to find the ‘heritage’ aspects of the way Island Line is currently operated appealing, and probably want a travel experience with a quality of passenger accommodation more like that of the local buses, or trains on the mainland. What are the solutions? SWR’s view The consultation document gave SWR a chance to put forward some ideas, and its preferred option, for the future of Island Line. Having considered Parry People Mover-type vehicles, conversion to a busway, new third-rail stock, tram-trains, overhead line-powered stock, self-powered stock and various combinations thereof, SWR’s preferred proposal was: A ‘self-powered’ – but not diesel – train, accommodated on the existing infrastructure. A 25-year lease to help spread costs. An enhanced service frequency to better connect with the hovercraft and catamaran ferries [presumably an even (30-minute) headway though this was not explicitly stated]. Infrastructure improvements to allow better interchange between Island Line and the Isle of Wight Steam Railway to generate revenue for both organisations. Better marketing and revenue protection. Running “on the existing infrastructure” is somewhat ambiguous. It is unclear whether SWR means the infrastructure exactly as is (such as no works to Ryde Tunnel, so any new trains could not be significantly taller than the current tube-style dimensions) or perhaps no route extensions at present, but not ruling out infrastructure changes to ease the height restrictions at Ryde Tunnel. Meanwhile, self-powered but not diesel would suggest battery or perhaps flywheel power sources. It is improbable that Transport Secretary Chris Grayling’s latest favourite technology – hydrogen fuel cell power – will become rapidly practicable for what would be a very limited batch of small trains, without much at all in the way of roof space for hydrogen tanks. The railway has a number of hills which could provide the opportunity for regenerative brakes to recharge batteries if a battery powered replacement train type is selected. Other options Garnett’s 2016 report suggested the acquisition of T69 trams from Centro’s Midland Metro operation, despite them being taller even than the Isle of Wight’s original steam stock, and requiring additional clearance for overhead power lines. It proposed conversion of Island Line to a tramway with a 15-minute service frequency and ‘line of sight’ operation to achieve savings through abandonment of the existing signalling equipment. 
However the proposal to use the ex-Midland Metro trams is no longer an option as Transport for West Midlands has just announced it has sold them for scrap. The 2016 report also suggested handing over one of the two tracks between Smallbrook Junction and Ryde St Johns Road to the Isle of Wight Steam Railway. Station strip map (author) But the report was also controversial because it suggested removing Island Line from the wider franchise. This raised local concerns, particularly as the Department for Transport (DfT) had suggested in the South Western franchise consultation at around the same time that a social enterprise could run Island Line, and that it should be put on a ‘self-sustaining’ footing. It seemed impossible to identify a way that the line could ever be self-sustaining, and the wording was eventually changed to “more sustainable” in the final version of the franchise specification. In any case, all of Garnett’s recommendations were ultimately rejected by the Isle of Wight Council. During the SWR-led consultation on the future of Island Line in late 2017, a public meeting was organised on the Island by local campaign group Keep Island Line in the Franchise (KILF). This group wants to retain third rail as a power source and to make it fit for purpose, but they would accept a hybrid or battery train, provided that service levels are not diminished. KILF would also like to see an even 20 minute headway service, though it notes a 30 minute headway service would potentially offer better connections with ferries. Re-openings? However, the public meeting which took place in Shanklin on 14 December 2017 was overshadowed by the release of the DfT’s Strategic Vision for Rail a fortnight earlier on 29 November. On the mainland, the DfT’s suggestion of re-opening long-closed railways successfully (at least at first) diverted media attention from the recurrence of revenue problems at the Intercity East Coast franchise. On the Island, the DfT’s sudden apparent enthusiasm for rail re-openings diverted attention from resolving Island Line’s current problems, resulting instead in Island crayons being broken out. Local press coverage of the public meeting majored on KILF’s longer-term aspirations for extending Island Line by reopening the line from Smallbrook Junction to Newport. This would be achieved by operating over the tracks owned by the Isle of Wight Steam Railway, which itself would need to be extended westwards from its current terminus at Wootton. Several route proposals for such a reopening had been put forward by consultancy Jacobs in 2001, though at least one (with a street-running town centre loop in Newport) required tram-type rolling stock, and featured some heroically tight curve radii. Although Newport is the ‘capital’ of the Island and the hub of the Island’s road network, it isn’t actually the largest town. Its population is slightly less than that of Ryde. The other big towns on Island Line, Sandown and Shanklin, have populations about half of Ryde’s. The release of the DfT’s Strategic Vision for Rail also led to suggestions on the Island for reopening the line from Shanklin to Wroxall and Ventnor, an idea which crops up every few years despite the technical challenges of that reopening not having become any easier to solve in the meantime. The tunnel into Ventnor now houses a water main which would require an expensive diversion, and the track bed of the line has been built over at Wroxall and Ventnor stations. 
The local MP took up the cause of Isle of Wight railway reopenings with some vigour in the House of Commons in January, asking Transport Minister Nus Ghani for a commitment to extend Island Line to Newport and Ventnor if feasibility studies confirm costs in the £10-30m range. However, he took an unusual approach in attempting to curry favour with the DfT by suggesting that such sums of money were equivalent to typical DfT margins for error in its accounting, and that his proposal compared favourably with the “very poor” returns offered by HS2. He made much less mention of the need to solve the existing challenges of Island Line’s immediate future, which SWR’s consultation had identified. What Next? Even the more realistic proposals, set out by SWR in its consultation document and focussing on the existing route, face some daunting obstacles. It is hard to think of any off-the-shelf train capable of fulfilling SWR’s aspiration of being self-powered but non-diesel and capable of running within Island Line’s restricted loading gauge. Ryde Tunnel (west end) & pump house (author) The height restriction imposed by Ryde Tunnel rules out follow-on orders for any existing mainline train type in use on the mainland. It also would seem to exclude options like Vivarail’s ex-D78 District line ‘D Train’ stock, which is approximately the same height as mainline trains. Island Line staff have apparently visited Vivarail’s facilities to enquire about these trains’ suitability, but nothing has been officially stated about any outcomes. Nonetheless, rumours persist of the island’s interest in these refurbished and rebuilt trains. Island Line’s track was raised at stations during the electrification programme to allow impressively level access on and off the Tube stock (except at Ryde Esplanade, where the platform was lowered). In mainline train terms, platform heights are now at low floor rolling stock level. This further adds to the procurement challenge unless expensive track re-lowering is undertaken. Were the track bed in Ryde Tunnel lowered to its original position, this might make any new Island Line trains easier to source. The original height restriction for the Island’s rolling stock equates to stock of a similar height to BR’s PEP-derived Class 313/314/315/507/508 series, which are slightly shorter in height than most British trains. Lowering the track bed in Ryde Tunnel would, though, leave it more susceptible to the original flooding problem it was raised to counteract. The increased flooding risk might be mitigated with new pumping equipment, but new pumps would just introduce yet another cost pressure to the future of Island Line, both in terms of capital outlay and running costs. And, as we have seen, this is not just a new trains problem. There is more to Island Line’s troubles than just the urgent need to replace the Class 483s. Ryde Pier Head in 2018 (author) Even apparently simple infrastructure projects to address obvious problems have additional costs. Reinstating the second track to create a passing loop at or near Brading station would be needed to achieve a regular 30-minute headway for train services, but it is not as straightforward as relaying the missing track. Lineside cabling has been placed on the old track bed, and would have to be relocated, adding to costs. The power supply is at end-of-life stage with third rail and substations requiring replacement.
Network Rail’s recent adventures in electrification give concern as to what the cost of this would be if like-for-like replacement were felt to be the best way forward. Meanwhile, the track and track bed also need attention to bring them up to modern standards of ride quality, with considerable cost financially and in terms of service disruption. Island Line can ill-afford to put off passengers with lengthy closures for engineering work. Though perhaps less easy to quantify in cash terms, if the decision is taken to de-electrify Island Line and use self-powered trains, what message would that send? Whilst the current Transport Secretary appears to be no fan of rail electrification, having cancelled several planned projects and curtailed others around the country, de-electrifying an existing electrified route would be another thing altogether. SWR’s consultation ended on 31 December 2017, and it has until 31 May 2018 to submit a costed proposal for the future of Island Line to the DfT. Regular meetings of a Steering Group comprising Isle of Wight Council, DfT, Network Rail and SWR are now taking place (albeit not in public) which will shape the final proposal. According to SWR’s franchise agreement, that proposal must be “capable of acceptance by the Secretary of State”. Given the peculiar decisions of the current incumbent of that post, what might constitute acceptability is open to some degree of uncertainty. To keep the line, targeted investment is needed Underlying all this is that Island Line is not in itself a profitable operation. Replacement of trains and infrastructure adds substantial costs without necessarily adding anything to income, worsening Island Line’s accounting position. With annual revenues of around £1m, at what price do the works necessary to secure Island Line’s long term future cease to represent value for money? Ryde St John’s station platform & bracket (author) There are clear tourist, traffic, environmental and quality of life benefits to upgrading the Island Line, just not a financial case as yet. But the most compelling reason may be for the unique tourism and island travel situation on the Isle of Wight.
null
minipile
NaturalLanguage
mit
null
Q: sorting a vector according to some attribute in it If I have the following vector: vector< pair < pair< char,int >,pair< int,int > > > How can I sort in descending order using <algorithm> library according to the integer part in the first pair? (I want to use sort(vector.begin() , vector.end() ) A: using MyVector = vector< pair < pair< char,int >,pair< int,int > > >; MyVector v; std::sort(v.begin(), v.end(), [](const MyVector::value_type& a, const MyVector::value_type& b) { return a.first.second > b.first.second; } );
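A minimal usage sketch of the accepted answer, assuming C++11 or later (the sample data and the printing loop are illustrative additions, not part of the original question):

#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

int main() {
    using MyVector = std::vector<std::pair<std::pair<char, int>, std::pair<int, int>>>;

    // Illustrative sample data: the int in the first pair is the sort key.
    MyVector v = {
        {{'a', 3}, {10, 11}},
        {{'b', 7}, {20, 21}},
        {{'c', 5}, {30, 31}},
    };

    // Sort in descending order of the integer part of the first pair,
    // exactly as in the answer above.
    std::sort(v.begin(), v.end(),
              [](const MyVector::value_type& a, const MyVector::value_type& b) {
                  return a.first.second > b.first.second;
              });

    // Prints the keys in the order 7, 5, 3.
    for (const auto& e : v) {
        std::cout << e.first.first << " " << e.first.second << "\n";
    }
    return 0;
}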
null
minipile
NaturalLanguage
mit
null
Johnny Drake John William "Zero" Drake (March 27, 1916 – March 26, 1973) was an American football player. He was the first round pick (10th overall) by the Cleveland Rams, their first ever draft pick, in the 1937 NFL Draft. A Purdue Boilermakers running back, he led the NFL in touchdowns in the 1939 & 1940 seasons. External links Biography of Johnny Drake - by Professional Football Researchers Association Category:1916 births Category:1973 deaths Category:Sportspeople from Chicago Category:American football fullbacks Category:Purdue Boilermakers football players Category:Cleveland Rams players Category:Players of American football from Illinois
null
minipile
NaturalLanguage
mit
null
Problem === Given an integer n, generate a square matrix filled with elements from 1 to n^2 in spiral order. For example, Given n = 3, You should return the following matrix: [ [ 1, 2, 3 ], [ 8, 9, 4 ], [ 7, 6, 5 ] ]
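One possible solution, shown here as an illustrative C++ sketch (the problem statement above does not prescribe a language or an approach; this fills the matrix layer by layer, assuming n >= 0):

#include <iostream>
#include <vector>

// Build an n x n matrix containing 1..n*n in clockwise spiral order.
std::vector<std::vector<int>> generateMatrix(int n) {
    std::vector<std::vector<int>> m(n, std::vector<int>(n, 0));
    int val = 1;
    int top = 0, bottom = n - 1, left = 0, right = n - 1;
    while (top <= bottom && left <= right) {
        for (int c = left; c <= right; ++c) m[top][c] = val++;            // top row, left to right
        for (int r = top + 1; r <= bottom; ++r) m[r][right] = val++;      // right column, downwards
        if (top < bottom)
            for (int c = right - 1; c >= left; --c) m[bottom][c] = val++; // bottom row, right to left
        if (left < right)
            for (int r = bottom - 1; r > top; --r) m[r][left] = val++;    // left column, upwards
        ++top; --bottom; ++left; --right;
    }
    return m;
}

int main() {
    // For n = 3 this prints the matrix shown in the example above.
    for (const auto& row : generateMatrix(3)) {
        for (int x : row) std::cout << x << ' ';
        std::cout << '\n';
    }
    return 0;
}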
null
minipile
NaturalLanguage
mit
null
The scene at Bratislava Castle last week was a familiar one: European leaders gathered for another summit in a typically idyllic setting, where the natural beauty of their surroundings belied the deep imperfections of the union they were struggling to salvage. But now, in the wake of Britain's vote to leave the Continental bloc, delusion steeped in the ideals of an "ever-closer" union is wearing thin, and the realists in the room seem to be gradually gaining ground. The shift in the summit's tone was to be expected; closet Euroskeptics can no longer hide behind the United Kingdom as they assert national rights and tamp down Brussels' principles. They realize that the longer Europe's leaders avoid the hard questions, opting instead to continue extolling the "spirit" of the European Union as a way to survive, the more the bloc's guardians will have to react to - rather than shape - the enormous changes bubbling up from their disillusioned electorates. As Italian Prime Minister Matteo Renzi (who has tied his own political fate to a referendum in October) testily noted, the Bratislava gathering amounted to little more than a "boat trip on the Danube" and an "afternoon writing documents without any soul or any horizon" on the real problems afflicting Europe. Tempering Ideals With Realities The same frustration was palpable in several conversations I had during a recent trip to Slovenia, a country that tends to stay below the radar in Europe but is nevertheless highly perceptive of ground tremors. Slovenia lies, often precariously, at the edge of empires. Under the weight of the Alps, the former Yugoslav republic has one foot lodged in the tumultuous cauldron of the Balkans while its other foot toes the merchant riches of the Adriatic Sea. All the while, its arms are outstretched across the Pannonian Plain toward Vienna, the seat of the Austro-Hungarian Empire. Slovenia is a land where the Slavic tongue is spoken with Italian gaiety, where German and Austrian freight trucks fill the highways, where quaint Germanic timber homes and Viennese boulevards are dotted with Catholic iconography, and where German bratwurst mingles naturally with Balkan cevapi, Turkish burek and Italian gnocchi on restaurant menus. Slovenia's medieval castles, dramatic scenery and dragon folklore are the stuff of fairytales. But sober-minded Slovenians know from a troubled past that even after being accepted into the European Union, their country should not hold its collective breath for a "happily ever after" in such a fluid corner of the Continent. Instead, a welcome dose of realism met me in Slovenia in talks on the future of Europe. During a panel discussion I participated in at the Bled Strategic Forum, one comment in particular stood out to me. Dr. Ziga Turk, a professor at the University of Ljubljana and a former government minister, argued that Europeans must stop deluding themselves into thinking that they can build a European nation on ideology. Common language, history, culture, religion and kin will consistently trump shared ideas on the free market, democracy, social justice, human rights and environmentalism. 
This is not to say that the latter are unimportant; they just aren't enough to hold up a European superstate. The implication, at least in my mind, is that European leaders need to temper their ambitions and focus on rebalancing the merits of a Continental union with the realities of the nation-state. This is still a very unsettling idea for Europeanists who would rather talk about the veritable achievements the European Union has had in preserving peace for decades. One member of the audience complained that he was severely disappointed more of the panelists were not speaking in defense of EU values. But wouldn't time be better spent working to understand and respond to the very real forces that are pulling the union apart? This, to me, is like keeping a vintage Ferrari in the garage without ever taking the time to repair the engine that makes it run. We can continue to admire a beautiful relic of a bygone era, but it will not get us anywhere until we are willing to get our hands greasy fixing and maintaining it. A Rare Set of Geographic Circumstances Perhaps nobody better understands the shortcomings of ideology in building nations than those who have lived through such experiments' failures. Socialism and Slavic brotherhood proved woefully inadequate in taming ethnic and nationalistic currents in the former Yugoslavia. Dialectical materialism held sway with intellectuals who were repulsed by Western capitalism, but it quickly became a nightmare for the masses living behind the Iron Curtain in the crumbling Soviet Empire. Gamal Abdel Nasser thought he could foster a common Arab identity by creating a United Arab Republic, only to find that his efforts to ensure Egyptian domination accelerated his project's downfall by consolidating a Syrian identity in opposition to Cairo. Now, the Islamic State faces at least a dozen militaries as it tries to prove it can resurrect a caliphate under the tenets of Sharia, even if that state can only be built and maintained through brute force. But there are "good" and "bad" ideologies, one might counter. What about a nation based on seemingly universal values? Many Europeanists point to the United States as an example of a state bound by a common Lockesian belief in life, liberty and prosperity. Perhaps such uncontroversial values could provide an equally sturdy foundation not just for a European superstate, but also for the post-colonial power vacuums scattered throughout the Middle East, or for the numerous fledgling nations trying to become full-fledged states. Values are easy to discuss in the abstract. But they can also come back to bite. Europeans may trumpet democratic values as one of the binding principles of the union, yet referendums and elections - the very tools of democracy - are pulling the union apart. The West likewise promotes democracy in the Middle East but is not eager to face the consequences of Islamists being elected into office. Democracy is both tantalizing and terrifying for everyone involved. Alone, however, it is not enough to build a viable state. We can romanticize the founding of the United States as the first nation-state to be built on universal truths and values. We should also remember, though, that the young republic had certain undeniable, unique geopolitical advantages. European empires were too busy competing with one another on their own continent to overextend themselves in the New World. 
And with a sizable ocean buffer, robust river networks and ample farmland to develop, young America had the breathing room it needed to build its economy, population centers and industries, fight a civil war, and settle boundaries with its neighbors. This luxury enabled it to eventually emerge as a great power without the constant intervention of external powers stunting its growth. Ideology, ethnic kinship, language and culture are all pillars of a nation's architecture, but geography still forms its foundation. Without some degree of geographic coherence, resources and insulation, a tribe is unlikely to find the time and space to forge a common identity and organically mold it into a nation. It is for this reason that China's Han core will outlive the Communist Party, and that a Persian-dominated Iran, buffeted by a mountain fortress, will endure beyond the Islamic Republic. It is for the same reason that a collection of distinct European nations cannot be shoehorned into a United States of Europe. In Search of a Geopolitical Haven On my flight back to the United States, a family of Syrian refugees stood ahead of me in line at the Charles de Gaulle Airport in Paris. Two nearly identical young boys and two small girls stood with their father holding a thick stack of passports - one yellowed and weathered Syrian passport and four crisp new U.S. passports. The father's young face was crowned by a single, thick brow, the deep lines around his eyes exposing the long journey behind and a glimmer within them hinting at the hope ahead. The mother was conspicuously absent. It seemed as though the family had made a big effort to dress for the occasion: The two boys had fresh haircuts and were buried in the folds of their oversized three-piece suits, while the girls wore long Arabic dresses and brightly colored hijabs. One struggled to walk with an adult-sized Dior purse wrapped around her small frame, and both tripped over shoes that looked several sizes too big. Despite their new clothes, each dragged a dirty plastic bag with Arabic lettering full of worn, dusty shoes and slippers. The family before me was a piece of the migrant mosaic that is forcing Europeans to confront a basic pillar of the union - the free movement of people - and a basic human desire to be surrounded by people who look, speak, act and believe as they do. As I watched the children and their father, I remembered the derelict border checkpoints that I had driven past on the Slovenia-Italy border, wondering whether those tragically beautiful buildings peppering the Schengen zone would remain relics or be rejuvenated in a new and uncomfortable era of a Continent that believed in reviving national borders. The Syrian family I stood in line with will not have to worry about that. They are leaving behind a land where Syrian nationalism - forged by Arab kinship and a common language, culture and history - has dissolved, for now, into a sectarian bloodbath. Western powers, still attempting to work off the obsolete Sykes-Picot model, will soon gather in Vienna to try to impose the values they deem necessary to rebuild the Syrian nation, even as regional powers distort those values for their own ends. At summits, any country can call for an end to violence or for talks on a power-sharing arrangement in Syria. But in practice, can Turkey tolerate a federal Kurdish region on both sides of the Euphrates? Can Syria's Iranian-backed Alawites concede large swaths of Sunni territory like Aleppo? 
By all appearances, the Syrian nation will remain subject to the whims of Western powers trying to stay within the lines of a colonial-era coloring book as regional actors carve out their own spheres of influence. The four kids ahead of me are escaping that fate. They will probably grow up as Americans, chiding their father for his accent once they've outgrown their own and holding faint memories of the day they got dressed up for a flight to a new land - a nation with the geopolitical underpinnings to support the ideas it espoused from the very beginning.
null
minipile
NaturalLanguage
mit
null
According to the National Oceanic and Atmospheric Administration (NOAA), last week’s tornado outbreak was the biggest on record for a single 24-hour period. Preliminary estimates counted 312 tornadoes from April 27 to April 28, far above the previous record of 148 in 1974. April also set a record for most tornadoes in a single month too — more than 600, compared with the previous record of 542 in May 2003. Last week’s storms killed 342 across seven states, but the state worst hit with death and property destruction was Alabama, where the storms also devastated Alabama’s $2.4 Billion a year poultry industry. Some 200 poultry houses were completely destroyed with as many as 450 other houses damaged. Millions of birds were killed, according to the Wall Street Journal, with electric power lost in feed mills and processing plants. The Journal said the Alabama Poultry and Egg Association estimates that 5 million chickens probably died in the tornadoes, which slammed the northern part of the state, where the industry is centered. “That alone isn’t enough to disrupt chicken supplies nationally. The state usually produces about 21.5 million chickens in a week. The U.S. produces roughly 9 billion chickens annually.” But Alabama’s bird losses could substantially increase if farmers aren’t able to quickly re-establish water supplies. “Power outages and loss of drinking water could worsen an already critical situation for poultry producers and meat processors,” said John McMillan, commissioner of the Alabama Department of Agriculture & Industries, in a statement. Dan Smalley, who has owned one of the largest poultry farms (400-acres) in the state of Alabama for over 30 years, estimates he will lose about 200,000 birds in the coming days because the storm destroyed all his poultry facilities. Nine of his 15 chicken houses accommodating 20,000 chicks each were completely destroyed. Unable to feed and water his chickens or transport them to the processing plants, Smalley says he’ll have to destroy them. It’s not clear why Smalley couldn’t arrange for his chickens to be transported and sold to another farmer. Alabama’s poultry industry is the third-largest in the US, producing approximately one billion broilers every year. According to a BBC report, it could be six months to a year before the industry resumes full production. Tyson Foods Inc., the nation’s largest chicken processor, sustained no damage at its Alabama plants but two processing plants were idled by power shortages. Smalley’s birds are sold by Pilgrim’s Pride, the second largest producer in the US, supplying retail companies like Burger King, Chick-fil-A, and KFC. Smalley said he had to wait for the insurance company to decide which houses he had to demolish and which could be rebuilt. “We’re going to still be in the chicken business,” he said. “But to what degree, I don’t know.”
null
minipile
NaturalLanguage
mit
null
Q: Finding the index of an array element I posted a question 1 day ago asking if it was possible to have multiple coin types in a single contract and I am trying to implement the answer I received. Is it possible to have multiple coin types in one contract? The answer said to use nested mappings which works perfect for me, but I can't have: mapping string => mapping (address => uint)) coinBalanceOf; because it generates an error "Accessors for mapping with dynamically-sized keys not yet implemented" So I am trying to find a way around this so that I can have multiple coin types in a single contract but allow the user to specify a string as the coin type when they use the transfer function in my contract instead of an integer. For example here was the answer code from my last question: contract token { mapping (uint => mapping (address => uint)) coinBalanceOf; event CoinTransfer(uint coinType, address sender, address receiver, uint amount); /* Initializes contract with initial supply tokens to the creator of the contract */ function token(uint numCoinTypes, uint supply) { for (uint k=0; k<numCoinTypes; ++k) { coinBalanceOf[k][msg.sender] = supply; } } /* Very simple trade function */ function sendCoin(uint coinType, address receiver, uint amount) returns(bool sufficient) { if (coinBalanceOf[coinType][msg.sender] < amount) return false; coinBalanceOf[coinType][msg.sender] -= amount; coinBalanceOf[coinType][receiver] += amount; CoinTransfer(coinType, msg.sender, receiver, amount); return true; } When creating this contract it takes the number of coin types and uses that to create a nested mapping of addresses so it contains "1, 2, and 3" and the user can specify which type they want to transfer and how much they want to transfer. What I want is for the user to be able to say they want to transfer "Type 1" instead of just specifying "1" under the coin type. I want to do this because in example I used Coin Type 1, Coin Type 2, etc but in reality they aren't going to be named like that and the user won't know the index number associated with each coin type. My initial thought was to have something like this: contract token { string[] public assets = [ 'Coin 1', 'Coin 2', 'Coin 3' ]; mapping (uint => mapping (address => uint)) public coinBalanceOf; event CoinTransfer(string coinType, address sender, address receiver, uint amount); function token(uint256 initialSupply) { if (initialSupply == 0) initialSupply = 1000000; uint length = assets.length; for (uint k=0; k<length; ++k) { coinBalanceOf[k][msg.sender] = initialSupply; } } function sendCoin(string coinType, address receiver, uint amount) returns(bool sufficient) { uint Index = getIndex(coinType); if (coinBalanceOf[Index][msg.sender] < amount) return false; coinBalanceOf[Index][msg.sender] -= amount; coinBalanceOf[Index][receiver] += amount; CoinTransfer(coinType, msg.sender, receiver, amount); return true; } function getIndex(string coinType) { for(uint i = 0; i<assets.length; i++){ if(coinType == assets[i]) return i; } } } But I get an error when trying to call the getIndex function "Not enough components (0) in value to assign all variables (1)." and I'm not sure where to go from here. Any help is greatly appreciated! A: A popular alternative to index mapping with strings is to apply sha3 to the string. Instead of mapping (string => Document) documents; you have to use mapping (bytes32 => Document) documents; Now to access each document instead of accessing with the string directly you apply the sha3 function. 
Changing documents["Very Important Document.pdf"] to documents[sha3("Very Important Document.pdf")].
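Applied to the coin contract in the question, the same idea removes the need for getIndex entirely: hash the coin name once and use the resulting bytes32 as the mapping key. The sketch below is illustrative only and untested; it keeps the question's old-style Solidity (constructor named after the contract, sha3 rather than the newer keccak256), and the coin names and the fallback supply of 1000000 are simply carried over from the question's example rather than values the answer prescribes.

    contract token {
        // sha3(coin name) => (holder => balance); a bytes32 key avoids the
        // "dynamically-sized keys" limitation on public mapping accessors.
        mapping (bytes32 => mapping (address => uint)) public coinBalanceOf;

        event CoinTransfer(string coinType, address sender, address receiver, uint amount);

        function token(uint256 initialSupply) {
            if (initialSupply == 0) initialSupply = 1000000;
            // Credit the creator with each coin type, keyed by the hashed name.
            coinBalanceOf[sha3("Coin 1")][msg.sender] = initialSupply;
            coinBalanceOf[sha3("Coin 2")][msg.sender] = initialSupply;
            coinBalanceOf[sha3("Coin 3")][msg.sender] = initialSupply;
        }

        function sendCoin(string coinType, address receiver, uint amount) returns (bool sufficient) {
            bytes32 key = sha3(coinType); // e.g. sendCoin("Coin 1", ...) hashes to the same key used above
            if (coinBalanceOf[key][msg.sender] < amount) return false;
            coinBalanceOf[key][msg.sender] -= amount;
            coinBalanceOf[key][receiver] += amount;
            CoinTransfer(coinType, msg.sender, receiver, amount);
            return true;
        }
    }

With this layout a caller can write sendCoin("Coin 1", receiver, 10) and the balance is looked up directly from the hashed name, so no index-lookup loop is needed. For what it's worth, the compile error quoted in the question most likely comes from getIndex not declaring a return type (returns (uint)), so the assignment uint Index = getIndex(coinType); has no value to receive; comparing strings with == is also not supported in that Solidity version, which is another reason the hashing approach is the simpler route.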
null
minipile
NaturalLanguage
mit
null
Our Tree Lopping Logan Co team is dedicated to two things: keeping trees healthy and keeping our clients satisfied. What sets us apart from other tree lopping & removal companies is more than just our years of tree services expertise, our top-notch…

At Smile Quest Dentists Melbourne we have a responsibility to provide our patients with the highest duty of care to safeguard their interests. As your trusted dentist we will explain different options, and we strive to offer impartial advice at an honest…

Good news for people who hate cleaning their homes but love to stay in a super-clean abode! We offer the most reliable house cleaning services, and our team of professionals makes sure that your home is cleaned just the way you expect it to be. So…

Yes, it is difficult to get the best home in Atlanta. But we have top-level experience in the real estate industry in Atlanta, GA. If you have any questions related to finding a new home or want to get the best homes in Atlanta, please contact us through…

Rajasthan tour packages are possibly the most sought after vacation packages in India. The package covers famous places in Rajasthan, including Jaipur, Jodhpur, Udaipur, Jaisalmer, Pushkar and more. These popular tourist spots offer something special for…

Gower St Family Dental Clinic is the best Preston dental centre, the best destination for dental problems. We have experienced, first-rate dentists. We provide services at affordable prices for your convenience. To schedule an appointment or a free…
null
minipile
NaturalLanguage
mit
null
PropTiger's proprietary Livability Score signifies the quality of life a family will enjoy living in a particular society and locality. This score is calculated from data collected on the quality of amenities in the nearby area and within the society, through the rigorous effort of our data labs team.

Apex Multicons launched a new residential project, Athena, in Pune. Athena is a dainty project of compact but elegant homes with just the right amenities to add cheer to living. It is delightfully close to the Mumbai-Bangalore highway near the Hinjewadi IT park. So if your workplace is in Hinjewadi, Bhosari, Kothrud, Aundh or even Talegaon, you can zip smoothly along the highway and save significant commuting time. No flashy trappings that allure first and then disappoint with their inefficacy and maintenance costs. At Athena, just like you, we believe in setting up amenities which will be fully utilized and enjoyed. Pleasing patches of landscaped garden, child-friendly play equipment in a safe pocket on the premises, specially manicured greenery to encourage eco-friendly living, and an open space to play, run, fall, scrape and cheer on. Live it up, at Athena. Homes at Athena are designed such that they are more than just showpieces. They nurture, they pamper and they care.

This 12th-floor property is north facing and is priced at INR 65.00 lacs (all inclusive, registration charges extra). Its size of 995 sq. ft. makes it highly spacious for a 1BHK. It has 1 car parking. Its master bedroom and other bedrooms have wooden flooring. Its kitchen has a Modular Kitchen with Dishwasher.

About PropTiger: PropTiger.com is an online real estate advisor that functions on the fundamentals of trust, transparency and expertise. As a digital marketplace with an exhaustive range of property listings, we know it is easy to get lost. At PropTiger.com, we guide home buyers right from the start of their home search to the very end. Browse through more than 139,000 verified real estate properties with an accurate lowdown on amenities, neighborhoods and cities, and genuine pictures. Shortlist your favorite homes and allow us to arrange site visits. Our work does not end here. We assist you with home loans and property registrations. Buying a home is an important investment – turn it into your safest, best deal at PropTiger.com.

PropTiger.com shall neither be responsible nor liable for any inaccuracy in the information provided here, and therefore customers are requested to independently validate the information from the respective developers before making their decisions related to properties displayed here. PropTiger.com, its directors, employees, agents and other representatives shall not be liable for any action taken, or costs/expenses/losses incurred, by you.
null
minipile
NaturalLanguage
mit
null
Introduction ============ Uveal melanoma is the most common primary malignant tumor of the adult eye; it arises most often in the choroid (78%--85% of all cases), followed by the ciliary body (9%--12%) and the iris (6%--9.5%).[@b1-ijn-8-3805],[@b2-ijn-8-3805] The tumor can metastasize to the liver via a hematogenous pathway, and approximately 50% of patients show metastasis within 15 years of initial diagnosis and treatment. Once it has metastasized, the mortality rate is high.[@b3-ijn-8-3805]--[@b5-ijn-8-3805] Although current clinical practice involves ophthalmectomy, localized tumor resection, radiotherapy, and laser treatment, none of these treatments effectively inhibits tumor metastasis or improves postoperative quality of life, so improving the treatment and success rate of melanoma remains an important research subject. With advances in molecular biology and molecular genetics, gene therapy for malignant tumors has become a major focus for researchers, among which the herpes simplex virus thymidine kinase (HSV-TK) suicide-gene system has been considered the most promising treatment.[@b6-ijn-8-3805] Moolten first proposed this method.[@b7-ijn-8-3805] The combination of the HSV-1-TK suicide-gene system and ganciclovir (GCV) not only kills the infected cells but, through a bystander effect, also eliminates surrounding uninfected cells, which in turn effectively reduces the tumor burden. However, it cannot sufficiently mobilize the body's immune responses to fight the tumor. Tumor necrosis factor (TNF)-α is known as a central signaling molecule in natural antitumor mechanisms.[@b8-ijn-8-3805] It is sometimes referred to as cachectin; it has a direct killing effect and sensitizes tumor cells to radiation. At high doses, TNFα can cause hemorrhage and necrosis in tumor tissues, while activation of the immune response can play a synergistic antitumor role.[@b9-ijn-8-3805] Combined use of the two systems is therefore often recommended to overcome the inadequacy of either treatment used alone. Dendrimer nanoparticles are a nonviral vector: branched polymeric particles less than 100 nm in diameter. Dendrimer nanoparticles contain many amino groups, which protonate at physiological pH. The protonated amino groups can then neutralize the electric charge on the surface of the DNA, allowing DNA molecules to be compacted into relatively smaller structures that protect the DNA from nuclease degradation. The transfection complex primarily passes the DNA into a cell via endocytosis and forms an endosome vesicle. DNA is then released from the vesicle into the cytoplasm before entering the nucleus for transcription and translation. This nanometer-scale transfection reagent shows useful characteristics, such as stronger protection of DNA and lower cytotoxicity.[@b10-ijn-8-3805],[@b11-ijn-8-3805] Many studies have shown that combining the early growth response-1 (Egr-1) promoter with a target gene can provide an effective gene radiotherapy for tumor treatment.[@b12-ijn-8-3805]--[@b16-ijn-8-3805] To find a more feasible and less harmful treatment, we used dendrimeric nanoparticles as a vector to transfect the OCM-1 human uveal melanoma cell line with a recombinant plasmid carrying a double-gene expression cassette of Egr-1 promoter-controlled TNFα and HSV1-TK. 
The OCM-1 human uveal melanoma cell line was then exposed to 2 Gy iodine-125 (^125^I) radiation; the expression levels of the two genes were measured, the impact of radiation on recombinant plasmid expression was investigated, and cellular morphology, proliferation, and apoptosis were observed in order to assess the efficacy and feasibility of this in vitro method for killing tumor cells. Materials and methods ===================== Construction of plasmid pEgr-TNFα-TK, plasmid pEgr-TNFα, and plasmid pEgr-HSV-TK -------------------------------------------------------------------------------- The recombinant DNA plasmids were constructed and sequenced by Life Technologies (Shanghai, People's Republic of China). Based on the Egr-1 promoter sequence and the GenBank sequences of human TNFα and HSV-TK, the recombinant plasmids were chemically synthesized and propagated in *Escherichia coli* DH5α competent cells. These plasmids were extracted from the *E. coli* DH5α strain and subjected to agarose gel electrophoresis to confirm that they were correct. Human choroidal melanoma OCM-1 cell line ---------------------------------------- The human choroidal melanoma (OCM-1) cell line (American Type Culture Collection, Manassas, VA, USA) was cultured in Roswell Park Memorial Institute 1640 medium supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin. Cells were maintained at 37°C and 5% CO~2~ in an incubator with 95% humidity. The cell-culture medium was replaced every second day, and cells were passaged at 85%--90% confluence. Polyplex formation ------------------ Dendrimer nanoparticles were purchased from Engreen Biosystem (Beijing, People's Republic of China). Following the manufacturer's protocol, 3 μg of each recombinant DNA plasmid was incubated with 9 μL dendrimeric nanoparticles in 200 μL pH 7--8 culture medium without serum, protein, or antibiotics for 15--30 minutes, and the complexes were verified with agarose gel electrophoresis. The dendriplexes were observed by transmission electron microscopy (TEM) and scanning electron microscopy (SEM). Zeta-potential analysis was performed on a Zetasizer Nano ZS90 (Malvern Instruments, Malvern, UK). DNase I sensitivity was examined by the following method. All dendriplexes and the plasmid DNA were incubated with 1 U DNase I in a 37°C water bath for 30 minutes and for 1, 2, 4, and 6 hours. After adding ethylenediaminetetraacetic acid (EDTA; 5 mmol/L), the samples were placed in a 65°C water bath for 10 minutes; heparin solution at the appropriate concentration (final concentration 5 mg/mL) was then added, before placing them back in a 37°C water bath for 2 hours and subjecting them to agarose gel electrophoresis. Transient transfections ----------------------- For in vitro transfection experiments, OCM-1 cells were grown to about 80% confluence. Cells were incubated for 8 hours with dendriplexes in the absence of serum and antibiotics, followed by incubation with growth medium for 24 hours. The OCM-1 cells were transfected with the recombinant plasmid pEgr-TNFα-TK, hereafter referred to as TNF-TK; pEgr-TNFα, hereafter referred to as TNF; or pEgr-HSV-TK, hereafter referred to as TK. The negative-control group consisted of cells incubated only with polyplexes without plasmids. The blank-control group was treated with phosphate-buffered saline (PBS). ^125^I radiation ---------------- In order to activate expression from the recombinant DNA plasmids in the cells, the transfected OCM-1 cells were exposed to 2 Gy ^125^I radiation and cultivated under normal conditions (37°C, 5% CO~2~). 
After transfection, the expression products were tested and analyzed at the appropriate times; the timing depended on the type of test and the target gene. Measuring the protein-expression level of the target gene --------------------------------------------------------- ### ELISA At 24 hours after transfection, the supernatant of the OCM-1 cells was collected and the cells were exposed to 2 Gy radiation. The cellular supernatant was collected again at 0, 2, 4, 8, 12, 24, and 48 hours after radiation. Enzyme-linked immunosorbent assay (ELISA) was performed to measure the concentrations of TNFα and HSV-TK in each group, with three samples per group. All tests followed the instructions of the ELISA test kit. The OD value was measured at a wavelength of 450 nm, and the expression levels of TNFα and HSV-TK in the samples were calculated from the standard curve. ### Western blot The medium was removed, and the plates were washed twice with ice-cold PBS. The treated OCM-1 cells were lysed with sample buffer that contained 60 mM Tris, pH 6.8, 2% (w/v) sodium dodecyl sulfate (SDS), 100 mM 2-mercaptoethanol, and 0.01% bromophenol blue. The lysate was then incubated on ice for 30 minutes. The lysate was scraped with a cell scraper, harvested with a pipettor, and then centrifuged at 4°C for 30 minutes. The supernatant was collected, boiled for 5 minutes, and stored at −20°C. Cellular extracts from the treated OCM-1 cells were processed for Western blot analysis. Fifty micrograms of protein per well was loaded on a 10% SDS-polyacrylamide gel. The protein was electrotransferred to polyvinylidene difluoride membranes for 1 hour and 45 minutes at 100 V, then blocked with Tris-buffered saline (TBS) containing 5% fat-free milk and 0.1% Tween-20 (TBST) for 1 hour and incubated with human anti-TNFα (TNF-α \[N-19\]: sc-1350; Santa Cruz Biotechnology, Dallas, TX, USA) overnight. After three washes with TBST, the membranes were incubated with secondary antibody for 1--2 hours at room temperature and washed again with TBST. Localization of the antibody was detected by chemiluminescence using an ECL kit (CoWin Biosciences, Inc., Menlo Park, CA, USA) following the manufacturer's instructions. ### Electron microscopy observation For observation of transfected and exposed OCM-1 cells under electron microscopy, nontransfected OCM-1 cells and OCM-1 cells in culture media with only dendrimeric nanoparticles were used as the negative controls. At 24 hours after treatment with a combination of 0 Gy or 2 Gy ^125^I radiation exposure and GCV (50 μg/mL, pH 6.8--7.4; Life Technologies), the cells were cultured to a logarithmic growth state. From each group, 1 × 10^7^ cells were obtained; the medium was removed and the plates were washed twice with 4°C PBS. A cell scraper was used to scrape the OCM-1 cells slide by slide, and these were collected in a 1.5 mL Eppendorf tube. The solution was centrifuged at 1000 rpm for 3 minutes before its supernatant was discarded and 2.5% glutaraldehyde was added for 2 hours of fixation at 4°C. Samples were washed three times with PBS (pH 7.2); each wash lasted 10 minutes, pipetting was avoided, and the samples were then stored at 4°C. Once all samples had been collected, they were sent to the electron microscopy laboratory at the Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University. Samples were first observed under regular-light microscopy before being prepared on slides for electron microscopy for further investigation. 
The cell-growth rate reflected the impact of transfection on OCM-1 cell proliferation ------------------------------------------------------------------------------------- Respectively, TNF-TK, TNF-α, and TK were transfected into OCM-1 cells under a logarithmic growth pattern. The non-transfected OCM-1 cells and OCM-1 cells in culture media with only dendrimeric nanoparticles were used as the negative control. After a combination of 0 Gy and 2 Gy ^125^I radiation exposure and GCV (50 μg/mL, pH 6.8--7.4; Life Technologies) for treatment, the cells were cultured to a logarithmic growth state and prepared in a single-cell suspension. At approximately 2 × 10^4^ cells per well, they were inoculated in six-well culture plates. On days 2, 4, 6, and 8, cells from three wells for each group were extracted and digested by 0.25% trypsin-EDTA into single cell suspension for cell count and plotting of the cell-growth curve. Measuring the effect of transfection on promoting in vitro cell apoptosis with flow cytometry --------------------------------------------------------------------------------------------- ### Annexin V-FITC/PI The nontransfected OCM-1 cells and OCM-1 cells in culture media with only dendrimeric nanoparticles were used as the negative control. After a combination of 0 Gy and 2Gy ^125^I radiation exposure and GCV (50 μg/mL, pH 6.8--7.4; Life Technologies) for treatment, the cells were cultured to a logarithmic growth state and prepared into single-cell suspension. At 2 × 10^6^ cells per well, the cells were inoculated in six-well culture plates, with each group occupying six wells. These were incubated at 37°C, 5% CO~2~, and saturated humidity. By 48 and 72 hours after exposure, they were prepared into single-cell suspension. Approximately 1 × 10^6^ cells were extracted and washed with 4°C PBS, before 100 μL annexin-binding buffer was added to resuspend the solution. Five microliters of annexin V fluorescein isothiocyanate (FITC) and 1 μL propidium iodide (PI) solution were added for another 15 minutes of incubation in the dark before 400 μL annexin-binding buffer was added. The solution was gently beaten while it was chilled on ice. Flow cytometry was then used within 1 hour to measure the level of apoptosis at a wavelength of 488 nm. ### Caspase-3 fluorescent stain test Quantitative analysis of apoptosis due to coexpression via the activated caspase-3 was undertaken. Each group of cells was prepared according to the aforementioned procedure. At 48 hours after exposure, 300 μL was extracted into a test tube from 1 × 10^6^/mL concentration solution. After 1 μL FITC-DEVD-FMK (fluorescein isothiocyanate-Asp-Glu-Val-Asp-fluoromethyl ketone) was added it was placed into incubation for 1 hour at 37°C, 5% CO~2~, and saturated humidity, before it was centrifuged at 3,000 rpm for 5 minutes. The supernatant was discarded and the cells resuspended by adding 500 μL wash buffer. Five minutes of centrifugation at 3,000 rpm was performed twice. These steps were repeated by adding 300 μL wash buffer for resuspension. The sample was tested by flow cytometry via the FL-1 channel. Statistical methods ------------------- All experiments were carried out in triplicate. Results are represented as means ± standard deviation. Statistical significance was tested using one-way analysis of variance. *P* \< 0.05 was considered statistically significant. 
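For reference, the one-way analysis of variance used for these comparisons tests whether the group means differ by comparing between-group and within-group variability. The expression below is the standard textbook form of the statistic, stated here only to make the significance criterion explicit; it is not specific to this study's software or data. With k treatment groups, n_i observations in group i, and N observations in total,

$$F=\frac{\mathrm{MS}_{\mathrm{between}}}{\mathrm{MS}_{\mathrm{within}}}=\frac{\sum_{i=1}^{k} n_i(\bar{x}_i-\bar{x})^2/(k-1)}{\sum_{i=1}^{k}\sum_{j=1}^{n_i}(x_{ij}-\bar{x}_i)^2/(N-k)},$$

and a difference is reported as significant when the P-value from the F(k − 1, N − k) distribution falls below 0.05.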
Results ======= Construction of recombinant plasmid pEgr-TNFα-TK, plasmid pEgr-TNFα, and plasmid pEgr-HSV-TK -------------------------------------------------------------------------------------------- According to the results of the gene-sequence analysis, the designed single-stranded gene oligos were synthesized. The recombinant plasmids were ligated and transformed into the *E. coli* DH5α strain. Restriction digestion and sequencing carried out on the transformed DH5α proved that the genes were inserted into the vector correctly, indicating that the recombinant plasmids pEgr-TNFα-TK, hereafter referred to as TNF-TK, pEgr-TNFα, hereafter referred to as TNF, and pEgr-HSV-TK, hereafter referred to as TK, were constructed successfully. The obtained target-gene fragments were sequenced, and the sequences were aligned on NCBI. Agarose gel electrophoresis analysis is shown in [Figure 1](#f1-ijn-8-3805){ref-type="fig"}. Characterization of dendriplexes -------------------------------- The dendriplexes of TNF-TK, TNF, and TK were verified with agarose gel electrophoresis, as shown in [Figure 2A](#f2-ijn-8-3805){ref-type="fig"}. Each group showed bands similar to the plasmid DNA, but the bands were larger due to agglomeration. DNase sensitivity examination showed that dendriplexes and plasmid DNA both had apparent bands at the original position, with slight trailing and extra bands indicating DNA degradation at 30 minutes. At 6 hours, bands at the original position could still be observed for the dendriplexes, but not for the plasmid DNA ([Figure 2B](#f2-ijn-8-3805){ref-type="fig"}, [C](#f2-ijn-8-3805){ref-type="fig"} and [D](#f2-ijn-8-3805){ref-type="fig"}). The electron microscopy sizing data of dendrimer nanoparticles and dendriplexes demonstrated particle sizes of about 20 nm and 100--200 nm, and the dendriplex agglomeration was about 500 nm ([Figure 3](#f3-ijn-8-3805){ref-type="fig"}). The zeta potentials of TNF-TK, TNF, and TK were 6.49 ± 0.83 mV, 6.71 ± 0.77 mV, and 7.91 ± 1.60 mV, respectively. There was no statistically significant difference between them. Expression of human tumor necrosis factor-α protein --------------------------------------------------- Without ^125^I radiation, the expression of TNFα was higher in the TNF-TK and TNF groups than in the other groups ([Figure 4A](#f4-ijn-8-3805){ref-type="fig"}). After 2 Gy ^125^I radiation, the expression of TNFα was higher in the TNF-TK and TNF groups and gradually increased with time; the highest expression was observed at 24 hours and 48 hours ([Figure 4B](#f4-ijn-8-3805){ref-type="fig"}). After irradiation, the expression of TNFα at 2 hours to 48 hours versus 0 hours showed a statistically significant difference (*P* \< 0.01), but there was no significant difference (*P* = 0.95) between the TNF-TK and TNF groups at any time point ([Figure 4C](#f4-ijn-8-3805){ref-type="fig"} and [D](#f4-ijn-8-3805){ref-type="fig"}). The TNF-TK-radiation group versus the TNF-TK group showed a statistically significant difference (*P* \< 0.01) at 24 hours. The TNF-radiation group versus the TNF group showed a statistically significant difference (*P* \< 0.01) at 24 hours. Expression of human herpes simplex virus thymidine kinase --------------------------------------------------------- Without ^125^I radiation, the expression of HSV-TK was higher in the TNF-TK and TK groups than in the other groups ([Figure 5A](#f5-ijn-8-3805){ref-type="fig"}). 
After 2 Gy ^125^I radiation, the expression of HSV-TK was higher in the TNF-TK and TK groups and gradually increased with time; the highest expression was observed at 24 hours and 48 hours ([Figure 5B](#f5-ijn-8-3805){ref-type="fig"}). After irradiation, the expression of HSV-TK at 4 hours versus 0 hours showed a statistically significant difference (*P* \< 0.01), but there was no significant difference (*P* = 0.95) between the TNF-TK and TK groups at any time point. The TNF-TK-radiation group versus the TNF-TK group showed a statistically significant difference (*P* \< 0.01) at 24 hours. The TK-radiation group versus the TK group showed a statistically significant difference (*P* \< 0.01) at 24 hours ([Figure 5C](#f5-ijn-8-3805){ref-type="fig"} and [D](#f5-ijn-8-3805){ref-type="fig"}). Under electron microscopy ------------------------- The sample in the TNF-TK group with exposure to 2 Gy ^125^I, examined under regular light microscopy, showed more cells in a necrotic state than the other groups. There were large areas of vacuolar change, and some cells became smaller and oval in shape. When observed under electron microscopy, significant vacuoles were seen in the cytoplasm and there was expansion of the nuclear space ([Figure 6](#f6-ijn-8-3805){ref-type="fig"}). The cell-growth rate reflected the impact of transfection on OCM-1 cell proliferation ------------------------------------------------------------------------------------- The cell numbers of each group were examined every 2 days in vitro. As shown in [Figure 7A](#f7-ijn-8-3805){ref-type="fig"}, irradiation alone and transfection alone suppressed OCM-1 cell growth compared with the negative-control group and blank-control group, respectively. Growth of the transfected groups without irradiation was similar to that of the negative-control-radiation and OCM-1-radiation groups ([Figure 7B](#f7-ijn-8-3805){ref-type="fig"}). The TNF-TK-radiation, TNF-radiation, and TK-radiation groups were significantly inhibited compared with the transfected groups and the irradiation groups (*P* \< 0.05) ([Figure 7C](#f7-ijn-8-3805){ref-type="fig"} and [D](#f7-ijn-8-3805){ref-type="fig"}). Effect of recombinant plasmid pEgr-TNFα-TK on OCM-1 cell apoptosis ------------------------------------------------------------------ By using flow cytometry, it was possible to quantitatively measure and analyze the effect of the coexpression plasmid on promoting apoptosis. In the transfected groups, which were either exposed or not exposed to 2 Gy ^125^I, results for signs of early cellular apoptosis between the two groups (annexin V-FITC+/PI− cells, early apoptotic cells) showed that pEgr-TNFα-TK treatment, irradiation, and combined treatments could evoke OCM-1 cell apoptosis ([Figure 8A](#f8-ijn-8-3805){ref-type="fig"}) when compared with the control group and the negative control group. However, 2 Gy ^125^I treatment did not induce OCM-1 cell apoptosis in 48 hours (*P* = 0.057); a statistically significant difference was revealed in 72 hours (*P* = 0.00). The apoptosis rate of the transfected groups with irradiation was higher than that of the unexposed transfection groups in 48 hours, and there was statistical significance in the TNF-radiation and TK-radiation groups in comparison with the unexposed transfection groups (*P* \< 0.05). 
In 72 hours ([Figure 8B](#f8-ijn-8-3805){ref-type="fig"}), the apoptosis rate of the transfected groups with irradiation was significantly higher than the unexposed transfection groups, and the difference in early apoptotic rate for cells in the TNF-radiation group was more significant when compared to others groups (*P* \< 0.01). The result of activated caspase-3 fluorescent stain showed that in the unexposed group and the group exposed to 2 Gy ^125^I radiation, the average fluorescent intensity of caspase-3 stain between the two groups was more intense in the experiment group than the control (*P* \< 0.01), and it was strongest in the TNF-radiation group ([Figure 8C](#f8-ijn-8-3805){ref-type="fig"}). There was statistical significance between the transfection and the nontransfection groups (*P* \< 0.01). Discussion ========== In this study, we use dendrimer nanoparticles as a vector to produce coexpression gene therapy of TNFα and HSV-TK, in order to find an efficient way to accomplish an improvement of human uveal primary tumor radiosensitivity. Dendrimers have been shown to be nontoxic, and they are highly efficient carriers for the delivery of nucleic acids and short oligodeoxynucleotides.[@b17-ijn-8-3805]--[@b19-ijn-8-3805] Ferenc et al proved that phosphorus dendrimers have the potential to become efficient carriers of small interfering RNA in anti-HIV therapy.[@b20-ijn-8-3805] Such complexes have been called dendriplexes by analogy with polyplexes (polymer/nucleic acid complexes) and lipoplexes (liposome/nucleic acid complexes).[@b21-ijn-8-3805] The results of our study show that these dendrimer nanoparticles are about 20 nm, and dendriplexes that formed with those plasmids that were well constructed could achieve 100--200 nm with neutral zeta potential.[@b22-ijn-8-3805],[@b23-ijn-8-3805] The properties of the dendriplexes suggest that they can perform the transfection appropriately and affect tumor-cell proliferation and apoptosis. Nanomaterials are similar in scale to biologic molecules, and systems can yet be engineered to have various functions; therefore, nanotechnology is potentially useful for medical applications.[@b24-ijn-8-3805] The influence of nanotechnology on the field of medicine has given rise to a new field known as "nanomedicine," which encompasses the utilization of nanoscale structures and devices for medical treatment and diagnosis.[@b25-ijn-8-3805] Numerous studies have reported that nanotechnology accelerates various regenerative therapies, such as those for the bone, vascular, heart, cartilage, bladder, and brain tissue.[@b26-ijn-8-3805]--[@b28-ijn-8-3805] Nanomedicines having at least one dimension in the nanoscale include nanoparticles, micelles, nanotubes, and dendrimers, with and without targeting ligands, and are making a significant impact in the fields of ocular drug delivery and gene delivery.[@b29-ijn-8-3805] Nanoparticles have better cellular uptake than larger particles. 
This uptake process is most likely by endocytosis, and has been utilized for gene delivery investigating the uptake of different-sized nanoparticles (20, 100, 500, 1,000, and 2,000 nm) into retinal pigment epithelial cells.[@b29-ijn-8-3805],[@b30-ijn-8-3805] Dendrimer nanoparticles form a tree-like, globular, nanostructured polymer that is a new type of vector for genetic transfection.[@b31-ijn-8-3805] Some dendrimers possess antimicrobial properties, and can be used as surface-coating agents and drug carriers.[@b32-ijn-8-3805] The polymer has more accurate nanostructure, higher solubility, low viscosity, and composability by using the positive charge on its surface to bind with the glycoprotein and phospholipid on the cell membrane.[@b33-ijn-8-3805] Therefore nanoparticle systems diffuse rapidly and are well internalized in ocular tissues.[@b34-ijn-8-3805] Dendrimeric nanoparticles have received significant attention due to their well-defined size, tailorable structure, narrow polydispersity, and potentially favorable ocular biodistribution.[@b35-ijn-8-3805] Because of the unique physiological structure of an eye, it is better to use lower-toxicity and higher-efficacy transfection vectors to localize drug administration.[@b36-ijn-8-3805] As a result, using dendrimeric nanoparticles as the vector will be the most valuable treatment to treat intraocular tumor. The involvement of gene therapy for tumor and transfection of genetic vectors consists of gene information for both direct or indirect antitumor effect, which will be expressed at the tumor-growth site to inhibit or even kill the tumor.[@b37-ijn-8-3805],[@b38-ijn-8-3805] In this study, we successfully combined human TNFα and HSV-TK genes into one single vector with Egr-1 promoter. Due to the eye's unique anatomical structure, a radio applicator for the iris has always been deemed to be the best choice for localization as well as short-distance radiotherapy. More recently, ^125^I brachytherapy has gained acceptance as an effective treatment alternative for small and medium-sized melanomas.[@b39-ijn-8-3805] However, simple radiotherapy can still injure the surrounding tissues, and some tumors even develop resistance to the radiation, resulting in reduction of therapeutic effect and its application. These are the primary factors to cause the failure of radiotherapy.[@b40-ijn-8-3805]--[@b42-ijn-8-3805] As a result, we hypothesize the combination of gene therapy and radiotherapy will have a better antitumor coeffect. Egr-1 played an important role in combining gene therapy and radiotherapy effectively. Murugesan et al used the recombinant technology with adenovirus as vector to embed human TNFα, chemo- or radiosensitive fragment, and Egr-1 into a compound called TNFerade (AdEgr.TNF.11D).[@b43-ijn-8-3805] Weichselbaum and Kufe successfully proceeded to use TNFerade in phase III human trials for treating pancreatic carcinoma.[@b44-ijn-8-3805] The use of recombinant technology could assist in finding the most appropriate vector, which could in turn increase the precision of transfection of TNFα into tumor cells, for it to express and synthesize target enzymes while HSV-TK would also be used, and both could together act to kill tumor cells. In this study, we not only constructed plasmid pEgr1-TNFα-TK with dendrimeric nanoparticles as the vector precisely but also combined 2 Gy ^125^I radiation for human uveal melanoma cells in vitro. 
DNase examination confirmed that dendrimer complexes can protect plasmid DNA from DNase I more effectively than naked plasmid DNA ([Figure 2B](#f2-ijn-8-3805){ref-type="fig"}--[D](#f2-ijn-8-3805){ref-type="fig"}). Protein expression of the genes was tested by ELISA and Western blot, and both showed significant expression for each gene to protein. The expression level is related to ^125^I radiation exposure, and also showed a specific time--effect relationship. We also confirmed that the transfection with dendrimer nanoparticles was successful. Based on the expression levels of TNFα and HSV-TK in those transfected groups and the combination groups, the bar graphs in [Figures 4C](#f4-ijn-8-3805){ref-type="fig"} and [5C](#f5-ijn-8-3805){ref-type="fig"} show increases in expression of these two proteins after exposure. Statistical analysis suggested a significant variation in expression before and after radiation. The study also proved that the radiation could activate the transcription of Egr-1 and induce downstream upregulation. However, the peak time and dosage for these two proteins in cells were partially different, probably due to the type of protein expression and the effect of transfection. Under TEM, the sample in the TNF-TK group with exposure to 2 Gy ^125^I radiation showed more cells in a necrotic state than other groups. Cell growth-rate results suggested that the TNF-TK group combined with radiation presented the best effect to inhibit the proliferation rate of OCM-1 cells. On the contrary, the sample in the negative-control group showed no damage, just like the control group. This observation confirmed that it is safe to use dendrimer nanoparticles as vectors. Also, our study proved that radiation can increase the efficiency of gene transfer to ameliorate the effect of gene therapy, and gene therapy can increase radiosensitivity, reduce radiation injury to normal tissue, and change oxygenation of tumor cells. The results of flow cytometry and caspase-3 fluorescent staining showed that there were no significant differences in the early apoptotic rate among the transfection groups that were not exposed to 2 Gy ^125^I, but there were significant differences between the transfection groups after irradiation. The TNF-radiation groups showed the highest early apoptotic rate compared with the other radiation groups. There was no statistical difference between the negative and control groups. In vitro experimentation showed that dendrimer nanoparticles delivered the genes into OCM-1 lines successfully. The recombinant plasmid pEgr-1-TNFα-TK could significantly affect OCM-1 lines' proliferation rate and cause apoptosis and necrosis. The effects were strong after 2 Gy ^125^I irradiation. However, the necrosis effect was more obvious, and the apoptosis effect was not as profound as the cells with transfection of pEgr-1-TNFα with irradiation. A possible reason could be the combinatory effect of TK/GCV, which would lead more cells to a necrotic state. We also confirmed that the dendrimer nanoparticles can deliver genes into the target cells effectively and safely. In our previous study, we used New Zealand rabbits to build choroidal melanoma animal models and prove that domestic ^125^I plaque irradiation is effective for the treatment of choroidal melanoma.[@b45-ijn-8-3805] Therefore, we will further investigate the inhibitory effect and the antitumor mechanisms of coexpression plasmid pEgr-1-TNFα-TK on uveal melanoma in animal experiments. 
Experimental animal models under cyclosporine would have reduced immunity. The use of a gene-transfection system with a highly toxic and unstable viral carrier would make it difficult to maintain animal survival rates during experiments. Therefore, in this study, we confirmed the efficiency of dendrimer nanoparticle transfection and expression of cytotoxicity to provide reliable experimental data for further experiments to investigate a new approach for treating human uveal melanoma. Conclusion ========== The present study indicated that the pEgr-1-TNFα-TK group had the best effect of destroying tumor cells after irradiation, and that dendrimer nanoparticles provided an efficient and safe avenue to deliver genes into target cells. This study was supported by the National Natural Science Foundation of China (grant 81272981), the Natural Sciences Foundation of Beijing, People's Republic of China (grant 7112031), and the Advanced Health-Care Professional Development Project of the Beijing Municipal Health Bureau, People's Republic of China (Grant 2009-3-32). **Disclosure** The authors report no conflicts of interest in this work. ![(**A**--**C**) Egr-1 promoter gene, TNF-α gene, HSV1-TK gene agarose gel electrophoresis. (**A**) Egr-1 promoter connected into pMD-18T vector double-cut by AseI/NheI into two bands −2.7 K and 0.6 K. (**B**) TNF-α gene connected into the PCR-XL-TOPO vector double-cut by XhoI/EcoRI into two bands −3.5 K and 0.7 K. (**C**) Repaired TK gene produced by PCR, approximately 1,150 bp.\ **Abbreviations:** Egr-1, early growth response-1; TNF-α, tumor necrosis factor-α; HSV1-TK, herpes simplex virus thymidine kinase; TK, thymidine kinase; PCR, polymerase chain reaction.](ijn-8-3805Fig1){#f1-ijn-8-3805} ![(**A**--**D**) Agarose gel electrophoresis of dendriplexes. (**A**) Dendripexes of TNF-TK, TNF, and TK were verified with gel electrophoresis. (**B**--**D**) DNase sensitivity examination results in 30 minutes, 4 hours, and 6 hours, respectively. The "d"s and "p"s refer to dendriplexes and plasmid DNA, respectively. We can see there are still bands around the original place.\ **Abbreviations:** TNF-TK, tumor necrosis factor-thymidine kinase; TNF, tumor necrosis factor; TK, thymidine kinase.](ijn-8-3805Fig2){#f2-ijn-8-3805} ![(**A**--**F**) Images of dendrimer nanoparticles and dendriplexes under electron microscope. TEM image of dendrimer nanoparticles (**A**) and dendriplexes (**B** and **C**) showed particle sizes of about 20 nm and 100--200 nm, respectively. SEM images of dendrimer nanoparticles (**D**) and dendriplexes (**E** and **F**) were matched with TEM and showed dendripolex agglomeration was approximately 500 nm.\ **Abbreviations:** SEM, scanning electron microscopy; TEM, transmission electron microscopy.](ijn-8-3805Fig3){#f3-ijn-8-3805} ![(**A**--**D**) The expression of human tumor necrosis factor-α protein. (**A**) Without ^125^I radiation. (**B**) 2 Gy ^125^I irradiation, TNFα expression levels of each group change over time curve. (**C**) TNF-α expression at 24 hours. 
(**D**) TNFα Western blot strip: TNF-TK-radiation group, about 17 K~d~ (1); TNF-radiation group, about 17 K~d~ (2); TNF-TK and TNF groups, no obvious strip tape (3 and 4).\ **Note:** The red arrows show the strip's position.\ **Abbreviations:** TNFα**,** tumor necrosis factor α; TNF-TK, tumor necrosis factor-thymidine kinase; TNF, tumor necrosis factor; ^125^I, iodine-125; TK, thymidine kinase.](ijn-8-3805Fig4){#f4-ijn-8-3805} ![(**A**--**D**) The expression of human herpes simplex virus thymidine kinase. (**A**) Without ^125^I radiation. (**B**) 2 Gy ^125^I irradiation, HSV-TK expression levels of each group change over time curve. (**C**) HSV-TK expression at 24 hours. (**D**) HSV-TK Western blot strip: TNF-TK-radiation group, about 24 K~d~ (1); TNF-TK group, strip is pale and is about 24 K~d~ (2); TK group, about 24 K~d~ (3) the TK-radiation group had the most obvious strip, about 24 K~d~ (4).\ **Note:** The red arrows show the strips position.\ **Abbreviations:**^125^I, iodine-125; HSV-TK, herpes simplex virus-thymidine kinase; TNF-TK, tumor necrosis factor-thymidine kinase; TK, thymidine kinase.](ijn-8-3805Fig5){#f5-ijn-8-3805} ![(**A**--**F**) OCM-1 cell morphology. (**A**) human uveal melanoma OCM-1 (×100) was adherent cell in spindle shape. (**B**) Nontransfected control group of human uveal melanoma OCM-1 (×1.0 k) showed microvilli on cell surface, inverse ratio of nucleus to cytoplasm, and multiple nucleoli in nucleus, all suggesting a greater degree of malignancy. (**C**) Non-transfected control group of human uveal melanoma OCM-1 (×7.0 k): rough endoplasmic reticulum and mitochondrial structures could be observed. (**D**) TNF-TK group after using 2 Gy radio-applicator and cultured for 24 hours (×1.2 k): poorly defined cell membrane, formation of vacuole in cytoplasm, and expansion in nuclear space. (**E**) TNF-TK group after using 2 Gy radio-applicator and cultured for 24 hours (×2.0 k): swollen mitochondria, expanded rough endoplasmic reticulum, and vacuole formation in Golgi body. (**F**) TNF-TK group after using 2 Gy radio-applicator and cultured for 24 hours (×1.2 k): empty cell with poorly defined cell membrane and highly expanded organelles.\ **Abbreviations:** OCM-1, human choroidal melanoma; TNF-TK, tumor necrosis factor-thymidine kinase.](ijn-8-3805Fig6){#f6-ijn-8-3805} ![(**A**) Cell-growth curves of non-^125^I radiation OCM-1 cells, the negative control group, with no statistically significant difference from the control group. (**B**) The transfected groups without irradiation were similar to the negative control radiation and OCM-1-radiation groups. (**C** and **D**) cell-growth curve of ^125^I radiation OCM-1 cells, the negative control group, with no statistically significant difference from the control group.\ **Abbreviations:**^125^I, iodine-125; OCM-1, human choroidal melanoma; TNF-TK, tumor necrosis factor-thymidine kinase; TNF, tumor necrosis factor; TK, thymidine kinase.](ijn-8-3805Fig7){#f7-ijn-8-3805} ![(**A**--**C**) Apoptosis analysis of OCM-1 cell lines after treatment for 48 and 72 hours by flow cytometry. (**A**) Apoptosis analysis using annexin V-FITC/PI double-staining after 48 hours. The negative control group compared with the control group showed no statistical significance (*P* \> 0.05). (**B**) Apoptosis analysis using Annexin V-FITC/PI double-staining after 72 hours. 
The apoptosis ratios in pEgr-TNFα-TK treatment, irradiation, and combined groups were 7.86% ± 0.15%, 5.9% ± 0.17%, and 13.77% ± 0.76%, respectively; n = 3 replicates per condition. (**C**) Caspase-3 fluorescent stain after 48 hours. pEgr-TNFα-TK treatment, irradiation, and combined treatments significantly induced caspase-3 arrest in OCM-1 cells. The negative control group compared with the control group showed no statistical significance (*P* = 0.531).\ **Notes:** \**P* \< 0.005; ^\#^*P* \< 0.01 compared with the negative control group and the control group by one-way analysis of variance; n = 3 replicates per condition.\ **Abbreviations:** OCM-1, human choroidal melanoma; FITC, fluorescein isothiocyanate; PI, propidium iodide; pEgr-TNFα-TK, plasmid early growth response-1 tumor necrosis factor α thymidine kinase.](ijn-8-3805Fig8){#f8-ijn-8-3805} [^1]: \*These authors contributed equally to this work
null
minipile
NaturalLanguage
mit
null
REYNOLDS v. U.S. United States Supreme Court REYNOLDS v. U.S., (1878) Argued: Decided: October 1, 1878 This is an indictment found in the District Court for the third judicial district of the Territory of Utah, charging George Reynolds with bigamy, in violation of sect. 5352 of the Revised Statutes, which, omitting its exceptions, is as follows:-- 'Every person having a husband or wife living, who marries another, whether married or single, in a Territory, or other place over which the United States have exclusive jurisdiction, is guilty of bigamy, and shall be punished by a fine of not more than $500, and by imprisonment for a term of not more than five years.' The prisoner pleaded in abatement that the indictment was not found by a legal grand jury, because fifteen persons, and no more, were impanelled and sworn to serve as a grand jury at the term of the court during which the indictment was found, whereas sect. 808 of the Revised Statutes of the United States enacts that every grand jury impanelled before any District or Circuit Court shall consist of not less than sixteen persons. An act of the legislature of Utah of Feb. 18, 1870, provides that the court shall impanel fifteen men to serve as a grand jury. Compiled Laws of Utah, ed. of 1876, p. 357, sect. 4. The court overruled the plea, on the ground that the territorial enactment governed. The prisoner then pleaded not guilty. Several jurors were examined on their voire dire by the district attorney. Among them was Eli Ransohoff, who, in answer to the question, 'Have you formed or expressed an opinion as to the guilt or innocence of the prisoner at the bar?' said, 'I have expressed an opinion by reading the papers with the reports of the trial.' By the defendant: 'You stated that you had formed some opinion by reading the reports of the previous trial?' A. 'Yes.' Q. 'Is that an impression which still remains upon your mind?'- [98 U.S. 145, 147] A. 'No; I don't think it does: I only glanced over it, as everybody else does.' Q. 'Do you think you could try the case wholly uninfluenced by any thing?' A. 'Yes.' Charles Read, called as a juror, was asked by the district attorney, 'Have you formed or expressed any opinion as to the guilt or innocence of this charge?' A. 'I believe I have formed an opinion.' By the court: 'Have you formed and expressed an opinion?' A. 'No, sir; I believe not.' Q. 'You say you have formed an opinion?' A. 'I have.' Q. 'Is that based upon evidence?' A. 'Nothing produced in court.' Q. 'Would that opinion influence your verdict?' A. 'I don't think it would.' By defendant: 'I understood you to say that you had formed an opinion, but not expressed it.' A. 'I don't know that I have expressed an opinion: I have formed one.' Q. 'Do you now entertain that opinion?' A. 'I do.' The defendant challenged each of these jurors for cause. The court overruled the challenge, and permitted them to be sworn. The defendant excepted. The court also, when Homer Brown was called as a juror, allowed the district attorney to ask him the following questions: Q. 'Are you living in polygamy?' A. 'I would rather not answer that.' The court instructed the witness that he must answer the question, unless it would criminate him. By the district attorney: 'You understand the conditions upon which you refuse?' A. 'Yes, sir.'-Q. 'Have you such an opinion that you could not find a verdict for the commission of that crime?' A. 'I have no opinion on it in this particular case. 
I think under the evidence and the law I could render a verdict accordingly.' Whereupon the United States challenged the said Brown for favor, which challenge was sustained by the court, and the defendant excepted. [98 U.S. 145, 148] John W. Snell, also a juror, was asked by the district attorney on voire dire: Q. 'Are you living in polygamy?' A. 'I decline to answer that question.'-Q. 'On what ground?' A. 'It might criminate myself; but I am only a fornicator.' Whereupon Snell was challenged by the United States for cause, which challenge was sustained, and the defendant excepted. After the trial commenced, the district attorney, after proving that the defendant had been married on a certain day to Mary Ann Tuddenham, offered to prove his subsequent marriage to one Amelia Jane Schofield during the lifetime of said Mary. He thereupon called one Pratt, the deputy marshal, and showed him a subpoena for witnesses in this case, and among other names thereon was the name of Mary Jane Schobold, but no such name as Amelia Jane Schofield. He testified that this subpoena was placed in his hands to be served. Q. 'Did you see Mr. Reynolds when you went to see Miss Schofield?' A. 'Yes, sir.' Q. 'Who did you inquire for?' A. 'I inquired for Mary Jane Schofield, to the best of my knowledge. I will state this, that I inserted the name in the subpoena, and intended it for the name of the woman examined in this case at the former term of the court, and inquired for Mary Jane Schofield, or Mrs. Reynolds, I do not recollect certainly which.' Q. 'State the reply.' A. 'He said she was not at home.' Q. 'Did he say any thing further.' A. 'I asked him then where I could find her. I said, 'Where is she? And he said, 'You will have to find out." Q. 'Did he know you to be a deputy marshal?' A. 'Yes, sir.' Q. 'Did you tell him what your business was as deputy marshal?' A. 'I don't remember now: I don't think I did.' Q. 'What else did he say?'- [98 U.S. 145, 149] A. 'He said, just as I was leaving, as I understood it, that she did not appear in this case.' The court then ordered a subpoena to issue for Amelia Jane Schofield, returnable instanter. Upon the following day, at ten o'clock A.M., the said subpoena for the said witness having issued about nine o'clock P.M. of the day before, the said Arthur Pratt was again called upon, and testified as follows:-- Q. (By district attorney.) 'State whether you are the officer that had subpoena in your hands.' (Exhibiting subpoena last issued, as above set forth.) A. 'Yes, sir.' Q. 'State to the court what efforts you have made to serve it.' A. 'I went to the residence of Mr. Reynolds, and a lady was there, his first wife, and she told me that this woman was not there; that that was the only home that she had, but that she hadn't been there for two or three weeks. I went again this morning, and she was not there.' Q. 'Do you know any thing about her home,-where she resides?' A. 'I know where I found her before.' Q. 'Where?' A. 'At the same place.' Q. 'You are the deputy marshal that executed the process of the court?' A. 'Yes, sir.' Q. 'Repeat what Mr. Reynolds said to you when you went with the former subpoena introduced last evening.' A. 'I will state that I put her name on the subpoena myself. I know the party, and am well acquainted with her, and I intended it for the same party that I subpoenaed before in this case. He said that she was not in, and that I could get a search-warrant if I wanted to search the house. I said, 'Will you tell me where she is?' 
He said, 'No; that will be for you to find out.' He said, just as I was leaving the house,-I don't remember exactly what it was, but my best recollection is that he said she would not appear in this case.'- [98 U.S. 145, 150] Q. 'Can't you state that more particularly?' A. 'I can't give you the exact words, but I can say that was the purport of them.' Q. 'Give the words as nearly as you can.' A. 'Just as I said, I think those were his words.' The district attorney then offered to prove what Amelia Jane Schofield had testified to on a trial of another indictment charging the prisoner with bigamy in marrying her; to which the prisoner objected, on the ground that a sufficient foundation had not been laid for the introduction of the evidence. A. S. Patterson, having been sworn, read, and other witnesses stated, said Amelia's testimony on the former trial, tending to show her marriage with the defendant. The defendant excepted to the admission of the evidence. The court, in summing up to the jury, declined to instruct them, as requested by the prisoner, that if they found that he had married in pursuance of and conformity with what he believed at the time to be a religious duty, their verdict should be 'not guilty,' but instructed them that if he, under the influence of a religious belief that it was right, had 'deliberately married a second time, having a first wife living, the want of consciousness of evil intent-the want of understanding on his part that he was committing crime-did not excuse him, but the law inexorably, in such cases, implies criminal intent.' The court also said: 'I think it not improper, in the discharge of your duties in this case, that you should consider what are to be the consequences to the innocent victims of this delusion. As this contest goes on, they multiply, and there are pure-minded women and there are innocent children,-innocent in a sense even beyond the degree of the innocence of childhood itself. These are to be the sufferers; and as jurors fail to do their duty, and as these cases come up in the Territory, just so do these victims multiply and spread themselves over the land.' To the refusal of the court to charge as requested, and to the charge as given, the prisoner excepted. The jury found him guilty, as charged in the indictment; and the judgment that he be imprisoned at hard labor for a term of two years, and pay [98 U.S. 145, 151] a fine of $500, rendered by the District Court, having been affirmed by the Supreme Court of the Territory, he sued out this writ of error. The assignments of error are set out in the opinion of the court. Mr. George W. Biddle and Mr. Ben Sheeks for the plaintiff in error. First, The jury was improperly drawn. Two of the jurors were challenged for cause by the defendant below, because they admitted that they had formed, and still entertained, an opinion upon the guilt or innocence of the prisoner. The holding by a juror of any opinions which would disqualify him from rendering a verdict in accordance with the law of the land, is a valid objection to his serving. It was clearly erroneous for the prosecution to ask several of the jurymen, upon voire dire, whether they were living in polygamy; questions which tend to disgrace the person questioned, or to render him amenable to a criminal prosecution, have never been allowed to be put to a juror. Anonymous, Salk. 153; Bacon, Abr., tit. Juries, 12(f); 7 Dane, Abr. 334; Hudson v. The State, 1 Blackf. (Ind.) 319. 
Second, The proof of what the witness, Amelia Jane Schofield, testified to in a former trial, under another indictment, should not have been admitted. The constitutional right of a prisoner to confront the witness and cross-examine him is not to be abrogated, unless it be shown that the witness is dead, or [98 U.S. 145, 152] out of the jurisdiction of the court; or that, having been summoned, he appears to have been kept away by the adverse party on the trial. It appeared not only that no such person as Amelia Jane Schofield had been subpoenaed, but that no subpoena had ever been taken out for her. An unserved subpoena with the name of Mary Jane Schobold was shown. At nine o'clock in the evening, during the trial, a new subpoena was issued; and on the following morning, with no attempt to serve it beyond going to the prisoner's usual residence and inquiring for her, the witness Patterson was allowed to read from a paper what purported to be statements made by Amelia Jane Schofield on a former trial. No proof was offered as to the genuineness of the paper or its origin, nor did the witness testify to its contents of his own knowledge. This is in the teeth of the ruling in United States v. Wood (3 Wash. 440), and the rule laid down in all the American authorities. Richardson v. Stewart, 2 Serg. & R. (Pa.) 84; Chess v. Chess, 17 id. 409; Huidekopper v. Cotton, 3 Watts (Pa.) 56; Powell v. Waters, 17 Johns. (N. Y.) 176; Cary v. Sprague, 12 Wend. (N. Y.) 45; The People v. Newman, 5 Hill (N. Y.), 295; Brogy v. The Commonwealth, 10 Gratt . (Va.) 722; Bergen v. The People, 17 Ill. 426; Dupree v. The State, 33 Ala. 380. Third, As to the constitutionality of the Poland Bill. Rev. Stat., sect. 5352. Undoubtedly Congress, under art. 4, sect. 3, of the Constitution, which gives 'power to dispose of and make all needful rules and regulations respecting the territory or other property belonging to the United States,' and under the decisions of this court upon it, may legislate over such territory, and regulate the form of its local government. But its legislation can be neither exclusive nor arbitrary. The power of this government to obtain and hold territory over which it might legislate, without restriction, would be inconsistent with its own existence in its present form. There is always an excess of power exercised when the Federal government attempts to provide for more than the assertion and preservation of its rights over such territory, and interferes by positive enactment with the social and domestic life of its inhabitants and their internal police. The offence prohibited by sect. 5352 is not a malum in se; it is not prohibited by the decalogue; and, if it be said [98 U.S. 145, 153] that its prohibition is to be found in the teachings of the New Testament, we know that a majority of the people of this Territory deny that the Christian law contains any such prohibition. The Attorney-General and The Solicitor-General, contra. MR. CHIEF JUSTICE WAITE delivered the opinion of the court. The assignments of error, when grouped, present the following questions:-- 1. Was the indictment bad because found by a grand jury of less than sixteen persons? 2. Were the challenges of certain petit jurors by the accused improperly overruled? 3. Were the challenges of certain other jurors by the government improperly sustained? 4. Was the testimony of Amelia Jane Schofield, given at a former trial for the same offence, but under another indictment, improperly admitted in evidence? 5. 
Should the accused have been acquitted if he married the second time, because he believed it to be his religious duty? 6. Did the court err in that part of the charge which directed the attention of the jury to the consequences of polygamy? These questions will be considered in their order. 1. As to the grand jury. The indictment was found in the District Court of the third judicial district of the Territory. The act of Congress 'in relation to courts and judicial officers in the Territory of Utah,' approved June 23, 1874 (18 Stat. 253), while regulating the qualifications of jurors in the Territory, and prescribing the mode of preparing the lists from which grand and petit jurors are to be drawn, as well as the manner of drawing, makes no provision in respect to the number of persons of which a grand jury shall consist. Sect. 808, Revised Statutes, requires that a grand jury impanelled before any district or circuit court of the United States shall consist of not less than sixteen nor more than twenty-three persons, while a statute of the Territory limits the number in the district courts of the Territory [98 U.S. 145, 154] to fifteen. Comp. Laws Utah, 1876, 357. The grand jury which found this indictment consisted of only fifteen persons, and the question to be determined is, whether the section of the Revised Statutes referred to or the statute of the Territory governs the case. By sect. 1910 of the Revised Statutes the district courts of the Territory have the same jurisdiction in all cases arising under the Constitution and laws of the United States as is vested in the circuit and district courts of the United States; but this does not make them circuit and district courts of the United States. We have often so decided. American Insurance Co. v. Canter, 1 Pet. 511; Benner et al. v. Porter, 9 How. 235; Clinton v. Englebrecht, 13 Wall. 434. They are courts of the Territories, invested for some purposes with the powers of the courts of the United States. Writs of error and appeals lie from them to the Supreme Court of the Territory, and from that court as a territorial court to this in some cases. Sect. 808 was not designed to regulate the impanelling of grand juries in all courts where offenders against the laws of the United States could be tried, but only in the circuit and district courts. This leaves the territorial courts free to act in obedience to the requirements of the territorial laws in force for the time being. Clinton v. Englebrecht, supra; Hornbuckle v. Toombs, 18 Wall. 648. As Congress may at any time assume control of the matter, there is but little danger to be anticipated from improvident territorial legislation in this particular. We are therefore of the opinion that the court below no more erred in sustaining this indictment than it did at a former term, at the instance of this same plaintiff in error, in adjudging another bad which was found against him for the same offence by a grand jury composed of twenty-three persons. 1 Utah, 226. 2. As to the challenges by the accused. By the Constitution of the United States (Amend. VI.), the accused was entitled to a trial by an impartial jury. A juror to be impartial must, to use the language of Lord Coke, 'be indifferent as he stands unsworn.' Co. Litt. 155 b. Lord Coke also says that a principal cause of challenge is 'so called because, if it be found true, it standeth sufficient of itself, without [98 U.S. 145, 155] leaving any thing to the conscience or discretion of the triers' (id. 
156 b); or, as stated in Bacon's Abridgment, 'it is grounded on such a manifest presumption of partiality, that, if found to be true, it unquestionably sets aside the . . . juror.' Bac. Abr., tit. Juries, E. 1. 'If the truth of the matter alleged is admitted, the law pronounces the judgment; but if denied, it must be made out by proof to the satisfaction of the court or the triers.' Id. E. 12. To make out the existence of the fact, the juror who is challenged may be examined on his voire dire, and asked any questions that do not tend to his infamy or disgrace. All of the challenges by the accused were for principal cause. It is good ground for such a challenge that a juror has formed an opinion as to the issue to be tried. The courts are not agreed as to the knowledge upon which the opinion must rest in order to render the juror incompetent, or whether the opinion must be accompanied by malice or ill-will; but all unite in holding that it must be founded on some evidence, and be more than a mere impression. Some say it must be positive (Gabbet, Criminal Law, 391); others, that it must be decided and substantial (Armistead's Case, 11 Leigh (Va.), 659; Wormley's Case, 10 Gratt. (Va.) 658; Neely v. The People, 13 Ill. 685); others, fixed (State v. Benton, 2 Dev. & B. (N. C.) L. 196); and, still others, deliberate and settled (Staup v. Commonwealth, 74 Pa. St. 458; Curley v. Commonwealth, 84 id. 151). All concede, however, that, if hypothetical only, the partiality is not so manifest as to necessarily set the juror aside. Mr. Chief Justice Marshall, in Burr's Trial (1 Burr's Trial, 416), states the rule to be that 'light impressions, which may fairly be presumed to yield to the testimony that may be offered, which may leave the mind open to a fair consideration of the testimony, constitute no sufficient objection to a juror; but that those strong and deep impressions which close the mind against the testimony that may be offered in opposition to them, which will combat that testimony and resist its force, do constitute a sufficient objection to him.' The theory of the law is that a juror who has formed an opinion cannot be impartial. Every opinion which he may entertain need not necessarily have that effect. In these days of newspaper enterprise and universal education, every case of public interest is almost, as a matter of necessity, [98 U.S. 145, 156] brought to the attention of all the intelligent people in the vicinity, and scarcely any one can be found among those best fitted for jurors who has not read or heard of it, and who has not some impression or some opinion in respect to its merits. It is clear, therefore, that upon the trial of the issue of fact raised by a challenge for such cause the court will practically be called upon to determine whether the nature and strength of the opinion formed are such as in law necessarily to raise the presumption of partiality. The question thus presented is one of mixed law and fact, and to be tried, as far as the facts are concerned, like any other issue of that character, upon the evidence. The finding of the trial court upon that issue ought not to be set aside by a reviewing court, unless the error is manifest. No less stringent rules should be applied by the reviewing court in such a case than those which govern in the consideration of motions for new trial because the verdict is against the evidence. 
It must be made clearly to appear that upon the evidence the court ought to have found the juror had formed such an opinion that he could not in law be deemed impartial. The case must be one in which it is manifest the law left nothing to the 'conscience or discretion' of the court. The challenge in this case most relied upon in the argument here is that of Charles Read. He was sworn on his voire dire; and his evidence,1 taken as a whole, shows that he 'believed' he had formed an opinion which he had never expressed, but which he did not think would influence his verdict on hearing the testimony. We cannot think this is such a manifestation of partiality as to leave nothing to the 'conscience or discretion' of the triers. The reading of the evidence leaves the impression that the juror had some hypothetical opinion about the case, but it falls far short of raising a manifest presumption of partiality. In considering such questions in a reviewing court, we ought not to be unmindful of the fact we have so often observed in our experience, that jurors not unfrequently seek to excuse themselves on the ground of having formed an opinion, when, on examination, it turns out that no real disqualification exists. In such cases the manner of the [98 U.S. 145, 157] juror while testifying is oftentimes more indicative of the real character of his opinion than his words. That is seen below, but cannot always be spread upon the record. Care should, therefore, be taken in the reviewing court not to reverse the ruling below upon such a question of fact, except in a clear case. The affirmative of the issue is upon the challenger. Unless he shows the actual existence of such an opinion in the mind of the juror as will raise the presumption of partiality, the juror need not necessarily be set aside, and it will not be error in the court to refuse to do so. Such a case, in our opinion, was not made out upon the challenge of Read. The fact that he had not expressed his opinion is important only as tending to show that he had not formed one which disqualified him. If a positive and decided opinion had been formed, he would have been incompetent even though it had not been expressed. Under these circumstances, it is unnecessary to consider the case of Ransohoff, for it was confessedly not as strong as that of Read. 3. As to the challenges by the government. The questions raised upon these assignments of error are not whether the district attorney should have been permitted to interrogate the jurors while under examination upon their voire dire as to the fact of their living in polygamy. No objection was made below to the questions, but only to the ruling of the court upon the challenges after the testimony taken in answer to the questions was in. From the testimony it is apparent that all the jurors to whom the challenges related were or had been living in polygamy. It needs no argument to show that such a jury could not have gone into the box entirely free from bias and prejudice, and that if the challenge was not good for principal cause, it was for favor. A judgment will not be reversed simply because a challenge good for favor was sustained in form for cause. As the jurors were incompetent and properly excluded, it matters not here upon what form of challenge they were set aside. In one case the challenge was for favor. In the courts of the United States all challenges are tried by the court without the aid of triers (Rev. Stat. sect. 
819), and we are not advised that the practice in the territorial courts of Utah is different. [98 U.S. 145, 158] 4. As to the admission of evidence to prove what was sworn to by Amelia Jane Schofield on a former trial of the accused for the same offence but under a different indictment. The Constitution gives the accused the right to a trial at which he should be confronted with the witnesses against him; but if a witness is absent by his own wrongful procurement, he cannot complain if competent evidence is admitted to supply the place of that which he has kept away. The Constitution does not guarantee an accused person against the legitimate consequences of his own wrongful acts. It grants him the privilege of being confronted with the witnesses against him; but if he voluntarily keeps the witnesses away, he cannot insist on his privilege. If, therefore, when absent by his procurement, their evidence is supplied in some lawful way, he is in no condition to assert that his constitutional rights have been violated. In Lord Morley's Case (6 State Trials, 770), as long ago as the year 1666, it was resolved in the House of Lords 'that in case oath should be made that any witness, who had been examined by the coroner and was then absent, was detained by the means or procurement of the prisoner, and the opinion of the judges asked whether such examination might be read, we should answer, that if their lordships were satisfied by the evidence they had heard that the witness was detained by means or procurement of the prisoner, then the examination might be read; but whether he was detained by means or procurement of the prisoner was matter of fact, of which we were not the judges, but their lordships.' This resolution was followed in Harrison's Case (12 id. 851), and seems to have been recognized as the law in England ever since. In Regina v. Scaife (17 Ad. & El. N. S. 242), all the judges agreed that if the prisoner had resorted to a contrivance to keep a witness out of the way, the deposition of the witness, taken before a magistrate and in the presence of the prisoner, might be read. Other cases to the same effect are to be found, and in this country the ruling has been in the same way. Drayton v. Wells, 1 Nott & M. (S. C.) 409; Williams v. The State, 19 Ga. 403. So that now, in the leading text-books, it is laid down that if a witness is kept away by the adverse party, [98 U.S. 145, 159] his testimony, taken on a former trial between the same parties upon the same issues, may be given in evidence. 1 Greenl. Evid., sect. 163; 1 Taylor, Evid., sect. 446. Mr. Wharton (1 Whart. Evid., sect. 178) seemingly limits the rule somewhat, and confines it to cases where the witness has been corruptly kept away by the party against whom he is to be called, but in reality his statement is the same as that of the others; for in all it is implied that the witness must have been wrongfully kept away. The rule has its foundation in the maxim that no one shall be permitted to take advantage of his own wrong; and, consequently, if there has not been, in legal contemplation, a wrong committed, the way has not been opened for the introduction of the testimony. We are content with this long-established usage, which, so far as we have been able to discover, has rarely been departed from. It is the outgrowth of a maxim based on the principles of common honesty, and, if properly administered, can harm no one. 
Such being the rule, the question becomes practically one of fact, to be settled as a preliminary to the admission of secondary evidence. In this respect it is like the preliminary question of the proof of loss of a written instrument, before secondary evidence of the contents of the instrument can be admitted. In Lord Morley's Case (supra), it would seem to have been considered a question for the trial court alone, and not subject to review on error or appeal; but without deeming it necessary in this case to go so far as that, we have no hesitation in saying that the finding of the court below is, at least, to have the effect of a verdict of a jury upon a question of fact, and should not be disturbed unless the error is manifest. The testimony shows that the absent witness was the alleged second wife of the accused; that she had testified on a former trial for the same offence under another indictment; that she had no home, except with the accused; that at some time before the trial a subpoena had been issued for her, but by mistake she was named as Mary Jane Schobold; that an officer who knew the witness personally went to the house of the accused to serve the subpoena, and on his arrival inquired for her, either by the name of Mary Jane Schofield or Mrs. Reynolds; that he was told by the accused she was not at home; [98 U.S. 145, 160] that he then said, 'Will you tell me where she is?' that the reply was 'No; that will be for you to find out;' that the officer then remarked she was making him considerable trouble, and that she would get into trouble herself; and the accused replied, 'Oh, no; she won't, till the subpoena is served upon her,' and then, after some further conversation, that 'She does not appear in this case.' It being discovered after the trial commenced that a wrong name had been inserted in the subpoena, a new subpoena was issued with the right name, at nine o'clock in the evening. With this the officer went again to the house, and there found a person known as the first wife of the accused. He was told by her that the witness was not there, and had not been for three weeks. He went again the next morning, and not finding her, or being able to ascertain where she was by inquiring in the neighborhood, made return of that fact to the court. At ten o'clock that morning the case was again called; and the foregoing facts being made to appear, the court ruled that evidence of what the witness had sworn to at the former trial was admissible. In this we see no error. The accused was himself personally present in court when the showing was made, and had full opportunity to account for the absence of the witness, if he would, or to deny under oath that he had kept her away. Clearly, enough had been proven to cast the burden upon him of showing that he had not been instrumental in concealing or keeping the witness away. Having the means of making the necessary explanation, and having every inducement to do so if he would, the presumption is that he considered it better to rely upon the weakness of the case made against him than to attempt to develop the strength of his own. Upon the testimony as it stood, it is clear to our minds that the judgment should not be reversed because secondary evidence was admitted. This brings us to the consideration of what the former testimony was, and the evidence by which it was proven to the jury. It was testimony given on a former trial of the same person for the same offence, but under another indictment. It was [98 U.S.
145, 161] substantially testimony given at another time in the same cause. The accused was present at the time the testimony was given, and had full opportunity of cross-examination. This brings the case clearly within the well-established rules. The cases are fully cited in 1 Whart. Evid., sect. 177. The objection to the reading by Mr. Patterson of what was sworn to on the former trial does not seem to have been because the paper from which he read was not a true record of the evidence as given, but because the foundation for admitting the secondary evidence had not been laid. This objection, as has already been seen, was not well taken. 5. As to the defence of religious belief or duty. On the trial, the plaintiff in error, the accused, proved that at the time of his alleged second marriage he was, and for many years before had been, a member of the Church of Jesus Christ of Latter-Day Saints, commonly called the Mormon Church, and a believer in its doctrines; that it was an accepted doctrine of that church 'that it was the duty of male members of said church, circumstances permitting, to practise polygamy ; . . . that this duty was enjoined by different books which the members of said church believed to be of divine origin, and among others the Holy Bible, and also that the members of the church believed that the practice of polygamy was directly enjoined upon the male members thereof by the Almighty God, in a revelation to Joseph Smith, the founder and prophet of said church; that the failing or refusing to practise polygamy by such male members of said church, when circumstances would admit, would be punished, and that the penalty for such failure and refusal would be damnation in the life to come.' He also proved 'that he had received permission from the recognized authorities in said church to enter into polygamous marriage; . . . that Daniel H. Wells, one having authority in said church to perform the marriage ceremony, married the said defendant on or about the time the crime is alleged to have been committed, to some woman by the name of Schofield, and that such marriage ceremony was performed under and pursuant to the doctrines of said church.' Upon this proof he asked the court to instruct the jury that if they found from the evidence that he 'was married as [98 U.S. 145, 162] charged-if he was married-in pursuance of and in conformity with what he believed at the time to be a religious duty, that the verdict must be 'not guilty." This request was refused, and the court did charge 'that there must have been a criminal intent, but that if the defendant, under the influence of a religious belief that it was right,-under an inspiration, if you please, that it was right,-deliberately married a second time, having a first wife living, the want of consciousness of evil intent-the want of understanding on his part that he was committing a crime-did not excuse him; but the law inexorably in such case implies the criminal intent.' Upon this charge and refusal to charge the question is raised, whether religious belief can be accepted as a justification of an overt act made criminal by the law of the land. The inquiry is not as to the power of Congress to prescribe criminal laws for the Territories, but as to the guilt of one who knowingly violates a law which has been properly enacted, if he entertains a religious belief that the law is wrong. Congress cannot pass a law for the government of the Territories which shall prohibit the free exercise of religion. 
The first amendment to the Constitution expressly forbids such legislation. Religious freedom is guaranteed everywhere throughout the United States, so far as congressional interference is concerned. The question to be determined is, whether the law now under consideration comes within this prohibition. The word 'religion' is not defined in the Constitution. We must go elsewhere, therefore, to ascertain its meaning, and nowhere more appropriately, we think, than to the history of the times in the midst of which the provision was adopted. The precise point of the inquiry is, what is the religious freedom which has been guaranteed. Before the adoption of the Constitution, attempts were made in some of the colonies and States to legislate not only in respect to the establishment of religion, but in respect to its doctrines and precepts as well. The people were taxed, against their will, for the support of religion, and sometimes for the support of particular sects to whose tenets they could not and did not subscribe. Punishments were prescribed for a failure to attend upon public worship, and sometimes for entertaining [98 U.S. 145, 163] heretical opinions. The controversy upon this general subject was animated in many of the States, but seemed at last to culminate in Virginia. In 1784, the House of Delegates of that State having under consideration 'a bill establishing provision for teachers of the Christian religion,' postponed it until the next session, and directed that the bill should be published and distributed, and that the people be requested 'to signify their opinion respecting the adoption of such a bill at the next session of assembly.' This brought out a determined opposition. Amongst others, Mr. Madison prepared a 'Memorial and Remonstrance,' which was widely circulated and signed, and in which he demonstrated 'that religion, or the duty we owe the Creator,' was not within the cognizance of civil government. Semple's Virginia Baptists, Appendix. At the next session the proposed bill was not only defeated, but another, 'for establishing religious freedom,' drafted by Mr. Jefferson, was passed. 1 Jeff. Works, 45; 2 Howison, Hist. of Va. 298. In the preamble of this act (12 Hening's Stat. 84) religious freedom is defined; and after a recital 'that to suffer the civil magistrate to intrude his powers into the field of opinion, and to restrain the profession or propagation of principles on supposition of their ill tendency, is a dangerous fallacy which at once destroys all religious liberty,' it is declared 'that it is time enough for the rightful purposes of civil government for its officers to interfere when principles break out into overt acts against peace and good order.' In these two sentences is found the true distinction between what properly belongs to the church and what to the State. In a little more than a year after the passage of this statute the convention met which prepared the Constitution of the United States.' Of this convention Mr. Jefferson was not a member, he being then absent as minister to France. As soon as he saw the draft of the Constitution proposed for adoption, he, in a letter to a friend, expressed his disappointment at the absence of an express declaration insuring the freedom of religion (2 Jeff. Works, 355), but was willing to accept it as it was, trusting that the good sense and honest intentions of the people would bring about the necessary alterations. [98 U.S. 145, 164] 1 Jeff. Works, 79. 
Five of the States, while adopting the Constitution, proposed amendments. Three-New Hampshire, New York, and Virginia-included in one form or another a declaration of religious freedom in the changes they desired to have made, as did also North Carolina, where the convention at first declined to ratify the Constitution until the proposed amendments were acted upon. Accordingly, at the first session of the first Congress the amendment now under consideration was proposed with others by Mr. Madison. It met the views of the advocates of religious freedom, and was adopted. Mr. Jefferson afterwards, in reply to an address to him by a committee of the Danbury Baptist Association (8 id. 113), took occasion to say: 'Believing with you that religion is a matter which lies solely between man and his God; that he owes account to none other for his faith or his worship; that the legislative powers of the government reach actions only, and not opinions,-I contemplate with sovereign reverence that act of the whole American people which declared that their legislature should 'make no law respecting an establishment of religion or prohibiting the free exercise thereof,' thus building a wall of separation between church and State. Adhering to this expression of the supreme will of the nation in behalf of the rights of conscience, I shall see with sincere satisfaction the progress of those sentiments which tend to restore man to all his natural rights, convinced he has no natural right in opposition to his social duties.' Coming as this does from an acknowledged leader of the advocates of the measure, it may be accepted almost as an authoritative declaration of the scope and effect of the amendment thus secured. Congress was deprived of all legislative power over mere opinion, but was left free to reach actions which were in violation of social duties or subversive of good order. Polygamy has always been odious among the northern and western nations of Europe, and, until the establishment of the Mormon Church, was almost exclusively a feature of the life of Asiatic and of African people. At common law, the second marriage was always void (2 Kent, Com. 79), and from the earliest history of England polygamy has been treated as an offence against society. After the establishment of the ecclesiastical [98 U.S. 145, 165] courts, and until the time of James I., it was punished through the instrumentality of those tribunals, not merely because ecclesiastical rights had been violated, but because upon the separation of the ecclesiastical courts from the civil the ecclesiastical were supposed to be the most appropriate for the trial of matrimonial causes and offences against the rights of marriage, just as they were for testamentary causes and the settlement of the estates of deceased persons. By the statute of 1 James I. (c. 11), the offence, if committed in England or Wales, was made punishable in the civil courts, and the penalty was death. As this statute was limited in its operation to England and Wales, it was at a very early period re-enacted, generally with some modifications, in all the colonies. 
In connection with the case we are now considering, it is a significant fact that on the 8th of December, 1788, after the passage of the act establishing religious freedom, and after the convention of Virginia had recommended as an amendment to the Constitution of the United States the declaration in a bill of rights that 'all men have an equal, natural, and unalienable right to the free exercise of religion, according to the dictates of conscience,' the legislature of that State substantially enacted the statute of James I., death penalty included, because, as recited in the preamble, 'it hath been doubted whether bigamy or poligamy be punishable by the laws of this Commonwealth.' 12 Hening's Stat. 691. From that day to this we think it may safely be said there never has been a time in any State of the Union when polygamy has not been an offence against society, cognizable by the civil courts and punishable with more or less severity. In the face of all this evidence, it is impossible to believe that the constitutional guaranty of religious freedom was intended to prohibit legislation in respect to this most important feature of social life. Marriage, while from its very nature a sacred obligation, is nevertheless, in most civilized nations, a civil contract, and usually regulated by law. Upon it society may be said to be built, and out of its fruits spring social relations and social obligations and duties, with which government is necessarily required to deal. In fact, according as monogamous or polygamous marriages are allowed, do we find the principles on which the government of [98 U.S. 145, 166] the people, to a greater or less extent, rests. Professor, Lieber says, polygamy leads to the patriarchal principle, and which, when applied to large communities, fetters the people in stationary despotism, while that principle cannot long exist in connection with monogamy. Chancellor Kent observes that this remark is equally striking and profound. 2 Kent, Com. 81, note (e). An exceptional colony of polygamists under an exceptional leadership may sometimes exist for a time without appearing to disturb the social condition of the people who surround it; but there cannot be a doubt that, unless restricted by some form of constitution, it is within the legitimate scope of the power of every civil government to determine whether polygamy or monogamy shall be the law of social life under its dominion. In our opinion, the statute immediately under consideration is within the legislative power of Congress. It is constitutional and valid as prescribing a rule of action for all those residing in the Territories, and in places over which the United States have exclusive control. This being so, the only question which remains is, whether those who make polygamy a part of their religion are excepted from the operation of the statute. If they are, then those who do not make polygamy a part of their religious belief may be found guilty and punished, while those who do, must be acquitted and go free. This would be introducing a new element into criminal law. Laws are made for the government of actions, and while they cannot interfere with mere religious belief and opinions, they may with practices. Suppose one believed that human sacrifices were a necessary part of religious worship, would it be seriously contended that the civil government under which he lived could not interfere to prevent a sacrifice? 
Or if a wife religiously believed it was her duty to burn herself upon the funeral pile of her dead husband, would it be beyond the power of the civil government to prevent her carrying her belief into practice? So here, as a law of the organization of society under the exclusive dominion of the United States, it is provided that plural marriages shall not be allowed. Can a man excuse his practices to the contrary because of his religious belief? [98 U.S. 145, 167] To permit this would be to make the professed doctrines of religious belief superior to the law of the land, and in effect to permit every citizen to become a law unto himself. Government could exist only in name under such circumstances. A criminal intent is generally an element of crime, but every man is presumed to intend the necessary and legitimate consequences of what he knowingly does. Here the accused knew he had been once married, and that his first wife was living. He also knew that his second marriage was forbidden by law. When, therefore, he married the second time, he is presumed to have intended to break the law. And the breaking of the law is the crime. Every act necessary to constitute the crime was knowingly done, and the crime was therefore knowingly committed. Ignorance of a fact may sometimes be taken as evidence of a want of criminal intent, but not ignorance of the law. The only defence of the accused in this case is his belief that the law ought not to have been enacted. It matters not that his belief was a part of his professed religion: it was still belief, and belief only. In Regina v. Wagstaff (10 Cox Crim. Cases, 531), the parents of a sick child, who omitted to call in medical attendance because of their religious belief that what they did for its cure would be effective, were held not to be guilty of manslaughter, while it was said the contrary would have been the result if the child had actually been starved to death by the parents, under the notion that it was their religious duty to abstain from giving it food. But when the offence consists of a positive act which is knowingly done, it would be dangerous to hold that the offender might escape punishment because he religiously believed the law which he had broken ought never to have been made. No case, we believe, can be found that has gone so far. 6. As to that part of the charge which directed the attention of the jury to the consequences of polygamy. The passage complained of is as follows: 'I think it not improper, in the discharge of your duties in this case, that you should consider what are to be the consequences to the innocent victims of this delusion. As this contest goes on, they multiply, [98 U.S. 145, 168] and there are pure-minded women and there are innocent children,-innocent in a sense even beyond the degree of the innocence of childhood itself. These are to be the sufferers; and as jurors fail to do their duty, and as these cases come up in the Territory of Utah, just so do these victims multiply and spread themselves over the land.' While every appeal by the court to the passions or the prejudices of a jury should be promptly rebuked, and while it is the imperative duty of a reviewing court to take care that wrong is not done in this way, we see no just cause for complaint in this case. Congress, in 1862 (12 Stat. 501), saw fit to make bigamy a crime in the Territories. This was done because of the evil consequences that were supposed to flow from plural marriages. 
All the court did was to call the attention of the jury to the peculiar character of the crime for which the accused was on trial, and to remind them of the duty they had to perform. There was no appeal to the passions, no instigation of prejudice. Upon the showing made by the accused himself, he was guilty of a violation of the law under which he had been indicted: and the effort of the court seems to have been not to withdraw the minds of the jury from the issue to be tried, but to bring them to it; not to make them partial, but to keep them impartial. Upon a careful consideration of the whole case, we are satisfied that no error was committed by the court below. Judgment affirmed. MR. JUSTICE FIELD. I concur with the majority of the court on the several points decided except one,-that which relates to the admission of the testimony of Amelia Jane Schofield given on a former trial upon a different indictment. I do not think that a sufficient foundation was laid for its introduction. The authorities cited by the Chief Justice to sustain its admissibility seem to me to establish conclusively the exact reverse. NOTE.-At a subsequent day of the term a petition for a rehearing having been filed, MR. CHIEF JUSTICE WAITE delivered the opinion of the court. Since our judgment in this case was announced, a petition for rehearing has been filed, in which our attention is called to the fact that the sentence of the [98 U.S. 145, 169] court below requires the imprisonment to be at hard labor, when the act of Congress under which the indictment was found provides for punishment by imprisonment only. This was not assigned for error on the former hearing, and we might on that account decline to consider it now; but as the irregularity is one which appears on the face of the record, we vacate our former judgment of affirmance, and reverse the judgment of the court below for the purpose of correcting the only error which appears in the record, to wit, in the form of the sentence. The cause is remanded, with instructions to cause the sentence of the District Court to be set aside and a new one entered on the verdict in all respects like that before imposed, except so far as it requires the imprisonment to be at hard labor.
null
minipile
NaturalLanguage
mit
null
Smart Summer Wardrobe Essentials The first thing that comes to mind when I think of summer is not even the idea of bringing out the floral prints to play… all that resonates in my head is the HEAT that comes with it lol. The elevated temperatures that come with the summer is what I prioritize the most when picking outfits and staying as aerated as possible is what takes the centre stage for me. If I’m going to be out for most part of the day, ease and comfort are always key for whatever I choose to wear. At the same time, summer is the time when most go very casual but there are times when you want to deviate slightly from all of that but not trade-off keeping as much heat away as possible. The foundation for me is wearing as little pieces of clothing as I can but still introducing subtle smart elements for the times I don’t want to come off too casual. For my summer wardrobe selection, shorts would always be summer staples in my wardrobe. Not only are they very easy to wear but they afford my legs more access to the well sought after fresh air lol. Next staples are button-downs either in short sleeves or long sleeves which I always end up folding up anyway. For smart looks, I either opt for plain coloured or stripes but that’s more of a personal preference. I like to bring out the prints when I’m gunning for a more casual finish. The most important part of introducing some smartness to it for me is tucking in the shirt. Shorts are very basic and very casual pieces but tucking in a smart shirt paired with it makes a lot of difference as seen in the pictures here. For my feet, I like to finish it off with sneakers because if I’m going to be out for long, I like to stay as comfortable as possible and sneakers always do it for me. Pairing with a pair of loafers would also be an excellent choice specially to amplify the smart elements of the entire look but I just find sneakers more comfortable especially for longer wearing. For accessories, a smart dress watch, sun-glasses and bracelets would be a good addition… more like the icing on the cake. Feel free to leave me a comment on what you think about the outfits and your own ideas of dressing smart this summer and let’s get a conversation started. Also, don’t forget to check me out on Instagram via the link provided at the end of the post.
null
minipile
NaturalLanguage
mit
null
Praise Be to the Sun Absolute
null
minipile
NaturalLanguage
mit
null
Andrew Yang is a technology expert who is running for president in 2020 on what might be called a Tech-Caution platform. Unlike the clueless characters currently running our national government, Yang understands the danger of automation and artificial intelligence — that when smart machines take over major employment categories in America, the economy will fail from massive, permanent job loss. Curiously, the brilliant captains of industry are big on developing the cheapest possible manufacturing, but have forgotten that shoppers with healthy incomes are a big part of the economy equation. For more details on the issues, see the candidate’s website Yang2020. Consider what technology experts have already predicted for our near future. Oxford researchers forecast in 2013 that nearly half of American jobs were vulnerable to machine or software replacement within 20 years. Rice University computer scientist Moshe Vardi believes that in 30 years humans will become largely obsolete, and world joblessness will reach 50 percent. The Gartner tech advising company believes that one-third of jobs will be done by machines by 2025. The consultancy firm PwC published a report last year that forecast robots could take 38 percent of US jobs by 2030. In November 2017, the McKinsey Global Institute reported that automation “could displace up to 800 million workers — 30 percent of the global workforce — by 2030.” Forrester Research estimates that robots and artificial intelligence could eliminate nearly 25 million jobs in the United States over the next decade, but it should create nearly 15 million positions, resulting in a loss of 10 million US jobs. Kai-Fu Lee, the venture capitalist and author of AI Superpowers: China, Silicon Valley, and the New World Order, forecast about automation and artificial intelligence on CBS’ Sixty Minutes: “in 15 years, that’s going to displace about 40 percent of the jobs in the world.” A February 2018 paper from Bain & Company, Labor 2030, predicted, “By the end of the 2020s, automation may eliminate 20% to 25% of current jobs.” Why isn’t Washington paying attention to tech experts’ warnings? There’s not a whole lot to be done in the face of such fundamental social change, but certainly America won’t need more immigrant workers, as President Trump has recently suggested in a major reversal of a top campaign promise. That flip-flop is doubly bad because: TUCKER CARLSON: Big tech knows a lot about you, in some cases more than you know about yourself. They certainly know where you go and what you eat. May even know what you think. The only thing it can’t control is what your thoughts are, but they are working on that, too. In a 2018 phone recording obtained exclusively by this show, Adam Kovacevich — he is Google’s head of U.S. Public Policy — explains to Google employees why the company was a sponsor for CPAC that year. Google sponsored CPAC, he says, because it would let them push the party toward a more open borders agenda. Listen: GOOGLE EXECUTIVE ADAM KOVACEVICH: The Republican Party and the conservatives in general, is also going through a lot of internal debates about what should be the sort of position of the Party and I think that’s one that we should be involved in because we, I think, want probably, the majority of Googlers wants to steer conservatives and Republicans more towards a message of liberty and freedom and away from the more sort of nationalistic incendiary nativist comments and things like that.
CARLSON: Now, as noted, Google has more power than any company has ever had. It has the power of its massive data reserves and technology; of course, it has financial power. It is one of the highest market-capped companies in history. It also increasingly has political power though they don’t typically admit it in public. Kovacevich did admit it. He said that companies like Google are playing quote, “a leadership role” in American politics. He bragged that the company got a supporter of mass immigration onto a CPAC panel and that person argued in support of Google’s agenda. It shows a lot about big tech’s attitude toward the country. They are in control — elections, parties, democracy, just a hindrance to their control. So we told you a lot on this show about the potential dangers of big tech. Some of those dangers are imminent, and they are technological, and the main one is robotics and artificial intelligence. Remarkably, the person, the political figure who is making the most sense on this subject, who has thought about it most deeply is a Democrat who is running for President. He is Andrew Yang. He is an entrepreneur and as we said, he is a Democratic presidential candidate. He says that artificial intelligence and expanded automation could potentially cause violence in this country and that we need to do something about it right now. Andrew Yang joins us tonight. Andrew, thanks very much for coming on, and I meant that with sincerity. I haven’t heard anybody in our political conversation describe the threat as clearly and compellingly as you have. Why should we be worried about automation? CANDIDATE ANDREW YANG: Well, if you look at the backdrop, we automated away four million manufacturing jobs in Michigan, Ohio, Pennsylvania, Wisconsin, Missouri, and those communities have never recovered. Where if you look at the numbers, half of the workers left the workforce and never worked again, and then half of that group filed for disability. Now what happened to the manufacturing workers is now going to happen to the truck drivers, retail workers, call centers and fast-food workers and on and on through the economy as we evolve and technology marginalizes the labor of more and more Americans. CARLSON: What will be the effects of that? That’s a massive displacement of people. What will happen once that happens? YANG: Well, as you said, I think it’s going to be disastrous, where if you look at truck drivers alone, being a trucker is the most common job in 29 states. There are 3.5 million truck drivers in this country, and my friends in Silicon Valley are working on trucks that can drive themselves because that’s where the money is, where we can save tens, even hundreds of billions of dollars by trying to automate that job. But I was just with truck drivers in Iowa last week and imagining that community recovering from their income going from let’s call it $50,000 a year to much, much less than that catastrophically, it’s going to be a disaster for many, many American communities.
We can get our heads up out of the sand, and say, “Look, we get it. Artificial intelligence is real. Self-driving cars and trucks are being tested on the highways right now and we need to evolve.” We need to actually start pushing the way we think of economic progress to include how our families are doing, how our children are doing, and things that would actually matter to the American people because GDP is going to lead us off a cliff. You know, robot trucks — great for GDP, terrible for many, many American communities. So we need to get with the program and figure out how to actually make this economy work for people. CARLSON: I sit with my jaw open. I agree with you so strongly. Let me ask you finally, why isn’t this a central question in the campaign of everybody running for President on any side, and why instead are they talking about issues that are really are kind of frivolous? Why aren’t they talking about this? YANG: It’s a good question, Tucker. I mean, one of the reasons I’m running for President is to push this in the center of the mainstream agenda where every candidate should be talking about what we are going to do about the fact that we’re automating away the most common jobs in the economy right now. As we are sitting here together, the labor force participation rate in the United States is 63.2%. The same level as Ecuador and Costa Rica, and if anyone thinks that’s where America ought to be, I mean, that number is even going to be further challenged when all of this technology comes online. So we have to make America embrace this challenge of the 21st Century and then try and address it together as a people. CARLSON: Last question. Shouldn’t people who cite unemployment statistics be penalized for saying something so stupid? YANG: Yes, we have a series of bad numbers and I refer to GDP as one. Certainly, a headline unemployment rate is completely misleading and one of my mandates as President is I’m going to update the numbers that actually make sense to the American people. CARLSON: Yes, yes. So we can know what’s going on, otherwise we can’t make wise decisions. YANG; Yes, right now again and you know this, our life expectancy has declined for the last three years, first time in 100 years because of a surge in suicides and drug overdoses. How can you say an economy is healthy when our people are dying? It makes no sense at all. CARLSON: I literally — I don’t even know what you think on the other issues and I just support what you said so much. I appreciate your coming on. YANG; Thank you, Tucker. It’s great to be here.
null
minipile
NaturalLanguage
mit
null
Inflammatory mass of an intrathecal catheter in patients receiving baclofen as a sole agent: a report of two cases and a review of the identification and treatment of the complication. Intrathecal inflammatory masses or granuloma have been described extensively in the literature in patients receiving chronic spinal infusions for pain. After an extensive literature review, no reported cases of baclofen causing this disorder when administered as a sole agent were identified. Intrathecal baclofen has been used to treat spasticity secondary to stroke, multiple sclerosis, cerebral palsy, spinal cord injury, and other neurological disease. Two patients who received intrathecal infusions of baclofen to treat spasticity developed catheter failure. Magnetic resonance imaging analysis showed the presence of an inflammatory mass at the tip of each catheter causing the dysfunction. The catheters were removed and replaced by a percutaneous technique. Inflammatory mass on an intrathecal catheter can result in a variety of symptoms. These problems range from the patient being asymptomatic to flaccid paraplegia. Animal studies have shown an association with high concentrations of morphine and hydromorphone theorized to be related to a mast cell degranulation response. Presence of this lesion in these two patients should heighten the suspicion for inflammatory mass in any patient treated for spasticity. The diagnosis of intrathecal catheter tip inflammatory mass is made after an initial suspicion of a catheter occlusion or failure. The gold standard of diagnosis is T2-weighted magnetic resonance imaging. A computerized tomography myelogram is acceptable if a magnetic resonance imaging is not feasible. We report two cases of inflammatory mass in patients receiving baclofen as a sole intrathecal agent. The authors would recommend vigilance in any patient receiving intrathecal baclofen. If the suspicion arises of this problem, a magnetic resonance imaging or computerized tomography myelogram should be obtained with a focus on the catheter tip.
null
minipile
NaturalLanguage
mit
null
Did somebody give Ike Taylor some stickum or what? 2 picks in 2 weeks. On the year he has 36 Tackles and 2 Interceptions. Considering he's the type of player that teams don't generally throw at very often, those aren't bad numbers. We get on Ike's case, in jest of course, for his lack of INT's or lack of ability to actually catch them when he has the chance, but overall the dude is one of the better cornerbacks in the league and certainly deserving of a pro bowl sooner or later. Either way the Steelers are lucky to have him. If you remember back when he was drafted by Bill Cowher out of Louisiana-Lafayette 9 years ago and had so many troubles trying to adjust to corner, he's come a long way. His hard work has paid off for both himself and the Steelers every year. The Steelers are going to need more of that type of play from the defense the rest of the way out. With the offenses the Steelers will face the rest of the way out, there is no better time than here in the last month of the year to get more turnovers and create a short field for Ben.
null
minipile
NaturalLanguage
mit
null
Assessment of myocardial ischaemia and viability: role of positron emission tomography. In developed countries, coronary artery disease (CAD) continues to be a major cause of death and disability. Over the past two decades, positron emission tomography (PET) imaging has become more widely accessible for the management of ischaemic heart disease. Positron emission tomography has also emerged as an important alternative perfusion imaging modality in the context of recent shortages of molybdenum-99/technetium-99m (99mTc). The clinical application of PET in ischaemic heart disease falls into two main categories: first, it is a well-established modality for evaluation of myocardial blood flow (MBF); second, it enables assessment of myocardial metabolism and viability in patients with ischaemic left ventricular dysfunction. The combined study of MBF and metabolism by PET has led to a better understanding of the pathophysiology of ischaemic heart disease. While there are potential future applications of PET for plaque and molecular imaging, as well as some clinical use in inflammatory conditions, this article provides an overview of the physical and biological principles behind PET imaging and its main clinical applications in cardiology, namely the assessment of MBF and metabolism.
null
minipile
NaturalLanguage
mit
null
Editor’s note: The person of interest is apparently from Pottstown. If any Pottstown peeps can identify this person, please call 215-686-TIPS (-8477). REGULAR skaters at LOVE Park say they have a subtle truce with the rangers, a cat-and-mouse game of grinding along concrete steps when they’re gone and leaving peacefully with wheels up when they show up. The skaters were OK with that, too, but now they fear that some “suburban kid who always acts hard” has ruined it for all of them by punching, kicking and spitting on a ranger who tried to get him and some other skaters to leave Friday afternoon. “He’s an idiot,” skater Ki Realer, 34, said of the unidentified suspect, who is wanted by police on assault charges. “Not only are the police looking for him, but he’ll also never be welcome here again. Now we all know what’s coming for the rest of us.” The incident happened about 5:30 p.m. Friday, police said – a day after a big antiviolence vigil in LOVE Park – and was witnessed by a large crowd that included Michael “Philly Jesus” Grant, who was profiled in a Daily News cover story on Aug. 8. Some clown from Dauphin County was driving his ATV on the road while in the Weiser State Forest. The forest ranger, Nicola Zulli, tried to stop this bozo from driving his ATV on the road. His response was to accelerate and hit the ranger, breaking her leg. Then our hero leaves the scene of the accident with the ranger lying in the road with a broken leg. WTH is wrong with people????????????????? Police found Mr. Michael Matter, III at a Jackson Twp. residence and took him into custody. This smacked ass is in the Dauphin County jail because he could not post the necessary $250,000 bail. He confessed to hitting Ranger Zulli and is charged with a laundry list of crimes.
null
minipile
NaturalLanguage
mit
null
Gross! Mother Slips Jailed Son Drugs in Open-Mouthed Kiss All mothers love their sons, but some are more affectionate than others. A woman was recently busted for smuggling Oxycodone to her son in an open-mouthed kiss when she visited him in jail in Yates County, New York. Authorities say Kimberly Margeson, 54, was visiting her 30-year-old son William Partridge in the slammer after he was locked up on a weapons charge. Margeson slipped the pills into her mouth and transferred them “from her mouth to her son’s mouth when she kissed him.” Clearly, this is a tight-knit family. While officials cannot confirm whether tongue was involved – sadly, that’s not a joke – they did arrest Margeson on a felony drug count charge. In addition, she and her son were also charged with a count of promoting prison contraband. Ah, like mother, like son. Margeson has pleaded not guilty to both charges and has been released from jail, while her pride and joy remains in the clink, where he's no doubt thinking about the big wet one he's going to plant on his mother once he's free.
null
minipile
NaturalLanguage
mit
null
A Japanese kayaker and 2020 Olympic hopeful is reportedly facing a lifetime ban from competitions after admitting to spiking a rival’s drink with a banned substance, causing him to fail a doping test. Yasuhiro Suzuki, 32, confessed to putting an anabolic steroid in fellow kayaker Seiji Komatsu’s drink during last year’s national championships, the Japan Anti-Doping Agency said Tuesday. The agency consequently slapped Suzuki with an eight-year ban, though on Wednesday the Kyodo News agency reported that the Japan Canoe Federation is considering a life ban on the athlete, who in a statement apologized for his actions, according to Reuters. Seiji Komatsu, seen at the 2014 Asian Games, has expressed shock and surprise after learning that a rival would spike his drink. “Instead of working hard, I committed misconduct as an athlete and, further, as a member of society,” Suzuki said in a statement released by his lawyer. The Japan Canoe Federation said it launched an investigation after Komatsu failed his test but denied having taken any drugs. Suzuki later confessed. An investigation also found that he had made other attempts to sabotage his competitors, such as stealing equipment they used in training and competition, Kyodo reported. Komatsu was initially suspended after failing the test but has since been allowed to resume competing for the 2020 Tokyo Olympics as a result of Suzuki’s confession. “At first I couldn’t believe this kind of thing would happen in Japan,” Komatsu said, according to Reuters. “Until Mr. Suzuki confessed, I was in a bad mental state. I began to feel hopeless about (competing at) the Tokyo Olympics, that it was impossible.” This is the first time that an athlete in Japan has knowingly sabotaged another athlete through a doping test, The Associated Press reported, citing the Japan Anti-Doping Agency.
null
minipile
NaturalLanguage
mit
null
50 Cent Net Worth: Before and After Bankruptcy 50 Cent, the famous American rapper, actor and businessman, is worth $20 million as of 2018. In 2015, Forbes reported his net worth to be around $150 million. With the rapper making his money through various ventures including music albums, concert tours, endorsements and different lines of business, how did his net worth fall so dramatically? Early Life Curtis James Jackson III was born in South Jamaica, Queens, New York. When he was 11 years old, Jackson began boxing. 50 Cent has been the subject of numerous controversies. He has been arrested on various occasions for involvement with cocaine and for possession of a gun. In 2013, he pleaded not guilty to vandalism and domestic violence charges. Career The Rap Begins Here Jackson began rapping in his friend’s basement and eventually made rapping his career. In 2002, he released his album Guess Who’s Back?, after which he was signed by Shady Records, Aftermath Entertainment and Interscope Records. He rose to prominence after the release of his album Get Rich or Die Tryin’. He has sold over 30 million albums worldwide and has won several awards including a Grammy Award, 13 Billboard Music Awards and four BET Awards. Films and TV shows The rapper has earned millions by appearing in numerous movies and TV shows. 50 Cent made his film debut in 2005 in the movie Get Rich or Die Tryin’. He later appeared in over 25 movies including Streets of Blood, 13, Gun, Twelve, Set up, Vengeance and Spy. He also made special appearances in a variety of TV shows including The Howard Show, Jimmy Kimmel Live, Last Call with Carson Daly, Entourage, Canon and 50 Central. He also voiced video games such as 50 Cent: Bulletproof, 50 Cent: Blood on the Sand and Call of Duty: Modern Warfare 2. Business Jackson is not only a flourishing rapper but also a successful businessman. In 2003, he established his record label G-Unit Records. Later that year, he also entered into a five-year deal with Reebok to distribute a G-Unit sneaker for his G-Unit Clothing Company. In 2007, Jackson started a book publishing imprint, G-Unit Books. He entered into a partnership deal with Glaceau to create Formula 50, a flavoured drink. Five years later, when Coca-Cola purchased the company, 50 Cent, as a minority shareholder, received $100 million. He later introduced a body spray called Pure 50 RGX after joining Right Guard. Jackson has also founded two film production companies: G-Unit Films and Cheetah Vision. Author Jackson has also authored a few books, including his memoir, From Pieces to Weight. Bankruptcy In 2015, Jackson filed for bankruptcy with a debt of around $32 million. 50 Cent Net Worth Now 50 Cent’s net worth stands at $20 million, a massive slump when compared with his net worth of $150 million before filing for bankruptcy. Thanks to bitcoin, the rapper is still a millionaire. In 2014, 50 Cent became the first rapper to accept bitcoin as payment for his album Animal Ambition, when it was worth around $662 a coin. Today, bitcoin trades at around $11,300, which means his 700-bitcoin stash is worth around $8 million! In order to pay his debts, Jackson had to sell his mansion in Farmington, Connecticut. The luxurious 21-bedroom home was sold for $10 million against its listed price of $18.5 million. His home in New York, which he purchased for $2.4 million in 2007, caught fire the very next year. He owns a fleet of luxurious cars including a Rolls-Royce Phantom, a Ferrari F50 and a Lamborghini Murcielago.
Philanthropy Jackson is on the board of directors of the G-Unity Foundation, which provides grants to non-profit organisations. In 2016, Jackson insulted a hearing-impaired autistic janitor at an airport, accusing him of being under the influence. A lawsuit was filed against him by the janitor’s parents, after which Jackson offered his apologies and donated $100,000 to Autism Speaks. In 2016, Jackson joined Pure Growth Partners to introduce Street King, with a vision of providing food for one billion starving people in Africa. A portion of the proceeds from each Street King purchase would be used to provide a meal for an underprivileged child. He has teamed up with Feeding America to bring meals to children in the US. For every pair of SMS Audio headphones sold, Jackson donates 250 meals. He also supports various charities including Lifeboat, New York Restoration Project, Orca Network and Shriners Hospitals for Children. Conclusion 50 Cent is definitely not one to sit back and relax on the earnings from his music albums and endorsement deals. He has invested innovatively in various business ventures and has always striven to move forward. Whether bankrupt or not, with the energy and passion that Jackson has, we can be sure that he’ll rise again no matter what the circumstances are.
null
minipile
NaturalLanguage
mit
null
126 Holocaust Scholars Affirm the Incontestable Fact of the Armenian Genocide and Urge Western Democracies to Officially Recognize It. At the Thirtieth Anniversary of the Scholars' Conference on the Holocaust and the Churches, convening at St. Joseph's University, Philadelphia, Pennsylvania, March 3-7, 2000, one hundred twenty-six Holocaust Scholars, holders of Academic Chairs and Directors of Holocaust Research and Studies Centers, participants of the Conference, signed a statement affirming that the World War I Armenian Genocide is an incontestable historical fact and accordingly urging the governments of Western democracies to likewise recognize it as such. The petitioners, among whom is Nobel Laureate for Peace Elie Wiesel, who was the keynote speaker at the conference, also asked the Western democracies to urge the Government and Parliament of Turkey to finally come to terms with a dark chapter of Ottoman-Turkish history and to recognize the Armenian Genocide. This would provide an invaluable impetus to the process of the democratization of Turkey. Below is a partial list of the signatories:
Prof. Yehuda Bauer, Distinguished Professor, Hebrew University; Director, The International Institute of Holocaust Research, Yad Vashem, Jerusalem
Prof. Richard Libowitz, Temple University
Prof. Israel Charny, Director, Institute of the Holocaust and Genocide, Jerusalem; Professor at the Hebrew University; Editor-in-Chief of The Encyclopedia of Genocide
Dr. Marcia Littell, Stockton College; Exec. Director, Scholars' Conference on the Holocaust and the Churches
Prof. Ward Churchill, Ethnic Studies, The University of Colorado, Boulder
Franklin Littell, Emeritus Professor, Temple University
Prof. Stephen Feinstein, Director, Center for Holocaust and Genocide Studies, University of Minnesota
Prof. Hubert G. Locke, Washington University; Co-founder of the Annual Scholars' Conference on the Holocaust and the Churches
null
minipile
NaturalLanguage
mit
null
Sigmoid Microinvasion by an Ectopic Pregnancy. Approximately 2.1% to 8.6% of all pregnancies after IVF with embryo transfer have been reported to be ectopic. In this report, we present a case of presumed intestinal microperforation caused by an ectopic pregnancy following IVF. A 29-year-old woman presented with rectal bleeding. She had previously been treated for an ectopic pregnancy for which she had received two doses of methotrexate. Colonoscopy and abdominal CT angiography were performed and showed that the ectopic pregnancy was attached to the sigmoid colon. Surgery was performed to remove the ectopic pregnancy. Because intestinal microperforations were suspected, the patient received intravenous antibiotic therapy during her hospitalization. In cases of intestinal bleeding, clinicians should consider the possibility of intestinal involvement of an ectopic pregnancy, even if the response to treatment for the ectopic pregnancy has been appropriate.
null
minipile
NaturalLanguage
mit
null
https://github.com/nlohmann/json version 3.9.1
null
minipile
NaturalLanguage
mit
null
Q: Removing CSS inside an iframe Hi, I have a page and I am displaying it in an iframe. I want to create what is effectively a version 2 of the page with a different style. To do that, I want to remove its CSS and add another stylesheet without editing the style.css file, because I also want to keep the v1 page with the original style. Please help; the page is on the same domain. A: You can access the elements of the iframe using, for example, the iframe's id:
<iframe id="frame" width="500" height="200" src="pagina.php"></iframe>
<script>
var iFrame = document.body.querySelector("#frame");
iFrame.onload = function(){ // wait for the iframe to load
    // change the text color of #div1 inside the iframe
    iFrame.contentWindow.document.querySelector("#div1").style.color = "red";
}
</script>
Using iFrame.contentWindow you can access the elements with querySelector or any other way of selecting elements you prefer.
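Building on the same approach, here is a minimal sketch of what the question actually asks for: removing the v1 stylesheet inside the iframe and injecting a different one, without touching style.css on disk. The selector for the original link tag and the file name style-v2.css are assumptions made for illustration; adjust them to the real markup. Because both pages are on the same domain, the same-origin policy allows this access.
<iframe id="frame" width="500" height="200" src="pagina.php"></iframe>
<script>
var iFrame = document.querySelector("#frame");
iFrame.onload = function () {
    var doc = iFrame.contentWindow.document;
    // Remove the original stylesheet (assumes v1 loads it via <link href="style.css">)
    var oldLink = doc.querySelector('link[href*="style.css"]');
    if (oldLink) {
        oldLink.parentNode.removeChild(oldLink);
    }
    // Inject the alternative stylesheet (style-v2.css is a hypothetical file name)
    var newLink = doc.createElement("link");
    newLink.rel = "stylesheet";
    newLink.href = "style-v2.css";
    doc.head.appendChild(newLink);
};
</script>
This keeps the original style.css untouched, so the v1 page keeps its current look while the iframe shows the restyled v2.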
null
minipile
NaturalLanguage
mit
null
About Jeanne 'Jannetje' le Clercq, a3 SM Johanna/Jeanne/Jannetjie de Klerk first married Andre Gous and later Pieter Bekker. After the banishment of Bekker from the Cape she had an affair with Matthys de Maker, from which a child was born. De Klerk - Van Vogelvalleijna Verre Velde - Dr. David de Klerk 2013 André Gauch, smith and farmer, living in Drakenstein Born: Le Pont-de-Montvert, Languedoc - died 26 February 1698 x Jacqueline Decré 13 January 1683 Celigny, Geneva died before 1691 (assumed) Line is dead after her grandchildren, as far as we can find; xx Johanna de Clercq 19 August 1691 Stellenbosch, born Zeeland, died circa 1748 b3 Pieter Gous, born circa 1692-1693 at the Cape, died circa 1730 farmer of De Doorn Rivier, over 't Roode Sand x Anna Oosthuisen 4 May 1721 Stellenbosch (born at the Cape, died circa December 1745 (date of inventory)) she married secondly Gerrit van Emmenes, 1 April 1731, Drakenstein. De Villiers / Pama and SAG both list what appears to be a spurious daughter named Anna for this couple, but I have been unable to find any evidence of her existence. So far as I can establish there were only the following four children: b3.c1 Johanna Gous, baptised 25 April 1723 Drakenstein, died circa July 1794 x Matthijs Strijdom 11 October 1739 Drakenstein, farmer, died before 1758 xx Ockert Brits 16 April 1758 Drakenstein. She had children with both husbands. b3.c2 Dorothea Gous, baptised 26 November 1724 Stellenbosch; according to SAG she married Willem Botha (Tulbagh, VC 664, marriages, page 18, 23 May 1762) and they had four children. b3.c3 Pieter Gous, baptised 6 July 1727 Drakenstein, of De Voorbaad, situated at the Swarteberg x Magdalena Brits 5 November 1752 Tulbagh I found no evidence from wills or other documents that the Pieter Gous baptised 1727 was the husband of Magdalena Brits, so I checked the baptisms of all their children for the clues provided by baptismal witnesses. The witnesses at the baptism of their first child were Johanna Gous and Matthijs Strijdom, the sister and brother-in-law of the Pieter Gous who was born 1727 to Pieter Gous and Anna Oosthuijsen; those of the second child were Johannes Oosthuijsen and Anna Botha, brother of Anna Oosthuijsen and his wife, uncle and aunt of the Pieter Gous born 1727; those of the third child were Dorothea Gous and Ockert Brits, she the sister of the Pieter Gous born 1727 and he the second husband of Johanna Gous, sister of Pieter Gous born 1727. It seems pretty certain to me, therefore, that this is a correct identification. Beyond their baptisms, I have not investigated their children. b3.c4 Sara Gous, baptised 21 August 1729 Cape Town; according to the SAG, vol 2, page 494, she married Willem Goosen (Cape Town marriage register, VC 621, page 59, 31st August 1749) and they had eight children. b5 Johanna Gous, baptised 25 September 1695 Drakenstein, died before June 1698 b6 Andries Gous, born circa May 1698, died circa June 1735 farmer of De Melkhouteboom, on the Duijvenhoks River x Johanna Conterman 14 May 1719 Drakenstein she married secondly Jan La Grange, circa 1735 or 1736 (there are no entries for this period in the Drakenstein marriage registers). b6.c1 Pieter Gous, born circa 1719, died 18 January 1790 Waveren x Aletta Vorster 25 November 1742 Drakenstein born circa 1722 at the Cape, and died 20 July 1798 Waveren Please see the article mentioned earlier in the introduction to this family tree for my reasoning in allocating this family to this point of the tree, which I consider their correct location.
Their children, so far as I can see, are exactly as laid out in SAG and De Villiers / Pama. b6.c2d2 Johannes Stephanus Gous, baptised 31 March 1748 Stellenbosch The NAAIRS online index suggests (but I have not seen the documents concerned) that he died circa 1792 and that his wife, Anna Magdalena Vosloo died circa 1810 (MOOC 7/1/37, 24 and MOOC 7/1/59, 62). The Cape Death Notifications (MOOC 6, volume2, page 49) lists her death under the district of Swellendam during the year 1794, no date attached. b6.c4 Johanna Gous, baptised 8 July 1725 Cape Town x Wessel Pretorius, farmer of Hollebak, over the Duijvenhoxrivier died circa 1752 (date of inventory) xx Jan Vosloo 26 November 1752 Stellenbosch died circa 1756 (date of inventory) xxx Jan Lasch 15 October 1769 Tulbagh. She appears to have had children by all three husbands. b6.c5 Sara Gous, born circa 1727 at the Cape, died circa 1797 x Claas de Bruijn 9 March 1745 Tulbagh born at the Cape, died before 1797 They appear to have had no children. b6.c6 Stephanus Gous, baptised 25 September 1729 Cape Town of De Elands Valleij, at the Swarte Berg x Catharina Huppenaar 16 June 1756 Cape Town baptised 2 September 1736 at Drakenstein, Father: Frederik Huppenaar Mother: Catharina Hofman .We have seen earlier that it could not have been Stephanus Gous, the son of Steven Gous (b1c1), who married Catharina Huppenaar, as averred by SAG and De Villiers / Pama, since he did not survive (or leave surviving heirs) to be named as an heir in his mother’s will of 1759, having most probably died as an infant. We have also seen that Steven Gous (b1) died around 1758 and was survived by his wife, Catharina Bok, so that he could not have been Catharina Huppenaar’s husband. This leaves as the only other candidate Stephanus Gous (b6.c6). I have, in addition, checked the baptismal witnesses for his first two children (I have not been able to examine details of the baptisms of his later children as I do not currently have access for the appropriate dates to the registers where they were presumably recorded, possibly Cape Town or Drakenstein). The witnesses recorded were; for the first child, Andries, baptised 7 November 1756 at Tulbagh, Johanna Conterman and Jan le Grange, the child's paternal grandmother with her second husband; for the second child, Catharina, baptised 17 December 1758 at Tulbagh, Catharina Hofman and Willem Landman, maternal grandmother of the child with her second husband. Again, this is slim evidence, but I believe that mine is the correct interpretation. SAG lists a good number of other children as well but I have not investigated them.
null
minipile
NaturalLanguage
mit
null
Morphological Characteristics and Individual Differences of Palatal Rugae. This study aims to determine the number, symmetry, shape and individual characteristics of palatal rugae. The study was performed on a total of 230 subjects (108 female and 122 male) aged 16 to 57 (mean 23.01 ± 7.12). Impressions of the upper jaw were taken from each subject with alginate impression material, and hard casts were then poured. The shapes, lengths and directions of the rugae were measured on these casts. Photographs of the palate were taken using a Samsung mobile phone with a 12 MP camera and an orthodontic mirror. Casts and photographs belonging to 100 subjects were selected at random. Ten randomly selected photographs were then matched against the 100 casts, and the proportion of correct matches was determined. In our study, the total number of palatal rugae was found to be 9.49 ± 1.87 in females and 9.42 ± 1.92 in males. The most frequently detected rugae pattern was the wavy type in both females and males. The most rarely seen rugae pattern was the converged type in males and the circular type in females. Regarding the lengths of the rugae, the most frequently detected type was the primary one. Regarding the direction of the rugae, the positive-sided pattern was the most dominant in both genders. The rate of correctly matching the palatal photographs to the corresponding casts was determined to be 63.5%. The difference in the number of rugae between subjects aged under 18 and those above 41 was statistically significant (P = 0.003), but the difference in the number of curved and positive-sided rugae at older ages was not statistically significant. Compared with data from earlier studies, the shape, length and direction of palatal rugae were seen to be specific to each individual and to have discriminating characteristics among different populations. The possible differences in individual-specific palatal rugae require further studies involving larger samples.
null
minipile
NaturalLanguage
mit
null
Bharathiar University (BU) has released a brief press release seeking duly completed applications for admission into Post Graduate / Post Graduate Diploma Courses, Master of Business Administration, Master of Computer Applications, and Doctor of Philosophy / Master of Philosophy programmes for the Academic Year 2017 – 18. Bharathiar University PG Admission Forms 2017 Submission of the duly completed application forms to Bharathiar University (BU) commences in April 2017, and the deadline for acknowledgement of applications (accompanied by all the educational certificates, the prescribed documents and the Demand Draft) falls in May 2017, for admission into the different Post Graduate Courses, Professional Courses, Master of Philosophy and Doctor of Philosophy Programs stated in the following paragraphs: Admission would be through Centralized Counseling conducted by Anna University, Chennai (based on TANCET-2017 marks). Master in Computer Applications (Lateral Entry) (M.C.A. LE) admission will be made based on a separate test conducted by the Department of Computer Applications, Bharathiar University, Coimbatore. Bharathiar University PG Application Form 2017 Application forms and the prospectus can be obtained from the Registrar in person or by post from May 2017 on requisition, at a cost of Rs.300/- (Rs.150/- in the case of SC/ST candidates, on production of an attested photocopy of the community certificate), in the form of a Demand Draft dated no earlier than 27.04.2016 drawn in favour of the “Registrar, Bharathiar University, payable at Coimbatore”, together with a self-addressed stamped envelope to the value of Rs.60/-. Please note that cash payment will NOT BE ACCEPTED. The cost of the application, once paid, will not be refunded. Candidates can also apply online or download the application form from the website and submit the filled-in application with the required fee of Rs.300/- (for others) and Rs.150/- (for SC/ST candidates).
Bharathiar University consists of the following Schools and the Departments under them, which offer the courses concerned:
School of Management (SoM): Bharathiar School of Management & Entrepreneur Development (BSMED)
School of Biotechnology & Genetic Engineering (SBGE): Department of Biotechnology (DBT); Department of Microbial Biotechnology (DMB); Department of Bio-Chemistry (DBC)
School of Chemical Sciences (SCS): Department of Chemistry (DoC)
School of Commerce (SoC): Department of Commerce (DoC)
School of Computer Science & Engineering (SCSE): Department of Computer Applications (DCA); Department of Information Technology (DIT); Department of Computer Science (DCS)
School of Economics (SoE): Department of Economics (DoE); Department of Econometrics (DoE)
School of Educational Studies (SES): Department of Communication & Media Studies (DCMS); Department of Educational Technology (DET); Department of Education (DoE); Department of Extension, Career Guidance and Student Welfare (DECGSW); Department of Physical Education (DPE); Department of Education (School of Distance Education)
School of English & Other Foreign Languages: Department of English Language (DEL); Department of Linguistics (DoL)
School of Life Sciences: Department of Botany (DoB); Department of Bioinformatics (DoB); Department of Environmental Sciences (DES); Department of Zoology (DoZ); Department of Human Genetics & Molecular Biology (DHGMB); Department of Textiles & Apparel Design (DTAD)
School of Mathematics & Statistics: Department of Mathematics (DoM); Department of Statistics (DoS)
School of Physical Sciences: Department of Physics (DoP); Department of Medical Physics (DMP); Department of Nano-Science & Technology (DNST); Department of Electronics & Instrumentation (DEI)
School of Social Sciences: Department of Social Work (DSW); Department of Sociology & Population Studies (DSPS); Department of Psychology (DoP); Department of Women’s Studies (DWS); Department of Library Sciences (DLS)
School of Tamil & Other Indian Languages: Department of Tamil Language (DTL)
In case of clarifications or for assistance, candidates are advised to log on to www.b-u.ac.in
null
minipile
NaturalLanguage
mit
null
The Duties Test Trap Set by the Department of Labor: How Employer Comments (Due September 4) Should Address It August 12, 2015 It has been widely reported that the Department of Labor (DOL) on June 30, 2015 proposed raising the salary level of executive, administrative and professional (EAP) employees as a requirement of exempt status under the Fair Labor Standards Act from $455 per week ($23,660 annually) to $970 per week ($50,440 annually). (See NPRM – Defining and Delimiting the Exemptions for Executive, Administrative, Professional, Outside Sales and Computer Employees, RIN 1235-AA11 (hereinafter "Proposal")). The DOL has also proposed making annual increases to this amount. What was missing, but widely expected by the legal community, was a proposal to change the duties test of the EAP exemptions as well. The DOL specifically made no such proposal but it did note its dissatisfaction with the existing duties test, suggesting that California's 50 percent exempt/nonexempt primary duties requirement might be warranted. The DOL discussed the reasons for its dissatisfaction with the current test and asked for comments on five areas relating to whether the existing duties test should be changed.1 As a result "…[t]he Department is seeking to determine whether, in light of our salary level proposal, changes to the duties tests are also warranted." Proposal p. 95 (Italics added) Could the DOL be planning to make a rule change to the duties test at the same time it changes the salary level? By proposing a very high salary level of $50,440 per year with annual increases and soliciting comments, has the DOL set a trap to justify a change in the duties test? A salary level of $50,440 is more than double the existing amount required to qualify as an exempt EAP employee. If implemented, this increase will be a major change, and this high salary level would drastically affect the compensation models of numerous industries. Undoubtedly, a large number of employers and interested parties will send comments to the DOL arguing for a lower salary level. Notwithstanding, some increase in the salary level is expected because the current salary level of $23,660 per year, which was implemented in 2004, is below the current poverty level for a family of four. Whatever salary level is chosen, it likely will create a clearer demarcation between exempt and nonexempt for employers, even though a higher salary level would cause significant disruption for employers large and small. On the other hand, a change in the duties test to require that an exempt EAP employee perform no more than 50 percent nonexempt work (or some other percentage) would be an even more drastic disruption to business. For example, part of the current duties test for an exempt lower-level manager is that his/her "primary duty is management…of a customarily recognized department or subdivision…" of the enterprise. 29 C.F.R. 541.100(a)(2). "Primary duty" means "the principal, main, major or most important duty that the employee performs." 29 C.F.R. 541.700(a). The current regulation states that employees who spend less than 50 percent of their time performing exempt work (in this example, management) may still be exempt. "Employees who do not spend more than 50 percent of their time performing exempt duties may none-the-less meet the primary duty requirement if other factors support such a conclusion." 29 C.F.R. 541.700(b). 
As a result, many employers rely on this definition to allow their managers and other exempt employees to perform more than 50 percent nonexempt duties because the employees will remain exempt. A change to the California rule would mean these employees would no longer be exempt. As stated, the DOL has expressed dissatisfaction in its Proposal with the current definition of "primary" duty: the "Department is concerned that in some instances the current test may allow exemption of employees who are performing such a disproportionate amount of nonexempt work that they are not EAP employees in any meaningful sense." Proposal p. 10. The adoption of a primary duty definition requiring exempt employees to spend at least 50 percent of their time performing exempt work would create a standard difficult to apply in practice. How does an employer prove its exempt employees always spend 50 percent or more of their time on exempt work? Such proof would be necessary to defend a suit for back wages under the FLSA brought by employees who claim they were misclassified. The 50 percent test would, for example, probably make it more difficult for employers to obtain summary judgment in a duties test determination because the employer will have the burden of showing a greater degree of exempt duties performed than under the current rule. In light of that enhanced burden, employees may be more prone to file suit, and when they do file suit, they will have a better chance of going to trial and ultimately to a favorable verdict. So what is the DOL's plan? A possible explanation requires a discussion of how the DOL arrived at the $50,440 salary level. Prior to 2004, when the current regulations were promulgated, two tests existed for the exemption from overtime of white collar employees: the long test and the short test. The long test had a lower salary level but a more stringent duties test to determine exempt status. To be exempt, an employee could devote no more than 20 percent of hours worked in the workweek to nonexempt work. The short test, on the other hand, had a higher salary level but a less stringent duties test, which was more akin to the current duties test. The short duties test did not include a limitation on nonexempt work because employees paid the higher short test salary presumably were likely to meet the duties requirements with respect to nonexempt work. Proposal p. 51. So the long test had a lower salary level and more stringent duties; whereas the short test had a higher salary level requirement and less stringent duties. The 2004 changes to the white collar exemptions did away with these two tests. Afterwards, the current, much simpler test of the exemption was implemented. In its Proposal, the DOL was critical of the 2004 changes to the white collar exemptions. It concluded that the $455 weekly salary level requirement was too low when considering the limitations of the long duties test that had historically been paired with such a low salary level. Proposal p. 49. "This [Proposal] is the first time that the Department has needed to correct for such a mismatch between the existing salary level [$455] and the applicable [current] duties test." Id. (Brackets added). In the DOL's view, the long duties test, eliminated in 2004 but which had a limit on the amount of nonexempt work that could be performed, provided a safeguard against the exemption of white collar workers who should be overtime protected. 
The DOL justified the increased salary level of $50,440 as being the 40th percentile of all full-time salaried workers. Setting the standard salary level at the 40th percentile would effectively correct for the 2004 Rule's single standard duties test that "was equivalent to the former short duties test without a correspondingly higher salary level." Proposal pp. 54-55. "Therefore, without a more rigorous duties test, the salary level set in the 2004 Final Rule is inadequate to serve the salary's intended purpose of the 'drawing of a line separating exempt from nonexempt employees.'" Proposal p. 55. "The salary component of the EAP test for exemption has always worked hand in hand with the duties test in order to simplify the application of the exemption." Proposal p. 57. At a lower salary level, more overtime eligible employees will exceed the salary threshold, and a more rigorous duties test would be required to ensure that they are not classified as falling within an EAP exemption and therefore denied overtime pay. Proposal p. 57. To remedy the DOL's purported error from 2004 of pairing the lower long test salary with the less stringent short duties test, the DOL has proposed setting the salary level in a range of the historical short test salary ratio so that it will work appropriately with the current standard duties test. Proposal p. 58.3 "This suggests that a salary significantly lower than the 40th percentile of full-time salaried workers would pose an unacceptable risk of inappropriate classification of overtime protected employees without a change in the standard duties test." Proposal p. 58 (Italics added). The DOL states that the proposed salary level of $50,440 is at the lower range of the short test salary level – lower than the historical average. Proposal p. 142. "Because the standard duties test [the current duties test] is based on the short duties test – which was intended to work with a higher salary level – and the proposed salary level [$50,440] is below the historic average with a short test salary, a salary level significantly below the 40th percentile would necessitate a more robust duties test to ensure proper application of the exemption." Proposal p. 95. Thus, the DOL wants to decide whether, in view of its proposed salary level of $50,440, changes to the duties test are also warranted. Proposal p. 95. It seems clear that if the DOL lowers its proposed salary level after taking into account the comments it receives, it will attempt to modify the current duties test. Such modification would likely involve adding the 50 percent California Rule or something similar. To avoid this trap, in making comments about the proposed high salary level, employers should also vigorously address the issues and impacts surrounding a change in the duties test. The deadline to make comments to the proposed rule is September 4, 2015.4 In addition, employers, in planning for the proposed change in the salary level, should plan for the possibility that the duties test will be changed as well. 1 Specifically, the DOL seeks comments on the following: A. What, if any, changes should be made to the duties tests? B. Should employees be required to spend a minimum amount of time performing work that is their primary duty in order to qualify for exemption? If so, what should that minimum amount be? C. Should the Department look into the State of California's law (requiring that 50 percent of an employee's time be spent exclusively on work that is the employee's primary duty) as a model? 
Is some other threshold that is less than 50 percent of an employee's time worked a better indicator of the realities of the workplace today? D. Does the single standard duties test for each exemption category appropriately distinguish between exempt and nonexempt employees? Should the Department reconsider our decision to eliminate the long/short duties tests structure? E. Is the concurrent duties regulation for executive employees (allowing the performance of both exempt and nonexempt duties concurrently) working appropriately or does it need to be modified to avoid sweeping nonexempt employees into the exemption? Alternatively, should there be a limitation on the amount of nonexempt work? To what extent are exempt lower-level executive employees performing nonexempt work? Proposal p. 96. 2 Exempt work is defined in the regulations. See 29 CFR 541.702. For a manager, the exemption requires an employee (a) whose primary duty is management of the enterprise in which the employee is employed or of a customarily recognized department or subdivision thereof; (b) who customarily and regularly directs the work of two or more other employees; and (c) who has authority to hire or fire other employees or whose suggestions and recommendations as to the hiring, firing, advancement, promotion or any other change of status of other employees are given particular weight. See 29 CFR 541.100. Management includes activities such as interviewing, selecting and training employees, setting rates of pay and hours of work, directing the work of employees, maintaining production or sales records for use in supervision or control, and the like. 29 CFR 541.102. All other work is nonexempt work. 29 CFR 541.702. 3 Historically, the short test salary level was set at approximately 130 to 180 percent of the long duties test salary level. Proposal p. 58. 4 Comments may be submitted to Mary Ziegler, Director of Division of Regulations, Legislation, and Interpretation, Wage and Hour Division, U. S. Department of Labor, Room S-3502, 200 Constitution Avenue, N.W., Washington, D.C. 20210. Please see the Proposal at page 2 on how to make electronic comments. Also, please note (a) that RIN 1235-AA11 and U.S. Department of Labor, Wage and Hour Division must be placed on all comments; and (b) all comments must be received by 11:59 p.m. on September 4, 2015. For more information please refer to the Proposal at page 2.
null
minipile
NaturalLanguage
mit
null