One by one they came, one by one they failed. Joe Clarke, the 23-year-old canoe slalomist who might just be the man to banish any lingering concerns that Team GB's charge up the medal table would fail to catch fire, won gold here with a thrilling final run. The British No1 had been third fastest in his semi-final but then recorded a time of 88.53 seconds in the final to set his two rivals for gold a target they could not better once penalties had been taken into account. Slovenia's Peter Kauzer finished 0.17 seconds behind the Briton, with the Czech Jiri Prskavec, who picked up a two-second penalty, taking bronze.

As his father Shaun and mother Mandy cheered from the stands, Clarke celebrated the second British gold of the Games with a hug for the rivals who had failed to best him. An hour later, Jack Laugher and Chris Mears added a third on the diving board.

"Joe Clarke, Olympic Champion. Joe Clarke, Olympic Champion! It was what I went to bed dreaming about last night and what I've dreamed of for so many years," said a still-stunned Clarke. "To wake up this morning thinking this is actually the finals of the Olympics and I could come away being the Olympic champion is just like 'wow'.

"Everything pieced together so nicely, I can't put it into words. I knew I was capable but to put down that run in the Olympic final, it is a dream come true."

Clarke first made his mark on the senior international stage in 2014 with a silver medal in K1 at a World Cup event, and followed that up in 2015 with K1 team silver at the European Championships and world K1 team bronze. Clarke said he woke up believing he could win a medal. But no one had quite predicted this.

The Stafford and Stone Canoe Club canoeist had earned his place at the British Olympic trials at the Lee Valley White Water Centre, the same venue where Tim Baillie and Etienne Stott took gold in the C2 competition at the London 2012 Olympics. And just as Baillie and Stott slipped under the radar in 2012, it was Clarke's teammate David Florence who was expected to deliver here in Rio, in the C1 category. But Florence ended up finishing last after a disastrous final run.

His younger teammate showed no such nerves, producing a flawless performance under dark skies at the same Rio Whitewater Stadium. Ducking his head through the gates on the course, he delivered a time good enough to secure Britain's first Olympic medal of any colour in the men's K1 since Campbell Walsh 12 years ago in Athens.

He had already served notice of his medal chances in the semi-finals, in which Slovakia's Jakub Grigar posted the best time, followed by Prskavec, with world champion Kauzer fourth. In the final, Kauzer was out in front with three canoeists remaining but Clarke immediately overhauled him. Prskavec was next to go. He went out aggressively and, had he not picked up a penalty, would have won gold. As it was, Clarke was the one standing on the podium with his gold medal clenched between his teeth and a look of vague disbelief on his face.

"When I woke up I struggled to have breakfast I was so nervous with all the emotions. I thought if it goes to plan I could come away with a medal, but to be Olympic champion, it is something you dream about," he said.

His teammates were equally gobsmacked, with Fiona Pennie tweeting: "Did this just happen to my training buddy?!? Can't believe it!" Michael Vaughan and Matthew Pinsent were among those to immediately send their congratulations.
Meanwhile, another unsuspecting Joe Clarke was being erroneously besieged with congratulations on the social network as he set out to begin his night shift in Worthing. The Olympic champion, for his part, can expect his story to become a lot better known in the coming days. He might also be giving thanks for the most important letter he ever wrote. The 23-year-old first stepped into a kayak with the Cubs and was so bitten by the bug that he approached his local club. They turned him away, saying he was too young. When he was 11, eight places became available at Stafford and Stone Canoe Club. Asked to write a letter explaining why he, of the 60 who applied, should be given a place, he gained a precious berth. From training for a couple of hours a week, he quickly rose through the domestic ranks and in 2006 became the youngest canoeist ever to compete in the sport's premier division. That journey ended here under gloomy skies on a day that ushered in a new wave of Team GB heroes. After receiving his gold medal, he was asked how it felt to be an Olympic champion. "It has a nice ring to it," he said.
This invention relates to a tool for improving the decorative upholstery tacking process. Historically, decorative upholstery tacks were driven into the desired surface by the practitioner holding the tack with his fingers and then hammering the tack with a tack hammer. All too often, the practitioner strikes his fingers with the hammer. No commercial tool that assists in the upholstery tacking process by keeping the fingers clear of the operating tack hammer could be found in the marketplace. Providing such a tool would enhance the upholstery tacking process and provide an improved measure of control and stability in the insertion of upholstery tacks into the desired surface.

Some tools that address the problem of holding and stabilizing a tack were identified in a search, including U.S. Pat. No. 608,555 (Nazel); U.S. Pat. No. 2,049,459 (Lipson); U.S. Pat. No. 2,666,201 (Van Orden); U.S. Pat. No. 2,780,811 (Rodin); U.S. Pat. No. 3,218,030 (Baro); U.S. Pat. No. 3,549,075 (Tsunami); U.S. Pat. No. 3,716,088 (Grey); U.S. Pat. No. 3,764,054 (Monacelli); U.S. Pat. No. 4,029,135 (Searfoss, Jr.); U.S. Pat. No. 4,061,225 (Pettitt); U.S. Pat. No. 4,676,424 (Meadow); and U.S. Pat. No. 4,709,765 (Campanelli). None of these patents provides the teaching for a tool that will enhance the upholstery tacking process and improve the control and stability of inserting an upholstery tack into the desired surface. While some of the aforementioned patents do incorporate elements of my upholstery tacking tool, such as a shaft and a concave area at the end of the tool to receive the tack to be inserted, most of the tools disclosed in the above-identified prior art patents rely on magnets and intricate slits to hold tacks or nails. It would be desirable to provide a tool for improving the upholstery tacking process that would permit the simultaneous grasping of the tack and the end of the tool with the practitioner's fingers, so that only the practitioner's fingers and the outside shape of the tool are required to hold the tack in place before the tool is struck with a tacking hammer to drive the upholstery tack into the desired location.

In summary, the instant invention will improve the decorative upholstery tacking process by keeping the fingers away from the driving force, through the use of an upholstery tacking tool incorporating the principles of the present invention. It is an important object of this invention that the upholstery tacking tool and the decorative upholstery tack being inserted into the desired surface be held simultaneously by the practitioner's fingers before the upholstery tack is driven into the desired location. It is an advantage of this invention that the upholstery tacking tool incorporating the principles of the instant invention will enhance safety, significantly improve the control and stability of the upholstery tack before it is inserted into the desired surface, and substantially decrease the time required for inserting upholstery tacks into furniture and other articles for which upholstery tacks are required.
A Formative Evaluation of a Rent Subsidy Program for Homeless Substance Abusers

Abstract

This article reports a formative evaluation of the Bridge to Home Program, which provides rental subsidies and supportive services to working shelter residents with histories of substance abuse. This evaluation focused on the first 80 participants who departed the program. Thirty-eight percent successfully completed the 2-year program; 26% voluntarily withdrew; and another 36% were terminated for non-compliance. Logistic regression identified several past and present characteristics which significantly increased the odds of successful completion. Among those clients who voluntarily withdrew from the program, variables associated with past independent living experience and recent job terminations distinguished them from those who completed the program. Among those who were involuntarily dismissed from the program, variables associated with past job instability and lack of personal investment in the current job distinguished them from those who completed the program. These characteristics were used to derive working hypotheses for revisions to program service delivery. The relevance of these findings for maintaining housing stability and financial self-sufficiency for this population is also discussed.
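For readers unfamiliar with how logistic regression yields the "odds" language used in the abstract, the following is a minimal, hypothetical Python sketch. The predictor names and data are invented for illustration (the article does not publish its dataset); the point is only that exponentiated coefficients are the odds ratios such an analysis reports.

# Hypothetical sketch of the kind of analysis the abstract describes:
# logistic regression relating client characteristics to program
# completion, with coefficients exponentiated into odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80  # the evaluation covered the first 80 departing participants
df = pd.DataFrame({
    "months_at_current_job": rng.integers(0, 36, n),     # invented predictor
    "prior_independent_living": rng.integers(0, 2, n),   # invented predictor
    "completed": rng.integers(0, 2, n),                  # 1 = finished the 2-year program
})

X = sm.add_constant(df[["months_at_current_job", "prior_independent_living"]])
model = sm.Logit(df["completed"], X).fit(disp=False)

# exp(coefficient) is the multiplicative change in the odds of
# completion per unit increase in each predictor.
print(np.exp(model.params))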
import { Expose } from "class-transformer"
import { IsString } from "class-validator"

// DTO for the "what to expect" text shown to applicants.
// @Expose marks each field for inclusion when class-transformer
// serializes/deserializes; @IsString lets class-validator reject
// payloads where a field is missing or not a string.
export class WhatToExpect {
  @Expose()
  @IsString()
  applicantsWillBeContacted: string

  @Expose()
  @IsString()
  allInfoWillBeVerified: string

  @Expose()
  @IsString()
  bePreparedIfChosen: string
}
Mandatory holiday for all government workers announced as temperatures expected to soar above 50C

The Iraqi government has announced a two-day mandatory official holiday beginning on Wednesday due to a heatwave. A statement issued by the Iraqi cabinet said temperatures were expected to soar above 50C (122F). It is the first heat advisory issued by the Iraqi government this summer. The public holiday will apply to all government workers. High temperatures in summer are common in Iraq, and endemic electricity outages make life harder for Iraqis when temperatures soar. To cope with the heat, Iraqis either stay indoors or swim in rivers. In some public places, showers are set up for those who want to cool down. It is not uncommon for such public holidays to be declared when heatwaves hit during Iraq's long, hot summers.
import pandas as pd
import matplotlib.pyplot as plt

"""
Feature Importance
"""

# importance_type
# 'weight' - the number of times a feature is used to split the data across all trees.
# 'gain' - the average gain across all splits the feature is used in.
# 'cover' - the average coverage across all splits the feature is used in.
# 'total_gain' - the total gain across all splits the feature is used in.
# 'total_cover' - the total coverage across all splits the feature is used in.

# Requires a fitted xgboost model.
def get_feature_importance(xgb_model):
    feature_important = xgb_model.get_booster().get_score(importance_type="weight")
    keys = list(feature_important.keys())
    values = list(feature_important.values())

    fimp_01 = pd.DataFrame(data=values, index=keys, columns=["score"]).sort_values(
        by="score", ascending=False
    )
    fimp_01.nlargest(30, columns="score").plot(
        kind="barh", figsize=(10, 8)
    )  # plot the top 30 features
    plt.show()
    return fimp_01
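A minimal usage sketch for the function above, assuming xgboost and scikit-learn are installed; the dataset and model settings are illustrative, not part of the original snippet.

# Hypothetical usage: fit a small model on a toy dataset, then inspect
# its 'weight' importances via get_feature_importance.
import xgboost as xgb
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer(as_frame=True)  # DataFrame input preserves feature names
model = xgb.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(data.data, data.target)

scores = get_feature_importance(model)  # draws the bar chart, returns the DataFrame
print(scores.head())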
1. Field of the Invention

The present invention relates to a method of removing post-etch residues, and more particularly to a method of removing post-etch residues without causing arcing.

2. Description of the Prior Art

Damascene interconnect processes incorporating copper are known in the art and are referred to as "copper damascene processes" in the semiconductor industry. Generally, copper damascene processes are categorized into the single damascene process and the dual damascene process. Because the dual damascene process has the advantages of simplified processing, lower contact resistance between wires and plugs, and improved reliability, it is widely applied in damascene interconnect techniques. In addition, to reduce the resistance and parasitic capacitance of the multi-level interconnect and to improve the speed of signal transmission, state-of-the-art dual damascene interconnects are fabricated by filling trench or via patterns, located in a dielectric layer comprising low-k material, with copper, and then performing a planarization process to obtain a metal interconnect. According to the patterns located in the dielectric layer, the dual damascene process is categorized into the trench-first process, via-first process, partial-via-first process, and self-aligned process. However, when a via or a trench is formed by etching the dielectric layer, a large amount of charge accumulates on the dielectric layer. Therefore, when the post-etch residues on the dielectric layer are removed with a cleaning solution, arcs may form as the cleaning solution contacts the heavily charged surface of the dielectric layer. Semiconductor elements below or on the dielectric layer may then crack because of the arcing.
Fresh violence claims 12 lives as negotiations to end the bloodshed continue. "If the IGAD meeting goes on in spite of our call for it not to go on," said Anyang Nyongo, secretary-general of the opposition Orange Democratic Movement (ODM), "we shall call upon Kenyans to come out in their big numbers for a peaceful demonstration in Nairobi to strongly protest." The government has banned street protests, and earlier ones have led to looting, rioting and a crackdown by police.

Kofi Annan, the former UN secretary-general mediating talks between the two sides, rebuked the opposition for the threat. "We have a demand that the parties avoid provocative statements outside negotiations," Annan told reporters. "We are going to be vigilant on that. I think there is a clear understanding that it should not have been done and there will be no mass protests." On Tuesday, Annan pushed the two sides to focus on "the political crisis arising from the disputed presidential electoral results". Annan began his mission in Kenya after Raila Odinga, the opposition leader, insisted on external mediation. Odinga says Mwai Kibaki, the Kenyan president returned to power in disputed elections in December, won his reelection illegally through vote-rigging.

Of the 12 people killed in violence on Tuesday, nine were shot by police cracking down on gangs of youths who have attacked houses and other property, police sources said. The death toll from the post-election violence has grown to more than 1,000, according to the Red Cross, with 300,000 people displaced since the December elections returned Kibaki to power. The opposition has refused to recognise Kibaki's victory, claiming widespread rigging. International observers have also cited serious flaws during vote-counting.

UN human rights investigators headed to Kenya to conduct a three-week investigation into alleged violations committed since the elections, a UN spokesman said on Tuesday. The team of seven will interview officials, survivors of ethnic violence and the relatives of victims, and report to Louise Arbour, the UN High Commissioner for Human Rights. The team is due to arrive in Nairobi on Wednesday.

Annan's efforts to mediate between the two sides suffered a setback on Monday when Cyril Ramaphosa, a South African negotiator whom he had asked to help mediate the talks, pulled out of mediation efforts amid claims by Kibaki's allies that he was biased. The South African government angrily rejected those claims. Aziz Pahad, South Africa's deputy foreign minister, said Ramaphosa had a proven record as a trouble-shooter and could have played a valuable role in bringing an end to post-election violence. "The role played by Ramaphosa during the South African democratic process, as well as his contribution to the Irish peace process, has indicated his ability to seek solutions in the interests of peace and democracy and without taking sides," Pahad said. The millionaire businessman denied he had business dealings with Odinga but acknowledged he had failed to win the trust of both sides.
Screening for fetal growth restriction with universal third trimester ultrasonography in nulliparous women in the Pregnancy Outcome Prediction (POP) study: a prospective cohort study

Summary

Background: Fetal growth restriction is a major determinant of adverse perinatal outcome. Screening procedures for fetal growth restriction need to identify small babies and then differentiate between those that are healthy and those that are pathologically small. We sought to determine the diagnostic effectiveness of universal ultrasonic fetal biometry in the third trimester as a screening test for small-for-gestational-age (SGA) infants, and whether the risk of morbidity associated with being small differed in the presence or absence of ultrasonic markers of fetal growth restriction.

Methods: The Pregnancy Outcome Prediction (POP) study was a prospective cohort study of nulliparous women with a viable singleton pregnancy at the time of the dating ultrasound scan. Women participating had clinically indicated ultrasonography in the third trimester as per routine clinical care and these results were reported as usual (selective ultrasonography). Additionally, all participants had research ultrasonography, including fetal biometry at 28 and 36 weeks' gestational age. These results were not made available to participants or treating clinicians (universal ultrasonography). We regarded SGA as a birthweight of less than the 10th percentile for gestational age, and screen positive for SGA as an ultrasonographic estimated fetal weight of less than the 10th percentile for gestational age. Markers of fetal growth restriction included biometric ratios, utero-placental Doppler, and fetal growth velocity. We assessed outcomes for consenting participants who attended research scans and had a livebirth at the Rosie Hospital (Cambridge, UK) after the 28 weeks' research scan.

Findings: Between Jan 14, 2008, and July 31, 2012, 4512 women provided written informed consent, of whom 3977 (88%) were eligible for analysis. Sensitivity for detection of SGA infants was 20% (95% CI 15-24; 69 of 352 fetuses) for selective ultrasonography and 57% (51-62; 199 of 352 fetuses) for universal ultrasonography (relative sensitivity 2.9, 95% CI 2.4-3.5, p<0.0001). Of the 3977 fetuses, 562 (14.1%) were identified by universal ultrasonography with an estimated fetal weight of less than the 10th percentile and were at an increased risk of neonatal morbidity (relative risk 1.60, 95% CI 1.22-2.09, p=0.0012). However, estimated fetal weight of less than the 10th percentile was only associated with the risk of neonatal morbidity (p for interaction = 0.005) if the fetal abdominal circumference growth velocity was in the lowest decile (RR 3.9, 95% CI 1.9-8.1, p=0.0001). 172 (4%) of 3977 pregnancies had both an estimated fetal weight of less than the 10th percentile and abdominal circumference growth velocity in the lowest decile, and had a relative risk of delivering an SGA infant with neonatal morbidity of 17.6 (9.2-34.0, p<0.0001).

Interpretation: Screening of nulliparous women with universal third trimester fetal biometry roughly tripled detection of SGA infants. Combined analysis of fetal biometry and fetal growth velocity identified a subset of SGA fetuses that were at increased risk of neonatal morbidity.

Funding: National Institute for Health Research, Medical Research Council, Sands, and GE Healthcare.
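The headline screening figures follow directly from the counts reported in the Findings; a small Python sketch makes the arithmetic explicit (the counts are taken from the abstract; nothing else is assumed).

# Reproduce the abstract's screening arithmetic from its reported counts.
sga_total = 352          # SGA infants (birthweight < 10th percentile)
detected_selective = 69  # flagged by clinically indicated (selective) scans
detected_universal = 199 # flagged by universal research biometry

sens_selective = detected_selective / sga_total        # ~0.20, i.e. 20%
sens_universal = detected_universal / sga_total        # ~0.57, i.e. 57%
relative_sensitivity = sens_universal / sens_selective # ~2.9, "roughly tripled"

print(f"selective: {sens_selective:.0%}, universal: {sens_universal:.0%}, "
      f"relative: {relative_sensitivity:.1f}x")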
Supplementary References

Plot of measurements of abdominal circumference (AC), measured in millimetres (mm), at the time of the clinical (reported) 20 week measurement and the research measurements at ~28 and ~36 weeks. The green line is the 50th percentile and the black lines above and below are the 5th and 95th percentiles of a previously published reference range.

The receiver operating characteristic (ROC) curve for (A) small for gestational age (SGA, <10th percentile) and (B) severe SGA (<3rd percentile), using the estimated fetal weight (EFW) percentile (i.e. the percentile from the last scan performed prior to birth). Solid lines represent universal ultrasonography and dashed lines represent selective ultrasonography. When the results of selective ultrasonography were analysed, 58% (2,311/3,977) of women did not have a clinically indicated scan at or after 26 weeks' gestational age. In this group, EFW was imputed using a sex-specific population median. Areas under the ROC curves (95% confidence interval) are 0.

The receiver operating characteristic (ROC) curve for (A) small for gestational age (<10th percentile) and (B) severe small for gestational age (<3rd percentile), using the estimated fetal weight percentile from the 36 week research scan. The outcome was, necessarily, confined to births that occurred following the 36 week research scan. Areas under the ROC curves (95% confidence interval) are 0.86 (0.85-0.88) and 0.91 (0.89-0.94), respectively.

Regression models were fitted between each measurement and GA within each GA interval (18-22, 26-30 and 34-38 completed weeks), i.e. excluding GAs without data points, using published methodology [5]. Doppler PIs were log-transformed prior to model fitting. The equations of the mean and SD models give the expected mean and SD at each GA within the respective GA range. GA-specific z scores were calculated as (observed value - fitted mean) / fitted SD. Lowest and highest 10% (deciles) were determined from the z score distributions. For AC growth velocity, the change in the z score between the 20 week scan and the last scan was calculated and the lowest decile of the difference was determined. For 97% of women (n=3850), the 36 week scan was the last scan. If the 36 week scan measurement was missing (delivery occurred before the 36 week scan or data were missing at the 36 week scan), the decile from the 28 week scan was used instead. Similarly for AC growth velocity, the lowest decile of the change between the 20 and 28 week scans was used if the 36 week scan result was missing. For all z scores, mean=0.0 and SD=1.0. For AC growth velocity (change in z score), mean=0.0 and SD=1.

Data are mean (SD) for biometric measurements and geometric mean (geometric SD) for Doppler measurements. Biometric and Doppler measurements were performed as previously described [2,6-8]. AC and HC were measured using the ellipse function of the machine. BPD was measured from the outer surface (near side to the probe) to the inner surface (far side to the probe). Umbilical Doppler was assessed in a free loop of cord in the middle of its length (i.e. outside the regions of the umbilical and placental insertions), and uterine Doppler was assessed where the vessels cross the external iliac artery and vein.

In the research scans at 28 and 36 weeks, the screen display of gestational age (GA) equivalence of measurements in the machine was disabled, to prevent ad hoc assessment of the appropriateness of growth measurements.
EFW was calculated using published formulae [3].

Summary of results: Inter-observer reliability and agreement statistics suggest a very small measurement error for fetal biometric measurements in both the 20 week and 36 week scans. Differences in measurements between two sonographers were slightly larger for FL than for BPD, HC and AC. These differences had a cumulative effect on EFW, which was calculated from the four measurements, resulting in a slightly higher mean coefficient of variation (CV) than was observed for individual measurements. There was more variation in Doppler measurements in both scans (mean CV range: 5.58-8.22%) than in biometric measurements (mean CV range: 0.46-3.15%). There was no clear indication that the difference in measurements between sonographers varied according to the mean of the two measurements, except for uterine artery Doppler PI measurements, where the largest differences tended to occur at the top end of the distribution (Supplementary Figure 13: Bland-Altman plots).

ROC denotes receiver operating characteristic, GA denotes gestational age and SGA denotes small for gestational age. SGA is defined as birth weight <10th percentile and severe SGA is defined as birth weight <3rd percentile. The area under the ROC curve is calculated using the estimated fetal weight percentile.

Sensitivity Analysis 1: Including women who defaulted from one or more research scans and defining their missing scan(s) as screen negative.
Sensitivity Analysis 2: Including women who defaulted from one or more research scans and, if they had a clinically indicated scan at 26-30 weeks or 34-38 weeks, using the last EFW within the respective time window as the result of the research scan. If no clinically indicated scan was performed within that time window, the record was treated as screen negative.
Sensitivity Analysis 3: Excluding all records where the research scan result was revealed for any reason.
Sensitivity Analysis 4: Re-classifying the research scan result as screen positive if the last research scan was negative but a subsequent last clinically indicated scan was screen positive.

Abbreviations: SGA denotes small for gestational age, EFW denotes estimated fetal weight, ACGV denotes abdominal circumference growth velocity, RR denotes relative risk and CI denotes confidence interval. All EFW are based on population-based percentiles. SGA is defined as birth weight <10th percentile, screen positive is defined as EFW <10th percentile and screen negative is defined as EFW ≥10th percentile. ACGV is based on the change in the gestational-age-adjusted z score between the 20 week scan and the last scan before birth. The z score at each scan was calculated using growth charts generated by the Fetal Growth Longitudinal Study component of the INTERGROWTH-21st Project, an international consortium which constructed fetal growth standards using methods recommended by the WHO [1]. The lowest decile of the change in z score between the 20 week scan and the last scan was defined within the study cohort. For 97% of women (n=3850), the 36 week scan was the last scan. If the 36 week scan measurement was missing (delivery occurred before the 36 week scan or data were missing at the 36 week scan), the decile from the 28 week scan was used instead. The change in z score cut-off point for the lowest decile was -1.594 from the 20 week scan to the 36 week scan (-1.2255 from the 20 week to the 28 week scan). Neonatal morbidity is a composite outcome, i.e.
≥1 of the three outcomes specified: metabolic acidosis (defined as pH<7.1 and a base deficit of more than 10 mmol/L), 5-minute Apgar <7, or neonatal unit admission. Neonatal unit admission was defined as admission to the Neonatal Intensive Care Unit, the High Dependency Unit, or the Special Care Baby Unit. Severe adverse perinatal outcome is a composite outcome, i.e. ≥1 of the following outcomes: stillbirth (not due to congenital anomaly), neonatal death at term (not due to congenital anomaly), hypoxic ischaemic encephalopathy at term, use of inotropes at term, mechanical ventilation at term, or severe metabolic acidosis at term (defined as pH<7.0 and a base deficit of more than 12 mmol/L). P-values are from 2-sided Fisher's exact tests.

In the analysis stratified by ACGV (defined by the INTERGROWTH-21st growth standard), an EFW <10th percentile was associated with the risk of any neonatal morbidity when the fetal ACGV was in the lowest decile (RR=4.33, 95% CI 1.96 to 9.57), but there was no association in the normal ACGV group (RR=1.12, 95% CI 0.76 to 1.66), P for interaction = 0.003.

Abbreviations: FGR denotes fetal growth restriction, SGA denotes small for gestational age, AC denotes abdominal circumference, ACGV denotes abdominal circumference growth velocity, FL denotes femur length, HC denotes head circumference, RR denotes relative risk, CI denotes confidence interval and N/A denotes not applicable. The five previously described indicators of FGR were classified as the extreme decile associated with FGR (highest or lowest, as appropriate) compared with the other 9 deciles in the cohort. Neonatal morbidity and severe adverse perinatal outcome are the composite outcomes defined above. P-values are from 2-sided Fisher's exact tests.
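To make the z-score and growth-velocity machinery described above concrete, here is a compact Python sketch. It is illustrative only: plain polynomial fits stand in for the published mean/SD modelling methodology, and the function and variable names are invented.

# Sketch of the GA-adjusted z score and AC growth velocity (ACGV)
# computation described in the supplement. Polynomial fits are a
# simplified stand-in for the published methodology.
import numpy as np

def fit_ga_models(ga_weeks, values, deg=2):
    """Fit the mean and SD of a measurement as functions of gestational age.

    SD is estimated from absolute residuals scaled by sqrt(pi/2),
    the half-normal correction; a stand-in for the published models.
    """
    mean_fit = np.polyfit(ga_weeks, values, deg)
    resid = values - np.polyval(mean_fit, ga_weeks)
    sd_fit = np.polyfit(ga_weeks, np.abs(resid) * np.sqrt(np.pi / 2), deg)
    return mean_fit, sd_fit

def ga_zscore(ga, value, mean_fit, sd_fit):
    """GA-specific z score: (observed value - fitted mean) / fitted SD."""
    return (value - np.polyval(mean_fit, ga)) / np.polyval(sd_fit, ga)

def acgv_lowest_decile(z_20wk, z_last):
    """Flag the cohort's lowest decile of AC growth velocity.

    ACGV is the change in the AC z score between the 20 week scan and
    the last scan; the lowest decile of that change flags slow growth.
    """
    delta = np.asarray(z_last) - np.asarray(z_20wk)
    return delta <= np.quantile(delta, 0.10)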
/**
 * Helper class that supports the creation of the context menus that are shown in the graph.
 */
public final class CMenuBuilder {
  /**
   * You are not supposed to instantiate this class.
   */
  private CMenuBuilder() {
  }

  /**
   * Adds menus related to comments to a given node context menu.
   *
   * @param menu The node context menu that is extended.
   * @param model The model of the graph the clicked node belongs to.
   * @param node The clicked node.
   */
  public static void addCommentMenu(
      final JPopupMenu menu, final CGraphModel model, final INaviViewNode node) {
    Preconditions.checkNotNull(menu, "IE02140: Menu argument can not be null");
    Preconditions.checkNotNull(node, "IE02143: Node argument can not be null");

    menu.add(CActionProxy.proxy(new CActionEditComments(model, node)));
    menu.add(CActionProxy.proxy(
        new CActionCreateCommentNode(model.getParent(), model.getGraph().getRawView(), node)));
    menu.addSeparator();
  }

  /**
   * Adds menus related to node selection to a given node context menu.
   *
   * @param menu The node context menu to extend.
   * @param graph The graph the clicked node belongs to.
   * @param node The clicked node.
   */
  public static void addSelectionMenus(
      final JPopupMenu menu, final ZyGraph graph, final NaviNode node) {
    Preconditions.checkNotNull(menu, "IE02144: Menu argument can not be null");
    Preconditions.checkNotNull(graph, "IE02145: Graph argument can not be null");
    Preconditions.checkNotNull(node, "IE02146: Node argument can not be null");

    final JMenu selectionMenu = new JMenu("Selection");

    selectionMenu.add(CActionProxy.proxy(new CActionSelectNodePredecessors(graph, node)));
    selectionMenu.add(CActionProxy.proxy(new CActionSelectNodeSuccessors(graph, node)));

    if (!graph.getSelectedNodes().isEmpty()) {
      selectionMenu.add(CActionProxy.proxy(new CGroupAction(graph)));
    }

    if (node.getRawNode() instanceof INaviCodeNode) {
      try {
        final INaviFunction parentFunction =
            ((INaviCodeNode) node.getRawNode()).getParentFunction();
        selectionMenu.add(CActionProxy.proxy(
            new CActionSelectSameParentFunction(graph, parentFunction)));
      } catch (final MaybeNullException exception) {
        // Obviously we can not select nodes of the same parent function if there
        // is no parent function.
      }
    } else if (node.getRawNode() instanceof INaviFunctionNode) {
      final INaviFunction function = ((INaviFunctionNode) node.getRawNode()).getFunction();
      selectionMenu.add(CActionProxy.proxy(
          new CActionSelectSameFunctionType(graph, function.getType())));
    }

    menu.add(selectionMenu);
    menu.addSeparator();
  }

  /**
   * Adds menus related to node tagging to a given node context menu.
   *
   * @param menu The node context menu to extend.
   * @param model The model of the graph the clicked node belongs to.
   * @param node The clicked node.
   */
  public static void addTaggingMenu(
      final JPopupMenu menu, final CGraphModel model, final NaviNode node) {
    Preconditions.checkNotNull(menu, "IE02147: Menu argument can not be null");
    Preconditions.checkNotNull(model, "IE02148: Model argument can not be null");
    Preconditions.checkNotNull(node, "IE02149: Node argument can not be null");

    final JMenuItem tagNodeItem = new JMenuItem(CActionProxy.proxy(
        new CTagNodeAction(model.getParent(), model.getGraphPanel().getTagsTree(), node)));
    final JMenuItem tagSelectedNodesItem =
        new JMenuItem(CActionProxy.proxy(new CTagSelectedNodesAction(
            model.getParent(), model.getGraphPanel().getTagsTree(), model.getGraph())));

    // Tagging requires a tag to be selected in the tags tree.
    final CTagsTree tree = model.getGraphPanel().getTagsTree();
    tagNodeItem.setEnabled(tree.getSelectionPath() != null);
    tagSelectedNodesItem.setEnabled(
        (tree.getSelectionPath() != null) && !model.getGraph().getSelectedNodes().isEmpty());

    menu.add(tagNodeItem);
    menu.add(tagSelectedNodesItem);
    menu.addSeparator();
  }
}
package de.bruenni.wjax2017.soccer;

import javax.enterprise.context.ApplicationScoped;

// CDI bean: a single application-wide instance of the soccer service.
@ApplicationScoped
public class SoccerService {
  public SoccerService() {
  }
}
Television Advertising as an Artwork Representing National Identity

Advertising has a dual role: on one side it is a medium of information that conveys messages, both commercial and non-commercial, to the audience; on the other side it is an artwork (applied art) with all its appeal. The priority for advertising is marketing and selling products or services. In its representation, advertising always uses aesthetic elements, which in principle can be a great attraction for the products or services offered. Concepts that are often represented in ad impressions include social status, ideal image, lifestyle, identity, etc., displayed implicitly or explicitly. This study focuses on the representation of Indonesia's national identity in SGM formula milk television commercials. The purpose of the study is to provide a description and interpretation that open up insight and knowledge into how Indonesia's national identity is represented in advertisements for SGM children's formula milk. The method used is interpretive and qualitative. The results show that the advertising of SGM formula milk, as a work of applied art, represents Indonesian national identity, which can be classified into three parts: 1) Indonesian culture, religion and ethnicity; 2) the Nusantara territory (the natural enchantment of Indonesia and its urban areas); 3) the characteristics of Indonesian communities (habits/lifestyle), which can be interpreted as those of a socialist and minimalist society. Of these three classifications of the representation of Indonesia's national identity, the most dominant element displayed in the SGM formula milk television commercials is religion.

Introduction

Advertising is a bridge between entrepreneurs or producers and the community. Advertising has a dual role: one side is a medium of information to convey messages, both commercial and non-commercial, to the audience; the other side is an artwork of applied art with all its appeal. Soedarsono (1998: 223-233) explains that advertising is a part of fine art included in the scope of applied art, an art category whose aesthetic expressions fall into the 'art in frame' genre. As applied artwork, the representation of aesthetic values contained in advertisements is not intended as an expression of mere beauty; it functions for 'other interests' outside the realm of art. A similar statement is made by Adorno (2004: 284): advertising as applied artwork is an art creation whose main realization is not intended to fulfil the aesthetic dimension, but rather other interests outside art itself. Advertising prioritizes the marketing and sale of products or services; on this basis, its representations always use aesthetic elements, which in principle can be a great attraction for the products or services offered. Through advertising, using print and electronic media, one of which is television advertising, products or services are introduced and promoted to the community in a sustained manner. Television advertising is a medium for selling goods or services, not entertainment, because an advertisement only reports on an item or service and has nothing to do with enjoying the display of advertisements (Bungin, 2011: 212).
Television advertising is widely recognized as creative artwork because it can build a positive image of a brand through an emphasis on visual art, audio, and motion. Creativity in the design of television advertising as a collective (team) work of art is not only measured by the achievement of quality or aesthetic value; it can also be observed in the power of advertising to attach messages able to influence potential customers. An advertisement does not only offer a product; it also represents something, becoming a medium for the delivery of meaning that producers want to convey to consumers. This representation can be seen through the structure of advertising, namely its visual and audio elements. Representation is a simple term that contains meaning: a depiction of a matter in life through a medium. Danesi (2010: 3-4) defines representation as a process of recording ideas, knowledge, or messages physically. It can be more precisely defined as the use of "signs" (pictures, sounds, etc.) to re-display something that is absorbed, sensed, imagined, or felt in physical form. David Croteau and William Hoynes (2000: 194) explain that representation is the result of a selection process that underlines certain things while others are ignored. In media representation, the signs used to represent something pass through a selection process: whatever accords with the interests and the communication objectives is used, while other signs are ignored. Representation in television advertising certainly cannot be separated from its realization in the form of collaborating concepts, formed from the relationship between signs and meanings. The concepts commonly represented include social status, ideal image, lifestyle, identity, etc., displayed implicitly or explicitly.

Nowadays, advertisements aired on television compete to outperform each other, not only across different product categories but also within the same category. Advertisers try to create preferences and unique ideas, always offering a new concept claimed to be better than its competitors. It is not surprising, then, that to maintain their position and create differentiation, producers work together with advertising agencies to develop their intellect, insight, and taste or sensitivity in order to attract interest and influence the target audience. One approach is to carry the concept of Indonesia's national identity in promoting products or services. The use of the concept of 'Indonesian-ness' in advertising is expected to communicate the message well, especially to the people of Indonesia, so that people can easily remember the ad. Although this trend in advertising has been going on for a long time, in Indonesia in 2016 advertisements foregrounding the culture and natural charm of Indonesia appeared increasingly often, typically for commercial products such as energy drinks, medicines, instant foods, and dairy products for children. One of these is the formula milk producer PT Sarihusada Generasi Mahardhika, with its SGM product brand. This study is based on observations the author made of six SGM formula milk television commercials aired between 2016 and 2019.
The author assumes that PT Sarihusada, the producer of SGM brand formula milk in Indonesia, besides complying with Indonesian government regulations on the promotion of formula milk listed in Permenkes 39/2013, also wants to reach out and gain support (acceptance) from all Indonesian people, one way being to carry the concept of 'Indonesian-ness', displaying various elements that represent Indonesia's national identity. The term national identity comes from the words 'identity' and 'national'. Identity literally means the characteristics or signs inherent in someone or something that distinguish it from others (Azra, 2005: 23). The word national, meanwhile, refers to an identity attached to larger groups bound by similarities, both physical, such as culture, religion and language, and non-physical, such as desires, ideals, and goals. National identity gives rise to group actions (collective actions given national attributes) that are manifested in organizations or movements bearing national attributes (Azra, 2005: 25). National identity is essentially a manifestation of cultural values that grow and develop in the life of a nation, with special characteristics that distinguish it from other nations.

Research on the relationship between advertising and Indonesian nationalism has been conducted before. From the many existing studies, two have been chosen as references for this study. The first is research published as a thesis at the Faculty of Social and Political Sciences, Department of Communication, Postgraduate Program, University of Indonesia, in 2012, entitled "Indonesia, Nationalism, and Advertising (Reception Analysis of 3 Television Advertisements with Indonesian Theme)", written by Rizky Rachdian S. This study aims to see how audiences interpret messages or discourses from television advertisements. Its main question is how audiences interpret the phenomenon of advertising with an Indonesian theme and how the public then interprets Indonesian nationalism. Rizky's research uses a qualitative approach with in-depth interviews as the method for gathering primary data. The results draw on Stuart Hall's television theory, in which audience reception divides into three positions of meaning: dominant-hegemonic, negotiated, and oppositional. The research has theoretical implications for the understanding of nationalism in Indonesia, especially among the younger generation, and will also be useful for the development of related industries in the future. Rizky concluded that each informant's understanding of nationalism and Indonesia was shaped by images of Indonesia received through a hegemonic system, leading them to accept the images of Indonesia acquired during their school years. What makes the difference is that the advertising phenomenon with an Indonesian theme is not only a response to Indonesia's troubled condition; according to some informants, it is also driven by trends in the media industry. Another motive is to showcase the local companies whose products are advertised, a kind of pride amid the influx of foreign companies into Indonesia.
The second is research conducted by Ria Angelia Wibisono in 2008, entitled "Representation of Nationalism in Corporate Advertising of PT. Gudang Garam Tbk", published in the E-Journal (http://puslit.petra.ac.id/journals/communication/) of the Communication Studies Department, Faculty of Communication Sciences, Petra Christian University. Ria's research discusses how nationalism is represented in the corporate advertising of PT. Gudang Garam Tbk., in the versions "My Indonesian Home" and "My Indonesian Home: Light of Hope". The approach is qualitative, with a semiotic method based on John Fiske's television code theory, Saussure's syntagmatic concept, and the grammar of film and television. The nationalism represented in the advertisements studied included independence, social solidarity, patriotism, social justice, and national identity. Ria explains that although the advertising was made as a CSR (Corporate Social Responsibility) campaign, certain symbols in it remain identical to those commonly used to represent Gudang Garam cigarette products. Although the advertising is called a nationalism campaign and a form of CSR of PT. Gudang Garam Tbk., it does not merely contribute to society; it also benefits the company itself, for example through a positive image as a nationalist company. In addition, in the "My Indonesian Home: Light of Hope" ad, a symbol identical with Gudang Garam is tucked in: a trumpet blown by a woman. The trumpet is a musical instrument that has always been present in advertisements of PT. Gudang Garam Tbk. since the 1990s, especially for Surya products.

From these previous studies, it can be concluded that both are relevant to this study because both analyze advertisements relating to Indonesian nationalism. The difference is that the research conducted by Rizky Rachdian S aims to see how audiences interpret messages or discourses from television advertisements, and its objects were television advertisements aired in 2011: the Djarum Super advertisement "My Great Adventure Indonesia", the Kopi Kapal Api advertisement "A cup of spirit for Indonesia", and the Nutrisari "Heritage" advertisement. This research, in contrast, focuses on identifying and analytically describing the representation of Indonesia's national identity, and its objects are six television commercials for SGM formula milk products aired between 2016 and 2019; using a single product over a defined period should make the results more detailed and in-depth. Compared with Ria Angelia Wibisono's research, although both use a qualitative approach with a semiotic method, Ria's study is based on John Fiske's television code theory, Saussure's syntagmatic concept, and the grammar of film and television, whereas this study uses Roland Barthes's semiotic approach, with a unit of analysis in the form of the ad structure that refers to Rossiter and Percy's theory. Ria chose two corporate advertisements of PT. Gudang Garam Tbk as research objects, while this study chose six television advertisements for SGM formula milk products. Despite these differences, the two previous studies enrich the insights and provide adequate references for this study.
Across all the previous studies observed, there are no articles, theses or dissertations that analyze the representation of Indonesian national identity as a concept in SGM formula milk television commercials. This research is therefore expected to have its own novelty, distinguishing it from previous research while remaining relevant to the field. Based on the above background, the discussion in this study focuses on analyzing the representation of Indonesian national identity in television advertisements for SGM children's formula milk. The purpose and benefit of this research is to provide a description that opens up insight and knowledge into how Indonesia's national identity is represented in SGM children's formula milk advertisements.

Theory and Research Method

Television is an audiovisual medium. To analyze how Indonesia's national identity is represented in SGM children's formula milk television commercials, the visual and audio elements of the six advertisements are the objects of research, based on Rossiter and Percy's theory, which divides the structure of television advertisements into several elements: seen words, the words that appear in ad impressions and can affect the product image in the minds of viewers; picture elements, the images shown, including the objects used, the models used, and the scenes displayed; the color element, the composition or harmony of the colors of the image and the light settings in the display; the movement element, the movements seen in ad impressions that can influence one's emotions and draw one in; and the heard-words element, also known as the audio element (Rossiter and Percy, 1997: 209). Representation connects concepts in the mind. A concept in the mind must be translated into a shared language, so that concepts or ideas can be rendered in written language, body language, oral language, as well as photos and visuals (signs). These signs represent concepts, which in this study are the concepts of Indonesian national identity. The concepts in the mind together form a meaning system in culture. Analyzing the representation of Indonesian national identity therefore requires a theory of signs with an interpretive approach, namely Roland Barthes's semiotics. According to Fiske (2007: 118), the core of Barthes's theory is the idea of two orders of signification. Likewise, Piliang (2003: 261) explains that Barthes developed two levels of signification that make it possible to produce meaning that is also stratified, namely denotation and connotation.

This study uses interpretive qualitative research methods with a textual analysis approach. According to Stokes (2007: 15), qualitative research is the name given to a research paradigm that is primarily concerned with meaning and interpretation. Textual analysis takes the view that discourse consists of form and meaning, so the relationships between parts of a discourse fall into two types: relationships of form, called cohesion, and relationships of meaning, called coherence. A text can be understood as a series of structured language statements. Text is everything written, pictures, films, photographs, graphic designs, song lyrics and other things that produce meaning (Ida, 2014: 62; McKee, 2001).
Textual analysis is a discourse analysis that relies internally on the text under study. The approach yields interpretations produced from the text; these interpretations are the product of encoding and decoding the signs produced in textual units. The study of textual analysis begins by interpreting the signs produced in a media text (Ida, 2014: 65; McKee, 2001). The application of textual analysis in this study is also tied to the theories used, specifically Roland Barthes's semiotic theory. The data generated from the research are descriptive, analyzed with the inductive method as the steps for addressing the problems formulated previously. The primary data source in this research is captures of SGM formula milk television commercials. To facilitate data collection and obtain adequate recording quality, the data were collected using computer and internet technology, by downloading recordings of the SGM formula milk advertisements uploaded by the company itself on its official YouTube channel, at https://www.youtube.com/user/AkuAnakSGM/videos. The six SGM formula milk television commercials used as research objects were chosen because they are the most representative and the most in accordance with the theme of this study. The secondary data are literature such as books, journals, articles, videos, and print and online newspapers about television advertisements, SGM products, and anything relevant to the research topic.

The data collection process involved several steps. The first step was observing several formula milk advertisements on Indonesian national TV stations, then selecting the children's formula milk advertisements that are most representative and in accordance with the research topic as research objects. The second step was documenting the selected advertisements by recording them. The third step was re-watching and observing the research objects and reducing the documentation to pieces of pictures/scenes that show each advertisement's storyline. The final step was classifying the data by visual and audio elements.

Discussion

Based on the analysis and interpretation of data carried out on the six SGM formula milk advertisements from 2016 to 2019, the researchers classify the representation of Indonesia's national identity into three parts. Below is a description of what can be identified through the visual and audio elements of each advertisement scene.

Representation of Indonesian Culture, Religion, Ethnicity

Balinese Culture

Bali, the Island of the Gods, is famous worldwide as an island of stunning natural and cultural tourist destinations, drawing crowds of domestic and foreign tourists on vacation to enjoy its panorama and its enchanting culture. For the people of Indonesia, Bali's fame in the eyes of the world is a matter of pride. This appears to be one of the considerations that led SGM's producers to display Balinese culture in the opening seconds of their product's advertisement.
This is represented through the advertisements' visuals, including typical Balinese clothes (udeng, kamen), traditional Balinese dance, and other attributes. Bali is famous for its cultural diversity; for the Balinese, dance is a form of culture often exhibited in community life. By displaying Balinese culture at the beginning of the SGM advertisement, the producer hopes to attract the attention of the target audience.

Indonesia Has a Muslim Majority (Table 3: screenshots from the six SGM ads)

Indonesia has the largest Muslim population in the world. To represent Indonesia's majority-Muslim community, the SGM children's formula milk advertisements repeatedly display corresponding visual elements, including Muslim sacred buildings (mosques); one scene shows several children, the boys wearing the cap (peci) and the girls the hijab. The same appears across several SGM formula milk advertisements: among the many characters or models presented in an advertisement, there is always an adult woman wearing a hijab.

Traditional Music (Table 4: screenshots of the "SGM Eksplor - wujudkan si kecil jadi #generasimaju" and "SGM Eksplor Presinutri - beri dukungan komplit untuk dukung 5 potensi prestasi anak generasi maju" ads)

Music is an element of cultural development; in Indonesia, each ethnic group has a distinct type of music that follows local cultural customs and can differentiate the cultural identity of each group. The SGM formula milk advertisement displays the angklung, an instrument that developed traditionally in West Java and Banten and forms an important part of the cultural identity of the people there. The angklung has also been designated intangible cultural heritage by UNESCO.

One SGM formula milk commercial features a close-up of a dark-skinned girl with her curly hair in two braids, in a classroom with other children. From her physical features, she can be read as ethnically Papuan. Papua, at the eastern end of Indonesia, is listed as the country's poorest province. Showing a child with typically Papuan features can be interpreted to mean that SGM's producers, through their products, embrace all levels of society, including children from eastern Indonesia, and emphasize that Papuan children are also part of the pride and identity of the Indonesian people.

Audio

"For more than 60 years SGM Eksplor has supported Indonesian Parents to realize the advanced generation, for the past... now... and later..." (quoted from the dialogue of the "SGM Eksplor - wujudkan si kecil jadi #generasimaju" ad)

The sentence above is audio text spoken by the narrator in one of the SGM formula milk advertisements, functioning as the closing words of the entire advertisement. The emphasis is on the phrase "Indonesian Parents": besides signalling that the main market for SGM products is Indonesian society, it can also be read as an affirmation of national identity. Indonesia's territory consists of thousands of islands spread along the equator, with amazingly beautiful nature.
SGM's producers try to represent this in visuals that show the natural beauty of Indonesia, such as wide rice fields, mountains, and lakes. Because Indonesia is a developing country, the advertisements show the national territory not only in terms of its natural beauty but also in terms of urban development, for instance through towering buildings.

Representation of Indonesian Community Characteristics (Habits/Lifestyle)

Characteristics can be understood as traits that arise naturally from the habits and lifestyles of the people who inhabit a nation. One of the national identities of the Indonesian people can be shown through these characteristics. The researcher interprets the characteristics of Indonesian society as two traits, described below and represented in scenes identified in the following sections.

Utilization of Transportation

In some scenes, the advertisements show the use of public transportation (train, bus) and many activities on foot; none shows the use of a private car.

Children's Activities (Table 9: screenshots from the six SGM ads)

Simplicity is also shown in scenes of children's daily activities, such as riding bikes, performing traditional dances, playing tag, and swimming in the lake with friends.

Food Dishes (Table 8: screenshots of the "SGM Eksplor 3 Plus - rahasia kepintaran si kecil" and "SGM - lengkapi nutrisinya, jadikan dunia sahabatnya bersama SGM Eksplor" ads)

A minimalist lifestyle, or simplicity, is also represented through the food menus displayed and the habit of preparing food supplies; this can be seen in two SGM formula milk advertisements. The food displayed is simple fare typical of the Indonesian people, such as tempeh, tofu, eggs, fish and vegetables (sayur bening).

Audio

"Every day we fight for the future of the little one...." "Whatever the circumstances, we give everything the best for a better life..." (quoted from the dialogue of the "SGM - lengkapi nutrisinya, jadikan dunia sahabatnya bersama SGM Eksplor" ad)

The two sentences above are audio texts from the dialogue of the parent characters in one of the SGM formula milk advertisements. The emphasis falls on the words "fight" and "whatever the circumstances". These words capture the essence of a society with a minimalist lifestyle, which has the motivation and focus to carry out its role as parents, prioritizing something more important (the child's life) above its own needs. They can also be interpreted to mean that the character of Indonesian society is not only a matter of outward symbolic equality or shared middle-to-lower economic conditions. More essential are the interrelation with, and commitment to, shared cultural values, tied to a collective consciousness formed through a long historical process and inherited from the wisdom of the nation's founders and leaders. One of the cultural values that shapes the character of Indonesian society is toughness and adaptability in the face of any situation and condition.
Conclusion

Based on the discussion outlined above concerning the representation of Indonesia's national identity in the six SGM formula milk advertisements, the analysis can be summarized in the following main points:
1. As a work of applied art, the SGM formula milk advertisements represent Indonesian national identity, which can be classified into three parts: 1) the culture, religion, and ethnic groups of Indonesia; 2) the Nusantara territory (the natural enchantment of Indonesia and its urban development); and 3) the characteristics of Indonesian communities (habits/lifestyle).
2. The characteristics of Indonesian society constitute one of Indonesia's national identities. From the six SGM formula milk television commercials, the character of Indonesian society can be interpreted as communal and minimalist.
3. Of the three classifications of the representation of Indonesian national identity, the most dominant element shown in the SGM formula milk television commercials is religion, represented by the Muslim women in hijab who appear in the scenes of all six advertisements.
After the concealed-weapons permitting process revealed serious flaws under Adam Putnam, attempts are afoot to move the process to state police, out of the agriculture commissioner's purview.

Some 30 local Florida governments are challenging a state law that forbids cities and counties from passing stricter gun regulations than the state allows.

An appeals court Friday backed Florida State University in much of a legal battle with a gun-rights group about weapons on campus, though the case goes back to circuit court.

There's an effective way to end the stalemate between the Flagler Sheriff and the school board over deputies in schools without breaking the bank or compromising security.

The Palm Coast demonstrators joined some 800 planned March For Our Lives protests across the globe today, calling for sensible gun control and a ban on assault-type weapons.

Floridians won't have an opportunity to decide whether the state should ban semi-automatic weapons or adopt other gun-related restrictions after the Constitution Revision Commission rejected attempts to debate the proposals.

A proposed constitutional amendment would impose a minimum age of 21 on all firearm purchases, a 3-day waiting period, and a comprehensive background check.

Americans possess an unalienable and inherent right of self-defense, a lawfully armed citizenry is a free citizenry, and no government has merited the total trust of its people.

Major political donors on both sides plan to use support for "common-sense" legislation as a litmus test for candidates during the 2018 midterm elections.

Weak security practices at many gun stores have made commercial burglaries an increasingly significant source of weapons for criminals in Florida and beyond.

Car burglaries are driving the epidemic as many gun owners leave their vehicles unlocked. Gun stores offer another easy target: firearms stolen from these businesses during burglaries have more than quadrupled over the last five years.

State lawmakers have proposed measures that would allow people with concealed-weapons licenses to openly carry firearms, but the proposals have not passed.

The plaintiffs in the case, including individual doctors, argued that the restrictions were a violation of their First Amendment rights. A federal court agreed.

Some 39 bills, resolutions and resolution-like memorials have been filed in the Legislature so far that include language that would make gun possession and carrying more permissive in Florida.

Major portions of a controversial Florida law restricting physicians and other health-care providers from asking patients about guns are unconstitutional, a federal appeals court ruled.

One of the proposals would decriminalize the penalty for people who briefly display a firearm in public; others would allow concealed-carry permit holders to carry guns in courthouses, jails and government meetings, among other places.

In a 5-2 decision, justices cleared Weeks on the gun-possession charge because state law treats antique firearms and their replicas differently from other guns. The ruling said lawmakers exempted firearms manufactured in or before 1918, and their replicas, from the prohibition on felons possessing guns.

Flagler County Tax Collector Suzanne Johnston took herself and most of her staff through a gun-safety class and shooting session at the range to prepare for her office's new service: processing and fast-tracking concealed-weapons permits, starting today (June 22).
Putnam runs the state Department of Agriculture and Consumer Services, which oversees weapons permits in Florida. The department hasn't released Mateen's application paperwork.

Justices Barbara Pariente and Peggy Quince questioned how the current state law allowing citizens to receive concealed-weapons licenses to carry firearms suppresses gun ownership.

The iGun's chip technology only works within centimeters and makes it impossible for anyone other than the person wearing the ring to fire it. Some gun advocates are resistant for various reasons.

Flagler County's gun shop owners say fear and a need for protection, rather than hunting, still drive much of their business, but they have differing views on gun regulations and the need for additional laws.

The Florida Sheriffs Association, which has opposed the open-carry measure, outlined proposed steps that would provide immunity to people who inadvertently or accidentally display firearms.
#include "facade.h"

using namespace std;

Facade::Facade(QWidget* q) {
    fileReader = new FileReader();
    sceneDrawer = new QtSceneDrawer(q);
}

FacadeOperationResult Facade::drawScene() {
    sceneDrawer->drawScene(picture);
    return FacadeOperationResult();
}

// Load a scene from a file and normalize it.
FacadeOperationResult Facade::loadScene(string path, NormalizationParameters _normalizationParameters) {
    FacadeOperationResult response("Cannot open the file!", false);
    picture.clear();
    picture = fileReader->readScene(path, _normalizationParameters);
    filedata = picture;
    if (!picture.isEmpty()) {
        response.setMessage("File loaded successfully!");
        response.setIsSuccess(true);
    }
    return response;
}

// Translate (move) the scene.
FacadeOperationResult Facade::offsetScene(double x, double y, double z) {
    FacadeOperationResult response("Empty file!", false);
    if (!filedata.isEmpty()) {
        filedata.transformFigures(TransformMatrixBuilder::createMoveMatrix(x, y, z));
        picture = filedata;
        response.setIsSuccess(true);
        response.setMessage("Scene moved!");
    }
    return response;
}

// Rotate the scene.
FacadeOperationResult Facade::rotateScene(double x, double y, double z) {
    FacadeOperationResult response("Empty file!", false);
    if (!filedata.isEmpty()) {
        filedata.transformFigures(TransformMatrixBuilder::createRotationMatrix(x, y, z));
        picture = filedata;
        response.setIsSuccess(true);
        response.setMessage("Scene rotated!");
    }
    return response;
}

// Scale the scene.
FacadeOperationResult Facade::scaleScene(double x, double y, double z) {
    FacadeOperationResult response("Empty file!", false);
    if (!filedata.isEmpty()) {
        filedata.transformFigures(TransformMatrixBuilder::createScaleMatrix(x, y, z));
        picture = filedata;
        response.setIsSuccess(true);
        response.setMessage("Scene scaled!");
    }
    return response;
}

// Re-normalize the scene vertices with the given parameters.
FacadeOperationResult Facade::normalizeScene(NormalizationParameters params) {
    FacadeOperationResult response("Empty file!", false);
    if (!filedata.isEmpty()) {
        picture.normalizationVertex(filedata.getFigures().at(0).getVertices(), params);
        response.setIsSuccess(true);
        response.setMessage("Scene normalized!");
    }
    return response;
}
/*
 * Copyright (c) 2017, 2020, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation. Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */
package jdk.incubator.vector;

import java.nio.ByteOrder;
import java.util.function.IntUnaryOperator;

/**
 * Interface for managing all vectors of the same combination
 * of <a href="Vector.html#ETYPE">element type</a> ({@code ETYPE})
 * and {@link VectorShape shape}.
 *
 * @apiNote
 * User code should not implement this interface. A future release of
 * this type may restrict implementations to be members of the same
 * package.
 *
 * @implNote
 * The string representation of an instance of this interface will
 * be of the form "Species[ETYPE, VLENGTH, SHAPE]", where {@code ETYPE}
 * is the primitive {@linkplain #elementType() lane type}, {@code VLENGTH}
 * is the {@linkplain #length() vector lane count} associated with the
 * species, and {@code SHAPE} is the {@linkplain #vectorShape() vector
 * shape} associated with the species.
 *
 * <p>Vector species objects can be stored in locals and parameters and as
 * {@code static final} constants, but storing them in other Java
 * fields or in array elements, while semantically valid, may incur
 * performance penalties.
 *
 * @param <E> the boxed version of {@code ETYPE},
 *            the element type of a vector
 */
public interface VectorSpecies<E> {

    /**
     * Returns the primitive element type of vectors of this species.
     *
     * @return the primitive element type ({@code ETYPE})
     * @see Class#arrayType()
     */
    Class<E> elementType();

    /**
     * Returns the vector type of this species.
     * A vector is of this species if and only if
     * it is of the corresponding vector type.
     *
     * @return the vector type of this species
     */
    Class<? extends Vector<E>> vectorType();

    /**
     * Returns the vector mask type for this species.
     *
     * @return the mask type
     */
    Class<? extends VectorMask<E>> maskType();

    /**
     * Returns the lane size, in bits, of vectors of this species.
     *
     * @return the element size, in bits
     */
    int elementSize();

    /**
     * Returns the shape of vectors produced by this species.
     *
     * @return the shape of any vectors of this species
     */
    VectorShape vectorShape();

    /**
     * Returns the number of lanes in a vector of this species.
     *
     * @apiNote This is also the number of lanes in a mask or
     * shuffle associated with a vector of this species.
     *
     * @return the number of vector lanes
     */
    int length();

    /**
     * Returns the total vector size, in bits, of any vector of this species.
     * This is the same value as {@code this.vectorShape().vectorBitSize()}.
     *
     * @apiNote This size may be distinct from the size in bits
     * of a mask or shuffle of this species.
     *
     * @return the total vector size, in bits
     */
    int vectorBitSize();

    /**
     * Returns the total vector size, in bytes, of any vector of this species.
     * This is the same value as {@code this.vectorShape().vectorBitSize() / Byte.SIZE}.
     *
     * @apiNote This size may be distinct from the size in bits
     * of a mask or shuffle of this species.
     *
     * @return the total vector size, in bytes
     */
    int vectorByteSize();

    /**
     * Loop control function which returns the largest multiple of
     * {@code VLENGTH} that is less than or equal to the given
     * {@code length} value.
     * Here, {@code VLENGTH} is the result of {@code this.length()},
     * and {@code length} is interpreted as a number of lanes.
     * The resulting value {@code R} satisfies this inequality:
     * <pre>{@code R <= length < R+VLENGTH}
     * </pre>
     * <p> Specifically, this method computes
     * {@code length - floorMod(length, VLENGTH)}, where
     * {@link Math#floorMod(int,int) floorMod} computes a remainder
     * value by rounding its quotient toward negative infinity.
     * As long as {@code VLENGTH} is a power of two, then the result
     * is also equal to {@code length & ~(VLENGTH - 1)}.
     *
     * @param length the input length
     * @return the largest multiple of the vector length not greater
     *         than the given length
     * @throws IllegalArgumentException if the {@code length} is
     *         negative and the result would overflow to a positive value
     * @see Math#floorMod(int, int)
     */
    int loopBound(int length);

    /**
     * Returns a mask of this species where only
     * the lanes at index N such that the adjusted index
     * {@code N+offset} is in the range {@code [0..limit-1]}
     * are set.
     *
     * <p>
     * This method returns the value of the expression
     * {@code maskAll(true).indexInRange(offset, limit)}
     *
     * @param offset the starting index
     * @param limit the upper-bound (exclusive) of index range
     * @return a mask with out-of-range lanes unset
     * @see VectorMask#indexInRange(int, int)
     */
    VectorMask<E> indexInRange(int offset, int limit);

    /**
     * Checks that this species has the given element type,
     * and returns this species unchanged.
     * The effect is similar to this pseudocode:
     * {@code elementType == elementType()
     *        ? this
     *        : throw new ClassCastException()}.
     *
     * @param elementType the required lane type
     * @param <F> the boxed element type of the required lane type
     * @return the same species
     * @throws ClassCastException if the species has the wrong element type
     * @see Vector#check(Class)
     * @see Vector#check(VectorSpecies)
     */
    <F> VectorSpecies<F> check(Class<F> elementType);

    /**
     * Given this species and a second one, reports the net
     * expansion or contraction of a (potentially) resizing
     * {@linkplain Vector#reinterpretShape(VectorSpecies,int) reinterpretation cast}
     * or
     * {@link Vector#convertShape(VectorOperators.Conversion,VectorSpecies,int) lane-wise conversion}
     * from this species to the second.
     *
     * The sign and magnitude of the return value depends on the size
     * difference between the proposed input and output
     * <em>shapes</em>, and (optionally, if {@code lanewise} is true)
     * also on the size difference between the proposed input and
     * output <em>lanes</em>.
     *
     * <ul>
     * <li> First, a logical result size is determined.
     *
     * If {@code lanewise} is false, this size is that of the input
     * {@code VSHAPE}. If {@code lanewise} is true, the logical
     * result size is the product of the input {@code VLENGTH}
     * times the size of the <em>output</em> {@code ETYPE}.
     *
     * <li> Next, the logical result size is compared against
     * the size of the proposed output shape, to see how it
     * will fit.
     *
     * <li> If the logical result fits precisely in the
     * output shape, the return value is zero, signifying
     * no net expansion or contraction.
     *
     * <li> If the logical result would overflow the output shape, the
     * return value is the ratio (greater than one) of the logical
     * result size to the (smaller) output size. This ratio can be
     * viewed as measuring the proportion of "dropped input bits"
     * which must be deleted from the input in order for the result to
     * fit in the output vector. It is also the <em>part limit</em>,
     * an upper exclusive limit on the {@code part} parameter to a
     * method that would transform the input species to the output
     * species.
     *
     * <li> If the logical result would drop into the output shape
     * with room to spare, the return value is a negative number whose
     * absolute value is the ratio (greater than one) between the output
     * size and the (smaller) logical result size. This ratio can be
     * viewed as measuring the proportion of "extra padding bits"
     * which must be added to the logical result to fill up the output
     * vector. It is also the <em>part limit</em>, an exclusive lower
     * limit on the {@code part} parameter to a method that would
     * transform the input species to the output species.
     *
     * </ul>
     *
     * @param outputSpecies the proposed output species
     * @param lanewise whether to take lane sizes into account
     * @return an indication of the size change, as a signed ratio or zero
     *
     * @see Vector#reinterpretShape(VectorSpecies,int)
     * @see Vector#convertShape(VectorOperators.Conversion,VectorSpecies,int)
     */
    int partLimit(VectorSpecies<?> outputSpecies, boolean lanewise);

    // Factories

    /**
     * Finds a species with the given element type and the
     * same shape as this species.
     * Returns the same value as
     * {@code VectorSpecies.of(newType, this.vectorShape())}.
     *
     * @param newType the new element type
     * @param <F> the boxed element type
     * @return a species for the new element type and the same shape
     * @throws IllegalArgumentException if no such species exists for the
     *         given combination of element type and shape
     *         or if the given type is not a valid {@code ETYPE}
     * @see #withShape(VectorShape)
     * @see VectorSpecies#of(Class, VectorShape)
     */
    <F> VectorSpecies<F> withLanes(Class<F> newType);

    /**
     * Finds a species with the given shape and the same
     * elementType as this species.
     * Returns the same value as
     * {@code VectorSpecies.of(this.elementType(), newShape)}.
     *
     * @param newShape the new shape
     * @return a species for the same element type and the new shape
     * @throws IllegalArgumentException if no such species exists for the
     *         given combination of element type and shape
     * @see #withLanes(Class)
     * @see VectorSpecies#of(Class, VectorShape)
     */
    VectorSpecies<E> withShape(VectorShape newShape);

    /**
     * Finds a species for an element type and shape.
     *
     * @param elementType the element type
     * @param shape the shape
     * @param <E> the boxed element type
     * @return a species for the given element type and shape
     * @throws IllegalArgumentException if no such species exists for the
     *         given combination of element type and shape
     *         or if the given type is not a valid {@code ETYPE}
     * @see #withLanes(Class)
     * @see #withShape(VectorShape)
     */
    static <E> VectorSpecies<E> of(Class<E> elementType, VectorShape shape) {
        LaneType laneType = LaneType.of(elementType);
        return AbstractSpecies.findSpecies(elementType, laneType, shape);
    }

    /**
     * Finds the largest vector species of the given element type.
     * <p>
     * The returned species is a species chosen by the platform that has a
     * shape with the largest possible bit-size for the given element type.
     * The underlying vector shape might not support other lane types
     * on some platforms, which may limit the applicability of
     * {@linkplain Vector#reinterpretShape(VectorSpecies,int) reinterpretation casts}.
     * Vector algorithms which require reinterpretation casts will
     * be more portable if they use the platform's
     * {@linkplain #ofPreferred(Class) preferred species}.
     *
     * @param etype the element type
     * @param <E> the boxed element type
     * @return a preferred species for an element type
     * @throws IllegalArgumentException if no such species exists for the
     *         element type
     *         or if the given type is not a valid {@code ETYPE}
     * @see VectorSpecies#ofPreferred(Class)
     */
    static <E> VectorSpecies<E> ofLargestShape(Class<E> etype) {
        return VectorSpecies.of(etype, VectorShape.largestShapeFor(etype));
    }

    /**
     * Finds the species preferred by the current platform
     * for a given vector element type.
     * This is the same value as
     * {@code VectorSpecies.of(etype, VectorShape.preferredShape())}.
     *
     * <p> This species is chosen by the platform so that it has the
     * largest possible shape that supports all lane element types.
     * This has the following implications:
     * <ul>
     * <li>The various preferred species for different element types
     * will have the same underlying shape.
     * <li>All vectors created from preferred species will have a
     * common bit-size and information capacity.
     * <li>{@linkplain Vector#reinterpretShape(VectorSpecies, int) Reinterpretation casts}
     * between vectors of preferred species will neither truncate
     * lanes nor fill them with default values.
     * <li>For any particular element type, some platform might possibly
     * provide a {@linkplain #ofLargestShape(Class) larger vector shape}
     * that (as a trade-off) does not support all possible element types.
     * </ul>
     *
     * @implNote On many platforms there is no behavioral difference
     * between {@link #ofLargestShape(Class) ofLargestShape} and
     * {@code ofPreferred}, because the preferred shape is usually
     * also the largest available shape for every lane type.
     * Therefore, most vector algorithms will perform well without
     * {@code ofLargestShape}.
     *
     * @param etype the element type
     * @param <E> the boxed element type
     * @return a preferred species for this element type
     * @throws IllegalArgumentException if no such species exists for the
     *         element type
     *         or if the given type is not a valid {@code ETYPE}
     * @see Vector#reinterpretShape(VectorSpecies,int)
     * @see VectorShape#preferredShape()
     * @see VectorSpecies#ofLargestShape(Class)
     */
    public static <E> VectorSpecies<E> ofPreferred(Class<E> etype) {
        return of(etype, VectorShape.preferredShape());
    }

    /**
     * Returns the bit-size of the given vector element type ({@code ETYPE}).
     * The element type must be a valid {@code ETYPE}, not a
     * wrapper type or other object type.
     *
     * The element type argument must be a mirror for a valid vector
     * {@code ETYPE}, such as {@code byte.class}, {@code int.class},
     * or {@code double.class}. The bit-size of such a type is the
     * {@code SIZE} constant for the corresponding wrapper class, such
     * as {@code Byte.SIZE}, or {@code Integer.SIZE}, or
     * {@code Double.SIZE}.
     *
     * @param elementType a vector element type (an {@code ETYPE})
     * @return the bit-size of {@code elementType}, such as 32 for {@code int.class}
     * @throws IllegalArgumentException
     *         if the given {@code elementType} argument is not
     *         a valid vector {@code ETYPE}
     */
    static int elementSize(Class<?> elementType) {
        return LaneType.of(elementType).elementSize;
    }

    /// Convenience factories:

    /**
     * Returns a vector of this species
     * where all lane elements are set to
     * the default primitive value, {@code (ETYPE)0}.
     *
     * Equivalent to {@code IntVector.zero(this)}
     * or an equivalent {@code zero} method,
     * on the vector type corresponding to
     * this species.
     *
     * @return a zero vector of the given species
     * @see IntVector#zero(VectorSpecies)
     * @see FloatVector#zero(VectorSpecies)
     */
    Vector<E> zero();

    /**
     * Returns a vector of this species
     * where lane elements are initialized
     * from the given array at the given offset.
     * The array must be of the correct {@code ETYPE}.
     *
     * Equivalent to
     * {@code IntVector.fromArray(this,a,offset)}
     * or an equivalent {@code fromArray} method,
     * on the vector type corresponding to
     * this species.
     *
     * @param a an array of the {@code ETYPE} for this species
     * @param offset the index of the first lane value to load
     * @return a vector of the given species filled from the array
     * @throws IndexOutOfBoundsException
     *         if {@code offset+N < 0} or {@code offset+N >= a.length}
     *         for any lane {@code N} in the vector
     * @see IntVector#fromArray(VectorSpecies,int[],int)
     * @see FloatVector#fromArray(VectorSpecies,float[],int)
     */
    Vector<E> fromArray(Object a, int offset);
    // Defined when ETYPE is known.

    /**
     * Loads a vector of this species from a byte array starting
     * at an offset.
     * Bytes are composed into primitive lane elements according
     * to the specified byte order.
     * The vector is arranged into lanes according to
     * <a href="Vector.html#lane-order">memory ordering</a>.
     * <p>
     * Equivalent to
     * {@code IntVector.fromByteArray(this,a,offset,bo)}
     * or an equivalent {@code fromByteArray} method,
     * on the vector type corresponding to
     * this species.
     *
     * @param a a byte array
     * @param offset the index of the first byte to load
     * @param bo the intended byte order
     * @return a vector of the given species filled from the byte array
     * @throws IndexOutOfBoundsException
     *         if {@code offset+N*ESIZE < 0}
     *         or {@code offset+(N+1)*ESIZE > a.length}
     *         for any lane {@code N} in the vector
     * @see IntVector#fromByteArray(VectorSpecies,byte[],int,ByteOrder)
     * @see FloatVector#fromByteArray(VectorSpecies,byte[],int,ByteOrder)
     */
    Vector<E> fromByteArray(byte[] a, int offset, ByteOrder bo);

    /**
     * Returns a mask of this species
     * where lane elements are initialized
     * from the given array at the given offset.
     *
     * Equivalent to
     * {@code VectorMask.fromArray(this,a,offset)}.
     *
     * @param bits the {@code boolean} array
     * @param offset the offset into the array
     * @return the mask loaded from the {@code boolean} array
     * @throws IndexOutOfBoundsException
     *         if {@code offset+N < 0} or {@code offset+N >= a.length}
     *         for any lane {@code N} in the vector mask
     * @see VectorMask#fromArray(VectorSpecies,boolean[],int)
     */
    VectorMask<E> loadMask(boolean[] bits, int offset);

    /**
     * Returns a mask of this species,
     * where each lane is set or unset according to given
     * single boolean, which is broadcast to all lanes.
     *
     * @param bit the given mask bit to be replicated
     * @return a mask where each lane is set or unset according to
     *         the given bit
     * @see Vector#maskAll(boolean)
     */
    VectorMask<E> maskAll(boolean bit);

    /**
     * Returns a vector of the given species
     * where all lane elements are set to
     * the primitive value {@code e}.
     *
     * <p> This method returns the value of this expression:
     * {@code EVector.broadcast(this, (ETYPE)e)}, where
     * {@code EVector} is the vector class specific to the
     * {@code ETYPE} of this species.
     * The {@code long} value must be accurately representable
     * by {@code ETYPE}, so that {@code e==(long)(ETYPE)e}.
     *
     * @param e the value to broadcast
     * @return a vector where all lane elements are set to
     *         the primitive value {@code e}
     * @throws IllegalArgumentException
     *         if the given {@code long} value cannot
     *         be represented by the vector species {@code ETYPE}
     * @see Vector#broadcast(long)
     * @see #checkValue(long)
     */
    Vector<E> broadcast(long e);

    /**
     * Checks that this species can represent the given element value,
     * and returns the value unchanged.
     *
     * The {@code long} value must be accurately representable
     * by the {@code ETYPE} of the vector species, so that
     * {@code e==(long)(ETYPE)e}.
     *
     * The effect is similar to this pseudocode:
     * {@code e == (long)(ETYPE)e
     *        ? e
     *        : throw new IllegalArgumentException()}.
     *
     * @param e the value to be checked
     * @return {@code e}
     * @throws IllegalArgumentException
     *         if the given {@code long} value cannot
     *         be represented by the vector species {@code ETYPE}
     * @see #broadcast(long)
     */
    long checkValue(long e);

    /**
     * Creates a shuffle for this species from
     * a series of source indexes.
     *
     * <p> For each shuffle lane, where {@code N} is the shuffle lane
     * index, the {@code N}th index value is validated
     * against the species {@code VLENGTH}, and (if invalid)
     * is partially wrapped to an exceptional index in the
     * range {@code [-VLENGTH..-1]}.
     *
     * @param sourceIndexes the source indexes which the shuffle will draw from
     * @return a shuffle where each lane's source index is set to the given
     *         {@code int} value, partially wrapped if exceptional
     * @throws IndexOutOfBoundsException if {@code sourceIndexes.length != VLENGTH}
     * @see VectorShuffle#fromValues(VectorSpecies,int...)
     */
    VectorShuffle<E> shuffleFromValues(int... sourceIndexes);

    /**
     * Creates a shuffle for this species from
     * an {@code int} array starting at an offset.
     *
     * <p> For each shuffle lane, where {@code N} is the shuffle lane
     * index, the array element at index {@code i + N} is validated
     * against the species {@code VLENGTH}, and (if invalid)
     * is partially wrapped to an exceptional index in the
     * range {@code [-VLENGTH..-1]}.
     *
     * @param sourceIndexes the source indexes which the shuffle will draw from
     * @param offset the offset into the array
     * @return a shuffle where each lane's source index is set to the given
     *         {@code int} value, partially wrapped if exceptional
     * @throws IndexOutOfBoundsException if {@code offset < 0}, or
     *         {@code offset > sourceIndexes.length - VLENGTH}
     * @see VectorShuffle#fromArray(VectorSpecies,int[],int)
     */
    VectorShuffle<E> shuffleFromArray(int[] sourceIndexes, int offset);

    /**
     * Creates a shuffle for this species from
     * the successive values of an operator applied to
     * the range {@code [0..VLENGTH-1]}.
     *
     * <p> For each shuffle lane, where {@code N} is the shuffle lane
     * index, the {@code N}th index value is validated
     * against the species {@code VLENGTH}, and (if invalid)
     * is partially wrapped to an exceptional index in the
     * range {@code [-VLENGTH..-1]}.
     *
     * <p> Care should be taken to ensure {@code VectorShuffle} values
     * produced from this method are consumed as constants to ensure
     * optimal generation of code. For example, shuffle values can be
     * held in {@code static final} fields or loop-invariant local variables.
     *
     * <p> This method behaves as if a shuffle is created from an array of
     * mapped indexes as follows:
     * <pre>{@code
     * int[] a = new int[VLENGTH];
     * for (int i = 0; i < a.length; i++) {
     *     a[i] = fn.applyAsInt(i);
     * }
     * return VectorShuffle.fromArray(this, a, 0);
     * }</pre>
     *
     * @param fn the lane index mapping function
     * @return a shuffle of mapped indexes
     * @see VectorShuffle#fromOp(VectorSpecies,IntUnaryOperator)
     */
    VectorShuffle<E> shuffleFromOp(IntUnaryOperator fn);

    /**
     * Creates a shuffle using source indexes set to sequential
     * values starting from {@code start} and stepping
     * by the given {@code step}.
     * <p>
     * This method returns the value of the expression
     * {@code VectorSpecies.shuffleFromOp(i -> R(start + i * step))},
     * where {@code R} is {@link VectorShuffle#wrapIndex(int) wrapIndex}
     * if {@code wrap} is true, and is the identity function otherwise.
     * <p>
     * If {@code wrap} is false each index is validated
     * against the species {@code VLENGTH}, and (if invalid)
     * is partially wrapped to an exceptional index in the
     * range {@code [-VLENGTH..-1]}.
     * Otherwise, if {@code wrap} is true, also reduce each index, as if
     * by {@link VectorShuffle#wrapIndex(int) wrapIndex},
     * to the valid range {@code [0..VLENGTH-1]}.
     *
     * @apiNote The {@code wrap} parameter should be set to {@code true}
     * if invalid source indexes should be wrapped. Otherwise,
     * setting it to {@code false} allows invalid source indexes to be
     * range-checked by later operations such as
     * {@link Vector#rearrange(VectorShuffle) unary rearrange}.
     *
     * @param start the starting value of the source index sequence, typically {@code 0}
     * @param step the difference between adjacent source indexes, typically {@code 1}
     * @param wrap whether to wrap resulting indexes modulo {@code VLENGTH}
     * @return a shuffle of sequential lane indexes
     * @see VectorShuffle#iota(VectorSpecies,int,int,boolean)
     */
    VectorShuffle<E> iotaShuffle(int start, int step, boolean wrap);

    /**
     * Returns a string of the form "Species[ETYPE, VLENGTH, SHAPE]",
     * where {@code ETYPE} is the primitive {@linkplain #elementType()
     * lane type}, {@code VLENGTH} is the {@linkplain #length()
     * vector lane count} associated with the species, and {@code SHAPE}
     * is the {@linkplain #vectorShape() vector shape} associated with
     * the species.
     *
     * @return a string of the form "Species[ETYPE, VLENGTH, SHAPE]"
     */
    @Override
    String toString();

    /**
     * Indicates whether this species is identical to some other object.
     * Two species are identical only if they have the same shape
     * and same element type.
     *
     * @return whether this species is identical to some other object
     */
    @Override
    boolean equals(Object obj);

    /**
     * Returns a hash code value for the species,
     * based on the vector shape and element type.
     *
     * @return a hash code value for this species
     */
    @Override
    int hashCode();

    // ==== JROSE NAME CHANGES ====

    // ADDED:
    // * genericElementType() -> E.class (interop)
    // * arrayType() -> ETYPE[].class (interop)
    // * withLanes(Class), withShape(VectorShape) strongly typed reinterpret casting
    // * static ofLargestShape(Class<E> etype) -> possibly non-preferred
    // * static preferredShape() -> common shape of all preferred species
    // * toString(), equals(Object), hashCode() (documented)
    // * elementSize(e) replaced bitSizeForVectorLength
    // * zero(), broadcast(long), from[Byte]Array(), loadMask() (convenience constructors)
    // * lanewise(op, [v], [m]), reduceLanesToLong(op, [m])
}
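As a quick orientation to the interface above, here is a minimal usage sketch (our example, not part of the JDK source; the array arguments are hypothetical) showing the typical species-driven loop pattern built on loopBound and length as specified in the Javadoc:

import jdk.incubator.vector.IntVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorAddDemo {
    // The platform's preferred species for int lanes; the shape is chosen at runtime.
    static final VectorSpecies<Integer> SPECIES = IntVector.SPECIES_PREFERRED;

    // Computes c[i] = a[i] + b[i], vectorized over whole lanes with a scalar tail.
    static void add(int[] a, int[] b, int[] c) {
        int i = 0;
        int upper = SPECIES.loopBound(a.length); // largest multiple of VLENGTH <= a.length
        for (; i < upper; i += SPECIES.length()) {
            IntVector va = IntVector.fromArray(SPECIES, a, i);
            IntVector vb = IntVector.fromArray(SPECIES, b, i);
            va.add(vb).intoArray(c, i);
        }
        for (; i < a.length; i++) { // handle the remaining lanes
            c[i] = a[i] + b[i];
        }
    }
}

Compiling and running this requires the incubator module (e.g., passing --add-modules jdk.incubator.vector to javac and java).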
Computerized pathway elucidation for hydroxyl radical-induced chain reaction mechanisms in aqueous-phase advanced oxidation processes. The radical reaction mechanisms involved in advanced oxidation processes are complex. An increasing number of trace contaminants and ever more stringent drinking water standards call for a rule-based model that provides insight into the mechanisms of these processes. A model was developed to predict the pathways of contaminant degradation and byproduct formation during advanced oxidation. The model builds chemical molecules as graph objects, which enables mathematical abstraction of chemicals while preserving chemistry information. The model's algorithm enumerates all possible reaction pathways according to elementary reactions (encoded as reaction rules) established from experimental observation. The method can predict minor pathways that could lead to toxic byproducts, so that measures can be taken to ensure drinking water treatment safety. It can be of great assistance to water treatment engineers and chemists who seek to understand the mechanisms of treatment processes.
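The abstract does not reproduce the implementation; as a purely illustrative sketch of the core idea (all class names and rules below are hypothetical stand-ins, with a string label standing in for a full molecular graph), a rule-based enumerator can be written as a breadth-first search over species:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

public class PathwayEnumerator {

    // An elementary reaction: maps one species to zero or more products.
    interface ReactionRule extends Function<String, List<String>> {}

    // Breadth-first enumeration of every species reachable from the
    // contaminant by repeatedly applying the rule set, up to maxDepth steps.
    static Set<String> enumerate(String contaminant, List<ReactionRule> rules, int maxDepth) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> frontier = new ArrayDeque<>(List.of(contaminant));
        seen.add(contaminant);
        for (int depth = 0; depth < maxDepth && !frontier.isEmpty(); depth++) {
            Deque<String> next = new ArrayDeque<>();
            for (String species : frontier) {
                for (ReactionRule rule : rules) {
                    for (String product : rule.apply(species)) {
                        if (seen.add(product)) {
                            next.add(product); // a newly discovered byproduct
                        }
                    }
                }
            }
            frontier = next;
        }
        return seen;
    }

    public static void main(String[] args) {
        // Toy rule: hydroxyl-radical hydrogen abstraction turns a stable
        // species into a carbon-centered radical (marked with '*').
        ReactionRule hAbstraction = s -> s.endsWith("*") ? List.of() : List.of(s + "*");
        // Toy rule: a radical adds molecular oxygen to form a peroxyl species.
        ReactionRule o2Addition = s -> s.endsWith("*") ? List.of(s + "OO") : List.of();
        System.out.println(enumerate("CH4", List.of(hAbstraction, o2Addition), 3));
    }
}

A real implementation would replace the string labels with molecular graph objects and attach rate information to each rule, but the enumeration skeleton is the same.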
/*****************************************************************************
 * Copyright (c) 2018 Jet Propulsion Laboratory,
 * California Institute of Technology.  All rights reserved
 *****************************************************************************/
package gov.nasa.jpl.nexus.ningester.datatiler.properties;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

import java.util.ArrayList;
import java.util.List;

@ConfigurationProperties
@Component("sliceFileByTilesDesiredProperties")
public class SliceFileByTilesDesired {

    private Integer tilesDesired;
    private List<String> dimensions = new ArrayList<>();
    private String timeDimension;

    public Integer getTilesDesired() {
        return tilesDesired;
    }

    public void setTilesDesired(Integer tilesDesired) {
        this.tilesDesired = tilesDesired;
    }

    public List<String> getDimensions() {
        return dimensions;
    }

    public void setDimensions(List<String> dimensions) {
        this.dimensions = dimensions;
    }

    public String getTimeDimension() {
        return timeDimension;
    }

    public void setTimeDimension(String timeDimension) {
        this.timeDimension = timeDimension;
    }
}
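Because the class above is annotated with @ConfigurationProperties without a prefix, Spring Boot binds matching top-level keys from the application's external configuration directly to its fields through relaxed binding. A hypothetical application.properties fragment illustrating the binding (the key names follow the field names; the values are invented for illustration):

tilesDesired=5184
dimensions=lat,lon
timeDimension=time

The comma-separated dimensions value binds to the List<String> field, and the populated sliceFileByTilesDesiredProperties bean can then be injected wherever the tiler needs its slicing configuration.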
State ownership and stock liquidity: Evidence from privatization

We provide unique firm-level evidence of the relation between state ownership and stock liquidity. Using a broad sample of newly privatized firms (NPFs) from 53 countries over the period 1994-2014, our study identifies a non-monotonic association between state ownership and stock liquidity. The inverse U-shaped relation is consistent with trade-offs between the costs and benefits of state ownership and suggests an optimal level of government shareholdings that maximizes the stock liquidity of NPFs. We further identify that the inflection point of the cost/benefit trade-off is contingent upon characteristics of the nation's institutional environment.

Introduction

Government bailout programs during the global financial crisis (GFC) led to a significant increase in state ownership around the world, giving rise to what is now called State Capitalism. This phenomenon was perceived as an overturn of decades of privatizations (i.e., divestitures of government assets) that sought to disengage the economy from state dominance. Governments' equity ownership driven by "reverse privatizations" accounted for nearly one-fifth of stock market capitalization worldwide (Megginson, 2017), renewing the debate about the role of governments as shareholders. In this vein, recent research examines how and to what extent state ownership affects the valuation of corporate assets and equity (e.g., Holland, 2019), the cost of equity, the cost of debt (Borisova and Megginson, 2011), cash holdings, corporate risk-taking, governance quality, and corporate investment efficiency (Jaslowitzer et al.).

Ownership structure, corporate governance, and stock liquidity

Prior studies contend that block ownership affects liquidity through one of two main channels: trading activity and the information environment (Bhide, 1993; Bolton and Von Thadden, 1998; Maug, 1998). The first channel identifies that the existence of large blockholders, which generally trade infrequently, leaves fewer shares available in the market. This lessens liquidity by discouraging other investors from trading in a stock. The second channel recognizes that blockholders are also likely to trade based on private information. This translates into higher informational costs for uninformed investors, and hence lower liquidity. Additionally, concentrated ownership reduces other shareholders' benefit from monitoring the firm, which limits the availability of public information (Holmström and Tirole, 1993) and increases information acquisition costs. This further dampens incentives to trade. In the same vein, Attig et al. suggest that controlling blockholders have motivation to increase firm opacity to avoid the detection of expropriation of minority shareholders. Anticipating such incentives, minority shareholders are reluctant to participate and trade, which contributes to reduced liquidity. While this framework can apply to all types of blockholders, we next focus on a specific blockholder (the state) and develop our hypotheses regarding the link between state ownership and stock liquidity.

State ownership and stock liquidity

Extant literature justifies privatization by emphasizing the inefficiencies (i.e., costs) of continued government ownership.
According to the political view of state ownership, these inefficiencies are due to rent extraction by politicians from the firms under their control (e.g., Shleifer and Vishny, 1994), rents which they use to build political support among voters rather than to maximize profits. In line with this view, prior research suggests that the weak corporate governance and pronounced information asymmetry problems associated with government ownership (Shleifer and Vishny, 1997; Megginson and Netter, 2001; Megginson, 2017) are related to reduced firm liquidity. Weak corporate governance increases managers' ability and opportunities to distort financial information. This decreases financial transparency and in turn decreases investors' incentives to invest in the stock. Indeed, Chung et al. find that corporate governance is positively associated with stock liquidity. Given that NPFs with residual state ownership are more likely to be characterized by weaker governance, we expect investors to avoid these stocks. Overall, the political view of government ownership suggests that residual government stakes in NPFs discourage other shareholders from trading and thus should lead to a negative association between state ownership and stock liquidity.

In contrast, the soft-budget-constraint view holds that government ownership has a number of benefits, including an implicit guarantee of rescue in times of financial distress (Borisova and Megginson, 2011), prolonged and easier access to finance, and the availability of subsidies from the state budget or tax concessions (e.g., remission, reduction, or deferral of taxes) as well as other means of indirect support. Faccio et al., for instance, show that politically connected firms are more likely than non-connected firms to be bailed out by the state. Similarly, Boubakri et al. find that politically connected firms enjoy a lower cost of equity, especially in countries where the likelihood of a government bailout is higher. Chaney et al. additionally observe that investors do not penalize politically connected firms for lower earnings quality by requiring higher returns. This suggests that investors value the benefits that such firms receive by being linked to the government. As a result, investors may be more willing to buy the shares of NPFs, thus increasing their liquidity. Building on these studies, and to the extent that the state is more inclined to support firms with connections to the government (such as NPFs with residual state ownership), the presence of the state as a blockholder enhances the liquidity of firms with residual government shareholdings.

We also recognize that there are negative implications associated with the soft budget constraint, especially in countries with left-wing governments, which are more inclined to use SOE resources for political expediency (see Footnote 6). For example, noting the costs of the soft budget constraint, Megginson and Netter contend that a driver of post-privatization efficiency improvements is the motivation brought on by the elimination of the "safety net" of the soft budget constraint (previously provided to state-owned firms but no longer available following privatization). While certainly contributing to greater motivation and focus, working without a safety net also, by definition, increases risk. The findings of Boubakri et al. indicate that the risk-reduction benefits of having the safety net (i.e., the soft budget constraint) outweigh the efficiency improvements that result from removing it.
Specifically, Boubakri et al. (2018, p. 52) identify that investors assign greater importance to the benefits of the soft budget constraint and conclude that "easier and sustained access to financial resources provides government owned firms with a significant comparative advantage". Therefore, following Boubakri et al., we contend that the comparative advantage from the soft budget constraint should contribute to greater liquidity for the stock of state-owned firms. Accordingly, the two competing views (the soft-budget-constraint view and the political view) suggest that the relation between stock liquidity and state ownership is ultimately an open question.

Footnote 6: Boycko et al., Shleifer, Shleifer and Vishny, Beck et al., Biais and Perotti, and Megginson et al. hold that left-wing governments are more prone to exert control over economic activities and to impose redistributive policies, while right-wing governments are less intrusive in economic issues and more supportive of market-oriented policies. From the perspective of state ownership, left-wing governments are less committed to reducing government spending and less likely to implement privatization programs (Bortolotti and Faccio, 2009; Roland, 2008).

It is important to note that our hypotheses are based on the countervailing influences of two factors that are both unique to state ownership. Political benefits of ownership are only valuable to owners who are politicians. As an example of the use of an SOE for political purposes, state-owned firms may choose to overstaff in order to create jobs and thus curry political favor with voters. Winning voters is very important to politicians but is not important to other types of blockholders. Therefore, the political benefits (as per the "political view") apply to state owners (politicians) but would not apply to other types of blockholders. Similarly, the soft budget constraint refers to funding advantages provided by the state to firms with state ownership. That is, the soft budget constraint results in state-owned firms receiving financing advantages that would not be available to firms owned by other types of blockholders. Therefore, the financing benefits as per the "soft-budget-constraint view" apply to state owners but would not apply to other blockholders (see Footnote 7). In the following empirical analysis, we consider how this specific trade-off (political view vs. soft-budget-constraint view) affects the relation between state ownership and stock liquidity (see Footnote 8). We formalize our primary predictions as follows:

H1. The level of state ownership affects stock liquidity.
H1a. Under the political view, residual state ownership is negatively related to stock liquidity.
H1b. Under the soft-budget-constraint view, residual state ownership is positively related to stock liquidity.

Sample

To empirically examine the relation between state ownership and stock liquidity, we construct a sample of 473 NPFs from 53 countries over the period 1994-2014. Our initial data are from Boubakri et al., which we update using the Privatization Barometer, Thomson Reuters, the SDC Platinum Global New Issues, and SDC Platinum Mergers & Acquisitions databases. By using these data to track the change in government shareholdings after the first privatization, we investigate how the effect of state ownership on stock liquidity varies over time. We obtain stock liquidity and financial statement information from Compustat Global, and ownership statistics from Boubakri et al., firms' annual reports, Bureau van Dijk's Osiris database, and Bloomberg.
Because the behavior of financial firms (SIC codes between 6000 and 6999) is heavily influenced by a country's regulatory environment, we exclude these firms from our analysis. After also removing observations with missing data, our final sample contains 3759 firm-year observations.

Table 1 summarizes the sample distribution by country, year, and industry. Panel A shows that our firms are widely distributed across both developing and developed countries, with 17.05% of observations (17.97% of firms) from China, 5.32% of observations (6.13% of firms) from India, 4.89% of observations (4.23% of firms) from France, and 4.89% of observations (4.02% of firms) from Italy (see Footnote 9). Panel B shows that more than 90% of our sample firms were privatized in the 1990s and 2000s, with privatizations peaking between 2010 and 2012. Panel C shows that these firms are also widely distributed across Fama and French industries, with 17.93% in manufacturing, 16.84% in utilities, and 15.22% in telecom (see Footnote 10).

Liquidity

Following Lesmond et al. (the LOT approach), we first measure firm-level stock liquidity using the proportion of trading days with zero returns during the year (ZEROS). The denominator of ZEROS is the actual number of a firm's total trading days in a given year on its respective exchange (see Footnote 11). Securities with lower liquidity are likely to have more zero-volume days and thus more zero-return days. Bekaert et al. show that zero-return days is a good measure for predicting future returns in emerging markets compared with alternative measures such as turnover. Moreover, they argue that transaction data (such as bid-ask spreads) are not widely available in emerging markets, while zero-return days require only a time series of daily equity returns. Lesmond presents evidence that the LOT statistic (i.e., ZEROS) captures cross-country liquidity effects better than other metrics.

Footnote 7: Given these unique views of state ownership, substantial literature exclusively focuses on the impact of state ownership (Borisova and Megginson, 2011; Borisova et al., 2015; Holland, 2019; among others).
Footnote 8: It would be interesting to compare the liquidity implications of changes in state blockholdings to the liquidity implications of changes in other types of blockholdings (e.g., changes in ownership blocks by founding families, private equity, etc.). However, our specific hypotheses, by focusing on factors unique to state blockholders, would not facilitate such a comparison. Nevertheless, we recognize the importance of considering how changes in ownership by different types of blockholders may have different effects on stock liquidity. While beyond the scope of this paper, we identify such comparisons as interesting avenues for future research.
Footnote 9: Our main findings are not sensitive to sequentially excluding each country from our analysis.
Footnote 10: All of our inferences continue to hold when we sequentially exclude each industry from our analysis.
Footnote 11: To account for zero-return days due to holidays or market closures, we calculate ZEROS excluding days when there are more than 5 or 10 consecutive days of zero returns. In unreported tests, we confirm that our main findings are statistically unchanged. We thank an anonymous reviewer for raising this point.

As a robustness check, we measure stock liquidity using Fong et al.'s variable FHT, a percent-cost proxy that simplifies the LOT measure. Moreover, we adopt an alternative liquidity measure, AMIHUD.
This metric is the average across stocks of the daily ratio of absolute stock return to dollar volume (see Footnote 12). Because ZEROS, FHT, and AMIHUD reflect stock illiquidity, higher values of these metrics indicate lower stock liquidity. We summarize the definitions of these and all other variables in the Appendix.

Footnote 12: In robustness tests, we also proxy for stock liquidity using an alternative measure from Roll. We emphasize that our metrics focus on firm-level liquidity. Alternatively, Boutchkova and Megginson and Bortolotti et al. use the market turnover ratio as a country-level measure of liquidity and provide evidence that privatization affects aggregate liquidity. Specifically, Boutchkova and Megginson find a positive relation between the turnover ratio of a market and the number of privatizations in the country. Bortolotti et al., also using a turnover-based measure, further document an increase in aggregate liquidity for privatized IPOs in a sample of 19 OECD countries between 1985 and 2002. However, the previously used measures of aggregate liquidity (particularly the country-level turnover variables), while offering broad insights regarding country-level liquidity, are less well suited to assessing firm-level liquidity. First, as noted by Jun et al., there is an important distinction between the liquidity of an individual stock and the liquidity of the total equity market (see Footnote 13). Bortolotti et al. and Boutchkova and Megginson do an excellent job of addressing the latter, but do not address the former. Also, as in many empirical endeavors, there are trade-offs regarding the choice of statistical measures. Bortolotti et al., citing Pástor and Stambaugh, concede that the turnover ratio may not always accurately reflect market liquidity. Historically, there have been market environments exemplified by high levels of turnover but low degrees of market liquidity (such as October 1987). Lee and Swaminathan additionally identify a relation between trading volume and past price momentum and thus warn that turnover measures may provide less reliable assessments of market liquidity. Specifically, Lee and Swaminathan (2000, p. 2061) conclude, "This evidence further supports the notion that past turnover is a measure of fluctuating investor sentiment and not a liquidity proxy". Accordingly, in our study, we attempt to overcome these potential weaknesses of the previously used measures of aggregate liquidity (i.e., the turnover ratios) by applying more precise measures of firm-level liquidity.

State ownership and control variables

We capture state ownership using the percentage of shares held by a government (STATE). Our regressions also include several firm- and country-level control variables to ensure that the relation between state ownership and stock liquidity is not driven by confounding factors. At the firm level, we follow prior literature (e.g., Stoll, 2000) and control for firm size as measured by the log of a firm's market value of equity (LOG MV), book-to-market (BM), return variability (STDRET), transparency as reflected by earnings smoothness (EM), analyst coverage as indicated by the number of analysts forecasting current-year earnings (ANALYST), and an indicator for whether the firm had a loss (LOSS). We also include indicator variables for whether the stock trades in the U.S., either on an exchange (ADR_EX) or on the OTC or PORTAL markets (ADR_NEX). Trading in the U.S. is likely to lead to higher transparency, and it may also draw liquidity from local markets to the extent that shares are less costly to trade in the U.S.
We further control for whether the firm reports under IFRS or U.S. GAAP (INTGAAP). Leuz and Verrecchia show that the securities of firms that convert to IAS or U.S. GAAP are associated with higher liquidity; we therefore expect a positive relation between INTGAAP and stock liquidity. Moreover, we control for stock trading activity (STOCK TURNOVER), stock price (LOG(PRICE)), and stock trading days (LOG(TRADING DAYS)) (see Footnote 14). At the country level, we control for institutions that are likely to influence the extent to which firm-level transparency affects liquidity (e.g., Lesmond, 2005). Specifically, we include the number of listed firms in the country (LISTED) to control for the level of stock market development, the extent of press freedom (MEDIA) to indicate the degree of media penetration, and log GDP per capita (LGDPC) to capture aggregate income (see Footnote 15).

Descriptive statistics

Panel A of Table 2 reports summary statistics. We find that ZEROS has a mean (median) of 0.11 (0.07). Residual state ownership (STATE) has a mean (median) of 0.27 (0.18), in line with a sharp decline in state ownership after privatization. Panel B of Table 2 presents Pearson correlation coefficients among key variables. As can be seen, state ownership is negatively correlated with all measures of stock illiquidity, indicating that higher state ownership is associated with higher stock liquidity.

Preliminary analyses

In Table 3, we perform univariate analysis of the relation between state ownership and stock liquidity. In Panel A, we first split the sample of privatized firms into two groups: partially privatized firms (Column 2) and fully privatized firms (Column 3). We find that partially privatized firms have significantly higher stock liquidity than fully privatized firms. These results, which suggest that some residual state ownership in NPFs enhances liquidity, are consistent with the soft-budget-constraint hypothesis. In Panel B, we examine the relation between partially privatized firms and fully privatized firms during the GFC. We find that partially privatized firms show higher stock liquidity than fully privatized firms, both before and during the financial crisis. This indicates that residual government ownership, by endowing partially privatized firms with the security of state support, contributes to greater stock liquidity. Taken together, the results of the univariate analysis provide preliminary evidence that the soft budget constraint associated with state ownership may contribute to higher stock liquidity. In the following section, we further explore these relations with our multivariate analysis.

Footnote 13: Further documenting the important distinction between aggregate liquidity and firm-level stock liquidity, Jun et al. emphasize that the liquidity of a stock is influenced by its unique characteristics while a country's equity market liquidity is mainly determined by macroeconomic factors that are systemic to the economy.
Footnote 14: We thank an anonymous reviewer for suggesting these controls.
Footnote 15: In countries with better stock market development and higher income, we expect stock liquidity to be higher. Similarly, we expect that when media penetration is poor, corporate governance may be less effective, which may also reduce stock liquidity.

Table 2 reports summary statistics and the correlation matrix for key variables.
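To make the two main proxies concrete, the following is a small computational sketch of the definitions given above (our illustration of the formulas, not the authors' code; the input arrays are assumed to cover the firm's actual trading days for the year):

import java.util.stream.IntStream;

public class IlliquidityProxies {

    // ZEROS: proportion of trading days on which the daily return is zero.
    static double zeros(double[] dailyReturns) {
        long zeroDays = IntStream.range(0, dailyReturns.length)
                .filter(t -> dailyReturns[t] == 0.0)
                .count();
        return (double) zeroDays / dailyReturns.length;
    }

    // AMIHUD: average of |daily return| / daily dollar volume over trading days.
    static double amihud(double[] dailyReturns, double[] dollarVolume) {
        double sum = 0.0;
        int n = 0;
        for (int t = 0; t < dailyReturns.length; t++) {
            if (dollarVolume[t] > 0) { // skip days with no trading volume
                sum += Math.abs(dailyReturns[t]) / dollarVolume[t];
                n++;
            }
        }
        return n > 0 ? sum / n : Double.NaN;
    }
}

Both quantities measure illiquidity, so higher values correspond to lower stock liquidity.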
Multivariate analysis

In Table 4, we first examine the impact of state ownership on stock liquidity by using a continuous measure of state ownership (STATE) as our independent variable of primary interest. We use ZEROS as the dependent variable and estimate the following model (subscripts omitted for simplicity):

ZEROS = α + β1 STATE + β2 LOG MV + β3 BM + β4 STDRET + β5 EM + β6 ANALYST + β7 LOSS + β8 ADR_EX + β9 ADR_NEX + β10 INTGAAP + β11 STOCK TURNOVER + β12 LOG(PRICE) + β13 LOG(TRADING DAYS) + β14 LISTED + β15 MEDIA + ε

To control for within-firm correlation, we present significance levels based on robust standard errors adjusted for clustering at the firm level. In the first model, we find that STATE is negatively associated with ZEROS. This relation is statistically significant at the 5% level. It is also economically significant: the coefficient on STATE suggests that, all other variables held constant, increasing state ownership by one standard deviation will result in a 6.1% (= 0.28 × (−0.024)/0.11) decrease in zero-return days (an increase in stock liquidity). Thus, in line with the univariate results, these findings support the soft-budget-constraint view of state ownership.

The second model explores the non-monotonic relation between state ownership and stock liquidity using a quadratic specification. We continue to find that STATE loads with a negative coefficient (statistically significant at the 1% level). Additionally, we find that the quadratic term STATESQR has a positive coefficient (statistically significant at the 1% level). These results confirm the curvilinear relation between state ownership and stock liquidity. This finding is similar to that of Borisova and Megginson, who find that the cost of debt is non-monotonically related to residual state ownership. Further, the quadratic-model results (illustrated in Fig. 1) show that stock liquidity is highest at an inflection point of 44% government ownership, a level consistent with the government retaining some influence over the firm (the algebra behind the inflection point appears in the note following this subsection). This suggests that reducing state ownership to lower levels may decrease government influence to the point that the benefits of the soft budget constraint are diminished (which reduces the NPF's liquidity). However, when government ownership exceeds 44%, liquidity is also adversely affected. This is consistent with the political view of state ownership, which holds that investors exhibit greater fear of the "grabbing hand" of political interference as government ownership increases. Overall, this non-monotonic relation appears to reflect a trade-off between the costs (political view) and the benefits (soft-budget-constraint view) of state ownership.

In a further column of Table 4, we replace the continuous state ownership metric (STATE) with a dummy variable PARTIAL (which indicates whether a government retains shares in a firm after privatization, i.e., STATE > 0) as an alternative independent variable. We estimate the following specification (subscripts omitted for simplicity):

ZEROS = α + β1 PARTIAL + β2 LOG MV + β3 BM + β4 STDRET + β5 EM + β6 ANALYST + β7 LOSS + β8 ADR_EX + β9 ADR_NEX + β10 INTGAAP + β11 STOCK TURNOVER + β12 LOG(PRICE) + β13 LOG(TRADING DAYS) + β14 LISTED + β15 MEDIA + ε

We find that the coefficient on PARTIAL is negative and statistically significant at the 5% level, suggesting that partially privatized firms are associated with higher stock liquidity than fully privatized firms. This finding is also economically significant in that a firm that is fully privatized observes on average 12.7% (= 0.014/0.11) more zero-return days (all other variables constant) and therefore exhibits significantly lower stock liquidity than a firm that is partially privatized.
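As a quick check of the 44% inflection point (our own derivation from the quadratic specification above, not additional material from the paper), the turning point follows from the first-order condition:

\[
\frac{\partial\, \mathrm{ZEROS}}{\partial\, \mathrm{STATE}} = \beta_1 + 2\beta_2\, \mathrm{STATE} = 0
\quad\Longrightarrow\quad
\mathrm{STATE}^{*} = -\frac{\beta_1}{2\beta_2}.
\]

With the estimated β1 < 0 and β2 > 0, ZEROS is minimized (liquidity is maximized) at the interior value STATE* ≈ 0.44, which is where the reported inflection point comes from.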
Table 4 reports regression results relating partial privatization to stock liquidity. The sample comprises 3759 firm-year observations representing 473 newly privatized firms from 53 countries over the period 1994-2014. The dependent variable is ZEROS, which is the percentage of days during the fiscal year that the stock price does not change and is calculated as ZEROS = ZeroReturnDays/Total Trading Days. We winsorize all financial variables at the 1% level in both tails of the distribution. The Appendix provides variable definitions and sources. t-statistics based on robust standard errors clustered at the firm level are in parentheses below each coefficient. ***, **, and * indicate significance at the 1%, 5%, and 10% level, respectively.

Endogeneity

In the context of privatization, a major econometric concern is selection bias. As Megginson and Netter (2001, p. 346) point out, "sample selection bias can arise from several sources, including the desire of governments to make privatization look good by privatizing the healthiest firms first".16 Also, governments may retain larger stakes in firms with higher liquidity to extract greater private/political benefits. In addition, the relation between state ownership and stock liquidity could be driven by unobserved determinants of liquidity that also explain residual state ownership. We address these issues using several approaches in Table 5. We first estimate instrumental variables regressions. Following Bortolotti et al., we use lagged values of the average level of state ownership by country as an instrument for state ownership. Specifically, our instrument (STATE COUNTRY) is the country's average level of state ownership, lagged by 3 years.17 In the first-stage model reported in Panel A, we regress STATE on STATE COUNTRY together with the full set of control variables. We find that STATE COUNTRY is positively and significantly associated with STATE. To check the validity of our instrument, we first conduct an F-test of the excluded exogenous variable. The results reject the null hypothesis that the instrument does not explain state ownership. We next implement a Kleibergen-Paap rk LM test and reject the null hypothesis that the model is underidentified (at the 1% level). Panel A also reports the results of the second-stage regression. We again find that STATE is significantly negatively associated with ZEROS. In a further specification, we treat both state ownership and state ownership squared as endogenous variables. Again, in the first-stage regression, we regress STATE on STATE COUNTRY together with the full set of control variables. We then use the predicted state ownership and the squared value of predicted state ownership in the second-stage regression of stock liquidity on state ownership. We find that state ownership is associated with fewer zero-return days, while state ownership squared is associated with more zero-return days. This confirms the existence of a nonlinear relation between liquidity and state ownership.
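The following is a minimal sketch of the two-stage instrumental variables procedure described above, with synthetic data standing in for the firm-year panel and only one control retained for brevity (the paper uses the full control set). Note that running the two stages manually, as here, does not apply the second-stage standard-error correction that dedicated IV routines provide.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in for the firm-year panel (illustration only).
df = pd.DataFrame({
    "firm_id": rng.integers(0, 100, n),
    "STATE_COUNTRY": rng.uniform(0, 0.6, n),  # country mean, lagged 3 years
    "LOG_MV": rng.normal(6, 1, n),
})
df["STATE"] = 0.7 * df["STATE_COUNTRY"] + rng.normal(0, 0.1, n)
df["ZEROS"] = 0.15 - 0.02 * df["STATE"] + rng.normal(0, 0.03, n)

# Stage 1: instrument STATE with lagged country-average state ownership.
X1 = sm.add_constant(df[["STATE_COUNTRY", "LOG_MV"]])
df["STATE_HAT"] = sm.OLS(df["STATE"], X1).fit().fittedvalues

# Stage 2: ZEROS on predicted STATE, clustering at the firm level.
X2 = sm.add_constant(df[["STATE_HAT", "LOG_MV"]])
fit = sm.OLS(df["ZEROS"], X2).fit(cov_type="cluster",
                                  cov_kwds={"groups": df["firm_id"]})
print(fit.params["STATE_HAT"])  # expected negative
```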
Next, we perform a Heckman two-stage analysis to address sample selection bias. In the first stage, we use a Probit model to predict whether governments retain control over privatized firms. We regress Control (a dummy variable indicating whether governments retain more than 50% of privatized firms) on STATE COUNTRY, the full set of control variables, and country, industry, and year fixed effects. This step allows us to estimate the inverse Mills ratio (LAMBDA). In the second stage, we include LAMBDA as an additional independent variable in the liquidity regression. The results show that the coefficient on STATE is significantly negative (at the 1% level), indicating that stock liquidity increases as residual state ownership increases. In the quadratic specification, we further find that the coefficient on STATESQR is significantly positive. This reinforces our earlier evidence of a nonlinear relation between state ownership and stock liquidity. Finally, we employ propensity score matching, which allows us to randomize the sample selection by using observable firm characteristics to match privatized firms under government control with those that are not.18 In the first stage, we use the same Probit model as in the Heckman first-stage analysis. We then match state-controlled firms to NPFs (not controlled by the state) with the closest propensity score. In the second stage, we estimate the regressions using the matched sample. Consistent with our main analysis, the instrumental variables analysis, and the Heckman analysis, we continue to find that state ownership is nonlinearly related to stock liquidity.

Table 5 reports regression results addressing endogeneity of state ownership using instrumental variables, Heckman two-stage selection, and propensity score matching. In the first-stage regressions, we regress state ownership (STATE) on country-level state ownership (STATE COUNTRY), which is lagged 3 years, together with all control variables and country, year, and industry fixed effects. The sample comprises 473 newly privatized firms from 53 countries over the period 1994-2014. The dependent variable is ZEROS, which is the percentage of days during the fiscal year that the stock price does not change and is calculated as ZEROS = ZeroReturnDays/Total Trading Days. We winsorize all financial variables at the 1% level in both tails of the distribution. The Appendix provides variable definitions and sources. t-statistics based on robust standard errors clustered at the firm level are in parentheses below each coefficient. ***, **, and * indicate significance at the 1%, 5%, and 10% level, respectively.
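A minimal sketch of the Heckman correction described above, again on synthetic data: a probit for government control, the inverse Mills ratio computed from its linear index, and a second-stage liquidity regression augmented with LAMBDA. Variable names and coefficients are illustrative, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 400
# Synthetic stand-in data (illustration only).
df = pd.DataFrame({"STATE_COUNTRY": rng.uniform(0, 0.6, n),
                   "LOG_MV": rng.normal(6, 1, n)})
latent = -0.5 + 2.0 * df["STATE_COUNTRY"] + rng.normal(0, 1, n)
df["CONTROL"] = (latent > 0).astype(int)  # state retains more than 50%
df["ZEROS"] = 0.12 - 0.03 * df["CONTROL"] + rng.normal(0, 0.02, n)

# Stage 1: probit for government control.
Xp = sm.add_constant(df[["STATE_COUNTRY", "LOG_MV"]])
probit = sm.Probit(df["CONTROL"], Xp).fit(disp=0)
xb = probit.fittedvalues  # linear index x'gamma

# Inverse Mills ratio: phi/Phi for selected firms, -phi/(1 - Phi) otherwise.
df["LAMBDA"] = np.where(df["CONTROL"] == 1,
                        norm.pdf(xb) / norm.cdf(xb),
                        -norm.pdf(xb) / (1.0 - norm.cdf(xb)))

# Stage 2: liquidity regression augmented with LAMBDA.
X2 = sm.add_constant(df[["CONTROL", "LOG_MV", "LAMBDA"]])
print(sm.OLS(df["ZEROS"], X2).fit().params)
```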
State ownership and stock liquidity: soft budget constraint

In this section, we further consider how soft budget constraints may affect the relation between state ownership and stock liquidity. To do so, in Table 6 we interact state ownership with a measure of the government ownership of banks (GOVBANK). We expect state-owned firms to benefit from preferential access to financing in countries with higher government ownership of banks. We estimate the following model (subscripts omitted for simplicity):

ZEROS = α + β1 STATE + β2 GOVBANK + β3 STATE × GOVBANK + β4 LOG MV + β5 BM + β6 STDRET + β7 EM + β8 ANALYST + β9 LOSS + β10 ADR_EX + β11 ADR_NEX + β12 INTGAAP + β13 STOCK TURNOVER + β14 LOG(PRICE) + β15 LOG(TRADING DAYS) + β16 LISTED + β17 MEDIA + β18 LGDPC + ε

Table 6 reports regression results relating soft budget constraints, state ownership, and stock liquidity. The sample comprises 3759 firm-year observations representing 473 newly privatized firms from 53 countries over the period 1994-2014. The dependent variable is ZEROS, which is the percentage of days during the fiscal year that the stock price does not change and is calculated as ZEROS = ZeroReturnDays/Total Trading Days. We winsorize all financial variables at the 1% level in both tails of the distribution. The Appendix provides variable definitions and sources. t-statistics based on robust standard errors clustered at the firm level are in parentheses below each coefficient. ***, **, and * indicate significance at the 1%, 5%, and 10% level, respectively.

Barth et al. provide data measuring government ownership of banks (GOVBANK) at the country level. We find that the coefficient on STATE × GOVBANK is negative and statistically significant at the 1% level. Consistent with the soft-budget-constraint view, this indicates that the liquidity-enhancing effects of residual state ownership are stronger in countries with more government ownership of banks.19 Because the state ownership of banks may amplify the financing advantages provided to SOEs, we expect the benefits of state ownership to be greater in countries with a higher prevalence of state-owned banks (SOBs). In Fig. 2A, we measure how the inflection point (identifying the ownership mix providing the maximum degree of stock liquidity) is affected by differing levels of state ownership of banks. We find that the inflection point shifts in a manner consistent with the soft-budget-constraint view. Specifically, in countries with a greater prevalence of SOBs, Fig. 2A shows that the inflection point increases to 49%. This suggests that markets are willing to accept a higher level of state ownership (and endure the resulting higher costs from potential political interference) when the financing advantages of state ownership are magnified by the presence of SOBs. Conversely, in countries with fewer SOBs (where the financing benefits of state ownership are not as significant), investors are less forgiving of the political costs that accompany state ownership. This reduced tolerance of state ownership is reflected in the lower inflection point. The financing advantage from the soft budget constraint should be especially valuable during periods of financial crisis. Amihud and Mendelson argue that during financial crises the shortage of funding and high uncertainty about asset values lead to a dramatic reduction in the provision of liquidity services by market participants. In Table 6, we therefore investigate how the GFC might have affected the relation between state ownership and stock liquidity. Faccio et al. find that politically-connected firms are more likely to be bailed out by the government during times of financial distress. Similarly, Boubakri et al. show that firms increase leverage after a politician joins the board of directors, and Chaney et al. identify that politically-connected firms have a lower cost of borrowing (even though their reporting quality is poorer). Therefore, to the extent that the soft-budget-constraint view is true, we should observe a stronger relation between state ownership and stock liquidity during the financial crisis period. We test this conjecture and find that the coefficient on the interaction term between STATE and DURING CRISIS is negative and statistically significant. This indicates that the stocks of firms with higher state ownership became more liquid during the financial crisis, consistent with an increase in the liquidity-enhancing effect of the soft budget constraint during the crisis.
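A hedged sketch of the interaction specification above, using synthetic data and a single control for brevity: the formula interface expands STATE * GOVBANK into both main effects plus the interaction, and standard errors are clustered at the firm level as in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600
# Synthetic stand-in for the firm-year panel (illustration only).
df = pd.DataFrame({
    "firm_id": rng.integers(0, 120, n),
    "STATE": rng.uniform(0, 0.8, n),
    "GOVBANK": rng.uniform(0, 1, n),  # share of bank assets state-owned
    "LOG_MV": rng.normal(6, 1, n),
})
df["ZEROS"] = (0.15 - 0.02 * df["STATE"]
               - 0.01 * df["STATE"] * df["GOVBANK"]
               + rng.normal(0, 0.03, n))

# STATE * GOVBANK expands to both main effects plus the interaction.
model = smf.ols("ZEROS ~ STATE * GOVBANK + LOG_MV", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(model.params["STATE:GOVBANK"])  # expected negative
```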
In additional models of Table 6, we consider more extensively how the financial crisis may have affected the relation between state ownership and firm-level stock liquidity. Specifically, we focus on observations from 2008 to 2010. Supporting our conjecture from the previous analysis, the data indicate that state ownership significantly enhances firm-level stock liquidity during the years of the financial crisis. Interestingly, the coefficient on STATESQR is positive but statistically indistinguishable from zero, suggesting that the non-monotonic relation between state ownership and liquidity weakens during the crisis years. This evidence suggests that the benefits (soft-budget-constraint view) of state ownership become more valuable during crisis periods and overcome the costs (political view) of state ownership. Overall, these findings are consistent with our expectation that the specter of the financial crisis substantially heightened investor appreciation of the bailout potential and other fiscal advantages stemming from the state's soft budget constraint.20 As such, the crisis-induced increase in the benefits of government ownership contributed to greater firm-level liquidity for NPFs with larger residual state shareholdings.

19 In unreported results, we also proxy for soft budget constraints using the extent to which foreign banks are allowed to enter a country's banking industry and own domestic banks (LIMFOREIGN). A higher value indicates fewer restrictions on foreign entry and therefore less comparative advantage of state ownership. Additionally, we capture soft budget constraints using the degree to which the supervisory authority is independent of political influence (POLITICAL INDP). In both supplemental regressions, we find that when there is a comparative advantage of state ownership in terms of access to finance, state ownership is associated with greater stock liquidity. 20 This result also augments the conclusion from Beuselinck et al. that state ownership had a favorable valuation effect during the financial crisis. That is, the stronger positive relation between state ownership and firm-level liquidity during the crisis years (that we identify in Table 6) may have contributed to the favorable valuation impact documented by Beuselinck et al.
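For intuition, the crisis result can be read as a conditional marginal effect: with an interaction term, ∂ZEROS/∂STATE = β1 + β3 × DURING CRISIS. The coefficients below are hypothetical (β1 borrows the Table 4 point estimate for illustration; β3 is invented), chosen only to show the sign pattern.

```python
def state_marginal_effect(b_state: float, b_interact: float,
                          during_crisis: int) -> float:
    """d(ZEROS)/d(STATE) = b_state + b_interact * DURING_CRISIS for a model
    ZEROS = ... + b_state*STATE + b_interact*STATE*DURING_CRISIS + ..."""
    return b_state + b_interact * during_crisis

# Both coefficients negative: the liquidity-enhancing effect of STATE
# (a more negative effect on zero-return days) strengthens in 2008-2010.
print(state_marginal_effect(-0.024, -0.015, 0))  # -0.024 outside the crisis
print(state_marginal_effect(-0.024, -0.015, 1))  # -0.039 during the crisis
```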
State ownership and stock liquidity: political view (i.e., the "grabbing hand" effect)

So far, we have shown how the soft budget constraint associated with state ownership helps improve stock liquidity. In this section, we further explore the costs of state ownership. Specifically, the "grabbing hand" effect suggests that the costs of state ownership become higher as the government retains a larger stake in privatized firms. Fear of the "grabbing hand" leads to less demand for NPFs' stock and reduced liquidity. One may argue that because governments tend to sell more profitable firms, the negative relationship between high state ownership and stock liquidity is simply driven by firms' profitability. In Table 7, we disentangle the "grabbing hand" effect from the profitability effect. To isolate the impact of the "grabbing hand", we interact state ownership with earnings management (EM), a proxy for the expected agency costs of expropriation. Prior accounting literature suggests that opaque financial reporting, evident in higher earnings management, helps controlling shareholders hide their extraction of private benefits of control at the expense of minority shareholders (Kim and Yi, 2006; Gopalan and Jayaraman, 2012). Consistent with this view, Leuz, Gopalan and Jayaraman, and Attig et al. report higher earnings management in closely-held firms. We find that the interaction term between STATE and EM loads positively and is statistically significant. This suggests that stock liquidity is lower when there is a higher level of earnings management (which is symptomatic of greater intervention by the state). Therefore, the results are consistent with the "grabbing hand" effect of government ownership.21 Moreover, in Table 7, we replace EM with ROA to examine the effect of profitability on the relationship between state ownership and stock liquidity. The interaction term is statistically insignificant. Taken together, the results in Table 7 support the political view of state ownership. To further validate the "grabbing hand" effect, we construct a dummy variable, LEFT, which equals one when the political orientation of a country's ruling executive is communist, socialist, social democratic, or left-wing, and zero otherwise. We obtain this metric from the Database of Political Institutions.22 Justification for this variable is based on the political interference hypothesis (Shleifer, 1998; Shleifer and Vishny, 1994; Biais and Perotti, 2002). More specific to the notion of the "grabbing hand" aspect of the political view, D'Souza and Nash and Shleifer and Vishny present evidence that left-wing governments attach greater value to the political benefits obtained by directing SOE resources to favored constituents (such as by creating jobs for public sector employees). Chen et al. find that left-wing governments are more likely to use SOEs to grant larger amounts of trade credit, which they show is politically-motivated and value-reducing. Therefore, because left-wing governments will generally be more inclined to use SOE resources for political expediency (as opposed to economic optimality), investors should be more apprehensive about potential expropriation. Providing evidence that left-oriented governments may be willing to sacrifice shareholder wealth maximization in order to achieve political objectives, Holland notes that the financial performance of partially privatized firms is weaker in countries with left-leaning political orientations. Similarly, in a comparison of the credit-granting decisions of state-owned versus privately owned firms, Chen et al. describe how the state has different objectives (i.e., political goals).

Table 7 reports regression results considering how the relation between government ownership and stock liquidity is potentially affected by risk of expropriation and profitability. The dependent variable is ZEROS, which is the percentage of days during the fiscal year that the stock price does not change and is calculated as ZEROS = ZeroReturnDays/Total Trading Days. We winsorize all financial variables at the 1% level in both tails of the distribution. The Appendix provides variable definitions and sources. t-statistics based on robust standard errors clustered at the firm level are in parentheses below each coefficient. ***, **, and * indicate significance at the 1%, 5%, and 10% level, respectively.
Chen et al. document that efforts to achieve those political goals have negative implications for shareholder value (and those negative implications are significantly more severe in countries with left-wing governments). Overall, these studies identify that politically motivated endeavors by SOEs have real economic costs for minority investors. Minority shareholders may opt to avoid these costs by choosing not to hold or trade the stocks of state-owned firms, which may reduce the liquidity of these securities. Accordingly, we expect that firms with higher state ownership will exhibit lower stock liquidity in countries with left-wing governments.23 To test this prediction, we estimate the following model (subscripts omitted for simplicity):

ZEROS = α + β1 STATE + β2 LEFT + β3 STATE × LEFT + β4 LOG MV + β5 BM + β6 STDRET + β7 EM + β8 ANALYST + β9 LOSS + β10 ADR_EX + β11 ADR_NEX + β12 INTGAAP + β13 STOCK TURNOVER + β14 LOG(PRICE) + β15 LOG(TRADING DAYS) + β16 LISTED + β17 MEDIA + β18 LGDPC + ε

We present results from this specification in Table 8. We find that the coefficients on STATE × LEFT load positively and are statistically significant. This indicates that stock liquidity is lower when state ownership is higher in countries with left-wing governments. Importantly, the coefficient on STATE × LEFT is 0.05, suggesting that firms from nations with left-wing governments have 45% (= 0.05/0.11) more zero-return days. This finding is consistent with the view that investors are less likely to invest in privatized firms with higher potential for government expropriation. In additional analysis, we re-estimate the models of Table 8 by splitting our sample into firms from countries with left-wing and center/right-wing governments. We plot the results in Fig. 2B. For firms in countries with left-wing governments, we find that the inflection point decreases from 44% to 21%. This suggests that shareholders in countries with left-wing governments are more reluctant to invest once the state retains more than 21% of the firm (due to the greater fear of political intervention in countries with left-wing governments). Interestingly, we find that the inflection point increases to 55% for nations with center/right-wing governments, suggesting that investors are more tolerant of higher state ownership if the government is less likely to be involved in economic activity (and is therefore less likely to expropriate minority shareholders).24

21 Using the country median ratio of the firm-level standard deviations of income and cash flow as an alternative proxy for earnings management, we find our results remain statistically the same. 22 (decision to privatize using public markets vs. private placements). 23 One concern is that political orientation is highly correlated with the economic development of countries, so that our measure of political orientation may capture the effect of economic development (rather than the extent of the political costs of state ownership). In our sample, the average score of LEFT for developed countries is 0.29, while the average score for developing countries is 0.30. Moreover, developed countries account for 2077 of our observations (i.e., 55%), while developing countries account for 1682 observations (i.e., 45%). Taken together, our data are not primarily skewed toward developed or developing countries in terms of government political orientation. 24 We acknowledge that these comparisons of inflection points are descriptive in nature because we cannot test the statistical differences in inflection points across different regressions.
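The 45% economic magnitude quoted above is simply the interaction coefficient scaled by the sample mean of ZEROS; a one-line check using the figures reported in the text:

```python
# Economic magnitude of the STATE x LEFT interaction, as in the text:
# the 0.05 coefficient is scaled by the sample mean of ZEROS (0.11).
b_state_left = 0.05   # reported interaction coefficient (Table 8)
mean_zeros = 0.11     # sample mean of ZEROS (Table 2)
print(f"{b_state_left / mean_zeros:.0%}")  # 45% more zero-return days
```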
Robustness

Another concern with our main analysis is that the relation between state ownership and stock liquidity may have alternative explanations. We investigate several possibilities in Table 9. First, Lesmond finds that political risk helps explain stock liquidity in emerging markets. Our results could therefore be driven by an omitted variable: political risk. Second, Megginson et al. show that countries with more equal income distributions have a broader base of potential shareholders, which can contribute to greater country-level liquidity. Third, Sarkissian and Schill argue that firms producing tradeable goods have wider name recognition, and the stocks of these firms are more warmly received by potential shareholders in those markets and thus may be more liquid. We test these possibilities by including the political risk measure of the International Country Risk Guide (POLRISK), the income inequality measure from the All the Ginis Dataset (INEQUALITY), and an indicator for firms that produce tradeable goods (TRADEABLE) as additional control variables. After including these additional controls, the coefficient on STATE remains negative and statistically significant at the 1% level. In the quadratic specification, we continue to find that STATE is negatively and STATESQR is positively associated with zero-return days. Another concern is that our main results regarding the relation between state ownership and stock liquidity are driven by other types of blockholders. Specifically, residual state ownership in partially privatized firms may naturally induce lower liquidity, since the state (relative to other blockholders or other investors) may be less inclined to actively trade shares; shares of privatized firms would therefore be less frequently traded.25 To rule out this possibility, we control for ownership stakes by two additional types of blockholders, namely foreign investors (FOREIGN) and institutional investors (INSTITUTIONAL). We find that the coefficients on FOREIGN and INSTITUTIONAL are both statistically insignificant. More importantly, our primary findings remain statistically the same.26,27 Next, we address the concern that acquisition activities may affect stock trading and therefore drive our main results. To control for acquisition activity, we include a dummy variable (ACQUISITION), which is equal to one if a firm has acquisition expenditures that are larger than zero. Our main results are unaffected.28 We also exclude China from our sample to mitigate the concern that our results are driven by the country with the largest number of observations in our data. Moreover, we replicate our main analyses using panel regressions (i.e., including firm and year fixed effects) to examine the within-firm effect of state ownership on stock liquidity, and Tobit regressions to address concerns that our dependent variable (ZEROS) is truncated at zero. Our results are statistically similar in all of these specifications. To address the possibility that our findings may be driven by our choice of proxies for stock liquidity, we adopt alternative liquidity measures. Following Bekaert et al. and Goyenko et al., we measure liquidity as the average over trading days of the daily ratio of absolute stock return to dollar volume (AMIHUD). We also use FHT as an alternative measure of liquidity. Fong et al. define FHT as 2 × STDRET × Probit((1 + ZEROS)/2), where STDRET is the standard deviation of stock returns over the year and Probit denotes the inverse of the standard normal cumulative distribution function; they find that this metric is useful in cross-country studies. We verify that our results are not sensitive to using AMIHUD or FHT as an alternative measure of liquidity. Also, in unreported results, we use the Roll illiquidity measure, a covariance spread estimator of stock illiquidity calculated as ROLL = 2 × √(−Cov(ΔPt, ΔPt−1)), where Pt is the observed closing price on day t and is equal to the stock's true value plus or minus half of the effective spread; we obtain similar results.

Table 8 reports regression results relating government orientation, state ownership, and stock liquidity. The sample comprises 3759 firm-year observations representing 473 newly privatized firms from 53 countries over the period 1994-2014. The dependent variable is ZEROS, which is the percentage of days during the fiscal year that the stock price does not change and is calculated as ZEROS = ZeroReturnDays/Total Trading Days. We winsorize all financial variables at the 1% level in both tails of the distribution. The Appendix provides variable definitions and sources. t-statistics based on robust standard errors clustered at the firm level are in parentheses below each coefficient. ***, **, and * indicate significance at the 1%, 5%, and 10% level, respectively.

Table 9 reports regression results relating state ownership to stock liquidity using additional controls and alternative dependent variables. The full sample comprises 3759 firm-year observations representing 473 newly privatized firms from 53 countries over the period 1994-2014. The dependent variable in the first set of models is ZEROS, the percentage of days during the fiscal year that the stock price does not change, calculated as ZEROS = ZeroReturnDays/Total Trading Days. The dependent variable in the next set of models is AMIHUD, the average ratio of absolute stock return to trading volume. The dependent variable in the final set of models is FHT, a liquidity proxy based on low-frequency data, defined as FHT = 2 × STDRET × Probit((1 + ZEROS)/2). We winsorize all financial variables at the 1% level in both tails of the distribution. The Appendix provides variable definitions and sources. t-statistics based on robust standard errors clustered at the firm level are in parentheses below each coefficient. ***, **, and * indicate significance at the 1%, 5%, and 10% level, respectively.
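For completeness, minimal implementations of the three alternative proxies as defined in the text and the Appendix (AMIHUD, FHT, and Roll); the daily return, volume, and price inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def amihud(returns: np.ndarray, dollar_volume: np.ndarray) -> float:
    """Amihud illiquidity: mean of |r_t| / volume_t over positive-volume days."""
    mask = dollar_volume > 0
    return float(np.mean(np.abs(returns[mask]) / dollar_volume[mask]))

def fht(returns: np.ndarray) -> float:
    """FHT spread proxy: 2 * sigma * Phi^{-1}((1 + ZEROS) / 2)."""
    zeros = np.mean(returns == 0)
    sigma = np.std(returns, ddof=1)
    return float(2.0 * sigma * norm.ppf((1.0 + zeros) / 2.0))

def roll_spread(prices: np.ndarray) -> float:
    """Roll estimator: 2 * sqrt(-Cov(dP_t, dP_{t-1})), defined only when the
    serial covariance of price changes is negative."""
    dp = np.diff(prices)
    cov = np.cov(dp[1:], dp[:-1])[0, 1]
    return float(2.0 * np.sqrt(-cov)) if cov < 0 else float("nan")

r = np.array([0.01, 0.0, -0.002, 0.0, 0.013, -0.01, 0.004])
v = np.array([1e6, 8e5, 1.2e6, 0.0, 9e5, 1.1e6, 1e6])
print(amihud(r, v), fht(r))
```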
Additional analyses: evidence from re-nationalizations

In this section, we validate our main evidence regarding the relation between state ownership and stock liquidity by examining a sample of re-nationalized firms. Specifically, we use SDC Platinum to identify previously-privatized firms that governments have subsequently re-nationalized. Using propensity score matching, we then pair those re-nationalized firms with firms from the private sector (matching on all firm and country characteristics). The matched sample allows us to consider how re-nationalization affects stock liquidity. Our variable of interest is a dummy variable (RE-NATIONALIZATION) indicating a privatized firm that was re-nationalized. We estimate the following difference-in-differences model (subscripts omitted for simplicity):

ZEROS = α + β1 RE-NATIONALIZATION + β2 LOG MV + β3 BM + β4 STDRET + β5 EM + β6 ANALYST + β7 LOSS + β8 ADR_EX + β9 ADR_NEX + β10 INTGAAP + β11 STOCK TURNOVER + β12 LOG(PRICE) + β13 LOG(TRADING DAYS) + β14 LISTED + β15 MEDIA + β16 LGDPC + FIRM FIXED EFFECTS + YEAR FIXED EFFECTS + ε

As we report in Table 10, RE-NATIONALIZATION loads positively and is statistically significant. This indicates that stock liquidity is lower for re-nationalized firms than for private sector firms, consistent with the view that greater investor fear of the "grabbing hand" leads to lower stock liquidity.
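A minimal sketch of a difference-in-differences regression with firm and year fixed effects of the kind estimated above, on synthetic data; RENAT is a stand-in for the RE-NATIONALIZATION dummy, and the positive coefficient mimics the reported direction (more zero-return days, i.e., lower liquidity).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_firms, n_years = 40, 6
df = pd.DataFrame([(f, y) for f in range(n_firms) for y in range(n_years)],
                  columns=["firm", "year"])
# Half the firms are re-nationalized from year 3 onward (illustration only).
df["RENAT"] = ((df["firm"] < n_firms // 2) & (df["year"] >= 3)).astype(int)
df["ZEROS"] = 0.10 + 0.03 * df["RENAT"] + rng.normal(0, 0.02, len(df))

# Firm and year fixed effects via dummies; the RENAT coefficient is
# identified from within-firm changes around the re-nationalization event.
fit = smf.ols("ZEROS ~ RENAT + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(fit.params["RENAT"])  # expected positive (lower liquidity)
```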
Table 10 reports regression results relating government re-nationalization and stock liquidity. The dependent variable is ZEROS, which is the percentage of days during the fiscal year that the stock price does not change and is calculated as ZEROS = ZeroReturnDays/Total Trading Days. We winsorize all financial variables at the 1% level in both tails of the distribution. The Appendix provides variable definitions and sources. t-statistics based on robust standard errors clustered at the firm level are in parentheses below each coefficient. ***, **, and * indicate significance at the 1%, 5%, and 10% level, respectively.

25 However, Boutchkova and Megginson note significant differences in the share-ownership structure of NPFs and always-private firms. Specifically, after comparing the shareholder rosters of privatized firms to a capitalization-based matched sample of private firms, Boutchkova and Megginson find that privatized firms generally have a larger number of shareholders than private firms and that the composition of the shareholder base is more likely to change in NPFs. Therefore, even if the residual shares held by the state are likely to trade less frequently, there may be a countervailing effect on the liquidity of privatized firms due to the larger number of shareholders and the dynamic nature of the shareholdings of those investors. Furthermore, our finding that differences in the institutional environment contribute to differences in the inflection points (from the cost/benefit trade-off) provides additional evidence against the conjecture that lower stock liquidity in NPFs is simply driven by less trading activity by the state. 26 In unreported tests, we examine the effects of ownership blocks by foreign and institutional owners measured at 10%, 20%, and 30% of proportional ownership. We also consider whether foreign and institutional owners are the largest shareholder in a firm. Our models indicate that these ownership variables are not statistically significant, while our inferences on the role of state ownership are not affected. We thank an anonymous reviewer for suggesting this analysis. 27 As a further control, we add the number of shareholders to Model 2 in Table 4. Although the sample size drops significantly (756 firm-year observations, representing 20% of our full sample), we continue to find that state ownership and its squared term remain statistically significant, consistent with our main evidence. 28 To further confirm the robustness of our results, we account for changes in trading rules that may affect stock liquidity. We identify changes in trading rules and securities regulation from Cumming et al., Bhattacharya and Daouk, Edmans et al., and Fauver et al. Our findings are unaffected by including these additional controls.

Conclusion

We investigate the link between state ownership and firm-level stock liquidity. The expected relation is theoretically ambiguous because it depends on two competing views. According to the political view, the continued involvement of the state distorts the firm's objectives. In addition, the perceived weaker corporate governance and the increased information asymmetry associated with residual state ownership might lead to less demand for stocks with government ownership, and hence may contribute to lower liquidity. On the other hand, according to the soft-budget-constraint view, investors value the preferential access to credit and the implicit government guarantees received by firms with state ownership. This could result in stronger demand for the firm's stock (which may lead to higher liquidity). Using a unique sample of 473 newly privatized firms (NPFs) from 53 countries during 1994-2014, we show that state ownership is significantly related to stock liquidity. We further identify that the relation is non-monotonic.
This is consistent with a countervailing influence of both the political view and the soft-budget-constraint view of state ownership. For lower levels of state ownership, we find evidence consistent with the soft-budget-constraint view. That is, the benefits of state ownership (such as the financing advantages inherent in the soft-budget-constraint view) appear to exceed the costs from potential political interference (as espoused by the political view). However, as state ownership increases beyond a certain point, investors appear to become more averse to state ownership, which contributes to a reduction in firm-level liquidity. This suggests that there is a tipping point at which the political costs of state ownership begin to overwhelm the benefits from the soft-budget-constraint view. Through additional analyses, we determine that the specific location of this tipping point is affected by characteristics of the nation's institutional environment (such as the political/economic orientation of the government and the prevalence of state-owned banks). Our study contributes to the privatization literature by presenting unique firm-level evidence regarding the liquidity implications of privatization reforms across a broad sample of countries.29 In particular, high levels of state ownership in NPFs could dissuade investors who fear the "grabbing hand" of the government, which would reduce the liquidity of newly privatized stocks and in turn increase their cost of capital and decrease their value. However, at least from a liquidity perspective, lower levels of state ownership may also be disadvantageous. That is, especially during times when the scars of the financial crisis are still fresh, the financing advantages (and the implicit and not-so-implicit bailout guarantees) provided by government shareholdings may resonate with investors and thus enhance the liquidity of firms with some state ownership. Overall, consistent with a trade-off between the benefits and the costs of state ownership, our results suggest that there is a level of state ownership that maximizes the liquidity of the stock of NPFs.

29 These findings are especially important because a primary objective of privatization programs in many countries is the development of stock markets by promoting an "equity culture" or "people's capitalism" among investors (e.g., Megginson and Netter, 2001; Boutchkova and Megginson, 2000). In turn, this equity culture is conducive to a change in the trading behavior of investors, thus affecting stock liquidity. By addressing the liquidity implications of state ownership at the firm level, we provide important insights to policymakers attempting to spur economic development by fostering an equity culture.

Declaration of Competing Interest

None.

Appendix. Variable definitions and sources

Firm- and industry-level variables

ZEROS: The proportion of days with zero returns during the year. Source: authors' calculation based on Lesmond et al.
FHT: A liquidity proxy based on low-frequency data, calculated as FHT = 2 × STDRET × Probit((1 + ZEROS)/2), where STDRET is the standard deviation of stock returns over the year. Source: authors' calculation based on Fong et al.
AMIHUD: A liquidity proxy developed by Amihud, calculated as AMIHUD = Average(|r|/Volume), where r is the stock return on day t and Volume is the dollar volume on day t. The average is calculated over all positive-volume days during the year. Source: authors' calculation based on Amihud.
ROLL: A liquidity proxy developed by Roll, calculated as ROLL = 2 × √(−Cov(ΔPt, ΔPt−1)), where Pt is the observed closing price on day t and is equal to the stock's true value plus or minus half of the effective spread. Source: authors' calculation based on Roll.
SOEs: A dummy variable equal to one if the government remains the largest shareholder in a privatized firm, and zero otherwise. Source: firm's annual report.
PARTIAL: A dummy variable equal to one if the government retains shares (i.e., STATE > 0) in a privatized firm, and zero otherwise.
STATE: The percentage of state ownership. Source: as above.
STATESQR: The square of state ownership. Source: authors' calculation.
FOREIGN: The percentage of foreign ownership. Source: firm's annual report.
INSTITUTIONAL: The percentage of institutional ownership. Source: as above.
LOG MV: The log of the market value of equity at year-end. Source: Compustat Global.
BM: The book value of common equity divided by the market value of equity. Source: as above.
STDRET: The annual standard deviation of daily stock returns. Source: as above.
EM: The standard deviation of income over the standard deviation of cash flows. Source: as above.
ANALYST: The number of analysts that follow the firm. Source: I/B/E/S.
LOSS: A dummy variable equal to one if net income before extraordinary items is negative, and zero otherwise.
ADR_EX: A dummy variable equal to one if the firm trades on a U.S. exchange during the year, and zero otherwise. Source: as above.
ADR_NEX: A dummy variable equal to one if the firm has an ADR but is not traded on a U.S. exchange during the year, and zero otherwise.
INTGAAP: A dummy variable equal to one if the firm reports under IFRS or U.S. GAAP during the year, and zero otherwise. Source: as above.
LOG ASSETS: The natural logarithm of total assets. Source: as above.
LEV: Total debt over total assets. Source: as above.
CASH: Cash and short-term investments divided by total assets. Source: as above.
CAPX: Capital expenditure divided by total assets. Source: as above.
DV DUMMY: A dummy variable that equals one if dividend payout is greater than zero, and zero otherwise. Source: as above.
CASH FLOW: Income before extraordinary items, plus R&D expenditures and depreciation, all deflated by total assets. Source: as above.
NWCAP: Current assets minus current liabilities, deflated by total assets. Source: as above.
LOG(PRICE): The natural log of stock price. Source: as above.
LOG(TRADING DAYS): The natural log of the number of trading days for a firm in a given year. Source: as above.
BIAS: Forecast optimism bias, defined as the difference between the one-year-ahead consensus earnings forecast and realized earnings, deflated by June-end stock price. Source: authors' calculation based on I/B/E/S data.
INFLATION: Inflation rate of a country. Source: World Bank.
TRADEABLE: A dummy variable that equals one for the chemicals, consumer goods, electronics, manufacturing, healthcare, mining, oil and gas, and paper industries, and zero otherwise.
ACQUISITION: A dummy variable equal to one if a firm has acquisition expenses that are larger than zero, and zero otherwise. Source: as above.
STOCK TURNOVER: Stock turnover ratio of each firm, defined as the stock's total trading volume divided by the total shares outstanding. Source: as above.
RE-NATIONALIZATION: A dummy variable that equals one for previously privatized firms that are acquired by government or government-controlled entities, and zero otherwise. Source: SDC Platinum.

Country-level variables

LISTED: The number of firms listed on a nation's stock market. Source: World Bank.
MEDIA: A variable that rates each country's media freedom from 0 to 100, transformed to 100 minus the original Freedom House index so that higher values indicate that a country's media are more independent. Source: Freedom House.
LGDPC: Log GDP per capita. Source: World Bank.
LEFT: A dummy variable equal to one if the chief executive's party orientation is left-wing, and zero otherwise. Source: The Database of Political Institutions (DPI).
STATE COUNTRY: The average level of state ownership in each country (lagged 3 years). Source: authors' calculation.
POLRISK: A variable measured as an amalgamation of 12 country elements, ranging from zero to 100. A higher value indicates less political risk. Source: International Country Risk Guide (ICRG).
GOVBANK: The extent to which the banking system's assets are government-owned. Specifically, this metric reflects the percentage of the banking system's assets in banks that are 50% or more government-owned (based on surveys conducted by the World Bank in 1999, 2003, 2007, and 2011). Source: Barth et al.
LIMFOREIGN: The extent to which foreign banks may enter a country's banking industry and own domestic banks. A higher value indicates fewer restrictions. Source: as above.
POLITICAL INDP: The degree to which the supervisory authority is independent of political influence. A higher value indicates greater independence.
BEFORE CRISIS: A dummy variable indicating time periods before the global financial crisis; equal to one for periods before 2008, and zero otherwise. Source: authors' calculation.
DURING CRISIS: A dummy variable indicating time periods during the global financial crisis; equal to one for the period between 2008 and 2010, and zero otherwise. Source: authors' calculation.
Moderate exercise ameliorates osteoarthritis by reducing lipopolysaccharides from gut microbiota in mice

Lipopolysaccharides (LPSs) released by gut microbiota are correlated with the pathophysiology of osteoarthritis (OA). Exercise remodels the composition of the gut microbiota. The present study investigated the hypothesis that wheel-running exercise prevents knee OA induced by a high-fat diet (HFD) via reducing LPS from intestinal microorganisms. Male C57BL/6J mice were treated with sedentary or wheel-running exercise, standard diet (13.5% kcal) or HFD (60% kcal), and berberine or not, according to their grouping. Knee OA severity, blood and synovial fluid LPS, cecal microbiota, and TLR4 and MMP-13 expression levels were determined. Our findings reveal that HFD treatment decreased gut microbial diversity. An increase in endotoxin-producing bacteria, a decrease in gut barrier-protecting bacteria, high LPS levels in the blood and synovial fluid, high TLR4 and MMP-13 expression levels, and severe cartilage degeneration were observed. By contrast, voluntary wheel running produced high gut microbial diversity. The gut microbiota were reshaped, LPS levels in the blood and synovial fluid and TLR4 and MMP-13 expression levels were low, and cartilage degeneration was ameliorated. Berberine treatment reduced LPS levels in the samples but decreased the diversity of the intestinal flora, with changes similar to those caused by HFD. In conclusion, unlike taking drugs, exercise can remodel gut microbial ecosystems, reduce the circulating levels of LPS, and thereby contribute to the relief of chronic inflammation and OA. Our findings show that moderate exercise is a potential therapeutic approach for preventing and treating obesity-related OA.

Introduction

Osteoarthritis (OA) is a common arthritic illness, but its etiology and pathogenesis are not fully understood. Current pharmacologic treatments may relieve pain but have no effect on the progression of the illness. However, non-pharmacologic treatments for cartilage disease, such as physical activity, nutrients, and vitamins, have been shown to affect the course of the disease (Ageberg and Roos, 2015). Obesity or overweight is considered a predisposing risk factor for OA. Nowadays, exercise is prescribed as an indispensable treatment. Apart from weight loss, exercise improves joint biomechanics and is thus beneficial to OA treatment. However, the high frequency of OA occurrence in non-weight-bearing joints suggests that our understanding of the mechanism of exercise in OA treatment is incomplete. In obesity, chronic inflammation, resulting from an imbalance of homeostasis between immune and metabolic responses, underlies many chronic metabolic diseases (Hotamisligil and Erbay, 2008). Emerging evidence suggests that enteric dysbacteriosis caused by high-fat diet (HFD) feeding is a potential driver of metabolic inflammation, and that perturbations in gut microbiota composition and intestinal barrier disruption can increase epithelial permeability and the translocation of lipopolysaccharides (LPSs) from the intestinal cavity to the circulatory system, activating innate immune responses. As a pathogen-associated molecular pattern, the bacterial membrane component LPS can trigger many proinflammatory pathways, thereby initiating signaling cascades (Huang and Kraus, 2016). Toll-like receptors (TLRs) can recognize LPS, thereby activating the TLR pathways (Huang and Kraus, 2016).
In LPS/TLR4 pathways, the upregulated expression of TLR4 directly or indirectly activates matrix metalloproteinases (MMPs) and leads to cartilage degeneration. Hence, high circulating LPS levels and imbalanced gut microbiota are presumed to be closely associated with the initiation of OA (Huang and Kraus, 2016). Some animal experiments have supported this inference. In rats fed with HFD, serum LPS levels were higher than in those fed with chow; moreover, the abundance of Lactobacillus spp. exhibited a significant negative relationship with the progression of OA, whereas the abundance of Methanobrevibacter spp. showed a strong positive relationship. Similarly, an HFD-induced increase in LPS levels was observed, but different microbes, such as Bacteroides/Prevotella, Bifidobacterium, and Roseburia, were negatively related to OA. Exercise improves human health and fitness, exemplified by reduced body fat and fasting blood insulin levels. An experimental study proved that voluntary wheel running relieves the development of OA. Comprehensive treatment combining exercise and dietary intervention exerted a better protective effect on knee health than a single intervention. To date, OA is no longer considered a simple cartilage degenerative disease, but a global joint disorder with heterogeneity and multiple etiologies. Similar to obesity, metabolic OA is a chronic, systemic, and low-grade inflammatory condition (Berenbaum, 2013). Scientific evidence suggests that moderate-level running not only provides cyclical loading to the knee, which is important for maintaining cartilage homeostasis, but also improves the metabolic state of individuals with chronic systemic inflammation (Ageberg and Roos, 2015), wherein the improvement may be driven by reshaped gut microbiota. Exercise, unlike HFDs, can be a strong modulator of intestinal homeostasis. Akkermansia muciniphila abundance was found to be inversely correlated with obesity and associated metabolic disorders; interestingly, athletes with low body mass indices (BMIs) demonstrated higher Akkermansia levels than those with high BMIs. Lactobacillus and Bifidobacterium, which have potential value in OA treatment, can also be positively regulated by exercise. Increasing evidence reveals that exercise can diversify intestinal microorganisms and reform the balance between the richness of beneficial and harmful microbes. Therefore, moderate running exercise, such as voluntary wheel running, seems to reduce systemic inflammation and metabolic dysregulation and thus contribute to OA prevention. Insights into the inflammatory pathophysiology underpinning OA suggest that alleviating inflammation may prevent the onset or minimize the progression of OA (Berenbaum, 2013; Huang and Kraus, 2016). To clarify the mechanism of obesity-associated OA and find a potential therapeutic approach, we used male C57BL/6J mice to establish an OA model of obese mice with HFD feeding and investigated the differences in intestinal flora composition, LPS levels in the blood and knee joint cavity, the LPS/TLR4 pathway, and the degree of OA degeneration after HFDs and/or voluntary wheel running. The data reveal that voluntary wheel running ameliorates OA by reducing LPS levels in the blood and knee joint cavity. Whether the mechanism by which exercise reshapes the intestinal flora community and alleviates inflammation is the same as that of intestine-regulating drugs must be investigated.
Therefore, berberine, an isoquinoline-derivative alkaloid traditionally used in the treatment of gastrointestinal infections, was selected as the drug control due to its antimicrobial properties. Berberine also regulates metabolic endotoxemia levels and demonstrates therapeutic potential for OA.

Animals and treatment

The experimental protocol was approved by the Animal Care and Use Committee of Shandong Sport University. Dietary intake in this experiment included either standard diets (13.5% kcal) or HFDs (60% kcal; No. D12492, Beijing Keao Co-operative Feed Co., Ltd.). All the mice were allowed to eat freely, weighed once a week, kept one per cage throughout the experiment, and maintained in the same environment. We used the standard =RAND() function in Microsoft Excel for random grouping. After 4 weeks of environmental acclimatization (one mouse per cage; fed with standard diet), 54 male C57BL/6J mice (body mass = 20.1 ± 2.0 g) were randomly divided into either the standard diet group (control group, n = 18) or the HFD group (fed with HFDs, n = 36) at 12 weeks of age. When the 8-week dietary intervention ended, six mice in each group were randomly selected and their knee joints were taken for histological and histochemical analyses to verify whether knee OA had been successfully induced by the 8 weeks of HFD feeding. The remaining mice, aged 20 weeks, in the control group were randomly divided into the RC_Sed group (sedentary, n = 6) and the RC_Ex group (exercise, n = 6). In the HFD group, obese mice with body weights 20% greater than that of the control group were divided into the HF_Sed group (sedentary, n = 6) and the HF_Ex group (exercise, n = 6). A free-wheel running device (with a diameter of 11.0 cm) was introduced to the exercise groups but not to the sedentary groups. The number of cycles was recorded through a magnetic inductor. The wheel revolutions of the previous day were counted at 8:00 a.m. every day during the following 4-week experimental period. In the following description, exercise refers to voluntary wheel running unless otherwise specified. Six obese mice were exposed to drug treatment (HF_Bbr), with berberine administered at a dose of 150 mg/kg twice a week. After the last exercise, the mice were subjected to 12 h of fasting, but water was available ad libitum. Blood samples were obtained by retro-orbital phlebotomy under ether anesthesia. The mice were then sacrificed by cervical dislocation. We obtained the following materials from the mice: 1) the left and right knee joints; 2) the knee-joint synovial fluid, collected from the knee joint cavities. We used a syringe to prick one side of the joint capsule along the frontal surface to form a "puncture hole." Endotoxin-free water (500 µl) was slowly pushed into the joint cavity with a syringe on the opposite side of the "puncture hole" and then flowed from the "puncture hole" into an endotoxin-free tube. The fluid from both knee joints was pooled as one sample. 3) The cecum contents were collected in a sterile Eppendorf tube for the subsequent intestinal flora composition analysis.
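As an aside, the daily running distances reported in Fig. 1C can be derived from the recorded revolution counts; the conversion below (distance = revolutions × π × wheel diameter) is an assumption on our part, since the text does not spell it out.

```python
import math

WHEEL_DIAMETER_M = 0.11  # 11.0 cm running wheel, as described above

def daily_distance_m(revolutions: int) -> float:
    """Convert a day's wheel-revolution count to distance run (meters):
    one revolution covers the wheel circumference, pi * diameter."""
    return revolutions * math.pi * WHEEL_DIAMETER_M

# Hypothetical example: 10,000 revolutions in a day -> roughly 3.5 km.
print(round(daily_distance_m(10_000)))  # 3456 m
```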
Bacterial community analysis

Microbial DNA was extracted from the cecum contents for bacterial analysis. The Illumina MiSeq platform (Majorbio Co., Shanghai, China) was used to amplify and sequence the 16S rRNA gene of the bacterial microbes. The raw sequencing data were quality-filtered for further analysis on the I-Sanger platform. Operational taxonomic units (OTUs) were clustered at a similarity of 97%.

Histological and histochemical analyses

We used hematoxylin-eosin (HE) staining to assess the overall morphology and structural changes of articular cartilage and toluidine blue staining to display proteoglycan content. The muscles, ligaments, and patella around the joint were removed (taking care not to injure the cartilage surface). The whole left joint was fixed in 4% paraformaldehyde fixative solution. After half an hour of flushing with running tap water, the knee joint was cut along the median sagittal plane and decalcified in a solution of 10% EDTA (pH 7.4) for one month. The decalcification solution was replaced every 3 days; decalcification was considered complete when a needle could easily penetrate the bone, and the joint was then paraffin-embedded. The samples were sliced serially on the coronal plane at a thickness of 4 µm. One section every 100 µm was stained with HE and toluidine blue, referring to the method of Schmitz et al. with minor modifications.

Evaluation of OA

Sections stained with HE and toluidine blue were evaluated for degenerative joint changes (at least 10 sections scored per knee) by three trained, blinded reviewers. Scores were recorded for four areas (lateral femur, lateral tibia, medial femur, and medial tibia) according to a 14-point Mankin scale, which included structure, cellularity, staining intensity, and tidemark integrity. Scores were averaged to determine Mankin scores for the entire joint, with high values indicating severe cartilage degeneration.

Quantification of cartilage thickness

Six sections of each joint (three sections each from the inner and outer joint) were collected. The thickness of the cartilage layer (from the cartilage surface to the tidemark) was measured at 11 points centered on the weight-bearing area of the tibial plateau. Five points were taken every 25 µm to the left and right at 40× magnification, and the average value was obtained.

LPS determination in blood and knee joint synovial fluid

The serum samples were obtained by centrifugation of the blood samples at 3500 rpm for 20 min. To remove the cellular elements of the synovial fluid, the sample was centrifuged at 13,500 rpm for 10 min, and the supernatant was taken for further testing. LPS determination in blood and knee joint cavity fluid was conducted with the Limulus amebocyte lysate chromogenic endpoint assay (HIT302; Hycult Biotech). The LPS concentration of the samples was calculated from a logarithmic standard curve.

Western blot analysis

We extracted total protein from articular cartilage homogenate with 1 ml of tissue lysate and protease inhibitor, collected the supernatant after centrifugation at 12,000 rpm for 15 min at 4°C, and then determined the protein concentration via a BCA Protein Assay Kit (Beyotime Biotechnology Co.). Equal amounts of protein were loaded into gels and resolved. Proteins were transferred to polyvinylidene difluoride membranes and exposed first to anti-MMP13 antibody (ab39012), anti-TLR4 antibody (ab13556), or anti-GAPDH antibody (ab8245) (all Abcam, UK) and then to goat anti-rabbit IgG H&L (HRP) (ab6721, Abcam). Bands were detected and analyzed using a ChemiDoc XRS+ with Image Lab software (Bio-Rad, USA).

Statistical analysis

The α-diversity indices (Ace index and Simpson index) were calculated. Community bar plot analysis was conducted by calculating the average of the absolute abundance values within each group. SPSS version 20 (IBM; Chicago, IL) and GraphPad Prism 6.02 (La Jolla, California, USA) were used for statistical analysis. LPS levels in the blood are shown as the mean ± SEM, and other data are shown as the mean ± SD. Two-way ANOVA with Tukey's multiple comparison post-test was used for comparisons between multiple groups. For comparisons between two groups, Student's t-test was used for parametrically distributed data and the Mann-Whitney test for non-parametrically distributed data. Adjustment for multiple testing was estimated using false discovery rate functions. Differences were considered significant at p < 0.05.
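For reference, a minimal sketch of the Simpson index computed by such 16S pipelines from OTU count vectors; note that conventions differ (some tools report 1 − D), and the counts below are hypothetical.

```python
import numpy as np

def simpson_index(otu_counts: np.ndarray) -> float:
    """Simpson's index D = sum n_i*(n_i - 1) / (N*(N - 1)) over OTUs;
    lower values indicate higher diversity."""
    n = otu_counts[otu_counts > 0].astype(float)
    N = n.sum()
    return float((n * (n - 1)).sum() / (N * (N - 1)))

# Hypothetical OTU count vectors for two cecal samples.
even = np.array([25, 25, 25, 25])    # evenly spread community
skewed = np.array([97, 1, 1, 1])     # dominated by a single OTU
print(simpson_index(even), simpson_index(skewed))  # low D vs. high D
```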
The weight loss effect of exercise

We fed the mice HFDs from the age of 12 weeks to obtain obese mice. At 20 weeks old, the high-fat diet had significantly increased the weight of the mice (Fig. 1A, p < 0.01). With prolonged feeding, the average body weight of HF_Sed mice was 1.36 times that of RC_Sed mice at 24 weeks of age, a significant difference (Fig. 1B, p < 0.0001). Thus, the obesity level of the mice increased. With exercise intervention, the weight of HF_Ex mice decreased significantly (Fig. 1B, p < 0.01), suggesting that 4 weeks of free-wheel running played a role in weight loss. No significant differences were observed in exercise distance between the RC_Ex and HF_Ex groups (Fig. 1C), indicating that HFDs did not affect the activity of the mice.

Evaluation of knee OA

Severe articular cartilage fibrillation, loss of tidemark integrity, and surface irregularities were observed in the HF_Sed group (Fig. 2E). Mice in the HF_Ex group exhibited ameliorated cartilage degeneration compared with the sedentary mice. Compared with RC_Sed, the Mankin scores of the medial femur, medial tibia, and lateral tibia in HF_Sed were significantly higher (Fig. 2A, B, and C). The score of the lateral femur in HF_Sed increased by an insignificant amount (p = 0.209; Fig. 2D). Conversely, compared with the HF_Sed group, the scores of the medial tibia in HF_Ex decreased significantly (p < 0.05, Fig. 2A), and those of the medial femur, lateral tibia, and lateral femur decreased insignificantly (p = 0.485, 0.270, and 0.687; Fig. 2B, C, and D). Table 1 shows that HFDs significantly reduced the cartilage thickness of the medial and lateral joints (RC_Sed vs. HF_Sed), whereas exercise increased the cartilage thickness of the medial joint (p < 0.001). These results indicated that HFDs lead to severe cartilage degeneration, whereas wheel-running exercise can ameliorate it.

Microbial community diversity characterized by exercise, HFDs, and berberine

We examined whether the dietary and exercise interventions could alter the gut microbial communities of the mice in order to investigate the latent mechanism of diet and exercise in OA. The results for bacterial community diversity (Fig. 3A; measured using the Ace index) revealed no difference in community richness between the RC_Sed and RC_Ex groups. The diversity of the HF_Sed group was significantly lower than that of the RC_Sed group (p < 0.05). However, the diversity of the HF_Ex group was significantly higher than that of the RC_Sed and HF_Sed groups (p < 0.05 and p < 0.01, respectively). There was no statistically significant difference among groups tested with the Simpson diversity index (Fig. 3B). These results showed that HFDs can reduce the community richness of the intestinal microflora in mice. Exercise had no effect on microbiota richness in mice fed normal diets but increased microbiota richness in mice fed HFDs. We also investigated whether berberine has an effect on intestinal microorganisms.
The biodiversity of the HF_Bbr group was significantly lower than that of the HF_Ex group (p < 0.01), and there was no difference between the HF_Sed and HF_Bbr groups. These results suggested that berberine treatment could not augment intestinal microbial diversity.

Distinct intestinal microbial populations were observed among the RC_Sed, RC_Ex, HF_Sed, and HF_Ex groups. Firmicutes, Bacteroidetes, and Proteobacteria are the three major phyla of the gut microbiota. Obese mice in HF_Sed showed increased abundance of Firmicutes and Proteobacteria but decreased Bacteroidetes, and the Firmicutes/Bacteroidetes ratio increased relative to that in RC_Sed (Fig. 4A). Exercise intervention could reverse the pattern of increase or decrease caused by HFDs: lower abundance of Firmicutes and Proteobacteria, higher abundance of Bacteroidetes, and a decreased Firmicutes/Bacteroidetes ratio were observed in HF_Ex compared with HF_Sed. The behavior of the phyla Firmicutes, Bacteroidetes, and Proteobacteria after berberine treatment (HF_Bbr) was similar to that in the HF_Sed group, with distinct increases in Firmicutes and Proteobacteria and a distinct decrease in Bacteroidetes (Fig. 4A).

At the family level, the described reversal role of exercise was found in Bacteroidales_S24-7, Lachnospiraceae, Desulfovibrionaceae, Ruminococcaceae, Lactobacillaceae, Prevotellaceae, Peptostreptococcaceae, Bifidobacteriaceae, and Staphylococcaceae (Fig. 4B). For instance, Bacteroidales_S24-7, Prevotellaceae, and Bifidobacteriaceae were present at lower levels in HF_Sed than in RC_Sed and at higher levels in HF_Ex than in HF_Sed, whereas Desulfovibrionaceae and Peptostreptococcaceae showed the reverse pattern. Additionally, berberine treatment behaved like the HFD itself and even led to the complete disappearance of certain bacteria, such as Bifidobacteriaceae and Prevotellaceae (Fig. 4B). This pattern indicated that exercise, unlike berberine, can remodel the composition of the intestinal microflora in HFD-fed animals. We further investigated which bacteria changed significantly with exercise (Fig. 5). Compared with HF_Sed, exercise led to a significant increase in some beneficial bacteria (p < 0.05), such as two members of the family Prevotellaceae (Prevotellaceae_UCG-001 and another unidentified species) and an unidentified member of the family Bacteroidales_S24-7.

LPS levels in knee-joint synovial fluid and blood

We measured LPS levels to determine the effects of HFDs and exercise on endotoxin translocation and clearance. Twelve weeks of HFD feeding without exercise significantly increased the LPS levels in blood and synovial fluid compared with RC_Sed, RC_Ex, and HF_Ex (Fig. 6). The blood LPS level in HF_Ex was significantly higher than that in RC_Ex (Fig. 6A). These results suggested that HFDs contribute to high blood LPS levels. The blood LPS level of the RC_Ex group was significantly lower than that of the RC_Sed group (p < 0.05, Fig. 6A), and that of the HF_Ex group was significantly lower than that of the HF_Sed group (p < 0.01, Fig. 6A). The LPS level in synovial fluid was significantly decreased in the HF_Ex group compared with the HF_Sed group (Fig. 6B). No significant difference was found between the HF_Ex and RC_Ex mice. Our results indicated that exercise intervention could reduce LPS concentrations in blood and joint fluid.
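The phylum-level comparison above reduces to relative abundances and the Firmicutes/Bacteroidetes ratio. A minimal sketch with hypothetical read counts (not the study's data):

# Hypothetical per-group read counts for the three dominant phyla
counts = {
    'RC_Sed': {'Firmicutes': 5200, 'Bacteroidetes': 4100, 'Proteobacteria': 350},
    'HF_Sed': {'Firmicutes': 7400, 'Bacteroidetes': 1900, 'Proteobacteria': 900},
}

for group, phyla in counts.items():
    total = sum(phyla.values())
    rel = {name: round(n / total, 3) for name, n in phyla.items()}
    fb_ratio = phyla['Firmicutes'] / phyla['Bacteroidetes']
    print(group, rel, 'F/B =', round(fb_ratio, 2))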
Compared with the HF_Sed group, berberine treatment significantly decreased the LPS level in synovial fluid in the HF_Bbr group (p < 0.001, Fig. 6B). In contrast, the decrease in blood LPS level in the HF_Bbr group relative to HF_Sed was not significant (p > 0.05, Fig. 6A).

TLR4 and MMP-13 expression profiles induced by HFD and exercise

The severity and progression of OA are closely related to the levels of TLR4 and MMPs (such as MMP-13) in chondrocytes. The association of HFDs and exercise with changes in the expression of TLR4 and MMP-13 in cartilage is depicted in Fig. 7. Compared with the RC_Sed group, the expression levels of TLR4 and MMP-13 were significantly upregulated in the HF_Sed group (p < 0.05). In HF_Ex mice, these levels were significantly downregulated compared with HF_Sed (p < 0.05). No significant difference was observed between the RC_Sed and HF_Ex mice. These results indicated that HFDs led to severe cartilage degeneration, whereas exercise might ameliorate this phenomenon.

Discussion

HFDs are an unhealthy dietary pattern contributing to obesity. HFDs can promote the proliferation of proinflammatory microbiota, inhibit probiotics and prebiotics, and increase intestinal permeability and circulating LPS levels. Physical inactivity combined with obesity is a major risk factor for OA onset. According to our data, HFDs significantly induced weight gain and OA onset. The gut microbiota has been shown to tie high-fat feeding, obesity, and OA together. Therefore, controlling obesity by modulating the intestinal microflora may help prevent obesity-associated OA. Prior work has shown that voluntary exercise improves inflammation more than compulsory exercise. Hence, we used a free wheel-running protocol in C57BL/6J mice to evaluate the role of exercise.

Table 1. Comparison of cartilage thickness of the medial and lateral compartments (µm; mean ± SD). Note: *p < 0.05, ***p < 0.001 vs. RC_Sed group; ###p < 0.001 vs. HF_Sed group.

The human gut and its microbiota together constitute a superorganism, and the balance between beneficial and harmful bacteria in the gut microbial community fluctuates with the host's diet. HFDs can alter the microbial community structure and reduce microbial diversity, whereas exercise can diversify the gut microbiota. It has been hypothesized that Firmicutes are more efficient at extracting energy than Bacteroidetes, promoting greater energy harvest from colonic fermentation and hence weight gain. Fluctuations in the phyla Bacteroidetes and Firmicutes are therefore often used to analyze changes in the intestinal flora. Although some inconsistencies exist, most studies have shown that HFDs and obesity increase the Firmicutes/Bacteroidetes ratio, whereas exercise decreases it. This view was validated in our study. We observed significantly low diversity in mice fed HFDs without exercise. Exercise had no effect on microbiota richness in mice fed normal diets but increased richness in mice fed HFDs. The changes in microbial ecology induced by HFDs and exercise were shown by comparing the dominant phyla under HFD or exercise intervention. The results revealed that Firmicutes, Bacteroidetes, and Proteobacteria are susceptible to exercise and HFD treatments: HFDs increased Firmicutes and Proteobacteria and reduced Bacteroidetes, whereas exercise increased Bacteroidetes and reduced Firmicutes and Proteobacteria.
At the family level, we also observed remodeling of the flora driven by exercise. The families Bacteroidales_S24-7, Prevotellaceae, and Bifidobacteriaceae decreased while Desulfovibrionaceae and Peptostreptococcaceae increased in the HFD-fed mice; exercise tended to reverse these HFD-induced changes. The families Desulfovibrionaceae and Peptostreptococcaceae have been identified as endotoxin-producing bacteria, whereas Bifidobacteriaceae and Prevotellaceae can strengthen intestinal barrier function, reduce endotoxin levels, and ameliorate metabolic inflammation. Notably, the uncultured family S24-7 is dominant in the mouse gut microbiota, and emerging data suggest that these microbes are associated with positive health effects. HFDs reduced the abundance of S24-7, whereas exercise substantially rescued its loss in obesity. Additionally, a low abundance of Bacteroidetes together with a high abundance of Firmicutes is associated with increased gut permeability and LPS translocation. The above results therefore suggested that exercise might enhance the intestinal barrier and inhibit LPS translocation.

LPS serves as a key mediator of the metabolic perturbations in obesity and OA (Huang and Kraus, 2016). High endotoxin levels have been observed with a sedentary lifestyle; conversely, low endotoxin levels accompany physical activity. This suggests that physical activity may help eliminate LPS in vivo. However, different patterns and intensities of exercise may affect the intestinal flora differently; for example, voluntary wheel running decreases the Firmicutes/Bacteroidetes ratio, whereas forced treadmill running increases it. Exercise intensity has similar effects on intestinal permeability and LPS levels: both increase after heavy exercise. Compulsory high-intensity exercise therefore appears to act as a proinflammatory factor in the body. By contrast, moderate running exercise, such as voluntary wheel running, may contribute to the relief of inflammation, the maintenance of metabolic homeostasis, and protection against OA. Our LPS measurements concur with this hypothesis: exercise intervention reduced the high LPS concentrations induced by HFDs in blood and joint fluid. The increased protein expression levels of TLR4 and MMP-13 induced by HFDs were also decreased by exercise. The histologic assessment showed that exercise could reduce the high rate of OA induced by HFDs, increase cartilage thickness, and ameliorate cartilage degeneration.

We further tested the effect of berberine on the gut microbiota and LPS clearance to determine whether it acts in the same way as exercise. We found that berberine could reduce the LPS level in serum and synovial fluid, similar to the effect of exercise. However, the mechanism of berberine's action on LPS may differ from that of exercise: berberine reduced the diversity of the intestinal flora and produced a profile of abundant Firmicutes and scarce Bacteroidetes, results similar to those of HFDs but different from those of exercise. The clearance mechanism of LPS through exercise is thus distinct from that via the drug. The blood LPS determinations indicated that berberine had a weaker LPS-scavenging effect than exercise. Based on the above results, exercise is highly conducive to the reconstruction of a healthy intestinal flora and the elimination of LPS in vivo.
An appropriate modality and intensity of exercise has positive and multifaceted effects on OA. In addition to promoting weight loss (a biomechanical benefit), exercise can alleviate OA by reducing LPS production (improving the intestinal flora) and transport (strengthening the intestinal barrier) and by improving LPS clearance in the circulatory system (Fig. 8). However, many questions remain. For example, the microbiota is altered by physical activity, but the mechanism of this positive modulation remains unclear. A one-unit decrease in pH (from 6.5 to 5.5) has an important selective effect on the microbiota and favors the reproduction of probiotics. Could exercise directly or indirectly alter intestinal pH, or could the biomechanical forces of exercise increase gut motility and thereby accelerate the mixing of intestinal contents? Whether exercise can directly or indirectly reduce circulating LPS (by activating LPS clearance or metabolic mechanisms) independently of microbiota regulation also remains unclear.

Conclusions

LPS has recently been considered a trigger for the pathology of OA. The LPS level of HFD-fed mice increased but decreased after exercise intervention, and these changes were highly related to alterations of the intestinal microbiota. HFD treatment resulted in decreased gut microbial diversity, an increase in endotoxin-producing bacteria, and a decrease in gut barrier-protecting bacteria. Voluntary wheel running increased gut microbial diversity and reshaped the gut microbiota; the microbiota alterations caused by exercise were completely different from those induced by an intestinal-regulating drug. The increased protein expression levels of TLR4 and MMP-13 induced by HFDs were decreased by exercise. The histologic assessment showed that exercise can reduce the rate of HFD-induced OA, increase cartilage thickness, and ameliorate cartilage degeneration. This study revealed that, apart from promoting weight loss, exercise protects cartilage by modifying the gut microbiota and reducing circulating LPS levels. We propose voluntary wheel running as an intervention strategy for the prevention and treatment of OA. Moreover, microbiome monitoring could be translated into diagnostic and clinical practice.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Streptococcal Pharyngitis and Rheumatic Fever

Streptococcus pyogenes (Group A Streptococcus, GAS) causes a variety of diseases, from benign self-limiting infections of the skin or throat to lethal infections of soft tissue accompanied by multi-organ failure. GAS is one of the most significant Gram-positive pathogens and is responsible for several suppurative infections and non-suppurative sequelae, including pharyngitis, streptococcal toxic shock syndrome (STSS), necrotizing fasciitis, and other diseases. Currently, the global burden of rheumatic fever (RF) and rheumatic heart disease (RHD) is underestimated: in 2010, RF and RHD were estimated to account for 15.6 million cases and approximately 200,000 deaths annually. Laboratory diagnosis includes culture techniques, serology, the PYR test, the bacitracin susceptibility test, and antibiotic resistance testing, which help differentiate Streptococcus pyogenes from other groups of streptococci. Most cases of acute rheumatic fever are missed or do not present at the initial stage; instead, they are often detected only after progressing to advanced RHD. The modified Jones criteria of 2015 are especially helpful for low-risk populations, where diagnosis of streptococcal disease is challenging because of limited access to primary health care. Even with these revised criteria, diagnosis still relies on clinical diagnostic algorithms. Vaccines based on the M protein and T antigens continue to evolve, with mixed results. Vaccine development remains challenging for the GAS research community, but success would make a positive and lasting impact on people globally.
"""Detect the attributes inside each mask""" import random import time import os import csv import numpy as np import cv2 from colormath.color_objects import LabColor, HSVColor from colormath.color_diff import delta_e_cie2000 as color_diff from colormath.color_conversions import convert_color from .image_processing import get_mode, get_major_color, get_contour_area from .__settings__ import COLOR_CODE, COLOR_MUNSELL, COLOR_HSV, TESTING OUTPUT_DIR = os.path.abspath('./files/annotation') COLOR_RANGE_HSV = np.array([10, 32, 32], dtype=np.int32) class ObjAttrs: """The class for attribute detection""" def __init__(self): self.color_list = None # self.init_munsell() self.init_hsv() self.color_codes = None self.img = None self.img_rgb = None def init_munsell(self): """Initialize the 330 Munsell colors""" color_list = [] with open(COLOR_MUNSELL, newline='', encoding='UTF-8-sig') as csvfile: reader = csv.DictReader(csvfile, delimiter=',') for row in reader: color_lab = LabColor(lab_l=row['l'], lab_a=row['a'], lab_b=row['b']) color_hsv = convert_color(color_lab, HSVColor) print(color_hsv.get_value_tuple()) color = { 'color': color_lab, #'code': COLOR_CODE.index(row['name']) } color_list.append(color) self.color_list = color_list def init_hsv(self): """Initialize the hsv color thresholds""" color_list = {} with open(COLOR_HSV, newline='', encoding='UTF-8-sig') as csvfile: reader = csv.DictReader(csvfile, delimiter=',') for row in reader: color_name = row['name'] if color_list.get(color_name) is None: color_list[color_name] = { 'low': [], 'high': [], 'code': COLOR_CODE.index(row['name']) } hsv_threshold = np.array([row['h'], row['s'], row['v']], dtype=np.uint8) if str(row['threshold']) == '0': color_list[color_name]['low'].append(hsv_threshold) else: color_list[color_name]['high'].append(hsv_threshold) self.color_list = color_list def infer_color(self, lab): """Get the name for one color""" color = LabColor(lab_l=lab[0], lab_a=lab[1], lab_b=lab[2]) min_dist = float('inf') code = -1 for ms_color in self.color_list: dist = color_diff(color, ms_color['color']) if dist < min_dist: code = ms_color['code'] return code def infer_pixel_munsell(self, img): """Get the color name for each pixel""" height = img.shape[0] width = img.shape[1] codes = np.ones((height, width), dtype=np.int16) print('Infer image started: ' + str(width) + ' * ' + str(height)) timer = time.time() for row in range(height): for col in range(width): codes[row][col] = self.infer_color(img[row][col]) print('Infer image ended: {:.3f}s'.format(time.time() - timer)) self.color_codes = codes def infer_pixel_hsv(self, img): """Get the color name for each pixel""" height = img.shape[0] width = img.shape[1] codes = np.zeros((height, width), dtype=np.uint8) for color in self.color_list.values(): threshold_num = len(color['low']) if threshold_num == 1: mask = cv2.inRange(img, color['low'][0], color['high'][0]) else: mask = np.zeros((height, width), dtype=np.uint8) for i in range(threshold_num): mask_i = cv2.inRange(img, color['low'][i], color['high'][i]) mask = cv2.bitwise_or(mask, mask_i) color_map = np.empty((height, width), dtype=np.uint8) color_map.fill(color['code']) color_map = cv2.bitwise_and(color_map, color_map, mask=mask) codes = codes + color_map self.color_codes = codes def infer(self, img, mode='hsv'): """Get the color names for the whole image""" self.img = img self.img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) if mode == 'munsell': # Turn the image from BGR to LAB img_lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) # Classify the color of 
each pixel self.infer_pixel_munsell(img_lab) else: # Turn the image from BGR to HSV img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # Classify the color of each pixel self.infer_pixel_hsv(img_hsv) def get_mask_color(self, mask_img): """Count the colors inside the mask""" color_codes = self.color_codes img = self.img img_rgb = self.img_rgb img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) masked = cv2.bitwise_and(color_codes, color_codes, mask=mask_img) unique, counts = np.unique(masked, return_counts=True) code_dict = dict(zip(unique, counts)) pixel_num = float(0) for code, num in code_dict.items(): if code != 0: pixel_num += num color_dict = {} rgb_dict = {} for code in code_dict: if code != 0: color_name = COLOR_CODE[code] color_dict[color_name] = round(code_dict[code] / pixel_num, 4) # The hue of some color inside the mask hsv_in_mask = img_hsv[np.where((mask_img > 0) & (color_codes == code))] major_color_hsv = np.mean(hsv_in_mask, axis=0).astype(np.uint8) fake_img = np.array([[major_color_hsv]], dtype=np.uint8) fake_img = cv2.cvtColor(fake_img, cv2.COLOR_HSV2RGB) major_color_rgb = fake_img[0][0].tolist() rgb_dict[color_name] = major_color_rgb return color_dict, rgb_dict def get_mask_size(self, contour_list): """Get the size of the mask: area, x_range, y_range""" total_area = 0 areas = [] # area of each sub-contour invalid_ids = [] # find the invalid sub-contours, store their IDs ctr_range = { 'x': [float('inf'), -1], 'y': [float('inf'), -1] } for index, contour in enumerate(contour_list): contour_area = cv2.contourArea(contour) if contour_area == 0: # Invalid sub-contour: area=0 invalid_ids.insert(0, index) continue areas.append(contour_area) total_area += contour_area left_x, left_y, rect_w, rect_h = cv2.boundingRect(contour) right_x = left_x + rect_w right_y = left_y + rect_h if left_x < ctr_range['x'][0]: ctr_range['x'][0] = left_x if right_x > ctr_range['x'][1]: ctr_range['x'][1] = right_x if left_y < ctr_range['y'][0]: ctr_range['y'][0] = left_y if right_y > ctr_range['y'][1]: ctr_range['y'][1] = right_y # Remove the invalid sub-contours for index in invalid_ids: del contour_list[index] return areas, { 'area': total_area, 'x_range': ctr_range['x'], 'y_range': ctr_range['y'] } def get_mask_position(self, contour_list, areas, total_area): """Get the position of the (centroid of the) mask""" ctr_centroid = { 'x': -1, 'y': -1, } for index, contour in enumerate(contour_list): contour_area = areas[index] moment = cv2.moments(contour) centroid_x = int(moment['m10']/moment['m00']) centroid_y = int(moment['m01']/moment['m00']) ctr_centroid['x'] += contour_area * centroid_x ctr_centroid['y'] += contour_area * centroid_y if total_area != 0: ctr_centroid['x'] = int(ctr_centroid['x'] / total_area) if total_area != 0: ctr_centroid['y'] = int(ctr_centroid['y'] / total_area) return ctr_centroid def clear_all(self): """Clear the temporary data""" self.color_codes = None self.img = None self.img_rgb = None def get_mask(self, mask_img, contour_list): """Get the colors inside the mask""" # mask_img: the binary masked image color, color_values = self.get_mask_color(mask_img) areas, size = self.get_mask_size(contour_list) position = self.get_mask_position(contour_list, areas, size['area']) return { 'color': color, 'color_rgb': color_values, 'size': size, 'position': position } def replace_color(self, img, target_color, bg_color): """Replace the target color with the background color""" target_code = self.color_list.get(target_color) if target_code is None: return False target_code = 
int(target_code['code']) img[np.where((self.color_codes == target_code))] = bg_color return True def fix_contours(self, bbox, attributes, contour_list): """Fix the contour for pure-color shapes""" colors = attributes.get("color") colors_rgb = attributes.get("color_rgb") if colors is None or colors_rgb is None: return contour_list img = self.img # Set up the mask image mask_img = np.zeros(img.shape[:2], dtype=np.uint8) bbox_mask = np.zeros(img.shape[:2]) bbox_poly = np.array([\ [bbox["x"], bbox["y"]],\ [bbox["x"] + bbox["width"], bbox["y"]],\ [bbox["x"] + bbox["width"], bbox["y"] + bbox["height"]],\ [bbox["x"], bbox["y"] + bbox["height"]],\ ]) cv2.fillPoly(bbox_mask, [bbox_poly], 255) # Find the contour of the pure-color block major_color_hsv = get_major_color(colors, colors_rgb, "hsv") major_color_upper = np.array(major_color_hsv, dtype=np.int32) + COLOR_RANGE_HSV major_color_lower = np.array(major_color_hsv, dtype=np.int32) - COLOR_RANGE_HSV img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) color_mask = cv2.inRange(img_hsv, major_color_lower, major_color_upper) mask_img[np.where((bbox_mask > 0) & (color_mask > 0))] = 255 mask_img = cv2.bilateralFilter(mask_img, 4, 50, 50) if TESTING["label"]["sign"]: rand_id = random.randint(0, 99) cv2.imwrite(TESTING['dir'] + '/mask_' + str(rand_id) + '.png', \ mask_img, \ [int(cv2.IMWRITE_PNG_COMPRESSION), 0]) contours, hier = cv2.findContours(mask_img, \ cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Remove the bad contours new_contour_list = [] if contours: max_contour = None max_contour_area = float('-inf') for contour in contours: area = get_contour_area(contour, "np") if area > max_contour_area: max_contour_area = area max_contour = contour new_contour_list.append(max_contour) return new_contour_list
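A hypothetical usage of ObjAttrs, assuming an image and a binary mask on disk; the file names are placeholders, and the findContours call assumes OpenCV 4 (OpenCV 3 returns three values):

# Hypothetical usage of ObjAttrs (file names are illustrative)
import cv2

detector = ObjAttrs()                      # loads the HSV thresholds
img = cv2.imread('scene.png')              # any BGR image
detector.infer(img, mode='hsv')            # compute per-pixel color codes
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
attrs = detector.get_mask(mask, list(contours))
print(attrs['color'], attrs['size'], attrs['position'])
detector.clear_all()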
// Copyright (c) OpenMMLab. All rights reserved.

#ifndef MMDEPLOY_CSRC_EXPERIMENTAL_EXECUTION_LET_VALUE_H_
#define MMDEPLOY_CSRC_EXPERIMENTAL_EXECUTION_LET_VALUE_H_

#include <optional>

#include "utility.h"

namespace mmdeploy {

namespace __let_value {

template <typename T>
using __decay_ref = std::decay_t<T>&;

template <typename Func, typename... As>
using __result_sender_t = __call_result_t<Func, __decay_ref<As>...>;

template <typename Func, typename Tuple>
struct __value_type {};

template <typename Func, typename... As>
struct __value_type<Func, std::tuple<As...>> {
  using type = __result_sender_t<Func, As...>;
};

template <typename Func, typename Tuple>
using __value_type_t = typename __value_type<Func, Tuple>::type;

template <typename CvrefSender, typename Receiver, typename Fun>
struct _Storage {
  using Sender = remove_cvref_t<CvrefSender>;
  using operation_t =
      connect_result_t<__value_type_t<Fun, completion_signatures_of_t<Sender>>, Receiver>;
  std::optional<completion_signatures_of_t<Sender>> args_;
  // workaround for MSVC v142 toolset, copy elision does not work here
  std::optional<__conv_proxy<operation_t>> proxy_;
};

template <typename CvrefSender, typename Receiver, typename Func>
struct _Operation {
  struct type;
};
template <typename CvrefSender, typename Receiver, typename Func>
using operation_t = typename _Operation<CvrefSender, remove_cvref_t<Receiver>, Func>::type;

template <typename CvrefSender, typename Receiver, typename Func>
struct _Receiver {
  struct type;
};
template <typename CvrefSender, typename Receiver, typename Func>
using receiver_t = typename _Receiver<CvrefSender, Receiver, Func>::type;

template <typename CvrefSender, typename Receiver, typename Func>
struct _Receiver<CvrefSender, Receiver, Func>::type {
  operation_t<CvrefSender, Receiver, Func>* op_state_;

  template <typename... As>
  friend void tag_invoke(set_value_t, type&& self, As&&... as) noexcept {
    auto* op_state = self.op_state_;
    // Store the values, build the secondary operation state from func_,
    // then start it.
    auto& args = op_state->storage_.args_.emplace((As &&) as...);
    op_state->storage_.proxy_.emplace([&] {
      return Connect(std::apply(std::move(op_state->func_), args),
                     std::move(op_state->receiver_));
    });
    Start(**op_state->storage_.proxy_);
  }
};

template <typename CvrefSender, typename Receiver, typename Func>
struct _Operation<CvrefSender, Receiver, Func>::type {
  using _receiver_t = receiver_t<CvrefSender, Receiver, Func>;

  friend void tag_invoke(start_t, type& self) noexcept { Start(self.op_state2_); }

  template <typename Receiver2>
  type(CvrefSender&& sender, Receiver2&& receiver, Func func)
      : op_state2_(Connect((CvrefSender &&) sender, _receiver_t{this})),
        receiver_((Receiver2 &&) receiver),
        func_(std::move(func)) {}

  connect_result_t<CvrefSender, _receiver_t> op_state2_;
  Receiver receiver_;
  Func func_;
  _Storage<CvrefSender, Receiver, Func> storage_;
};

template <typename Sender, typename Func>
struct _Sender {
  struct type;
};
template <typename Sender, typename Func>
using sender_t = typename _Sender<remove_cvref_t<Sender>, Func>::type;

template <typename Sender, typename Func>
struct _Sender<Sender, Func>::type {
  template <typename Self, typename Receiver>
  using _operation_t = operation_t<_copy_cvref_t<Self, Sender>, Receiver, Func>;

  using value_types =
      completion_signatures_of_t<__value_type_t<Func, completion_signatures_of_t<Sender>>>;

  template <typename Self, typename Receiver, _decays_to<Self, type, int> = 0>
  friend auto tag_invoke(connect_t, Self&& self, Receiver&& receiver)
      -> _operation_t<Self, Receiver> {
    return _operation_t<Self, Receiver>{((Self &&) self).sender_, (Receiver &&) receiver,
                                        ((Self &&) self).func_};
  }

  Sender sender_;
  Func func_;
};

using std::enable_if_t;

struct let_value_t {
  // 1) dispatch through the sender's completion scheduler when available
  template <typename Sender, typename Func,
            enable_if_t<_is_sender<Sender> &&
                            _tag_invocable_with_completion_scheduler<let_value_t, Sender, Func>,
                        int> = 0>
  auto operator()(Sender&& sender, Func func) const {
    auto scheduler = GetCompletionScheduler(sender);
    return tag_invoke(let_value_t{}, std::move(scheduler), (Sender &&) sender, std::move(func));
  }
  // 2) otherwise dispatch through a plain tag_invoke customization
  template <typename Sender, typename Func,
            enable_if_t<_is_sender<Sender> &&
                            !_tag_invocable_with_completion_scheduler<let_value_t, Sender, Func> &&
                            tag_invocable<let_value_t, Sender, Func>,
                        int> = 0>
  auto operator()(Sender&& sender, Func func) const {
    return tag_invoke(let_value_t{}, (Sender &&) sender, std::move(func));
  }
  // 3) fall back to the default sender defined above
  template <typename Sender, typename Func,
            enable_if_t<_is_sender<Sender> &&
                            !_tag_invocable_with_completion_scheduler<let_value_t, Sender, Func> &&
                            !tag_invocable<let_value_t, Sender, Func>,
                        int> = 0>
  sender_t<Sender, Func> operator()(Sender&& sender, Func func) const {
    return {(Sender &&) sender, std::move(func)};
  }
  template <typename Func>
  _BinderBack<let_value_t, Func> operator()(Func func) const {
    return {{}, {}, {std::move(func)}};
  }
};

}  // namespace __let_value

using __let_value::let_value_t;
inline constexpr let_value_t LetValue{};

}  // namespace mmdeploy

#endif  // MMDEPLOY_CSRC_EXPERIMENTAL_EXECUTION_LET_VALUE_H_
A falling U.S. currency is a worrisome indicator for the vitality of the overall economy. But some small businesses say now is a good time to push their products overseas. A weaker greenback can help U.S. exporters, says Larry Harding, founder and chief executive of High Street Partners Inc., an Annapolis, Md., firm that advises businesses on expanding globally. U.S. businesses that sell their products overseas can often tout lower prices if the U.S. dollar is weaker than the overseas currency.
# Read a decimal number and a target base, then print how many digits
# the number has when written in that base.
dec_num, base_num = map(int, input().split())

digits = ""
while dec_num > 0:
    digits += str(dec_num % base_num)  # least-significant digit first
    dec_num //= base_num

if digits == "":   # zero still has one digit
    digits = "0"

digits = digits[::-1]  # reverse into the conventional order
print(len(digits))
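As a worked example, input 255 2 yields the eight binary digits 11111111, so the program prints 8; input 0 7 hits the zero case, which yields the single digit 0, so the program prints 1.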
# t is the number of whole z-sized units obtainable from x + y pooled
# together; f is the minimum amount that must be transferred between the
# two piles so that they actually yield t units separately.
x, y, z = map(int, input().split())

t = (x + y) // z
x1, y1 = x // z, y // z

if x1 + y1 < t:
    # Pooling gains one extra unit; topping up the pile with the larger
    # remainder to the next multiple of z is the cheaper transfer.
    xa, ya = x % z, y % z
    f = z - max(xa, ya)
else:
    f = 0

print(t, f)
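As a worked example, for input 20 10 25 the pooled total gives t = 30 // 25 = 1, while the separate piles give 0 + 0 = 0 units; the remainders are 20 and 10, so topping up the larger one costs f = 25 - 20 = 5, and the program prints 1 5.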
OBJECTIVE To compare the uptake of four contrast agents, 99mTc-RGD-4CK, 99mTc-N(NOET), 99mTc-MIBI, and 18F-FDG, in BALB/c nude mice bearing human non-small cell lung cancer NCI-H358, and to evaluate their diagnostic value in low-metabolism lung cancer. METHODS Human bronchioloalveolar carcinoma NCI-H358 cells were subcutaneously inoculated into BALB/c nude mice to establish mouse models bearing human lung cancer. Twenty tumor-bearing nude mice were injected with one of the four contrast agents (5 mice per group). SPECT imaging and biodistribution studies of the four tracers were performed in the tumor-bearing nude mice, and the tumor-to-non-tumor (T/NT) ratios of the tracers were compared. RESULTS Semi-quantification of the planar images and assessment of biodistribution showed that the tumor-to-contralateral-muscle activity ratios (T/NT) differed significantly between each pair of the four tracer groups (P < 0.001), with the highest T/NT ratio in the 99mTc-RGD-4CK group. CONCLUSIONS NCI-H358 tumors show higher uptake of 99mTc-RGD-4CK than of 18F-FDG. This suggests that, in diagnosing well-differentiated lung cancers such as bronchioloalveolar carcinoma, 99mTc-RGD-4CK may be more sensitive than 18F-FDG and may become a promising contrast agent for tumor imaging.
# -*- coding: utf-8 -*-
"""
Hardcoded movements to display emotions
"""

__author__ = "<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>"
__copyright__ = "Copyright 2018, DEVINE Project"
__credits__ = ["<NAME>", "<NAME>", "<NAME>"]
__license__ = "BSD"
__version__ = "1.0.0"
__email__ = "<EMAIL>"
__status__ = "Production"

import rospy
from std_msgs.msg import Float64MultiArray
from std_msgs.msg import Bool
from devine_config import topicname

TOPIC_GUESSWHAT_CONFIDENCE = topicname('objects_confidence')
TOPIC_GUESSWHAT_SUCCEED = topicname('object_guess_success')
EMOTION_THRESHOLD = 0.75


class Movement(object):
    """ According to game state, emote emotion with complex movement """

    def __init__(self, controller):
        self.controller = controller
        self.confidence_max = None
        self.is_guesswhat_succeed = None
        # TODO verify if mutex needed
        rospy.Subscriber(TOPIC_GUESSWHAT_CONFIDENCE, Float64MultiArray, self.confidence_callback)
        rospy.Subscriber(TOPIC_GUESSWHAT_SUCCEED, Bool, self.is_guesswhat_succeed_callback)

    def confidence_callback(self, msg):
        """ GuessWhat confidence level """
        rospy.loginfo(msg.data)
        if msg.data:
            self.confidence_max = max(msg.data)

    def is_guesswhat_succeed_callback(self, msg):
        """ GuessWhat succeeded or not """
        rospy.loginfo(msg.data)
        if msg.data is not None:
            self.is_guesswhat_succeed = msg.data
            self.choose_move()

    def choose_move(self):
        """ Select the emotion to emote """
        if not self.is_guesswhat_succeed and self.confidence_max >= EMOTION_THRESHOLD:
            rospy.loginfo('Sad')
            self.head_down_shoulder_in()
        elif not self.is_guesswhat_succeed and self.confidence_max < EMOTION_THRESHOLD:
            rospy.loginfo('Disappointed')
            self.head_shake_hand_up()
        elif self.is_guesswhat_succeed and self.confidence_max < EMOTION_THRESHOLD:
            rospy.loginfo('Satisfied')
            self.head_up_arm_up()
        elif self.is_guesswhat_succeed and self.confidence_max >= EMOTION_THRESHOLD:
            rospy.loginfo('Happy')
            self.dab('left')
            rospy.sleep(3)
            self.dab('right')
            rospy.sleep(3)
        # Return to the initial pose after the emotion
        self.controller.move_init(5)

    def head_down_shoulder_in(self):
        """ Sad emotion movement """
        right_joints_position = ui_to_traj([-0.22, 0.47, 0.18, -0.4])
        left_joints_position = ui_to_traj([-0.70, -0.62, -0.31, -0.67])
        head_joints_position = [-0.15, 0.79]
        time = 4
        self.controller.move({
            'head': head_joints_position,
            'arm_left': left_joints_position,
            'arm_right': right_joints_position
        }, time)
        rospy.sleep(1)
        time = 2
        left_joints_position = ui_to_traj([-0.35, -0.62, -0.31, -0.67])
        self.controller.move({'arm_left': left_joints_position}, time)
        rospy.sleep(0.1)
        left_joints_position = ui_to_traj([-0.70, -0.62, -0.31, -0.67])
        self.controller.move({'arm_left': left_joints_position}, time)
        rospy.sleep(0.1)
        left_joints_position = ui_to_traj([-0.35, -0.62, -0.31, -0.67])
        self.controller.move({'arm_left': left_joints_position}, time)
        rospy.sleep(1)

    def head_shake_hand_up(self):
        """ Disappointed emotion movement """
        right_joints_position = ui_to_traj([-0.75, -0.31, 0.31, -1.07])
        left_joints_position = ui_to_traj([-0.75, 0.31, -0.31, -1.07])
        head_joints_position = [0.4, 0.47]
        time = 3
        self.controller.move({
            'head': head_joints_position,
            'arm_left': left_joints_position,
            'arm_right': right_joints_position
        }, time)
        time = 2
        head_joints_position = [-0.4, 0.47]
        self.controller.move({'head': head_joints_position}, time)
        rospy.sleep(0.1)
        head_joints_position = [0.4, 0.47]
        self.controller.move({'head': head_joints_position}, time)
        rospy.sleep(0.1)
        head_joints_position = [-0.4, 0.47]
        self.controller.move({'head': head_joints_position}, time)
        rospy.sleep(0.1)

    def head_up_arm_up(self):
        """ Satisfied emotion movement """
        right_joints_position = ui_to_traj([-0.34, -0.22, 0, -2.48])
        left_joints_position = ui_to_traj([-0.34, 0.22, 0, -2.48])
        head_joints_position = [-0.5, -0.17]
        time = 5
        self.controller.move({
            'head': head_joints_position,
            'arm_left': left_joints_position,
            'arm_right': right_joints_position
        }, time)
        time = 2
        head_joints_position = [0.5, -0.17]
        self.controller.move({'head': head_joints_position}, time)
        rospy.sleep(0.5)
        head_joints_position = [-0.5, -0.17]
        self.controller.move({'head': head_joints_position}, time)
        rospy.sleep(0.5)
        head_joints_position = [0.5, -0.17]
        self.controller.move({'head': head_joints_position}, time)
        rospy.sleep(0.5)

    def dab(self, direction):
        """ Happy emotion movement """
        if direction == 'right':
            left_joints_position = [1.57, -1.87, 0, -0.16]
            right_joints_position = [0, -2, -1.57, 1.22]
            head_joints_position = [-0.31, 0.79]
        else:  # 'left' (also the fallback, so the positions are always bound)
            right_joints_position = [-1.57, -1.87, 0, -0.16]
            left_joints_position = [0, -2.0, 1.57, 1.22]
            head_joints_position = [0.31, 0.79]
        time = 3
        self.controller.move({
            'head': head_joints_position,
            'arm_left': left_joints_position,
            'arm_right': right_joints_position
        }, time)


def ui_to_traj(joints_position):
    """
    Convert position from rqt_joint_trajectory_controller UI to JointTrajectoryController
    In:  UI joints position [L_elbow_tilt_joint, L_shoulder_pan_joint,
         L_shoulder_roll_joint, L_shoulder_tilt_joint]
    Out: Trajectory client [L_shoulder_pan_joint, L_shoulder_tilt_joint,
         L_shoulder_roll_joint, L_elbow_tilt_joint]
    """
    return [joints_position[1], joints_position[3], joints_position[2], joints_position[0]]
Harnessing the Anti-Nociceptive Potential of NK2 and NK3 Ligands in the Design of New Multifunctional µ/δ-Opioid Agonist-Neurokinin Antagonist Peptidomimetics

Opioid agonists are well-established analgesics, widely prescribed for acute and chronic pain. However, their efficacy comes at the price of severe side effects that are inherently linked to prolonged use. To address these liabilities, designed multiple ligands (DMLs) offer a promising strategy by co-targeting opioid and non-opioid signaling pathways involved in nociception. Despite being intimately linked to the Substance P (SP)/neurokinin 1 (NK1) system, which is broadly examined for pain treatment, the neurokinin receptors NK2 and NK3 have so far been neglected in such DMLs. Herein, a series of newly designed opioid agonist-NK2 or -NK3 antagonists is reported. A selection of reported peptidic, pseudo-peptidic, and non-peptidic neurokinin NK2 and NK3 ligands were covalently linked to the peptidic µ-opioid selective pharmacophore Dmt-DALDA (H-Dmt-D-Arg-Phe-Lys-NH2) and the dual µ/δ opioid agonist H-Dmt-D-Arg-Aba-β-Ala-NH2 (KGOP01). Opioid binding assays unequivocally demonstrated that only hybrids SBL-OPNK-5, SBL-OPNK-7, and SBL-OPNK-9, bearing the KGOP01 scaffold, conserved nanomolar-range µ-opioid receptor (MOR) affinity, with slightly reduced affinity for the δ-opioid receptor (DOR). Moreover, NK binding experiments proved that compounds SBL-OPNK-5, SBL-OPNK-7, and SBL-OPNK-9 exhibit (sub)nanomolar binding affinity for NK2 and NK3, opening promising opportunities for the design of next-generation opioid hybrids.

Introduction

From the ancient use of the plant alkaloid morphine to the era of modern medicine, opioid receptor-targeting analgesics have occupied a prominent place in the management of acute and chronic pain. Competing with endogenous neuropeptides (e.g., β-endorphin, enkephalins, and dynorphins), these small-molecule drugs act primarily by activating the µ-opioid receptor (MOR) and, to a lesser extent, the δ-opioid (DOR), κ-opioid (KOR), and nociceptin (NOP) receptors. Aside from their pivotal role in nociception, these receptors modulate other vital biological processes such as respiration, gastrointestinal transit, stress responses, and neuroendocrine and immune functions. This variety of physiological processes, combined with the multitude of neurotransmitters and associated receptors involved in pain pathways, explains the challenges that must be overcome when designing new analgesic drugs. Decades of research have revealed that, despite their undeniable efficiency in pain control, prolonged use of opioids is accompanied by severe side effects. A series of multifunctional opioid agonist-neurokinin NK2 or NK3 antagonist ligands was therefore designed, synthesized, and evaluated in vitro, in light of exploring new therapeutic pathways in pain and related disorders. Reported for their high MOR binding affinity, the two putative opioid agonist pharmacophores Dmt-DALDA and KGOP01 were covalently linked to a selection of peptidic, pseudo-peptidic, and non-peptidic NK2 and NK3 pharmacophores, SBL-OPNK-1, SBL-OPNK-2, and SBL-OPNK-3 (Figure 1B). As described below, the design of these neurokinin-targeting moieties was inspired by the structures of reported and approved NK2 and NK3 antagonists exhibiting good biological activity. The first-generation peptidic NK2 ligand MEN 10,376 (Figure 1A) displayed good antagonistic activity at the NK2 receptor (pA2 = 8.08 ± 0.1 in the endothelium-deprived rabbit pulmonary artery (RPA) against neurokinin A as an agonist), hence offering an adequate peptide-based candidate for this study.
Exhibiting subnanomolar and selective NK2 binding affinity, and having been investigated in clinical phases for the treatment of irritable bowel syndrome, Ibodutant and one of its parent compounds, MEN 14,268, were selected as starting scaffolds for peptidomimetic NK2 ligand design (Figure 1A). Finally, as a major representative of the NK3 antagonist family, and as a compound examined in clinical trials for the treatment of CNS disorders, Talnetant exhibits high affinity for human NK3 and long-lasting in vivo activity, justifying its selection as a starting scaffold for non-peptidic NK3 ligand design. As the first challenge in multifunctional analgesic ligand design is to maintain affinity for both targets as well as agonist efficacy at the opioid receptors, this initial report aimed to investigate the binding affinity and opioid receptor activation of the newly designed opioid-NK2 and -NK3 hybrids.

Chemistry

The structures of the targeted opioid agonist-neurokinin NK2 or NK3 antagonist ligands are presented in Figure 2.

Calcium Mobilization Assay

In the calcium mobilization assay, all compounds were assessed under the same experimental conditions, and their effects were compared to those of the reference compounds dermorphin, DPDPE, and dynorphin A for MOR, DOR, and KOR, respectively (Table 2 and Figures 3 and 4).
At MOR, Dmt-DALDA mimicked the stimulatory effect of dermorphin in the calcium-release assay, and KGOP01 showed full efficacy with a potency about 8-fold higher than the standard compound (α = 1.04, EC50 = 0.54). Hybrids SBL-OPNK-5 and SBL-OPNK-7 showed slightly reduced potencies compared to the reference compound dermorphin (2- and 3-fold, respectively) and to the opioid parent pharmacophore KGOP01 (18- and 24-fold, respectively). Hybrid SBL-OPNK-9 moderately stimulated calcium release, while the remaining compounds showed strong (>200-fold) reductions in potency compared to dermorphin (Table 2 and Figure 3A). With regard to DOR agonism, compounds KGOP01, SBL-OPNK-5, SBL-OPNK-7, and SBL-OPNK-9 evoked calcium signaling responses with a stimulatory effect comparable to that of the reference DPDPE, but with 2-3-fold decreased potencies (Table 2 and Figure 3B). As for potency at KOR, compounds Dmt-DALDA and SBL-OPNK-5 elicited a weak stimulatory response at micromolar concentrations. Compound SBL-OPNK-7 was completely inactive, while the other tested compounds weakly stimulated calcium release only at the highest dose (10 µM), providing incomplete concentration-response curves (Table 2 and Figure 3C).

In the NK3 radioligand binding assay, the parent NK3 pharmacophore SBL-OPNK-3 showed slightly reduced affinity compared to the reference compound, while the corresponding hybrid ligand SBL-OPNK-9 presented a 1.5-fold increase in NK3 affinity and potency compared to the reference compound SB 222200.

Discussion

To address the side effects associated with the long-term use of opioid agonists, the design of multifunctional ligands has emerged as a valuable strategy for the treatment of chronic pain. More precisely, µ-opioid pharmacophores were covalently combined with a series of antagonists targeting G protein-coupled receptors (here, the NK2 and NK3 receptors) known to act cooperatively with the opioid system, or more broadly described as nociception modulators. Following this approach, we previously reported the design of opioid agonist-NK1 antagonist hybrids built on the putative opioid pharmacophore KGOP01, H-Dmt-D-Arg-Aba-β-Ala-NH2. As a second benchmark MOR ligand, the Dmt-DALDA pharmacophore was included, since this reference compound has been described as a highly µ-selective agonist with a favorable amphipathic profile for BBB crossing and improved metabolic stability. The design of the constrained analogue KGOP01 addressed the beneficial effect of concomitant DOR activation for application in chronic pain treatment. Because the opioid pharmacophores are primarily recognized through their N-terminus, they were covalently linked to the selected NK2 and NK3 ligands through their C-terminus. The NK2 and NK3 ligands were selected based on their structure, featuring either a peptidic or a non-peptidic scaffold, and on good biological affinity, selectivity, and activity. Such a differentiated nature of the NK pharmacophores could eventually lead to distinct pharmacokinetic properties.
The Menarini laboratory was a pioneer in the development of NK receptor ligands, and initial efforts were logically directed toward sequential modifications of the endogenous neurokinins SP, NKA, and NKB. In the first generation of patented NK2 receptor antagonists, the peptidic MEN 10,376 resulted from a D-tryptophan insertion in the truncated endogenous NKA sequence and exhibited selectivity and nanomolar affinity toward NK2. Given this selectivity profile, it constituted an adequate peptidic neurokinin pharmacophore in the context of opioid-NK ligands. To address metabolic liabilities, peptide scaffolds were then gradually modified into pseudo-peptidic drugs, and a new generation of NK antagonists was disclosed. Among those, MEN 15,596, or Ibodutant, stood out as an NK2 antagonist with great potential for the treatment of abdominal pain, and it reached clinical trials for IBS (irritable bowel syndrome) therapy. Its parent compound, MEN 14,268, which differs mainly in the C-terminal basic appendage, displayed slightly reduced antagonism and NK2 affinity but higher apparent permeability. Considering synthetic feasibility, MEN 14,268 appeared more attractive. For the purpose of the current study, MEN 14,268 was further simplified by replacing the benzothiophene unit with a simple benzylamine group, leading to the pseudo-peptidic NK2 antagonist pharmacophore SBL-OPNK-2 (Figure 1). Finally, as a representative non-peptidic scaffold, Talnetant was selected, as it combines good affinity (nanomolar range), selectivity, and antagonist activity at the NK3 receptor. This small molecule, bearing a central quinoline core, exhibited great potential in inhibiting the nociception associated with intestinal distension. To avoid potential steric hindrance between the opioid unit and the NK3 pharmacophore SBL-OPNK-3, a short ether linker was inserted in place of the hydroxyl group on the quinoline ring (Figure 1 and Scheme 2). Synthesis of all pharmacophores and hybrids proceeded smoothly following adapted literature procedures, providing adequate amounts of material for biological evaluation.

In vitro binding studies demonstrated that conjugation of the NK2 and NK3 pharmacophores generally preserves the selectivity toward MOR and DOR originally displayed by the parent opioid pharmacophores, while the hybrids remain much less active at KOR. Like their opioid parent Dmt-DALDA, the three hybrids SBL-OPNK-4, SBL-OPNK-6, and SBL-OPNK-8 all exhibited selectivity toward MOR, although, unfortunately, with reduced affinity and potency. In our hands, fusion of the Dmt-DALDA sequence with additional pharmacophores at the opioid's C-terminal end has generally led to such a decrease in affinity and potency (unpublished results). On the other hand, the KGOP01-based hybrids SBL-OPNK-5, SBL-OPNK-7, and SBL-OPNK-9 all displayed good affinity and activity toward both MOR and DOR, despite slightly reduced affinity and potency, results that can be correlated to the fusion of an additional pharmacophore. It could therefore be suggested that the impact of NK2 and NK3 pharmacophores on opioid activity strongly depends on the nature of the opioid pharmacophore. The data suggest that anchoring a C-terminal appendage to the constrained structure of KGOP01 is better tolerated, in line with previous studies.
In light of the promising opioid data, the KGOP01-based hybrids were selected for further biological evaluation, and NK2 and NK3 receptor binding assays were performed. The NK pharmacophores SBL-OPNK-1 and SBL-OPNK-3 both displayed affinity and potency similar to the values reported in the literature, with the important side note that compound SBL-OPNK-3 represents an altered form of the literature compound (see Figure 1). Compound SBL-OPNK-2, however, failed to reach sufficient effects at the highest test concentrations. This result can be related to the development history of the FDA-approved Ibodutant and of MEN 14,268, whose 'N-terminal' fragment was subjected to extensive structural optimization, underlining its critical impact on neurokinin efficacy. Unlike the original benzothiophene moiety, the N-acetylbenzylamine group thus appeared detrimental to the activity and potency of SBL-OPNK-2 at NK2. It was therefore highly satisfying to note that this negative effect disappeared for the corresponding NK2 hybrid SBL-OPNK-7, which displayed the best biological profile, with subnanomolar affinity and activity. The latter compound even outperformed hybrid SBL-OPNK-5 in terms of binding. It might be hypothesized that the KGOP01 pharmacophore makes additional key stabilizing contacts with the neurokinin receptor binding site, a feature previously noted for the opioid-neurokinin 1 receptor hybrid SBCHM01. Structural analysis might be implemented in future studies to confirm this, but the observation of enhanced, (sub)nanomolar IC50 and Ki values for NK2 hybrid SBL-OPNK-5 and NK3 hybrid SBL-OPNK-9 also supports this hypothesis. It can be noted that the increase in affinity was slightly less pronounced for SBL-OPNK-9, suggesting that the binding ability of the more compact NK3 moiety is less influenced by the opioid unit.

In this study, we disclosed, for the first time, a series of novel opioid agonist-non-opioid NK2 and NK3 receptor antagonist hybrid compounds as an unprecedented yet promising approach toward innovative pain therapeutics. The NK2 and NK3 receptors have been overlooked so far and might provide beneficial effects compared to opioid-NK1 receptor ligands. As for any new multifunctional drug, the challenge of conserving affinity and activity at two or more targets had to be resolved first, and the hybrids disclosed herein fulfill this criterion. The covalent combination of neurokinin pharmacophores with the opioid agonist moiety globally improves neurokinin binding affinity, while maintaining low nanomolar µ- and δ-opioid affinity and functional activity. These highly promising results will lead to further in vivo evaluation, which will be reported in due course.

Chemistry

Peptide Synthesis

Peptide compounds KGOP01, SBL-OPNK-4, and SBL-OPNK-5 were assembled on Rink amide resin (ChemImpex, Wood Dale, IL, USA; loading 0.47 mmol/g) by iterative couplings of the required Fmoc-protected residues. Canonical L-amino acids (four equivalents with respect to the resin) were coupled in 30 min using four equivalents of HBTU/DIPEA in DMF. D-amino acids (two equivalents with respect to the resin) were coupled for 1 h using two equivalents of HBTU/DIPEA in DMF. Fmoc-Aba-β-Ala-OH (one equivalent), previously prepared according to the reported procedure, was coupled overnight using HBTU/DIPEA (one equivalent).
Ultimately, 1.5 equivalents of Boc-Dmt-OH (commercially available) were coupled at the N-terminal position using DIC/Oxyma (1.5/3 equivalents) with overnight stirring. Full cleavage from the resin was carried out under standard conditions using a cleavage cocktail of TFA/TIS/H2O (95:2.5:2.5, v/v/v). The fully protected opioid pharmacophores Boc-Dmt-D-Arg(Pbf)-Phe-Lys(Boc)-OH (1) and Boc-Dmt-D-Arg(Pbf)-Aba-β-Ala-OH (2) were assembled on 2-chlorotrityl resin following the standard conditions described above for the Rink amide resin procedures. Cleavage from the resin was carried out using HFIP/DCM (1:4, v/v) in order to preserve the protecting groups for further coupling (detailed data are available in the Supplementary Materials).

Drugs

Cell culture media and fetal bovine serum were obtained from Euroclone (Pero, Italy), and supplements were purchased from Invitrogen (Paisley, UK). Standard ligands (dermorphin, dynorphin A, and DPDPE) were from Sigma-Aldrich Chemical Co. Membrane preparations were incubated at 25 °C for 120 min with an appropriate concentration of a tested compound in the presence of 0.5 nM radioligand in a total volume of 0.5 mL of Tris/HCl (50 mM, pH 7.4) containing bovine serum albumin (BSA, 1 mg/mL), bacitracin (50 mg/L), bestatin (30 µM), and captopril (10 µM). Non-specific binding was determined in the presence of 10 µM naloxone. Incubations were terminated by rapid filtration through Whatman GF/B (Brentford, UK) glass fiber strips, presoaked for 2 h in 0.5% polyethyleneimine, using a Millipore sampling manifold (Billerica, MA, USA). The filters were washed three times with 4 mL of ice-cold Tris buffer solution. Bound radioactivity was measured in a Packard Tri-Carb 2100 TR liquid scintillation counter (Ramsey, MN, USA) after overnight extraction of the filters in 4 mL of Perkin Elmer Ultima Gold scintillation fluid (Wellesley, MA, USA). Three independent experiments were carried out for each assay, in duplicate. The data were analyzed with the nonlinear least-squares regression program GraphPad Prism 6.0 (GraphPad Software Inc., San Diego, CA, USA). The IC50 values were determined from the logarithmic concentration-displacement curves, and the inhibitory constants (Ki) were calculated according to the equation of Cheng et al.

Calcium Mobilization Assay

CHO cells stably co-expressing the human recombinant MOR or KOR opioid receptors together with the C-terminally modified Gαqi5, and CHO cells co-expressing the human recombinant DOR opioid receptor and the GαqG66Di5 chimeric protein, were a generous gift from Prof. Girolamo Calo', Ferrara University, Italy, and have been used successfully in our laboratory. The tested compounds were dissolved in 5% DMSO in bi-distilled water to a final concentration of 1 mM; successive dilutions were made in HBSS/HEPES (20 mM) buffer (containing 0.005% BSA fraction V). For the experiment, cells were seeded at a density of 50,000 cells/well into 96-well black, clear-bottom plates. After 24 h of incubation, the cells were treated with a loading solution of culture medium supplemented with 2.5 mM probenecid, 3 µM of the calcium-sensitive fluorescent dye Fluo-4 AM, and 0.01% pluronic acid for 30 min at 37 °C. The loading solution was then aspirated, and 100 µL/well of assay buffer (Hank's Balanced Salt Solution (HBSS) supplemented with 20 mM HEPES, 2.5 mM probenecid, and 500 µM Brilliant Black) was added.
After placing both plates (cell culture and compound plate) into the FlexStation III plate reader, on-line additions were carried out in a volume of 50 µL/well, and fluorescence changes were measured at 37 °C. Agonist potencies are given as EC50, representing the molar concentration of an agonist that produces 50% of the maximal possible effect. Concentration-response curves were fitted with the four-parameter logistic nonlinear regression model

E = baseline + (Emax - baseline) * X^n / (X^n + EC50^n),

where X is the agonist concentration and n is the Hill coefficient. Ligand efficacy was expressed as intrinsic activity (α), calculated as the ratio of the Emax of the ligand to the Emax of the standard agonist. At least five independent experiments were carried out for each assay, in duplicate. Curve fittings were performed using GraphPad Prism 6.0 (GraphPad Software Inc., San Diego, CA, USA). Data were statistically analyzed with one-way ANOVA followed by Dunnett's test for multiple comparisons; p values less than 0.05 were considered significant.

NK2 and NK3 Binding Assays

Binding affinities for the neurokinin receptors NK2 and NK3 were determined through Eurofins' service according to standard procedures, in which affinities were measured by displacing radiolabelled NKA and radiolabelled SR 142801, respectively (Table 4). The IC50 values (concentration causing half-maximal inhibition of control specific binding) and Hill coefficients (nH) were determined by nonlinear regression analysis of the competition curves generated with mean replicate values using Hill equation curve fitting,

Y = D + (A - D) / (1 + (C/C50)^nH),

where Y = specific binding; A = left asymptote of the curve; D = right asymptote of the curve; C = compound concentration; C50 = IC50; and nH = slope factor. This analysis was performed using software developed at Cerep (Hill software) and validated by comparison with data generated by the commercial software SigmaPlot® 4.0 for Windows® (© 2021 by SPSS Inc., Chicago, IL, USA). The inhibition constants (Ki) were calculated using the Cheng-Prusoff equation,

Ki = IC50 / (1 + L/KD),

where L = concentration of radioligand in the assay and KD = affinity of the radioligand for the receptor. A Scatchard plot was used to determine the KD.

Supplementary Materials: The following are available online. Scheme S1: Solid-phase peptide and peptidomimetic synthesis; Scheme S2: Solution-phase peptide and peptidomimetic synthetic pathways; Scheme S3: NK3 pharmacophore and hybrid synthetic pathway; Figure S1: Correlation between radioligand binding and calcium mobilization assays for MOR and DOR.
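A minimal sketch of the curve analysis described above: fitting a competition curve with the Hill equation and converting the fitted IC50 into Ki via Cheng-Prusoff. All numbers are illustrative, not values from the study:

import numpy as np
from scipy.optimize import curve_fit

def hill(C, A, D, C50, nH):
    """Y = D + (A - D) / (1 + (C/C50)**nH); A and D are the asymptotes."""
    return D + (A - D) / (1 + (C / C50) ** nH)

conc = np.logspace(-11, -5, 7)            # competitor concentration (M)
binding = hill(conc, 100, 2, 3e-9, 1.0)   # synthetic "specific binding" (%)
binding += np.random.normal(0, 2, conc.size)

popt, _ = curve_fit(hill, conc, binding, p0=[100, 0, 1e-8, 1.0], maxfev=10000)
ic50 = popt[2]

L, Kd = 0.5e-9, 1.0e-9                    # radioligand conc. and affinity (M)
ki = ic50 / (1 + L / Kd)                  # Cheng-Prusoff conversion
print("IC50 = {:.2e} M, Ki = {:.2e} M".format(ic50, ki))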
// app/src/main/java/com/ilifesmart/activity/DevicesInfoActivity.java
package com.ilifesmart.activity;

import android.app.AlertDialog;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.support.annotation.NonNull;
import android.util.Log;
import android.widget.Button;

import com.ilifesmart.App;
import com.ilifesmart.androiddemo.R;
import com.ilifesmart.interfaces.ILocationChanged;
import com.ilifesmart.interfaces.INetworkAccessableCB;
import com.ilifesmart.util.NetworkUtils;
import com.ilifesmart.util.Utils;

import butterknife.BindView;
import butterknife.ButterKnife;
import butterknife.OnClick;

// Displays device information: network state, location, clipboard and OS details.
public class DevicesInfoActivity extends BaseActivity {

	@BindView(R.id.network) Button mNetwork;
	@BindView(R.id.latitude) Button mLatitude;

	private NetworkChangedReceiver mNetworkChangedReceiver;

	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.activity_devices_info);
		ButterKnife.bind(this);

		// Listen for connectivity changes for the lifetime of the activity.
		mNetworkChangedReceiver = new NetworkChangedReceiver();
		registerReceiver(mNetworkChangedReceiver, new IntentFilter("android.net.conn.CONNECTIVITY_CHANGE"));
	}

	@Override
	protected void onDestroy() {
		super.onDestroy();
		unregisterReceiver(mNetworkChangedReceiver);
	}

	// Renders the current network state on the UI thread.
	private void getNetworkInfo(final boolean isOnline) {
		App.postRunnable(new Runnable() {
			@Override
			public void run() {
				StringBuilder builder = new StringBuilder();
				builder.append("Network status: ").append(isOnline ? "available" : "unavailable").append("; ")
						.append("Network type: ").append(NetworkUtils.getNetworkType(DevicesInfoActivity.this)).append("; ");
				if (NetworkUtils.isWifiConnected(DevicesInfoActivity.this)) {
					builder.append("Network name: ").append(NetworkUtils.getNetworkName(DevicesInfoActivity.this));
				}
				mNetwork.setText(builder.toString());
			}
		});
	}

	@OnClick(R.id.clipboard)
	public void onClipboardClicked() {
		popupDialog("Clipboard contents", Utils.getClipboardContent());
	}

	@Override
	protected void onResume() {
		super.onResume();
		// Location updates require runtime permissions; request them if not yet granted.
		String[] permissions = new String[]{
				Utils.PERMISSIONS_WRITE_EXTERNAL_STORAGE,
				Utils.PERMISSIONS_READ_PHONE_STATE,
				Utils.PERMISSIONS_ACCESS_FINE_LOCATION
		};
		if (Utils.checkPermissionGranted(permissions)) {
			startLocation();
		} else {
			Utils.requestPermissions(this, permissions, true, Utils.PERMISSION_CODE_ACCESS_FINE_LOCATION);
		}
	}

	@Override
	protected void onPause() {
		super.onPause();
		stopLocation();
	}

	private void startLocation() {
		App.startLocation(new ILocationChanged() {
			@Override
			public void onLocationChanged(double latitude, double longitude) {
				mLatitude.setText("Lat:" + latitude + ",Lon:" + longitude);
			}

			@Override
			public void onLocationError(int errCode, String errInfo) {
				Log.d("ILocationChanged", "onLocationError: errCode " + errCode);
				Log.d("ILocationChanged", "onLocationError: errInfo " + errInfo);
			}
		});
	}

	private void stopLocation() {
		App.stopLocation();
	}

	@OnClick(R.id.contacts)
	public void onContactsClicked() {
		Utils.startActivity(this, ContactsActivity.class);
	}

	@OnClick(R.id.os)
	public void onOsInfo() {
		popupDialog("System information", Utils.getDevInfo());
	}

	// Checks connectivity off the main thread whenever it changes, then updates the UI.
	private class NetworkChangedReceiver extends BroadcastReceiver {
		@Override
		public void onReceive(final Context context, Intent intent) {
			new Thread(new Runnable() {
				@Override
				public void run() {
					NetworkUtils.isNetworkConnected(context, new INetworkAccessableCB() {
						@Override
						public void isNetWorkOnline(boolean isOnline) {
							DevicesInfoActivity.this.getNetworkInfo(isOnline);
						}
					});
				}
			}).start();
		}
	}

	@Override
	public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
		super.onRequestPermissionsResult(requestCode, permissions, grantResults);
		if (requestCode == Utils.PERMISSION_CODE_ACCESS_FINE_LOCATION) {
			boolean isAllGranted = true;
			for (int result : grantResults) {
				if (result != PackageManager.PERMISSION_GRANTED) {
					isAllGranted = false;
					break;
				}
			}
			if (!isAllGranted) {
				alertPermissionRequest(permissions);
			} else {
				startLocation();
			}
		}
	}
}
// convertToTweet merges a tweet from the Twitter API and its "includes"
// metadata into a Tweet object.
func convertToTweet(tweet tweet, incl includes, matches *[]StreamRule) Tweet {
	// Resolve the author from the expanded user objects.
	var author Author
	for _, user := range incl.Users {
		if user.ID == tweet.AuthorID {
			author.Name = user.Name
			author.Handle = user.Handle
			author.Picture = user.Picture
			author.Verified = user.Verified
			author.ID = user.ID
			break
		}
	}

	// Collect attached images and video preview data via media keys.
	var images []string
	var hasVideo bool
	var videoPreview string
	for _, mediaKey := range tweet.Attachments.MediaKeys {
		for _, mediaItem := range incl.Media {
			if mediaKey == mediaItem.MediaKey {
				switch mediaItem.Type {
				case "photo":
					images = append(images, mediaItem.URL)
				case "video":
					hasVideo = true
					videoPreview = mediaItem.VideoPreviewImageURL
				}
			}
		}
	}

	// Attach poll options, if the tweet references a poll.
	var pollOptions []PollOption
	for _, pollID := range tweet.Attachments.PollIDs {
		for _, poll := range incl.Polls {
			if poll.ID == pollID {
				pollOptions = poll.Options
			}
		}
	}

	// Record the IDs of the stream rules that matched this tweet.
	var rules []string
	if matches != nil {
		rules = make([]string, len(*matches))
		for i, rule := range *matches {
			rules[i] = rule.ID
		}
	}

	return Tweet{
		ID:              tweet.ID,
		Text:            tweet.Text,
		Author:          author,
		Created:         tweet.CreatedAt,
		Images:          images,
		Retweets:        tweet.Metrics.Retweets,
		Replies:         tweet.Metrics.Replies,
		Likes:           tweet.Metrics.Likes,
		Quotes:          tweet.Metrics.Quotes,
		RuleIDs:         rules,
		HasVideo:        hasVideo,
		VideoPreviewURL: videoPreview,
		Sensitive:       tweet.Sensitive,
		Poll:            pollOptions,
	}
}
The technological exposure of populations: characterisation and future reduction

Highlights

- Individual vulnerability to technology failure can be quantified by considering the exposure of the relevant technological systems, and fields of high technological exposure can be identified.
- The evaluation of technological vulnerability can be extended to populations of individuals, and population exposure is identified as high and increasing.
- Working from a theoretical basis, approaches to reduced population technological exposure/vulnerability can be identified, and the value of decentralisation can be specifically demonstrated.
- The current and future significance of technological vulnerability/exposure, and the practicality of paradigm changes to lower exposure, are identified.

Background

The fundamental impossibility of never-ending growth has been recognised for a long time (e.g. Daly, 1990; Henrique & Romeiro, 2013), and authors have similarly recognised the danger of corporates of transnational size (e.g. Rao, 1975). This paper considers an associated but distinct issue: the growing technological vulnerability of populations. Increasingly, essential goods and services are accessible only via technological systems that are both sophisticated and centralised. In this situation, end-user technological vulnerability becomes significant, but quantifying the extent and nature of such vulnerabilities has been hindered by the complexity of the analysis (Haimes & Jiang, 2001) and almost equally by the inadequacy and imprecision of the terminology in common use. A previously developed theory of exposure considered the technological systems supplying individuals and showed that the calculated 'exposure' was a valid measure of the level of vulnerability. This paper extends the application of exposure analysis to consider the technological exposure of a population of individuals, assumed to live predominantly in an urban setting. The significance of the issue is developed, and then, from the basic analysis of the situation, some quite generalised approaches to the reduction of vulnerability are proposed and shown to be practical. Complete journals are devoted to the sociological effects of technology (Technology in Society; IEEE Transactions on Technology and Society), and significant psycho-social issues (better communication with dispersed families, but also online abuse and personal loneliness) are linked to technological advances. These are acknowledged but have limited applicability to the scope of this paper. This paper will therefore primarily consider the exposure of city-dwellers to technological systems delivering services or consumables.

Applied to the supply of specified goods/services

The supply of essential goods and services will involve heterogeneous systems (i.e. systems where goods/services are created by a progression of steps, rather than systems in which completed goods are simply transported/distributed) that involve an arbitrary number of linked operations, each of which requires inputs, executes some transformation process, and produces an output that is received by a subsequent process. Four essential principles have been proposed to allow and justify the development of a metric that evaluates the contribution of a heterogeneous technological system to the vulnerability of an individual.
These principles are:

1) The metric is applicable to an individual end-user. When an individual user is considered, not only is the performance of the supply system readily defined, but the relevant system description is clearer.

2) The metric acknowledges that events external to a technology system only threaten the output of that system if the external events align with a weakness in the system. If a hazard does not align with a weakness, then it has no significance. Conversely, if a weakness exists within a technological system and has not been identified, then hazards that can align with that weakness are also unlikely to be recognised. If the configuration of a particular technology system is changed, some weaknesses may be removed while others may be added.

3) The metric depends upon the observation that although some hazards occur randomly and can be assessed statistically, over a sufficiently long period the probability of their occurrence approaches 1.0. The metric also depends upon the observation that intelligently (mis)guided hazards (i.e. those arising when a person intentionally seeks a weakness in order to create a hazard) do not occur randomly. The effect of a guided hazard upon a risk assessment is qualitatively different from that of a random hazard: the guided hazard will occur every time the perpetrator elects to cause it, and therefore has a probability of 1.0.

4) The metric depends upon the observation that it is possible not only to describe the goods or services delivered to the individual (end-user), but also to define a service level at which the specified goods or services either are or are not delivered. This approach allows the output of a technological system to be expressed as a Boolean variable (True/False), and allows the effect of the configuration of a technological system to be measured against a single performance criterion.

Applying these principles, an arbitrary heterogeneous technological system supplying goods/services at a defined level to an individual can be described by a configured system of notional AND/OR/NOT functions. Having represented a specific technological system using a Boolean algebraic expression, a 'truth table' can be constructed to display all permutations of process and stream availabilities as inputs, and the technological system output as a single True or False value. From the truth table, count the cases in which a single input failure will cause output failure, and assign that total to the variable E1. Count the cases in which two input failures (exclusive of inputs whose failure will alone cause output failure) cause output failure, and assign that total to E2. Count the cases in which three input failures cause output failure (and where neither single nor double input failures within that triple combination would alone cause output failure) and assign that total to E3, and similarly for further E values. The exposure metric, {E1, E2, E3}, for a defined system supplying specific goods/services can be shown to measure the level of vulnerability of the individual to that technological system; a minimal computational sketch of this counting procedure is given below.
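The counting procedure described above lends itself to a direct brute-force implementation. The following Python sketch is a minimal illustration only (the system description and input names are invented for the example, and this is not the author's published tooling): it enumerates failure combinations in increasing order, skipping any combination that contains a smaller combination already known to kill the output.

from itertools import combinations

def exposure_metric(system, inputs, max_order=3):
    # system: function mapping {input name: available?} -> True if output delivered.
    # Returns [E1, E2, ..., E_max_order], where E_k counts the minimal k-input
    # failure combinations that cause output failure.
    minimal_failures = []
    counts = []
    for k in range(1, max_order + 1):
        count = 0
        for combo in combinations(inputs, k):
            # Exclude combinations containing an already-counted smaller failure set.
            if any(set(small) <= set(combo) for small in minimal_failures):
                continue
            state = {name: (name not in combo) for name in inputs}
            if not system(state):
                minimal_failures.append(combo)
                count += 1
        counts.append(count)
    return counts

# Example: the output needs water AND at least one of two independent power feeds.
system = lambda s: s["water"] and (s["power_a"] or s["power_b"])
print(exposure_metric(system, ["water", "power_a", "power_b"]))  # -> [1, 1, 0]

For this small example the metric {E1, E2, E3} = {1, 1, 0}: the water supply is a single point of failure, while the two power feeds cause failure only jointly. Full enumeration is exponential in the number of inputs, which is one reason the subsystem-combination rules discussed later are useful.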
Applied to generic goods/services and need-levels

For many practical cases a generic service could be considered (e.g. "basic food" rather than "fresh milk"), and vulnerability assessed by nominating multiple specific services as alternative suppliers (an "OR" gate) of the nominated generic service. The exposure of the individual to a lack of secure access to the necessities of life can thus be assessed using exactly the same approach used to assess exposure to the non-availability of any specific need. Illustrative examples of generic services could include "all needs for Maslow's physiological level", conceptually ((Food_A OR Food_B OR Food_C) AND warmth AND water AND shelter), or possibly "all needs to occupy an apartment on a month-by-month basis" (sewage services AND water supply AND power supply).

Applied to populations

The concept of an individual's exposure may be further modified and extended to consider the quantification of a population's exposure with regard to the supply of either a particular or a generic service. It is common for some "population" to need the same services/goods and to obtain these via an identical technological system. The "technological system" that supplies such a population may not be identical to that which supplies an individual. Consider a case in which four individuals each rely on their own cars to commute to work: for the output "arrive at work", each individually owned car contributes to the E1 exposure of one of the 4 individuals. If they reach a car-pooling agreement, a "car" then contributes to the E4 exposure of each individual for "arrive at work", since any of the 4 cars can supply that functionality.

A population exposure is developed by modifying an individual exposure evaluation (for a population using a functionally identical service) by considering each functional element used by more than one member of the population, and whether each such functional element can be made accessible to more than one. If an exposure level (metric) is then nominated, it is possible in principle to establish the largest population that is subject to this level of exposure. For example, we might identify the largest population using a functionally identical service or supply of goods that has a non-zero E1 exposure (i.e. has at least one single point of failure). Should a city discover that the progressive centralisation of services had resulted in a situation where the whole population of the city had a non-zero E1 value for an essential service, some urgent action would be strongly suggested. Conversely, if the supply of a generic service to even small suburbs had low exposure values, then a good level of robustness would have been demonstrated for that generic service at the suburb level. Extrapolating these principles, where a "population" is actually the totality of the homo sapiens species and the "service" is essential for every individual's survival, a definition of "existential threat" as proposed by Bostrom is approached.

Population hazards

The analysis of exposure considers loci of failure and hence intentionally avoids consideration of specific hazards or their probabilities. Nevertheless, hazards that affect various populations certainly exist, and a short list will serve to illustrate the significance of evaluating population exposure levels. The following list makes no attempt to be comprehensive, but illustrates hazards relevant to large corporates, apartment-dwellers, members of isolated communities and users of electronic communications. A large group could be criminally held to ransom by malicious encryption of essential data.
A large group could be effectively excluded from access to a service because of conditions imposed by a (single, national) owner (Deibert, 2003; Sandywell, 2006; Tanczer, McConville, & Maynard, 2016). An isolated group (on Antarctica, the Moon or perhaps Mars) could cease to be viable if it cannot import supplies that cannot be generated locally (Pękalski & Sznajd-Weron, 2001). The occupiers of an apartment block could encounter a situation in which there was minimal (financial or other) pressure on the provider of sewage removal services to restore service, but where the occupiers faced major consequences (the need to de-camp) in the absence of this service. Similarly, it has been noted that a failure to provide financial transaction services to a small retailer may incur negligible financial cost to a bank, but may cause the retailer's business to fail. These are examples of asymmetric consequences. The whole of humanity exists on the planet earth: a failure of the natural environment can doom either a regional or possibly a global population, and can certainly be considered an "existential threat", as defined by Bostrom.

Persons born early in the 20th century were very familiar with the possibility of highly unjust treatment (e.g. loss of employment) if private information became known, and they acted accordingly. For many born early in this century, such concerns seem remote. Personal information privacy issues have thus failed to gain high visibility, and we have perhaps become blasé about the costs of a lack of privacy. Nevertheless, the ubiquity of surveillance (Citron & Gray, 2013; Miller, 2014), the increasing frequency of major data breaches (Edwards, Hofmeyr, & Forrest, 2016), and the recent rise of algorithmic decision-making on such matters as health insurance, credit-worthiness, the validity of job applications and security targeting are bringing this issue to increased prominence. The recent revelation of hardware vulnerabilities in general processors (Eckersley & Portnoy, 2017; Ermolov & Goryachy, 2017) demonstrated the significance of unappreciated weaknesses, even when the overall operation of a system has been reliable.

High-exposure technology fields

Other publications have noted society's vulnerability to technological systems and considered the relationship between time-to-disaster and time-to-repair: that analysis was not an exposure analysis, but it did consider the range of services corresponding to a broad category of human needs, and effectively considered the significance of the exposure categories noted above. Later works (Author, 2017a) have analysed the individual exposure associated with a broad range of goods and services. Those works also noted that the analysed values for those specific examples could reasonably be extrapolated to many other similar goods and services. The general analyses did attempt to cover the scope of individual needs, and as such identified a range of services (or goods-supply needs) that were considered to incur high levels of vulnerability. The more detailed exposure analyses were indexed to the exposure of an individual, and although a limited number of examples were studied, these were considered to be representative of the high-exposure items relevant to individuals living in urbanised settings in the 21st century. The assessment of the population exposure of those exemplar technological systems shows several distinctive changes from the analysis indexed to the individual.
Primarily, the E1 items that arise close to the final point of use by the individual lose significance, and in many cases cease to be significant at the E3 level (arbitrarily chosen as the cut-off point for consideration). The components that remain significant are those that genuinely represent exposure values for the population under consideration. The analysis of the population exposure of a broad spectrum of needs showed that it is possible to identify some high-exposure technological fields, specifically complex components, complex artificial substances, finance, communications, energy and information. These are considered more fully as follows.

Complex components: Complex components may be distinguished from large components (such as a building, hydroelectric dam or road-bridge), and qualitatively described as being beyond the capacity of a skilled individual craftsperson to fashion in reasonable time and to the required tolerances. Under this category we might consider such items as a carburettor body for an internal combustion engine, food-processing equipment and construction machinery (e.g. a crane). For many consumer items within the "complex component" category, the population exposure is significantly lower than the individual exposure, since numerous examples of the item exist (either owned by others, or stockpiled for sale). The level of centralisation may however be very high, with (for example) perhaps only one worldwide source of a carburettor body for a specific vehicle.

Complex artificial substances: Complex artificial substances include advanced metallurgical alloys, advanced plastics and composites, drugs, and vaccines. These are distinguished by complexity of composition, rather than complexity of form. In many cases the centralisation that causes high exposure levels for the production of complex substances has resulted primarily from available economies of scale, and only secondarily from the substances' complexity. Some complex substances (notably pharmaceuticals) have patent protection, which creates centralisation at up to global level.

Finance: As the range of goods and services, and the geographical scope of supply of those goods and services, has increased, so has the need for facilities to exchange value in return for goods and services; this has inevitably led to elaborate mechanisms for the secure exchange of value. Recognising that the exchange of value can also facilitate illegal activities, state actors have also enacted significant surveillance and control of financial transactions. In the process of meeting these demands, the technologies associated with the exchange of value have acquired large levels of exposure (colloquially, many things that can go wrong) and high levels of centralisation.

Communications: The ubiquity of the internet has become a remarkable feature of the last 20 years: although there are exceptions, most areas of the earth and a very large proportion of the total population can communicate via the internet. Although cellphone use is common, it is evolving as a mobile access connection, with the internet carrying the data. Internet communications have been made possible by well-established protocols; however, high-level systems for routing continue to be centralised, and while problems are rare, nation-states have occasionally decided to discontinue connection to the internet, showing that high levels of centralisation exist (Howard, Agarwal, & Hussain, 2011).
Although the feasibility of internet communications can be attributed to open-source protocols, the practicality of current connectivity has been largely enabled by the very large data rates possible via fibre-optic cables, yet this capacity has also introduced a high level of exposure for both individuals and populations. Duplication of undersea fibre-optic cables has somewhat reduced the level of population exposure, yet the dependence on high data rates has increased at a similar pace (market forces have driven the need for additional cables), and communications via the internet carry a high level of population exposure. Recent reports have noted that the disruption of a very small number of fibre-optic cables would lead to unacceptable slowdowns, and the trend toward ownership of undersea cables by corporate entities further contributes to E1 values of exposure and to high population exposure. The numbers of cables in service show that internet communications are centralised at a small-nation level.

Energy: Energy has been noted as key to civilisation. Coke allowed the smelting of iron, and oil enabled almost all current transportation. Coal, nuclear and geothermal heat, and hydro storage allow electricity generation on demand. National power transmission systems generally have high reliability, and many have some level of design redundancy. Nevertheless, the generation and distribution of electric power incurs significant individual and population exposure: large power stations are bespoke designs, as are large transformers (Sobczak & Behr, 2015), and transmission networks may face significant "resilience" issues (Carvajal, Serrano, & Arango, 2013; Sidhu, 2010). Liquid fuels can be stockpiled at national level, and to a lesser extent locally; however, the level of stockpiling is limited, and even the higher levels of stockpiling are likely to be small compared to the time needed to rebuild a major production plant (terminal or refinery). At the consumer and industrial user level, solar PV and wind can produce power at $/kWh rates close to thermal generation, but there is currently no economically viable technology that allows the storage of megawatt-hours of electricity from intermittent sources at small/medium scale. Most users are therefore still reliant on significantly centralised power generation, and depend on transmission systems that contribute significant population exposure. Exemplar cases such as the Tesla large-scale battery in South Australia have created much interest, but are dependent on goodwill capex for installation and are actually used for frequency correction rather than large-scale energy storage (Agnew & Dargusch, 2017). Recognition of the possibilities of cascading failure (Dobson & Newman, 2017; Guo, Zheng, Ho-Ching Iu, & Fernando, 2017) has led to some reductions in the exposure incurred by major power transmission facilities.

Information: Information, whether medical reference information, contractual records or engineering design information, is not a coincidental by-product of a technological society; it is information that fundamentally allows the design and construction of technological systems and the full operation of society (Dartnell, 2014; Shapiro, 2009; van den Heuvel, 2017). Yet while it has become possible to generate and transmit enormous quantities of information, information storage remains a particularly high-exposure issue.
Currently the ASCII codes largely standardise information representation, and protocols for transmission are also largely standardised, but a gap in the standardisation of information storage (including writing and recovery) contributes to a high exposure for the final delivery of information to users. Hard-disk drives are still the most common storage technology, yet these have short lives and use proprietary data storage formats, plus proprietary approaches to the writing and recovery of data. This issue has been well reported, and authors have predicted a "digital dark age", i.e. a future society that cannot recover/read most of the information generated in the current era. This describes a situation of high individual and population exposure, and since there are few manufacturers of HDDs, information storage also illustrates centralisation at a multi-nation level. Several technologies allowing long-term storage have been proposed (Longnow Foundation, 2017; Permanent Archival Solution, 2020; Nanoarchival™ technology), but these are currently expensive and lack the integration that would allow them to truly offer changes to population exposure.

By contrast, some fields have generally low population exposure: basic building materials, foodstuffs, natural fabrics and clothing are commonly supplied via relatively simple technological systems in which technological design redundancies are commonly large, and for which population exposure is therefore low. Technologies such as the creation of ceramics and glassblowing may require skill, but they are not dependent on high technological sophistication and so contribute low technological exposure. Similarly, the collection and storage of rainwater is feasible with low technological exposure, even for densely populated urban populations.

Exposure categories and types

In addition to identifying high-exposure fields, it is also possible to identify categories of exposure contribution that commonly apply across a range of technological fields. These are proposed to include initial resource availability, complex unit operations, lack of buffering, single points of failure (SPOF), contributory systems, highly centralised processes and "practical unavailability", and are described more fully as follows.

Initial resource availability: All technological systems producing goods and services for users ultimately depend upon raw materials and viable environmental conditions. Where raw materials cease to be available, or environmental conditions change permanently, services to users will inevitably be affected. Raw material supplies and acceptable environmental conditions must therefore be identified as sources of exposure, and hence of vulnerability to users.

Complex unit operations: We use the descriptor "complex" as a characteristic of a process whose internal operation is practically unknowable to the user and cannot realistically be repaired by the user. Personal computers, routers and related equipment are examples. It is also possible to consider situations where a critical application has been compiled from an outdated programming language and runs on a computer for which no spare hardware is available. Another example might consider critical information held on a very old storage medium. These examples illustrate three categories of complex processes: in the first case, while the inner workings of a PC may be exceedingly complex, the formats of incoming data (TCP/IP packets) and protocols (WWW, email, etc.)
are in the public domain (Fall & Stevens, 2011), and so it is not only possible but practical for alternative machines to offer the same services. In the second case, assuming the functional specifications of the application processes are known, the application can be recoded (and fully documented) using a language for which larger numbers of maintenance programmers exist, and on a more common platform. The third case, of data encoded on an old storage medium, illustrates a subcategory in which the internal details of the storage are proprietary (not in the public domain), alternative equipment is unavailable, and the creation of replica equipment for reading the data is probably impractical, leading some authors to express fears of a "digital dark age".

Lack of buffering: For the supply of long-life products, it is both possible and practical to provide buffer stocks at various points in the process. By contrast, since AC power (for example) is not readily storable, all processes involving the use of AC power will fail immediately if the power supply fails.

Single points of failure (SPOF): All single points of failure contribute to E1 values and so make a primary contribution to users' vulnerability. Three subcategories of SPOF are noted. The first is where the delivery of services to users involves processes immediately adjacent to the user, known as "last mile" services in the telecommunications field. The second subcategory is illustrated by considering a small rural town whose EFTPOS, landline phone service, cell-phone service and internet connection have all been progressively migrated to data services carried by a single fibre-optic cable, thus inadvertently creating a SPOF. The third is where a particular failure will inevitably cause the failure of other components that are not functionally connected: a cascading failure.

Finally, it is noted that contributory systems are a common source of exposure: whenever a system is made dependent upon another, the contributory system's exposures are reflected in the total exposure to the user. A common example is where a simple user service is made dependent upon internet access; the mandatory internet access may add huge levels of exposure to a system that would otherwise incur low levels of vulnerability. Some specific technologies, including artificial intelligence, nuclear weapons and asteroid strikes, have been examined, and authors such as Baum have pondered their potential to incur existential threats by threatening multiple systems; others, including Baum and Tonn, and Brian Patrick Green, have considered approaches to limiting the scope of such hazards.

The above categories of exposure may apply to any technological process; there are additionally several categories that are specifically relevant to the study of "population exposure". These include highly centralised processes: for example, the evacuation and treatment of sewage requires a network of pipes and pumps to collect sewage and deliver it to the treatment station; this is an example of a centralised system that is large but technologically simple. Other examples of large centralised systems include financial transaction systems (O'Mahony, Peirce, & Tewari, 1997) and the international data transmission systems of undersea fibre-optic cables. Such systems tend to be monopolies, and are commonly controlled by entities that have little if any obligation to provide service or to negotiate terms acceptable to individual users. Authors such as Li et al.
have shown that highly interconnected systems have similar characteristics to centralised systems, and it is well recognised that the most "centralised" system of all is the earth's natural environment, because the natural environment (including air and water) is essential to all life.

Practical unavailability: Consider the hypothetical case in which a user wishes to communicate sensitive information, but only has access to one data transmission facility, which is known to be under surveillance. Although technically operational, the inevitable absence of privacy associated with that data transmission facility has made the facility practically unavailable. For technological systems that are highly centralised and near-monopoly, practical unavailability is a significant possibility.

A THEORETICAL BASIS FOR EXPOSURE REDUCTION

The En metric has been explained earlier in terms of its significance for measuring vulnerability; however, a wider and more future-oriented applicability of the metric itself can also be demonstrated in several ways.

Population exposure insights from the exposure metric

The assumption that there is an average cost of defending any vulnerability could be challenged, but across a broad enough spread of examples it is workable. Under that broad assumption, whereas the cost of defence for an E3 exposure-contributor is precisely the same as the cost of defence for a contributor to an E1 vulnerability (it is the cost of protecting one vulnerability only), the cost of mounting a successful attack on an E3 vulnerability is 3 times greater than the cost of attacking a vulnerability contributing to the E1 value, since the attacker needs to identify and successfully attack all 3 E3-contributory nodes simultaneously. In addition to its value for measuring vulnerability, the exposure metric is therefore also broadly significant for planning the reduction in vulnerability of a system.

Population exposure insights from exposure analysis

Considering the generation of an exposure metric, the process itself provides valuable insights into options for reduction of the final values. The process of generating an exposure metric must start from the delivery point and follow a systematic redrawing and track-back process until a justifiable end-point is reached. The selection of a justifiable end-point has been addressed elsewhere, and could be associated with a "no further contributors to an En level" criterion. The process of achieving a final representation of the system will very likely require progressive redrawing of stream and process relationships; an example of a common redrawing is presented in Fig. 1. In a practical process of building an exposure metric, subsystems (which may have been analysed separately) commonly contribute goods or services to "higher" systems. If the exposure metric of a subsystem is known, then there is little value in recreating a truth table for a higher-level super-system that re-explores all of the inputs to every subsystem. The more effective approach is to consider the point at which the subsystem output contributes to the input of a gate within the higher-level system, and how the subsystem exposure metric is transmitted/accumulated by the upstream gate of the super-system.
We may generalise this process by considering that each input to a gate (Boolean AND or OR operation) has an exposure vector, and by developing the principles by which the gate output can be calculated from these inputs. This is illustrated in Fig. 2. For the AND gate, the contributory exposure vectors are simply added. For the OR gate, the issue is more complex, but the higher levels of exposure are quickly avoided. The process of generating the metric will itself highlight sources of large exposure, and hence specific options for reduction; a sketch of how these gate rules can be mechanised is given below.
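The following Python sketch is a minimal illustration of one plausible reading of these combination rules, under the assumption that the gate inputs are independent and their failure loci are disjoint; it is offered for illustration and is not the author's published algorithm. Under AND, a minimal failure set of any input kills the output, so the vectors add order by order. Under OR, the output fails only when every input fails, so minimal failure sets combine across inputs and their orders sum, i.e. a polynomial-style convolution.

def and_gate(*vectors):
    # AND gate: the exposure vectors add order by order (index 0 holds the E1 count).
    n = max(len(v) for v in vectors)
    return [sum(v[k] for v in vectors if k < len(v)) for k in range(n)]

def or_gate(*vectors):
    # OR gate over independent alternatives: convolve the vectors, since a
    # minimal failure set of the output combines one from each input.
    out = [1]  # out[d] = count of minimal failure sets of order d; start at order 0
    for v in vectors:
        poly = [0] + list(v)  # shift so that poly[d] is the count at order d
        nxt = [0] * (len(out) + len(poly) - 1)
        for i, a in enumerate(out):
            for j, b in enumerate(poly):
                nxt[i + j] += a * b
        out = nxt
    return out[1:]  # drop the unused order-0 slot

car = [1]  # one alternative with a single point of failure (E1 = 1)
print(or_gate(car, car, car, car))  # -> [0, 0, 0, 1]: nothing worse than E4 remains
print(and_gate([1, 0], [0, 1]))     # -> [1, 1]: E values accumulate under AND

The OR behaviour reproduces the car-pooling observation made earlier: four genuinely independent alternatives, each a single point of failure on its own, combine to leave no contribution more serious than E4.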
Exposure reduction approaches

The detailed process for the analysis of exposure has illustrated how the E values are accumulated, and how specific exposure reduction options may be identified. Generalised approaches to the reduction of the exposure of a population can also be identified.

Ensure weaknesses and dependencies are not missed

The process of redrawing shows that even where alternative subsystems can supply higher systems, if there is a locus of weakness that is common to the alternative subsystems, the exposure analysis will ensure that this fact is preserved, and the E1 contribution will actually appear in the exposure value for the whole target consumer (group). If the "O"-ring seal failure had been identified as a contributor to the E1 exposure of the Thiokol solid booster rocket, then the elimination of all E1 values for the parent system that was the Challenger space shuttle could not have been achieved unless the "O"-ring weakness were addressed. The use of an exposure metric therefore potentially addresses the colloquial saying "it's always the smallest things that get you". One of the learnings from the analysis of the Challenger tragedy was that even known subsystem weaknesses could become lost as analyses were progressively summarised.

Independent alternatives for lower exposure

When genuinely independent alternative sources are available, their effect on the next-highest systems (process, consumer or group) is illustrated by the operation of an "OR" gate. If 4 genuinely independent/alternative subsystems, each having exposure vectors with non-zero values of E1, are combined via an "OR" gate, the higher system does not see any exposure above the E4 level from that source. This principle might be qualitatively or intuitively perceived, but its effect upon an analysis of exposure provides a starkly quantitative result. It may also be observed that while reducing the exposure of each subsystem would require separately addressing each contributor to the subsystem's E1, E2 and E3 values (potentially a major task), if 4 genuinely alternative subsystems exist then the combined "OR" exposure has no non-zero contributor more serious than E4, and may warrant no further action.

Re-purposable equipment

Whereas many 20th-century devices were designed for a single purpose and high throughput (mass production), some recent trends have been towards devices that can be re-purposed, and the pinnacle of flexibility is the human being! For any case where a piece of equipment contributes to the E1 value (is a single point of failure), if the capability of that equipment could be provided by multiple options (e.g. alternative human operators), the exposure contribution may be reduced to the En level, where n is the number of alternatives or persons capable of undertaking the equipment's function. An illustration may help to clarify this principle: if a sensor provides input that is required by an upstream function, that upstream function and all higher dependencies are exposed to the capability of the sensor to provide input. If n humans are able to provide their best assessment of the sensor's input, then the exposure of the higher system to that functionality is reduced to the En level.

Generalised approaches for high-exposure technology fields

Considering the high-exposure technological fields that were identified earlier, generalised approaches to reducing the accrued actual exposure include the standardisation of specifications to allow competitive supply of complex components and complex substances, the avoidance of the large exposure of some contributory systems, and the retention of genuine alternatives (e.g. cash/gold as well as electronic transactions).

Generalised approaches to the reduction of population exposure

There is an intuitive appreciation that it is undesirable to have high vulnerability levels for systems that affect large numbers of persons. Some may also intuitively appreciate that the trend to centralisation, driven by economies of scale, increases the technological vulnerability of large population groups. The analysis of population exposure and the principles of exposure analysis therefore provide a quantitative approach that can be used not only to assess the level of exposure of current systems, but more importantly to show quite widely applicable principles for exposure reduction. Specifically, the analysis shows that centralisation of production almost inevitably creates higher population exposure values, and it provides a sound theoretical basis for promoting decentralisation of production as an approach to the reduction of population exposure.

Practicality of exposure reduction for high-exposure fields

It is important to consider the practicality of population exposure reduction by decentralisation; this is reviewed within each of the fields previously identified as incurring high exposure.

a) Complex components. In order for this functionality to be genuinely available at a significantly decentralised level, we can consider the subsystems that are required, and the current level of maturity of the technology options within each of those subsystems. For a complex component, the relevant subsystems include those associated with the supply of materials and the creation of complex shapes from basic material stocks. For many cases, the assembly of components is likely to be straightforward, but we must also consider cases where assembly itself requires specialised equipment. For the majority of cases, the composition of the complex component can be assumed to be uniform; however, cases where the composition is not uniform (e.g. a multilayer circuit board) must also be acknowledged. Equipment for additive and subtractive manufacturing is available; specifically, 3D-printing equipment is readily available at small scale, and large-scale implementations (Bassoli, Gatto, Iuliano, & Violante, 2007; Kraft, McMahan, & Henry, 2013) have been tested. A moderately standardised format for 3D-printer design information exists (ISO 14649-1, 2003, and ISO 10303, 2014, many sections), and the instructions for operating additive and subtractive manufacturing equipment are such that high skill levels are unnecessary. A significant body of designs (Pinshape™, Thingiverse™, GrabCAD™, CGTrader™, etc.)
for complex components is already available, and processes that create design information from a component template exist. Subtractive manufacturing using 5-axis milling (Comak & Altintas, 2017), spark erosion (Gupta & Jain, 2017) or similar techniques is available and is entering commercial usage (including the printing of obscure car parts, which is not mature but is entering the commercial-uptake phase). Considering the generalisability of the current options: 3D printing is currently somewhat limited in the materials that can be printed; however, this range is expanding, and powder-sintering and other approaches promise access to a wide variety of sophisticated materials. Most printers/millers are currently designed to create small components; however, there seems no fundamental reason why a 5-axis milling head cannot be mounted on a mobile platform. It is reasonable to assume that some assembly of components by a human operator is possible. It is possible that the manufacture of some components would require a multi-stage process: for simple cases this might involve 3D-printing a component using a thermoplastic material, using this to create a sand mould, followed by metal casting. Current additive and subtractive manufacturing equipment has finite limits to the component tolerances that can be achieved, and to the raw materials that can be processed. Small-scale subtractive manufacturing equipment at current levels of maturity is, however, capable of creating specialised machinery with capabilities that the basic subtractive manufacturing equipment lacks. With minimal extrapolation, additive and subtractive manufacturing equipment is currently capable of re-creating all of the parts required for duplicate additive and subtractive manufacturing equipment: this is a significant threshold, indicating that a sustainable level of decentralisation is possible.

b) Complex substances. In order for complex substance synthesis to be genuinely available at a significantly decentralised level, we can consider the types of complex substances that are of interest, the subsystems that are required for each, and the current level of maturity of the technology options within each of those subsystems. Broadly, the complex substances could be categorised as: complex alloys; complex inorganic liquids (oils, detergents, etc.); complex organic materials (pharmaceuticals, insecticides, herbicides); complex substances derived from living organisms (yeasts, vaccines, fermentation bacteria); and polymers. For a complex molecular substance, it is currently common for a range of supply chains to bring raw materials to a synthesis plant, where a sequence of synthesis steps (including the unit operations of heating and cooling, separation, reaction, precipitation and dissolution) is carried out to generate the complex substance. For "organic" compounds, temperatures are generally limited to below 120 °C. For a complex metallic component, it is currently common for granulated pure metals or metal compounds to be melted together, sometimes in vacuum or inert gas, after which a controlled set of cooling/holding/reheating steps is used to generate the final material. Considering whether these syntheses could be practically decentralised, Drexler's seminal paper considered the options and practicality issues associated with general-purpose synthesis of complex substances.
Since 2010, the Engineering and Physical Sciences Research Council (EPSRC) has been funding a "dial-a-molecule" challenge, which runs parallel to the work of others, and commercial ventures such as Mattersift™ ("manipulating matter at the molecular scale") have demonstrated progress towards this goal. Even where general-purpose synthesis capabilities are not available, the availability of knowledge does in principle allow relatively complex syntheses (e.g. Daraprim™) to be undertaken by small groups. For inorganic materials, progress has been made with solar pyrometallurgy, and although much development is needed, there would seem to be no fundamental reason why the required temperatures and composition constraints could not be achieved at small scale and with limited equipment. Published proposals have suggested that it is possible to build a standardised facility capable of carrying out an arbitrary sequence of the unit operations required to make any organic compound. While the technological maturity of the decentralised synthesis of complex materials is lower than that of the decentralised production of complex components, many of the processes are already feasible, and others are rapidly maturing.

Finance: Decentralised tokens of wealth (tokens of exchange) have existed for as long as societies have existed. In order for a token of exchange to continue to avoid the large exposure of current centralised financial systems, the token must retain the qualities of ubiquitous acceptance and transparent, irrevocable transactions; this is currently feasible. Blockchain technology has recently offered another decentralised system for the secure and irrevocable transfer of wealth, allowing broad acceptance, and it thus meets the criteria for acceptability. Blockchain-based currencies are, however, reliant on a communications system that is currently highly centralised, and so they fall short of the security expected, and they exist in a number of as-yet-incompatible forms. The difficulties with current blockchain technologies seem to be solvable, and this technology offers a promising approach to the decentralised exchange of value. Current high-security banknotes and bullion fulfil many of the requirements for a decentralised approach to transactions, and they do not incur the exposure that is inherent in blockchain-based currencies, but they do incur a high risk of theft, and the exposure of a physical transmission system if the transaction is to span a significant distance.

Communications: A communications system could be considered decentralised (within the population size envisaged) when it has no E1 value above zero and is capable of communicating with any other (decentralised) population. Secure encryption is currently possible, and despite some practical difficulties, mature approaches such as one-time pads seem to be proof against even projected technologies (de Wolf, 2017). Radio transmission of encrypted material on shortwave bands (using ionospheric reflection) can be received globally. Assuming a 10 MHz "open" band and a very narrow bandwidth (and a very slow transmission rate), many channels would be available in principle. This is a low level of practicality, but it must be noted that completely decentralised communication is inherently feasible. Massively redundant fibre-optic cable systems with decentralised routing are also technically feasible, and it is even feasible to consider significantly higher levels of design redundancy for undersea cables.
While massive design redundancy is both feasible and practical for land-based systems, it appears to be feasible for undersea routes but not practical at the exposure level sought. Self-discovering radio-based mesh communications for land-based systems are feasible at present, and are likely to be more practical and economical than massively redundant fibre-optic systems for land-based communications. Hybrid approaches, e.g. using a self-discovering mesh for communications within a single land mass while allowing access to a significant number of undersea cables, could meet the population levels and exposure levels needed to allow a claim of feasible decentralisation using current technology.

Energy: A number of forms of energy must be considered: liquid fuel for vehicles, solid/liquid/gas fuels for heating, high-voltage AC electricity for drives, and low-voltage DC for electronics. The analysis of energy need is also not trivial; if vehicles are powered by DC electricity in the future, the dependence upon high-quality liquid fuel changes markedly. Lighting and consumer electronic devices have rapidly transitioned from dependence on 230 VAC to low-voltage DC sources. The decentralised generation of 230 V AC electricity has been the subject of numerous studies, and at small scale distributed generation is considered mature technology, but the context of that research has generally been integration with a wider 230 V 50 Hz power network. Solar photovoltaic power generation costs have decreased recently, and now show unit costs close to those commonly paid for centrally generated power, but supply is intermittent and seasonal. While technologies for the storage of watt-hour power levels exist, the large-scale (MWh) storage of electrical energy has been noted as one of the technological gaps in the options for decentralisation, and the level of R&D already expended on battery research suggests that fundamental breakthroughs may not be likely. So-called "ultra-capacitor" technology (De Rosa, Higashiya, Schulz, Rane-Fondacaro, & Haldar, 2017) may or may not overtake battery storage for smaller power levels in the future. The decentralised production of biofuel using macro-algae (Chen, Zhou, Luo, Zhang, & Chen, 2017; Zeb, Park, Riaz, Ryu, & Kim, 2017) is also immature, but the capability to treat sewage and create storable hydrocarbon fuel with high energy density is promising, and this is perhaps the decentralised heating-energy storage mode that is closest to technical maturity at present.

Information: The information storage requirements for a community could include significant medical information and all of the data required for the manufacture of complex components and the synthesis of complex substances, in addition to contractual, financial, genealogical and other data. Human-readable information storage (books, tablets) currently has very low exposure and high decentralisation. A decentralised and low-exposure approach for the storage of machine-readable information does, however, pose a non-trivial technological challenge. Existing computer hard-disk drive storage technology is mature, but this approach has a very high technological exposure, since the format of storage and the recovery of data are via a complex and proprietary system, and the actual storage medium is not user-accessible. A criterion of E1 < 1 could in principle be achieved by massive redundancy, but in practice the use of identical operating systems and software makes it likely that residual exposure would remain.
Technologies such as the 5D glass storage approach, the proposals of the Longnow Foundation (2017), and organisations such as Nanoarchival™ avoid the reliance upon proprietary data retrieval systems and provide very long life, but they still lack a mature and decentralised data-writing approach, and a mature and decentralised approach for reading stored data back into machine-readable form. The advances needed in order to achieve a durable, low-exposure storage medium are immature, but they are technologically achievable.

Other: It is useful to note many other fields in which significant capabilities can be achieved with simple components. While far short of laboratory quality, the principles of spectrography and chromatography (Ghobadian, Chaichi, Ganjali, Norouzi, & Khaleghzadeh-Ahangar, 2017) are actually accessible at a domestic level, and microscope capabilities (Prakash, Cybulski, & Clements, 2017) depend primarily on knowledge and not on sophisticated components. The current centralisation of the processing of building materials, natural fabrics and food is driven only by economies of scale, and not by the necessity of complex equipment: decentralisation in these cases is technologically feasible. Many examples of highly standardised equipment and components exist (shipping containers, roofing iron, nuts and bolts, steel extrusions), and these should be considered within the more general principle of layered standardisation. The practicality of designing equipment for long operational life has been very well demonstrated by such examples as the Voyager and Mars explorer spacecraft. Organisations such as the Longnow Foundation have considered the requirements for durable engineering, and plan items such as a "10,000-year clock". In the context of this paper and earlier studies, we can consider that it is practical to design at least some classes of components and equipment for a usable timeframe that is demonstrably longer than the time required to re-develop a production facility to recreate them; this criterion is a valid test of low-exposure practicality.

In summary: low-exposure and decentralised options for a range of technologies have been examined. Some are mature, and some have a low level of technological maturity. It is likely that some could mature rapidly (e.g. additive/subtractive manufacturing, self-discovering communications networks, information storage), while others, such as energy storage, have already absorbed enormous R&D effort with limited progress. This does not mean that a trend to decentralisation cannot begin; it simply means that a decentralised society may have more sophisticated capabilities in some fields than in others.

Practicality of connected decentralisation: The ability to form fixed-duration, ad-hoc associations to enable some large-scale developments is not only possible for a society comprising decentralised groups, but is considered to be essential. This conclusion differs from the assertion by Jebari and Jebari that "isolated, self-sufficient, and continuously manned underground refuges" are appropriate, proposing instead intentionally independent population units who control their specific interactions with other units, achieving reduced vulnerability without forfeiting the capability to aggregate selected resources for mutual gain. Smaller-scale concepts such as crowdsourcing approaches are already demonstrating somewhat similar options for shorter-duration, ad-hoc design advances.
Situations such as ships travelling in convoy are common and perhaps form a close analogy, demonstrating both the independence of each vessel and the possibility of cooperation.

Other practical considerations

This paper has focussed on broad issues of technological vulnerability, but implementation details (resource-usage efficiency, waste creation/treatment, gross greenhouse-gas emissions and many other issues) are also acknowledged. Sociological/anthropological research has indicated that quite small populations provide a sufficient sociological group size for most people, and this conclusion seems to remain broadly valid even where high levels of electronic connection are possible (Mac Carron, Kaski, & Dunbar, 2016).

Is technological vulnerability significant?

The concept of a technological system's exposure provides both a tool and a metric, which can be applied either to individuals or to populations of individuals, and can supply useful data to the forecasting process. Population exposure is found to be high for many categories of current technological systems, and the current trend is to increased levels of population exposure. The categories that have been considered as typical span, and affect, many goods and services that would be considered essential. Items as diverse as internet usage and financial transactions already have multi-national levels of population exposure (and hence high levels of centralisation), and although various authors do not use the term "exposure" nor precisely describe the concept, they explain that environmental damage actually has a global exposure level. Population exposure is a topic that does generate intuitive awareness of vulnerability: it can be observed that awareness of technological vulnerability exists at both the national level (see the Journal of Critical Infrastructure Protection, Elsevier) and the individual level (Huddleston, 2017; Kabel & Chmidling, 2014; Martin, 1996; Riederer, 2018; Slovic, 2016). Since a measure of exposure correlates closely with the effort required to protect a system, a continued trend to centralisation and increased population exposure is very likely to lead to progressively Herculean efforts to ensure that vulnerable loci are not attacked; such efforts are likely to include progressively wider surveillance to identify potential threats, and progressively stronger efforts to control and monitor access - each of which is itself a factor that can serve to make the service practically unavailable to individuals.

"Sophisticated decentralisation" as a forecastable option

If no value were attached to high and rising levels of population exposure, then highly centralised options would be likely to continue to be preferred - but the consequences of high and rising population exposure are demonstrated by reference to a metric of exposure, are also intuitively understood, and are illustrated by the "continued centralisation" option in Fig. 3. If there were indeed no technological options for reducing the level of population exposure without major reductions in the levels of technological sophistication available, then large populations would indeed be exposed to the danger of losing access to services. That scenario would result in a catastrophic situation in which a major reduction in the sophistication of services occurs, but the remnant exposure is also reduced; it has been labelled the "apocalyptic survivor" scenario in Fig. 3. The third alternative shown in Fig.
3 is progress towards a significantly decentralised society, with a comparatively low population exposure and a level of sophistication (in terms of the technological services and goods available) that does not decrease, and that actually has the real possibility of advancing with time.

Fig. 3. Alternative futures.

In this paper, the description of "forecastable options" has been considered under each of the categories of high population exposure, and it has been noted that for each of them technological developments exist that enable significantly reduced population exposure. The current levels of technological maturity of these options vary, but none is infeasible, and they complement the proposals advanced by a number of authors including Gillies and Blühdorn.

Trajectory changes

In the course of considering decentralisation options, this paper has identified a small number of technological capabilities that would facilitate a more decentralised option, but which are currently at a low level of technological maturity. These include:

- further advances in general-purpose chemical synthesis;
- a durable, open-source, machine-accessible information storage system that can be created and read in a decentralised context (requiring only minor development);
- a self-discovering network using an open-source approach to allow information transmission (requiring some development);
- a large-capacity, durable and economically accessible energy storage technology (requiring significant development);
- further development of trustworthy and decentralised financial transaction systems (requiring some development);
- a non-technological system to allow ad-hoc cooperation between decentralised groups, allowing resource aggregation without long-term centralisation (requiring significant development).

Despite the inevitable variations in technological maturity across a broad range of technologies, the analyses have concluded firstly that centralisation and high population exposure result in severe and increasing vulnerabilities for large numbers of persons, and secondly that the combination of maturing decentralised technological capabilities and the storage of knowledge allows a transition to a "sophisticated decentralisation" model to be considered as a serious option. While not the primary topic of this paper, it is noted that even in the presence of technological options, change may not occur until some triggering event occurs; events that could trigger a more rapid transition to "sophisticated decentralisation" include a truly major and long-term disruption of some highly centralised technology such as undersea cables, an irremediable malfunction of international financial systems, or a pandemic requiring high levels of population isolation.

Summary

The analysis of technological exposure and reduced-exposure options has concluded that practical options for substantial decentralisation exist, or can reasonably be forecast as possible. It has also been proposed that there are substantive and immediate reasons to consider a qualitatively distinct "fork" from the high population exposure of current centralised society to a more decentralised model. Any selection of a decentralised technological model might be triggered by some event which crystallised the exposure of a highly centralised model; the drivers that had produced the centralised model will still remain, however, and will arguably tend to cause re-centralisation without ongoing effort. This issue is outside the scope of this paper, but is noted as a topic requiring further research.
The capability for decentralised and machine-accessible storage of knowledge, and for the creation of complex components and substances, has recently created - or will soon create - a cusp at which local equipment is capable of reproducing itself. Sufficient computational facilities at local scale are already able to make genuine advances, and, combined with equipment capable of replication, are sufficient to allow fully sustainable, sophisticated and decentralised communities to diverge from current trends.

Declarations of interest

None.
package lora;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import lora.rest.LNSRestController;

/**
 * REST controller exposing the Orbiwan LNS callback endpoints.
 * Uplink and downlink payload callbacks are delegated to the shared
 * {@link LNSRestController}.
 */
@RestController
public class OrbiwanRestController {

	@Autowired
	LNSRestController restController;

	// Uplink payload callback from the Orbiwan LNS instance.
	@PostMapping(value = "/{lnsInstanceId}/rest/callback/payloads/ul", consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
	public ResponseEntity<String> lnsUp(@RequestBody String event, @PathVariable String lnsInstanceId) {
		return restController.lnsUp(event, lnsInstanceId);
	}

	// Downlink payload callback from the Orbiwan LNS instance.
	@PostMapping(value = "/{lnsInstanceId}/rest/callback/payloads/dl", consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
	public ResponseEntity<String> lnsDown(@RequestBody String event, @PathVariable String lnsInstanceId) {
		return restController.lnsDown(event, lnsInstanceId);
	}
}
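For reference, a callback delivery can be simulated against this controller with a plain HTTP POST. A minimal sketch follows; the host, port, "demo" instance id, and JSON body are all hypothetical - only the route shape comes from the code above:

# Minimal sketch: simulate an Orbiwan uplink callback delivery.
# Host/port, the "demo" instance id, and the event fields are hypothetical;
# only the /{lnsInstanceId}/rest/callback/payloads/ul route is taken from
# the controller above.
import json
import urllib.request

event = {"deveui": "0011223344556677", "payload": "0102AB"}  # hypothetical fields
req = urllib.request.Request(
    "http://localhost:8080/demo/rest/callback/payloads/ul",
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))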
/****************************************************************************
 * NxWidgets/nxwm/include/nxwmconfig.hxx
 *
 * Copyright (C) 2012 <NAME>. All rights reserved.
 * Author: <NAME> <<EMAIL>>
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 * 3. Neither the name NuttX, NxWidgets, nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
 * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
 * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 *
 ****************************************************************************/

#ifndef __INCLUDE_NXWMCONFIG_HXX
#define __INCLUDE_NXWMCONFIG_HXX

/****************************************************************************
 * Included Files
 ****************************************************************************/

#include <nuttx/config.h>
#include <nuttx/input/touchscreen.h>

#include "nxconfig.hxx"
#include "crlepalettebitmap.hxx"

/****************************************************************************
 * Pre-Processor Definitions
 ****************************************************************************/
/* General Configuration ****************************************************/
/**
 * Required settings:
 *
 * CONFIG_HAVE_CXX       : C++ support is required
 * CONFIG_NX             : NX must be enabled
 * CONFIG_NX_MULTIUSER=y : NX must be configured in multi-user mode
 * CONFIG_NXCONSOLE=y    : For NxConsole support
 * CONFIG_SCHED_ONEXIT   : Support for on_exit()
 *
 * General settings:
 *
 * CONFIG_NXWM_DEFAULT_FONTID - the NxWM default font ID. Default:
 *   NXFONT_DEFAULT
 * CONFIG_NXWM_TOUCHSCREEN - Define to build in touchscreen support.
 * CONFIG_NXWM_KEYBOARD - Define to build in keyboard support.
 */

#ifndef CONFIG_HAVE_CXX
#  error "C++ support is required (CONFIG_HAVE_CXX)"
#endif

/**
 * NX Multi-user support is required
 */

#ifndef CONFIG_NX
#  error "NX support is required (CONFIG_NX)"
#endif

#ifndef CONFIG_NX_MULTIUSER
#  error "NX multi-user support is required (CONFIG_NX_MULTIUSER)"
#endif

/**
 * NxConsole support is (probably) required
 */

#ifndef CONFIG_NXCONSOLE
#  warning "NxConsole support may be needed (CONFIG_NXCONSOLE)"
#endif

/**
 * on_exit() support is (probably) required. on_exit() is the normal
 * mechanism used by NxWM applications to clean up on an application task
 * exit.
 */

#ifndef CONFIG_SCHED_ONEXIT
#  warning "on_exit() support may be needed (CONFIG_SCHED_ONEXIT)"
#endif

/**
 * Default font ID
 */

#ifndef CONFIG_NXWM_DEFAULT_FONTID
#  define CONFIG_NXWM_DEFAULT_FONTID NXFONT_DEFAULT
#endif

/* Colors *******************************************************************/
/**
 * Color configuration
 *
 * CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR - Normal background color. Default:
 *   MKRGB(148,189,215)
 * CONFIG_NXWM_DEFAULT_SELECTEDBACKGROUNDCOLOR - Selected background color.
 *   Default: MKRGB(206,227,241)
 * CONFIG_NXWM_DEFAULT_SHINEEDGECOLOR - Color of the bright edge of a border.
 *   Default: MKRGB(248,248,248)
 * CONFIG_NXWM_DEFAULT_SHADOWEDGECOLOR - Color of the shadowed edge of a border.
 *   Default: MKRGB(35,58,73)
 * CONFIG_NXWM_DEFAULT_FONTCOLOR - Default font color. Default:
 *   MKRGB(255,255,255)
 * CONFIG_NXWM_TRANSPARENT_COLOR - The "transparent" color. Default:
 *   MKRGB(0,0,0)
 */

/**
 * Normal background color
 */

#ifndef CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR
#  define CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR MKRGB(148,189,215)
#endif

/**
 * Default selected background color
 */

#ifndef CONFIG_NXWM_DEFAULT_SELECTEDBACKGROUNDCOLOR
#  define CONFIG_NXWM_DEFAULT_SELECTEDBACKGROUNDCOLOR MKRGB(206,227,241)
#endif

/**
 * Border colors
 */

#ifndef CONFIG_NXWM_DEFAULT_SHINEEDGECOLOR
#  define CONFIG_NXWM_DEFAULT_SHINEEDGECOLOR MKRGB(248,248,248)
#endif

#ifndef CONFIG_NXWM_DEFAULT_SHADOWEDGECOLOR
#  define CONFIG_NXWM_DEFAULT_SHADOWEDGECOLOR MKRGB(35,58,73)
#endif

/**
 * The default font color
 */

#ifndef CONFIG_NXWM_DEFAULT_FONTCOLOR
#  define CONFIG_NXWM_DEFAULT_FONTCOLOR MKRGB(255,255,255)
#endif

/**
 * The transparent color
 */

#ifndef CONFIG_NXWM_TRANSPARENT_COLOR
#  define CONFIG_NXWM_TRANSPARENT_COLOR MKRGB(0,0,0)
#endif

/* Task Bar Configuration ***************************************************/
/**
 * Horizontal and vertical spacing of icons in the task bar.
 *
 * CONFIG_NXWM_TASKBAR_VSPACING - Vertical spacing. Default: 2 pixels
 * CONFIG_NXWM_TASKBAR_HSPACING - Horizontal spacing. Default: 2 pixels
 *
 * Task bar location. Default is CONFIG_NXWM_TASKBAR_TOP.
 *
 * CONFIG_NXWM_TASKBAR_TOP - Task bar is at the top of the display
 * CONFIG_NXWM_TASKBAR_BOTTOM - Task bar is at the bottom of the display
 * CONFIG_NXWM_TASKBAR_LEFT - Task bar is on the left side of the display
 * CONFIG_NXWM_TASKBAR_RIGHT - Task bar is on the right side of the display
 *
 * CONFIG_NXWM_TASKBAR_WIDTH - Task bar thickness (either vertical or
 *   horizontal). Default: 25 + 2*spacing
 */

/**
 * Horizontal and vertical spacing of icons in the task bar.
 */

#ifndef CONFIG_NXWM_TASKBAR_VSPACING
#  define CONFIG_NXWM_TASKBAR_VSPACING (2)
#endif

#ifndef CONFIG_NXWM_TASKBAR_HSPACING
#  define CONFIG_NXWM_TASKBAR_HSPACING (2)
#endif

/**
 * Check task bar location
 */

#if defined(CONFIG_NXWM_TASKBAR_TOP)
#  if defined(CONFIG_NXWM_TASKBAR_BOTTOM) || defined(CONFIG_NXWM_TASKBAR_LEFT) || defined(CONFIG_NXWM_TASKBAR_RIGHT)
#    warning "Multiple task bar positions specified"
#  endif
#elif defined(CONFIG_NXWM_TASKBAR_BOTTOM)
#  if defined(CONFIG_NXWM_TASKBAR_LEFT) || defined(CONFIG_NXWM_TASKBAR_RIGHT)
#    warning "Multiple task bar positions specified"
#  endif
#elif defined(CONFIG_NXWM_TASKBAR_LEFT)
#  if defined(CONFIG_NXWM_TASKBAR_RIGHT)
#    warning "Multiple task bar positions specified"
#  endif
#elif !defined(CONFIG_NXWM_TASKBAR_RIGHT)
#  warning "No task bar position specified"
#  define CONFIG_NXWM_TASKBAR_TOP 1
#endif

/**
 * At present, all icons are 25 pixels in "width" and, hence, require a
 * task bar of at least that size.
 */

#ifndef CONFIG_NXWM_TASKBAR_WIDTH
#  if defined(CONFIG_NXWM_TASKBAR_TOP) || defined(CONFIG_NXWM_TASKBAR_BOTTOM)
#    define CONFIG_NXWM_TASKBAR_WIDTH (25+2*CONFIG_NXWM_TASKBAR_HSPACING)
#  else
#    define CONFIG_NXWM_TASKBAR_WIDTH (25+2*CONFIG_NXWM_TASKBAR_VSPACING)
#  endif
#endif

/* Tool Bar Configuration ***************************************************/
/**
 * CONFIG_NXWM_TOOLBAR_HEIGHT. The height of the tool bar in each
 * application window. At present, all icons are 21 pixels in height and,
 * hence, require a tool bar of at least that size.
 */

#ifndef CONFIG_NXWM_TOOLBAR_HEIGHT
#  define CONFIG_NXWM_TOOLBAR_HEIGHT (21+2*CONFIG_NXWM_TASKBAR_HSPACING)
#endif

/* Background Image *********************************************************/
/**
 * CONFIG_NXWM_BACKGROUND_IMAGE - The name of the image to use in the
 *   background window. Default: NXWidgets::g_nuttxBitmap
 */

#ifndef CONFIG_NXWM_BACKGROUND_IMAGE
#  define CONFIG_NXWM_BACKGROUND_IMAGE NXWidgets::g_nuttxBitmap
#endif

/* Start Window Configuration ***********************************************/
/**
 * Horizontal and vertical spacing of icons in the start window.
 *
 * CONFIG_NXWM_STARTWINDOW_VSPACING - Vertical spacing. Default: 4 pixels
 * CONFIG_NXWM_STARTWINDOW_HSPACING - Horizontal spacing. Default: 4 pixels
 * CONFIG_NXWM_STARTWINDOW_ICON - The glyph to use as the start window icon
 * CONFIG_NXWM_STARTWINDOW_MQNAME - The well-known name of the message queue
 *   used to communicate from CWindowMessenger to the start window thread.
 *   Default: "/dev/nxwm"
 * CONFIG_NXWM_STARTWINDOW_MXMSGS - The maximum number of messages to queue
 *   before blocking. Default: 32
 * CONFIG_NXWM_STARTWINDOW_MXMPRIO - The message priority. Default: 42
 * CONFIG_NXWM_STARTWINDOW_PRIO - Priority of the StartWindow task. Default:
 *   SCHED_PRIORITY_DEFAULT. NOTE: This priority should be less than
 *   CONFIG_NXWIDGETS_SERVERPRIO or else there may be data overrun errors.
 *   Such errors would most likely appear as duplicated rows of data on the
 *   display.
 * CONFIG_NXWM_STARTWINDOW_STACKSIZE - The stack size to use when starting
 *   the StartWindow task. Default: 2048 bytes
 */

#ifndef CONFIG_NXWM_STARTWINDOW_VSPACING
#  define CONFIG_NXWM_STARTWINDOW_VSPACING (4)
#endif

#ifndef CONFIG_NXWM_STARTWINDOW_HSPACING
#  define CONFIG_NXWM_STARTWINDOW_HSPACING (4)
#endif

/**
 * The start window glyph
 */

#ifndef CONFIG_NXWM_STARTWINDOW_ICON
#  define CONFIG_NXWM_STARTWINDOW_ICON NxWM::g_playBitmap
#endif

/**
 * Start window task parameters
 */

#ifndef CONFIG_NXWM_STARTWINDOW_MQNAME
#  define CONFIG_NXWM_STARTWINDOW_MQNAME "/dev/nxwm"
#endif

#ifndef CONFIG_NXWM_STARTWINDOW_MXMSGS
#  ifdef CONFIG_NX_MXCLIENTMSGS
#    define CONFIG_NXWM_STARTWINDOW_MXMSGS CONFIG_NX_MXCLIENTMSGS
#  else
#    define CONFIG_NXWM_STARTWINDOW_MXMSGS 32
#  endif
#endif

#ifndef CONFIG_NXWM_STARTWINDOW_MXMPRIO
#  define CONFIG_NXWM_STARTWINDOW_MXMPRIO 42
#endif

#ifndef CONFIG_NXWM_STARTWINDOW_PRIO
#  define CONFIG_NXWM_STARTWINDOW_PRIO SCHED_PRIORITY_DEFAULT
#endif

#if CONFIG_NXWIDGETS_SERVERPRIO <= CONFIG_NXWM_STARTWINDOW_PRIO
#  warning "CONFIG_NXWIDGETS_SERVERPRIO <= CONFIG_NXWM_STARTWINDOW_PRIO"
#  warning " -- This can result in data overrun errors"
#endif

#ifndef CONFIG_NXWM_STARTWINDOW_STACKSIZE
#  define CONFIG_NXWM_STARTWINDOW_STACKSIZE 2048
#endif

/* NxConsole Window *********************************************************/
/**
 * NxConsole Window Configuration
 *
 * CONFIG_NXWM_NXCONSOLE_PRIO - Priority of the NxConsole task. Default:
 *   SCHED_PRIORITY_DEFAULT.
 *   NOTE: This priority should be less than
 *   CONFIG_NXWIDGETS_SERVERPRIO or else there may be data overrun errors.
 *   Such errors would most likely appear as duplicated rows of data on the
 *   display.
 * CONFIG_NXWM_NXCONSOLE_STACKSIZE - The stack size to use when starting the
 *   NxConsole task. Default: 2048 bytes
 * CONFIG_NXWM_NXCONSOLE_WCOLOR - The color of the NxConsole window background.
 *   Default: MKRGB(192,192,192)
 * CONFIG_NXWM_NXCONSOLE_FONTCOLOR - The color of the fonts to use in the
 *   NxConsole window. Default: MKRGB(0,0,0)
 * CONFIG_NXWM_NXCONSOLE_FONTID - The ID of the font to use in the NxConsole
 *   window. Default: CONFIG_NXWM_DEFAULT_FONTID
 * CONFIG_NXWM_NXCONSOLE_ICON - The glyph to use as the NxConsole icon
 */

#ifndef CONFIG_NXWM_NXCONSOLE_PRIO
#  define CONFIG_NXWM_NXCONSOLE_PRIO SCHED_PRIORITY_DEFAULT
#endif

#if CONFIG_NXWIDGETS_SERVERPRIO <= CONFIG_NXWM_NXCONSOLE_PRIO
#  warning "CONFIG_NXWIDGETS_SERVERPRIO <= CONFIG_NXWM_NXCONSOLE_PRIO"
#  warning " -- This can result in data overrun errors"
#endif

#ifndef CONFIG_NXWM_NXCONSOLE_STACKSIZE
#  define CONFIG_NXWM_NXCONSOLE_STACKSIZE 2048
#endif

#ifndef CONFIG_NXWM_NXCONSOLE_WCOLOR
#  define CONFIG_NXWM_NXCONSOLE_WCOLOR CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR
#endif

#ifndef CONFIG_NXWM_NXCONSOLE_FONTCOLOR
#  define CONFIG_NXWM_NXCONSOLE_FONTCOLOR CONFIG_NXWM_DEFAULT_FONTCOLOR
#endif

#ifndef CONFIG_NXWM_NXCONSOLE_FONTID
#  define CONFIG_NXWM_NXCONSOLE_FONTID CONFIG_NXWM_DEFAULT_FONTID
#endif

/**
 * The NxConsole window glyph
 */

#ifndef CONFIG_NXWM_NXCONSOLE_ICON
#  define CONFIG_NXWM_NXCONSOLE_ICON NxWM::g_cmdBitmap
#endif

/* Touchscreen device *******************************************************/
/**
 * Touchscreen device settings
 *
 * CONFIG_NXWM_TOUCHSCREEN_DEVNO - Touchscreen device minor number, i.e., the
 *   N in /dev/inputN. Default: 0
 * CONFIG_NXWM_TOUCHSCREEN_DEVPATH - The full path to the touchscreen device.
 *   Default: "/dev/input0"
 * CONFIG_NXWM_TOUCHSCREEN_SIGNO - The realtime signal used to wake up the
 *   touchscreen listener thread. Default: 5
 * CONFIG_NXWM_TOUCHSCREEN_LISTENERPRIO - Priority of the touchscreen listener
 *   thread. Default: SCHED_PRIORITY_DEFAULT
 * CONFIG_NXWM_TOUCHSCREEN_LISTENERSTACK - Touchscreen listener thread stack
 *   size. Default: 1024
 */

#ifndef CONFIG_NXWM_TOUCHSCREEN_DEVNO
#  define CONFIG_NXWM_TOUCHSCREEN_DEVNO 0
#endif

#ifndef CONFIG_NXWM_TOUCHSCREEN_DEVPATH
#  define CONFIG_NXWM_TOUCHSCREEN_DEVPATH "/dev/input0"
#endif

#ifndef CONFIG_NXWM_TOUCHSCREEN_SIGNO
#  define CONFIG_NXWM_TOUCHSCREEN_SIGNO 5
#endif

#ifndef CONFIG_NXWM_TOUCHSCREEN_LISTENERPRIO
#  define CONFIG_NXWM_TOUCHSCREEN_LISTENERPRIO SCHED_PRIORITY_DEFAULT
#endif

#ifndef CONFIG_NXWM_TOUCHSCREEN_LISTENERSTACK
#  define CONFIG_NXWM_TOUCHSCREEN_LISTENERSTACK 1024
#endif

/* Keyboard device **********************************************************/
/**
 * Keyboard device settings
 *
 * CONFIG_NXWM_KEYBOARD_DEVPATH - The full path to the keyboard device.
 *   Default: "/dev/console"
 * CONFIG_NXWM_KEYBOARD_SIGNO - The realtime signal used to wake up the
 *   keyboard listener thread. Default: 6
 * CONFIG_NXWM_KEYBOARD_BUFSIZE - The size of the keyboard read data buffer.
 *   Default: 16
 * CONFIG_NXWM_KEYBOARD_LISTENERPRIO - Priority of the keyboard listener
 *   thread. Default: SCHED_PRIORITY_DEFAULT
 * CONFIG_NXWM_KEYBOARD_LISTENERSTACK - Keyboard listener thread stack
 *   size.
 *   Default: 1024
 */

#ifndef CONFIG_NXWM_KEYBOARD_DEVPATH
#  define CONFIG_NXWM_KEYBOARD_DEVPATH "/dev/console"
#endif

#ifndef CONFIG_NXWM_KEYBOARD_SIGNO
#  define CONFIG_NXWM_KEYBOARD_SIGNO 6
#endif

#ifndef CONFIG_NXWM_KEYBOARD_BUFSIZE
#  define CONFIG_NXWM_KEYBOARD_BUFSIZE 16
#endif

#ifndef CONFIG_NXWM_KEYBOARD_LISTENERPRIO
#  define CONFIG_NXWM_KEYBOARD_LISTENERPRIO SCHED_PRIORITY_DEFAULT
#endif

#ifndef CONFIG_NXWM_KEYBOARD_LISTENERSTACK
#  define CONFIG_NXWM_KEYBOARD_LISTENERSTACK 1024
#endif

/* Calibration display ******************************************************/
/**
 * Calibration display settings:
 *
 * CONFIG_NXWM_CALIBRATION_BACKGROUNDCOLOR - The background color of the
 *   touchscreen calibration display. Default: Same as
 *   CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR
 * CONFIG_NXWM_CALIBRATION_LINECOLOR - The color of the lines used in the
 *   touchscreen calibration display. Default: MKRGB(0, 0, 128) (dark blue)
 * CONFIG_NXWM_CALIBRATION_CIRCLECOLOR - The color of the circle in the
 *   touchscreen calibration display. Default: MKRGB(255, 255, 255) (white)
 * CONFIG_NXWM_CALIBRATION_TOUCHEDCOLOR - The color of the circle in the
 *   touchscreen calibration display after the touch is recorded. Default:
 *   MKRGB(255, 255, 96) (very light yellow)
 * CONFIG_NXWM_CALIBRATION_ICON - The ICON to use for the touchscreen
 *   calibration application. Default: NxWM::g_calibrationBitmap
 * CONFIG_NXWM_CALIBRATION_SIGNO - The realtime signal used to wake up the
 *   touchscreen calibration thread. Default: 5
 * CONFIG_NXWM_CALIBRATION_LISTENERPRIO - Priority of the calibration listener
 *   thread. Default: SCHED_PRIORITY_DEFAULT
 * CONFIG_NXWM_CALIBRATION_LISTENERSTACK - Calibration listener thread stack
 *   size. Default: 2048
 */

#ifndef CONFIG_NXWM_CALIBRATION_BACKGROUNDCOLOR
#  define CONFIG_NXWM_CALIBRATION_BACKGROUNDCOLOR CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR
#endif

#ifndef CONFIG_NXWM_CALIBRATION_LINECOLOR
#  define CONFIG_NXWM_CALIBRATION_LINECOLOR MKRGB(0, 0, 128)
#endif

#ifndef CONFIG_NXWM_CALIBRATION_CIRCLECOLOR
#  define CONFIG_NXWM_CALIBRATION_CIRCLECOLOR MKRGB(255, 255, 255)
#endif

#ifndef CONFIG_NXWM_CALIBRATION_TOUCHEDCOLOR
#  define CONFIG_NXWM_CALIBRATION_TOUCHEDCOLOR MKRGB(255, 255, 96)
#endif

#ifndef CONFIG_NXWM_CALIBRATION_ICON
#  define CONFIG_NXWM_CALIBRATION_ICON NxWM::g_calibrationBitmap
#endif

#ifndef CONFIG_NXWM_CALIBRATION_SIGNO
#  define CONFIG_NXWM_CALIBRATION_SIGNO 5
#endif

#ifndef CONFIG_NXWM_CALIBRATION_LISTENERPRIO
#  define CONFIG_NXWM_CALIBRATION_LISTENERPRIO SCHED_PRIORITY_DEFAULT
#endif

#ifndef CONFIG_NXWM_CALIBRATION_LISTENERSTACK
#  define CONFIG_NXWM_CALIBRATION_LISTENERSTACK 2048
#endif

/* Hex Calculator application ***********************************************/
/**
 * Hex calculator settings:
 *
 * CONFIG_NXWM_HEXCALCULATOR_BACKGROUNDCOLOR - The background color of the
 *   calculator display. Default: Same as CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR
 * CONFIG_NXWM_HEXCALCULATOR_ICON - The ICON to use for the hex calculator
 *   application. Default: NxWM::g_calculatorBitmap
 * CONFIG_NXWM_HEXCALCULATOR_FONTID - The font used with the calculator.
 *   Default: CONFIG_NXWM_DEFAULT_FONTID
 */

#ifndef CONFIG_NXWM_HEXCALCULATOR_BACKGROUNDCOLOR
#  define CONFIG_NXWM_HEXCALCULATOR_BACKGROUNDCOLOR CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR
#endif

#ifndef CONFIG_NXWM_HEXCALCULATOR_ICON
#  define CONFIG_NXWM_HEXCALCULATOR_ICON NxWM::g_calculatorBitmap
#endif

#ifndef CONFIG_NXWM_HEXCALCULATOR_FONTID
#  define CONFIG_NXWM_HEXCALCULATOR_FONTID CONFIG_NXWM_DEFAULT_FONTID
#endif

/* Media Player application *************************************************/
/**
 * Media player settings:
 *
 * CONFIG_NXWM_MEDIAPLAYER_BACKGROUNDCOLOR - The background color of the
 *   media player display. Default: Same as CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR
 * CONFIG_NXWM_MEDIAPLAYER_ICON - The ICON to use for the media player
 *   application. Default: NxWM::g_mediaplayerBitmap
 * CONFIG_NXWM_MPLAYER_FWD_ICON / _PLAY_ICON / _PAUSE_ICON / _REW_ICON /
 *   _VOL_ICON - The glyphs for the media player controls.
 * CONFIG_NXWM_MEDIAPLAYER_FONTID - The font used with the media player.
 *   Default: CONFIG_NXWM_DEFAULT_FONTID
 */

#ifndef CONFIG_NXWM_MEDIAPLAYER_BACKGROUNDCOLOR
#  define CONFIG_NXWM_MEDIAPLAYER_BACKGROUNDCOLOR CONFIG_NXWM_DEFAULT_BACKGROUNDCOLOR
#endif

#ifndef CONFIG_NXWM_MEDIAPLAYER_ICON
#  define CONFIG_NXWM_MEDIAPLAYER_ICON NxWM::g_mediaplayerBitmap
#endif

#ifndef CONFIG_NXWM_MPLAYER_FWD_ICON
#  define CONFIG_NXWM_MPLAYER_FWD_ICON NxWM::g_mplayerFwdBitmap
#endif

#ifndef CONFIG_NXWM_MPLAYER_PLAY_ICON
#  define CONFIG_NXWM_MPLAYER_PLAY_ICON NxWM::g_mplayerPlayBitmap
#endif

#ifndef CONFIG_NXWM_MPLAYER_PAUSE_ICON
#  define CONFIG_NXWM_MPLAYER_PAUSE_ICON NxWM::g_mplayerPauseBitmap
#endif

#ifndef CONFIG_NXWM_MPLAYER_REW_ICON
#  define CONFIG_NXWM_MPLAYER_REW_ICON NxWM::g_mplayerRewBitmap
#endif

#ifndef CONFIG_NXWM_MPLAYER_VOL_ICON
#  define CONFIG_NXWM_MPLAYER_VOL_ICON NxWM::g_mplayerVolBitmap
#endif

#ifndef CONFIG_NXWM_MEDIAPLAYER_FONTID
#  define CONFIG_NXWM_MEDIAPLAYER_FONTID CONFIG_NXWM_DEFAULT_FONTID
#endif

/****************************************************************************
 * Global Function Prototypes
 ****************************************************************************/
/**
 * Hook to support monitoring of memory usage by the NxWM unit test.
 */

#ifdef CONFIG_NXWM_UNITTEST
#  ifdef CONFIG_HAVE_FILENAME
void _showTestStepMemory(FAR const char *file, int line, FAR const char *msg);
#    define showTestStepMemory(msg) \
       _showTestStepMemory((FAR const char*)__FILE__, (int)__LINE__, msg)
#  else
void showTestStepMemory(FAR const char *msg);
#  endif
#else
#  define showTestStepMemory(msg)
#endif

#endif // __INCLUDE_NXWMCONFIG_HXX
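The color values in this header are built with NuttX's MKRGB macro, which packs 8-bit components into the configured device pixel format. As a rough illustration only - assuming a 16-bpp RGB565 target, which is one common NX configuration but is not something this header states - the packing works like this:

# Rough illustration of RGB packing of the kind NuttX's MKRGB performs.
# Assumption: a 16-bpp RGB565 framebuffer; the real macro adapts to the
# configured color depth, so this is illustrative only.

def mkrgb565(r: int, g: int, b: int) -> int:
    """Pack 8-bit R/G/B into a 16-bit RGB565 pixel value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(mkrgb565(148, 189, 215)))  # default background color above -> 0x95fa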
Monitoring of inflammation using novel biosensor mouse model reveals tissue- and sex-specific responses to Western diet

ABSTRACT Obesity is an epidemic, and it is characterized by a state of low-grade systemic inflammation. A key component of inflammation is the activation of inflammasomes, multiprotein complexes that form in response to danger signals and that lead to activation of caspase-1. Previous studies have found that a Westernized diet induces activation of inflammasomes and production of inflammatory cytokines. Gut microbiota metabolites, including the short-chain fatty acid butyrate, have received increased attention as underlying some obesogenic features, but the mechanisms by which butyrate influences inflammation in obesity remain unclear. We engineered a caspase-1 reporter mouse model to measure the spatiotemporal dynamics of inflammation in obese mice. Concurrent with increased caspase-1 activation in vivo, we detected a stronger biosensor signal in white adipose and heart tissues of obese mice ex vivo, and observed that a short-term butyrate treatment affected some, but not all, of the inflammatory responses induced by Western diet. Through characterization of inflammatory responses and computational analyses, we identified tissue- and sex-specific caspase-1 activation patterns and inflammatory phenotypes in obese mice, offering new mechanistic insights into the dynamics of inflammation.
Asthma and COVID-19 among healthcare workers from a Mexican hospital: is there an association?

Background: Asthma does not appear to be a risk factor for developing COVID-19. Objective: The objective of the study was to analyze the role of asthma as a factor associated with COVID-19 among healthcare workers (HW). Methods: A cross-sectional study was conducted in HW from a Mexican hospital. Data were obtained through an epidemiological survey that included age, sex, and history of COVID-19. Multivariate logistic regression analysis was performed to identify factors associated with COVID-19. Results: In total, 2295 HW were included (63.1% women; mean age 39.1 years), and 1550 (67.5%) were medical personnel. The prevalence of asthma in HW with COVID-19 was 8.3%; for the group without COVID-19, the prevalence was 5.3% (p = 0.011). The multivariate analysis suggested that asthma was associated with COVID-19 (OR 1.59, p = 0.007). Conclusion: Our study suggests that asthma could be a factor associated with COVID-19 in HW.
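For context, the crude (unadjusted) odds ratio implied by the two reported prevalences can be reproduced in a few lines. The calculation below uses only the percentages from the abstract, so it is a back-of-envelope check rather than the paper's adjusted estimate of 1.59, which came from a multivariate model:

# Back-of-envelope check: crude odds ratio from the reported asthma
# prevalences (8.3% in HW with COVID-19 vs 5.3% without). The paper's
# OR of 1.59 is from a multivariate model, so it will not match exactly.

def odds(p: float) -> float:
    return p / (1.0 - p)

p_covid, p_no_covid = 0.083, 0.053
crude_or = odds(p_covid) / odds(p_no_covid)
print(f"crude OR = {crude_or:.2f}")  # ~1.62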
MANCHESTER UNITED have joined Bayern Munich in the race for Callum Hudson-Odoi, according to reports. Bayern made the Chelsea winger their No1 priority in the winter, but saw all three of their bids rejected by Stamford Bridge chiefs.

According to The Daily Mail, United have set their sights on Hudson-Odoi as interim boss Ole Gunnar Solskjaer searches for a right winger. SunSport understand the United boss-in-waiting will hold off any urge to strengthen his defence as he searches for a right-sided attacker.

Old Trafford chiefs will hand their manager £80million in the summer, with Borussia Dortmund winger Jadon Sancho top of the wishlist. However, if the 18-year-old, who started on the right in England's 5-0 Euro qualifier win against the Czech Republic on Friday, wishes to stay in Germany, it is understood Hudson-Odoi is the alternative.

The Blues forward lodged a transfer request in January, attempting to force through the move to Germany, though Chelsea remained firm in their decision to hold on to the ace. The 18-year-old has played just 274 minutes of football for Maurizio Sarri's side this term, netting four goals in his eight Europa League games. With the arrival of Christian Pulisic from Dortmund next season, Hudson-Odoi could again face time on the bench.

Blues bosses have offered him an £85,000-a-week deal to extend his contract, which expires in 2020, though the teen will demand a guarantee of first-team football. But with Chelsea's potential two-year transfer ban imminent, and Eden Hazard's future in doubt, no sale of Hudson-Odoi will be rushed.
2014 Novak Djokovic tennis season

Yearly summary

Djokovic began the year with a win at the World Tennis Championship warm-up tournament. At the Australian Open, he beat Lukáš Lacko in straight sets in the first round, then Leonardo Mayer in straight sets (taking the first set with a bagel), and then Denis Istomin, also in straight sets. He continued his straight-sets streak by beating No. 15 seed Fabio Fognini. Djokovic then met eventual champion Stanislas Wawrinka in the quarterfinals, who defeated Djokovic in five sets, ending his 25-match winning streak at the Australian Open. Djokovic chose to withdraw from the first round of the Davis Cup and returned in late February attempting to defend his Dubai title; however, he fell in the semi-finals to eventual champion Roger Federer. In March he returned to Indian Wells and Miami, winning both tournaments: in the first he avenged his Dubai loss to Federer in three sets, and in the latter he defeated Rafael Nadal in straight sets in their 40th match. Djokovic played the Monte-Carlo Masters, losing to Federer in the semifinals. This ended a remarkable unbeaten run in the Masters 1000 tournaments, starting with Shanghai in 2013, during which he won four consecutive Masters 1000 titles (Shanghai, Paris, Indian Wells and Miami). On 4 May, he withdrew from the ATP World Tour Masters 1000 Madrid, having suffered a recurrence of the right-arm injury that afflicted him at the ATP World Tour Masters 1000 Monte-Carlo... On 18 May, he defeated Nadal in Rome for his 19th ATP World Tour Masters 1000 trophy; he has now won five of the past seven titles at this tournament level and is tied at No. 13 with Muster on the Open Era titles leader list, with 44 crowns... On 8 June, he failed in his bid to win a first Roland Garros title, regain No. 1 in the Emirates ATP Rankings for the first time since 6 October 2013, and complete a career Grand Slam (he would have been the eighth man in tennis history)... Finished runner-up for a second time (also 2012), losing to Nadal in four sets... Won his seventh Grand Slam championship and second Wimbledon crown (also 2011), beating No. 4 seed Federer in five sets in the final. Lost prior to the ATP World Tour Masters 1000 Toronto QFs for the first time, when his 11-match winning streak against Tsonga ended... Saw his hopes of completing a Career Golden Masters end at the hands of Robredo in the ATP World Tour Masters 1000 Cincinnati 4R... Dropped one set en route to his eighth straight US Open SF (l. to Nishikori)... It was his 17th major SF in his past 18 Grand Slam championships... Beat seeds Kohlschreiber (4R) and Murray (QFs)... On 5 October, improved to 24-0 in Beijing with a fifth title (d. Berdych 6-0, 6-2 in the final)... Did not drop a set all week in winning his 46th career title... On 11 October, saw his 28-match winning streak on Chinese soil come to an end in the ATP World Tour Masters 1000 Shanghai SFs (l. to Federer)... On 2 November, became the fifth active player (23rd in the Open Era) to record 600 match wins as he captured his 20th ATP World Tour Masters 1000 title (d.
Raonic) in Paris, his third trophy at the tournament (also 2009 and 2013)... On 16 November, won the Barclays ATP World Tour Finals for the third straight year - and the fourth time overall (also 2008)... He is the third player to win three straight year-end titles, after Ilie Nastase (1971-73) and Ivan Lendl (1985-87)... Went undefeated at 4-0, but did not contest the final due to Federer's back injury... Finished 2014 with a 61-8 match record, including seven titles and $14,250,527 in prize money.

Timeline

[Chart: Novak Djokovic No. 1 rankings timeline, showing weeks (0-44) at ATP No. 1 and at No. 1 in the Race (to London).]
/*
 * Copyright 2022 The Furiko Authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package httphandler

import (
	"net/http"

	"github.com/pkg/errors"
)

// tlsServer is a wrapper around *http.Server that overrides the ListenAndServe
// implementation for TLS.
type tlsServer struct {
	*http.Server
	certFile string
	keyFile  string
}

func newTLSServer(server *http.Server, certFile, keyFile string) *tlsServer {
	return &tlsServer{
		Server:   server,
		certFile: certFile,
		keyFile:  keyFile,
	}
}

var _ Server = (*tlsServer)(nil)

// ListenAndServe validates that a certificate/key pair was provided and then
// serves TLS using it.
func (s *tlsServer) ListenAndServe() error {
	if s.certFile == "" {
		return errors.New("certFile must be specified")
	}
	if s.keyFile == "" {
		return errors.New("keyFile must be specified")
	}
	return s.ListenAndServeTLS(s.certFile, s.keyFile)
}
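A server wrapped this way can be smoke-tested from any TLS client. A minimal sketch follows; the host, port and certificate path are hypothetical, and the self-signed certificate is trusted explicitly via cafile rather than through the system store:

# Minimal TLS smoke test for an HTTPS endpoint such as the one served by
# tlsServer. Host, port, and cert path are hypothetical; a self-signed
# cert is trusted explicitly via cafile rather than the system store.
import ssl
import urllib.request

ctx = ssl.create_default_context(cafile="cert.pem")  # hypothetical path
ctx.check_hostname = False  # typical for ad-hoc self-signed test certs

with urllib.request.urlopen("https://localhost:8443/", context=ctx) as resp:
    print(resp.status)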
Efficient target tracking using mobile sensors A mathematical model for tracking of a moving target by multiple mobile sensors in the framework of a partially observable Markov decision process is discussed. Applications include the use of a fleet of unmanned aerial vehicles for purposes such as search, surveillance, and target tracking. Computationally efficient approximate policies for controlling the mobile sensors are proposed, and a guarantee on their performance losses relative to that of the optimal policy is provided. Simulation results show that our proposed policies do perform close to the optimum for certain stationary models in which a mobile sensor can always move as fast as the target.
Neuroendocrine Disruption: The Emerging Concept

On July 10, 2010, the beautiful City of Rouen (France) hosted what we believe was the First International Symposium on the Neuroendocrine Effects of Endocrine Disruptors (NEED). Several laboratories had been working independently on the possibility that endocrine-disrupting chemicals were having major effects on the central nervous system. Luckily, an opportunity to assemble this group came about when Dr. Hubert Vaudry, the main organizer of the 7th International Congress of Neuroendocrinology (ICN) 2010, sent out a request for proposals for satellite symposia to be associated with the main event. One of us (O.K.) proposed the idea for this symposium, and it was readily endorsed by the ICN. Moreover, the idea to publish a special volume came soon after. This idea was also very well supported by Dr. Sam Kacew, editor-in-chief of the Journal of Toxicology and Environmental Health. We thank the generous sponsors and volunteers who helped to finance and run this symposium. They are l'Université de Rennes 1, INRA, INERIS, CNRS, la Ville de Rouen, Science Action Haute-Normandie, and the ANR program NEED. We also appreciated the critical help of many other people, in particular, Philippe Chan Tchi Song (l'Université de Rouen), Monsieur Hébert from la Ville de Rouen, Emmanuelle Guiot, Cyril Gabbero, Arianna Servili, Maria Rita Pérez, and
Six senators to President Bush: Congress will have a say in post-war plans

RAW STORY
Published: Thursday December 6, 2007

Six United States Senators have co-authored a letter to President Bush insisting that Congress is to be included in the decision-making process for a plan to maintain a post-war American presence in Iraq. A copy of the letter, in PDF format, can be accessed HERE. Text of the full letter follows below.

Dear Mr. President:

We write you today regarding the "Declaration of Principles" agreed upon last week between the United States and Iraq outlining the broad scope of discussions to be held over the next six months to institutionalize long-term U.S.-Iraqi cooperation in the political, economic, and security realms. It is our understanding that these discussions seek to produce a strategic framework agreement, no later than July 31, 2008, to help define "a long-term relationship of cooperation and friendship as two fully sovereign and independent states with common interests".

The future of American policy towards Iraq, especially in regard to the issues of U.S. troop levels, permanent U.S. military bases, and future security commitments, has generated strong debate among the American people and their elected representatives. Agreements between our two countries relating to these issues must involve the full participation and consent of the Congress as a co-equal branch of the U.S. government. Furthermore, the future U.S. presence in Iraq is a central issue in the current Presidential campaign. We believe a security commitment that obligates the United States to go to war on behalf of the Government of Iraq at this time is not in America's long-term national security interest and does not reflect the will of the American people. Commitments made during the final year of your Presidency should not unduly or artificially constrain your successor when it comes to Iraq.

In particular, we want to convey our strong concern regarding any commitments made by the United States with respect to American security assurances to Iraq to help deter and defend against foreign aggression or other violations of Iraq's territorial integrity. Security assurances, once made, cannot be easily rolled back without incurring a great cost to America's strategic credibility and imperiling the stability of our nation's other alliances around the world. Accordingly, security assurances must be extended with great care and only in the context of broad bipartisan agreement that such assurances serve our abiding national interest. Such assurances, if legally binding, are generally made in the context of a formal treaty subject to the advice and consent of the U.S. Senate but in any case cannot be made without Congressional authorization.

Our unease is heightened by remarks made on November 26th by General Douglas Lute, the Assistant to the President for Iraq and Afghanistan, that Congressional input is not foreseen. General Lute was quoted as asserting at a White House press briefing, "We don't anticipate now that these negotiations will lead to the status of a formal treaty which would then bring us to formal negotiations or formal inputs from the Congress." It is unacceptable for your Administration to unilaterally fashion a long-term relationship with Iraq without the full and comprehensive participation of Congress from the very start of such negotiations.
We look forward to learning more details as the Administration commences negotiations with the Iraqi government on the contours of long-term political, economic, and security ties between our two nations. We trust you agree that the proposed extension of long-term U.S. security commitments to a nation in a critical region of the world requires the full participation and consent of the Congress as a co-equal branch of our government. Sincerely, Robert P. Casey, Jr. (D-PA) United States Senator Robert C. Byrd (D-WV) United States Senator Edward M. Kennedy (D-MA) United States Senator Jim Webb (D-VA) United States Senator Hillary Rodham Clinton (D-NY) United States Senator Carl Levin (D-MI) United States Senator
Contrasting target, stray-light, and other performance metrics for MISR

The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth-observing sensor to be flown as part of the Earth Observing System (EOS) in 1998. The radiometric and spectral calibration of the nine cameras which compose this instrument will be done using targets which are uniform in space and in angle, unpolarized, and lacking in absorption lines. A calibration uncertainty will also be determined for this configuration. This allows one to estimate the accuracy of measured radiances, assuming the scene is likewise featureless with respect to these parameters. In addition to these calibrations, the MISR engineering team will be responsible for verification of certain performance specifications which assure that data products can be produced for a range of target types. MISR is specified to be insensitive to the state of polarization of the incident field to within ±1%; it must recover from saturation within eight line-repeat times; blooming in the event of saturation shall be limited to the eight adjacent pixels; stray light shall be rejected to a degree sufficient to maintain the radiometric requirements of the within-field target; and radiometry will be preserved while observing two specific contrasting scenes. The first scene is 5% in reflectance for one half-plane, and 100% in reflectance for the other half-plane. The radiance retrieved over the dark scene 24 pixels from the bright/dark boundary shall differ by no more than 2% from the retrieval over a uniform 5% dark plane (i.e. with no bright half-plane). This specification guarantees a specified level of accuracy for a large dark expanse, such as the ocean surface. The second specification defines a scene which is 50% in reflectance except for the center 24×24 pixels, which are 5% in reflectance. The radiance retrieved anywhere within the dark region shall differ by no more than 2% from the case where the scene is uniformly 5% dark. This scene type could be used, for example, in the aerosol retrieval algorithm where a lake surrounded by brighter land is investigated. Due to the need to estimate performance prior to hardware build, and due to the difficulties in constructing test targets for an unlimited number of scene types, MISR will combine test and analysis to verify these specifications. Currently a stray-light analysis program is assisting in the camera design process, for the purpose of minimizing ghost imagery and spectral cross-talk. The point-source transmittance function from the stray-light code is used to predict the blurring of energy in the presence of a contrasting target. Results of these analyses, and test plans, are reviewed.
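The half-plane contrast requirement lends itself to a simple numerical check. The sketch below convolves a 5%/100% half-plane scene with a hypothetical Gaussian point-spread function - a stand-in for the camera's actual stray-light response, which is not given here - and reports the relative radiance error 24 pixels into the dark side:

# Numerical check of the half-plane contrast spec: after blurring a
# 5%/100% reflectance half-plane with a PSF, how far off is the retrieval
# 24 pixels into the dark side? The Gaussian PSF is a hypothetical
# stand-in; real stray-light wings are much broader and are the concern.
import numpy as np

n = 512
scene = np.full(n, 0.05)
scene[n // 2:] = 1.0                      # bright half-plane

x = np.arange(n) - n // 2
sigma = 4.0                               # hypothetical PSF width (pixels)
psf = np.exp(-0.5 * (x / sigma) ** 2)
psf /= psf.sum()

blurred = np.convolve(scene, psf, mode="same")
idx = n // 2 - 24                         # 24 pixels into the dark side
err = (blurred[idx] - 0.05) / 0.05
print(f"relative error at 24 px: {err:.2%}")  # ~0% for this narrow core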
Queue-Aware Optimal Frequency Selection for Energy Minimization in Wireless Networks

Reducing the energy consumption of wireless networks is one of the fundamental objectives of green networking. While dynamic frequency scaling has been proposed for reducing energy consumption, it increases the packet delays and loss rates in the network. This paper addresses the problem of optimally selecting the clock frequency for wireless network interface cards with dynamic frequency scaling, so that the energy consumption is minimized while meeting the packet-level performance requirements. The proposed framework is based on modeling the interface card as a MAP/G/1/K queue with threshold-based service. The proposed frequency selection mechanism has been verified through simulations.
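As a simplified illustration of the trade-off being optimised - using an M/G/1 approximation with the Pollaczek-Khinchine formula in place of the paper's MAP/G/1/K model, and with all numbers invented - one can scan candidate clock frequencies for the lowest-power choice that still meets a mean-delay target:

# Simplified illustration of queue-aware frequency selection: pick the
# lowest-power frequency whose mean M/G/1 waiting time (Pollaczek-
# Khinchine) meets a delay target. M/G/1 stands in for the paper's
# MAP/G/1/K model; all numbers are hypothetical.

lam = 800.0                  # packet arrival rate (packets/s)
delay_target = 2e-3          # mean waiting-time target (s)
freqs_ghz = [0.4, 0.8, 1.2, 1.6]
base_service = 1e-3          # service time at 1 GHz (s), deterministic

best = None
for f in freqs_ghz:
    s = base_service / f                 # service time scales with 1/f
    rho = lam * s
    if rho >= 1.0:
        continue                         # queue unstable at this frequency
    es2 = s * s                          # E[S^2] for deterministic service
    wq = lam * es2 / (2.0 * (1.0 - rho)) # P-K mean waiting time
    power = f ** 3                       # dynamic power ~ f^3 (hypothetical)
    if wq <= delay_target and (best is None or power < best[1]):
        best = (f, power, wq)

print(best)  # lowest-power frequency meeting the delay target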
It was less than a month ago that the first trailer went up for Hulu’s new Marvel series Runaways, about a group of kids who accidentally uncover the fact that their parents are supervillains. It looked pretty appealing. Today a longer trailer was released, and now it looks even better. This is basically a more involved version of the shorter trailer we already got, but it adds layers to the story and starts to convey more of the tone of the show. Fans of the comics already know this, but the trailer makes clear that these kids were former friends who drifted apart as they entered their teenage years. So when they get back together only to discover their parents’ terrible secret, there’s plenty of built-up emotional baggage accompanying them as they go on the run from their families. Additionally, we start to see the powers some of them possess. Some are inherited mutant abilities and some are just the result of very cool technology, but everyone has something that makes them stand out—and although the trailer only shows a very brief glimpse of it, there’s a dinosaur involved. As long as the creative team didn’t water down the story or try to shy away from the darker elements of the original narrative, there’s a good chance that Runaways could end up being something very special—everything that Inhumans isn’t, in other words. Runaways premieres on Hulu November 21.
#pragma once

#include "EmulatorCommon.h"

#include <SDL.h>
#include <SDL_syswm.h>
#include <shobjidl.h>

#include "CPU.h"
#include "GameTimer.h"
#include "ImGuiImpl.h"
#include "imgui_memory_editor.h"

/*
 * Main application class for running the emulator and managing the overall program state
 */
class Emulator
{
public:
	/// <summary>
	/// Initialises the various sub-systems required by the emulator
	/// </summary>
	/// <returns>True if program initialisation was successful or false if something went wrong and we cannot continue</returns>
	bool Initialise();

	/// <summary>
	/// Main program loop. Handles window events, CHIP-8 emulation, timers etc
	/// </summary>
	void Run();

	/// <summary>
	/// Stops the emulator and releases any in-use resources
	/// </summary>
	void Stop();

private:
	/// <summary>
	/// Initialises SDL2 and ensures we're set up to be able to draw to a window
	/// </summary>
	/// <returns>True if SDL2 initialisation was successful or false if something went wrong and we cannot continue</returns>
	bool InitSDL();

	/// <summary>
	/// Initialises the CHIP-8 CPU emulator and loads a program
	/// </summary>
	void InitCpu();

	/// <summary>
	/// Initialises Dear ImGui integration
	/// </summary>
	void InitImGui();

	/// <summary>
	/// Displays a 'File Browse Dialog' for the end-user to select a ROM to load from disk.
	/// </summary>
	/// <returns>True if a ROM was selected and loaded successfully. False if either no ROM was selected or loading failed.</returns>
	bool LoadRom();

	/// <summary>
	/// Handles any pending SDL or Windows window events
	/// </summary>
	void HandleEvents();

	/// <summary>
	/// Checks for key press events and handles any that are found
	/// </summary>
	void Update();

	/// <summary>
	/// Clears the display and prepares to draw a new frame
	/// </summary>
	void Clear();

	/// <summary>
	/// Draws the emulator VRAM and Dear ImGui UI to screen
	/// </summary>
	void Draw();

	/// <summary>
	/// Pushes the current back buffer to screen to be drawn
	/// </summary>
	void Present();

	/// <summary>
	/// Decrements the CPU's 'Sound' and/or 'Delay' timers on each timer tick (nominally 60 Hz on CHIP-8) if they're currently greater than 0
	/// </summary>
	void UpdateTimers();

private:
	/// <summary>
	/// Draws the ImGui menu bar at the top of the screen
	/// </summary>
	void DrawMainMenu();

	/// <summary>
	/// Draws the ImGui debug overlay showing the contents of each CPU register
	/// </summary>
	void DrawDebugOverlay();

private:
	// Set to true if the emulator is currently running (not including the CPU)
	bool m_bIsRunning = false;

	// Set to true if the CPU is paused and not executing any more instructions
	bool m_bIsPaused = true;

	// Set to true if a ROM has been loaded into the CPU's memory ready for execution. False if no ROM has been loaded.
	bool m_bIsProgramLoaded = false;

	// Buffer for uploading VRAM to the GPU for rendering
	uint32_t m_PixelBuffer[2048];

	// State of each keyboard key (i.e. is it pressed or not)
	Uint8* m_KeyStates = nullptr;

	// Main application window for the emulator
	SDL_Window* m_GameWindow = nullptr;

	// The renderer that will draw the CHIP-8 VRAM to the screen
	SDL_Renderer* m_Renderer = nullptr;

	// Texture the CHIP-8 display will be drawn to before being presented to the display
	SDL_Texture* m_RenderTexture = nullptr;

	// Pointer to the CPU instance that will be used for emulation
	CPU* m_Cpu = nullptr;

	// Game timer class used for handling timer-related emulation tasks
	GameTimer* m_GameTimer = nullptr;

	// ImGui implementation.
	// Handles key presses and state for the ImGui integration
	ImGuiImpl* m_ImGuiContext = nullptr;

private:
	/* ImGui Memory Viewers */

	// Memory viewer for the CPU's VRAM
	MemoryEditor* m_VRamWindow = nullptr;

	// Memory viewer for the CPU's stack
	MemoryEditor* m_StackMemoryWindow = nullptr;

	// Memory viewer for the CPU's full memory view
	MemoryEditor* m_SystemMemoryWindow = nullptr;

	/* ImGui State variables */

	// Set to true if the ImGui VRAM memory viewer should be displayed on-screen
	bool m_bShowVRamView = false;

	// Set to true if the ImGui stack memory viewer should be displayed on-screen
	bool m_bShowStackView = false;

	// Set to true if the ImGui CPU memory viewer should be displayed on-screen
	bool m_bShowSystemMemoryView = true;

	// Set to true if the ImGui registers overlay should be displayed on-screen
	bool m_bShowDebugOverlay = true;

	// If set to true the CPU will execute a single instruction and then pause again
	bool m_bExecuteSingleInstruction = false;

private:
	/* Constants */

	// Dimensions the main application window will be created at
	int k_WindowWidth = 1440;
	int k_WindowHeight = 900;

	// Title displayed for the main application window
	const char* k_WindowTitle = "CHIP-8 Emulator";

	// List of file types selectable on the 'Open File Dialog' when browsing to a ROM file on disk.
	const COMDLG_FILTERSPEC k_FileFilterSpec[3] =
	{
		{ L"CHIP-8 Program", L"*.ch8" },
		{ L"Binary Files",   L"*.bin" },
		{ L"All Files",      L"*.*" }
	};

	// Array of SDL2 scancodes for the various keyboard keys the CHIP-8 can handle/react to
	const Uint8 k_KeyCodes[16] =
	{
		SDL_SCANCODE_0, SDL_SCANCODE_1, SDL_SCANCODE_2, SDL_SCANCODE_3,
		SDL_SCANCODE_4, SDL_SCANCODE_5, SDL_SCANCODE_6, SDL_SCANCODE_7,
		SDL_SCANCODE_8, SDL_SCANCODE_9, SDL_SCANCODE_A, SDL_SCANCODE_B,
		SDL_SCANCODE_C, SDL_SCANCODE_D, SDL_SCANCODE_E, SDL_SCANCODE_F
	};
};
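The UpdateTimers hook reflects CHIP-8's fixed-rate delay/sound timers, which tick down at 60 Hz regardless of how fast the CPU runs instructions. A minimal sketch of the usual fixed-timestep accumulator pattern follows; the names are illustrative and are not taken from this codebase:

# Minimal sketch of 60 Hz CHIP-8 timer decrement via a fixed-timestep
# accumulator. Names are illustrative; this mirrors the pattern an
# UpdateTimers() hook typically implements, not this codebase's exact code.
TIMER_HZ = 60.0
TICK = 1.0 / TIMER_HZ

class Timers:
    def __init__(self):
        self.delay = 0          # 8-bit delay timer
        self.sound = 0          # 8-bit sound timer
        self._acc = 0.0         # accumulated elapsed time (seconds)

    def update(self, dt: float) -> None:
        """Advance by dt seconds, decrementing both timers at 60 Hz."""
        self._acc += dt
        while self._acc >= TICK:
            self._acc -= TICK
            if self.delay > 0:
                self.delay -= 1
            if self.sound > 0:
                self.sound -= 1   # a beep plays while sound > 0

t = Timers()
t.delay = 120                     # two seconds' worth of ticks
t.update(1.0)                     # one simulated second
print(t.delay)                    # -> 60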
Generation of Triplet Excited States via Photoinduced Electron Transfer in meso-anthra-BODIPY: Fluorogenic Response toward Singlet Oxygen in Solution and in Vitro

Heavy atom-free BODIPY-anthracene dyads (BADs) generate locally excited triplet states by way of photoinduced electron transfer (PeT), followed by recombination of the resulting charge-separated states (CSS). Subsequent quenching of the triplet states by molecular oxygen produces singlet oxygen (1O2), which reacts with the anthracene moiety yielding highly fluorescent species. The steric demand of the alkyl substituents in the BODIPY subunit defines the site of 1O2 addition. Novel bis- and tetraepoxides and bicyclic acetal products, arising from rearrangements of anthracene endoperoxides, were isolated and characterized. 1O2 generation by BADs in living cells enables visualization of the dyads' distribution, promising new imaging applications.

Received: January 17, 2017. Published: April 13, 2017.

Scheme 1. Photoinduced Transformations of BADs

Optical probes based on photoinduced electron transfer (PeT) in donor-acceptor dyads have broad use in diagnostics, particularly for detection of biomolecules, metal ions, reactive oxygen species (ROS) and measurement of intracellular pH. The PeT process leads to formation of nonemissive charge-separated states (CSS) that decay back to the ground state via different pathways. Among those is recombination of CSS, which may lead to locally excited triplet states of the molecule. Recently this process has attracted attention as a method to increase intersystem crossing without relying on the heavy atom effect. The possibility of singlet oxygen (1O2) generation by donor-acceptor dyads has not been realized so far in a practical sense. It could be expected that 1O2 generation by PeT-based optical probes in biological environments would affect their optical response and simultaneously induce cytotoxicity. This is of concern especially in the case of ROS detection, where sensitization of 1O2 by the probe itself may lead to false positives and incorrect interpretations. On the other hand, PeT-mediated 1O2 generation could provide a new tool for theranostic applications, because the process of charge separation can be turned on/off by various stimuli. Herein we report readily accessible heavy atom-free BODIPY-anthracene dyads (BADs) that can act as efficient triplet sensitizers, and become fluorescent in response to the generated 1O2. Although a number of triplet sensitizers based on halogenated BODIPYs have been reported in the past decade, observations of triplet excited state formation in heavy atom-free BODIPYs are rare. In our search for efficient donor-acceptor photosensitizers, we focused on BADs 1 and 2 (Scheme 1). BODIPYs are known to be efficient energy and electron acceptors when combined with anthracene. Although compound BAD1 has been reported to exhibit PeT, no triplet excited state formation has been noted. Upon broad-band visible light irradiation of air-saturated solutions of BAD1 in a range of polar solvents we observed, to our surprise, completely selective formation of BAD1-BE, which could be isolated in 5% yield along with recovered unreacted starting material (Scheme 1). In contrast, irradiation of BAD2 under the same conditions resulted in complete conversion of
Subsequent quenching of the triplet states by molecular oxygen produces singlet oxygen ( 1 O2), which reacts with the anthracene moiety yielding highly fluorescent species. The steric demand of the alkyl substituents in the BODIPY subunit defines the site of 1 O 2 addition. Novel bisand tetraepoxides along with bicyclic acetal products arising from a chain of rearrangements of anthracene endoperoxides were isolated and characterized. 1 Optical probes based on photoinduced electron transfer (PeT) in donor-acceptor dyads have found broad use in diagnostics, particularly for the detection of biomolecules, metal ions, reactive oxygen species (ROS) and measurement of intracellular pH. 1 The PeT process leads to formation of nonemissive charge-separated states (CSS) which decay back to the ground state via different pathways. Among those is recombination of CSS, which may lead to locally excited triplet states in the molecule. 2 Recently this process has attracted attention as a method to increase intersystem crossing without directly relying to the heavy atom effect. 3 The possibility for singlet oxygen ( 1 O 2 ) generation by donoracceptor dyads mediated by PeT has not been realized so far in practical sense. It could be expected that 1 O 2 generation by PeT-based optical probes in biological environments would affect their optical response and simultaneously induce cytotoxicity. This is of special concern in the case of ROS detection, where sensitization of 1 O 2 by the probe itself may lead to false positives and incorrect interpretations. 4 On the other hand, PeT-mediated 1 O 2 generation could provide a new tool for theranostic applications, since the process of charge separation can be turned on/off by various stimuli. Herein we report readily accessible heavy atom-free BODIPY-anthracene dyads (BADs) that can act as efficient triplet sensitizers, providing fluorescent response towards generated 1 While a number of triplet sensitizers based on halogenated BODIPYs have been reported during the last decade, 5 observations of triplet excited state formation in heavy atom-free BODIPYs are rare. 6 In our search into efficient donor-acceptor PS, we have focused on BADs 1 and 2 (Scheme 1). BODIPYs are known to be efficient energy and electron acceptors when combined with anthracene. 7 Although compound BAD1 has been reported to exhibit PeT, no triplet excited states formation has been noted. 8 Scheme 1. Photoinduced transformations of BADs. Upon broad-band visible light irradiation of air-saturated solutions of BAD1 in a range of polar solvents we observed, to our surprise, completely selective formation of BAD1-BE, which could be isolated in 5% yield along with recovered unreacted starting material (Scheme 1). In contrast, irradiation of BAD2 under the same conditions resulted in complete conversion of the substrate with formation of two products, bicyclic acetal derivative (BAD2-BA) and tetraepoxide (BAD2-TE), which were isolated in 80% and 10% yields, respectively. The structures of the products were confirmed by NMR spectroscopy and X-ray crystallography (for details see Supporting Information (SI)). Unlike BADs 1 and 2, isolated compounds exhibit bright fluorescence independent of the solvent polarity. For instance, the emission quantum yields of BAD1-BE in CH 2 Cl 2 and hexane were determined to be 0.91 and 0.89, respectively. 
The formation of these products appears to be due to the sensitization of oxygen and subsequent cycloaddition of the resulting ¹O₂, which is typical of anthracene derivatives. Singlet oxygen quantum yields of the BADs were measured using 1,3-diphenylisobenzofuran as a ¹O₂ trap, giving values of 0.67 and 0.38 in ethanol for BAD1 and BAD2, respectively. In order to understand the mechanism of ¹O₂ formation, we studied the excited-state dynamics of dyad BAD1 by broadband Vis-NIR sub-picosecond-to-microsecond transient absorption (TA) pump-probe spectroscopy.

[Figure 1. (c) ns-µs transient absorption spectra of degassed BAD1 solutions following excitation at 355 nm by 700 ps laser pulses; the spectra were integrated over 3-5 ns (black), 10-100 ns (red), 0.1-1 µs (green), 1-5 µs (blue), and 10-100 µs (cyan). (d) Kinetics of the bands at 570 nm and 680 nm, assigned to the BODIPY triplet state and the anthracene radical cation, respectively, in the absence and presence of oxygen (solid and dotted lines, respectively).]

Immediately after photoexcitation with fs pulses at 355 nm, a broad band around 360 nm due to the anthracene singlet excited state (S₁) absorption, partially overlapping the ground-state photobleach (PB), was observed (Figure 1a). The concomitant decay of this band and the simultaneous rise of the bleach at 505 nm indicate ultrafast energy transfer (EnT) from the anthracene to the BODIPY subunit, populating the singlet state of the latter, S_BDP. Furthermore, another absorption band grows in at 580 nm; it was assigned to the BODIPY radical anion (BDP•⁻), formed in the PeT process. This band rises during the first 100 ps while shifting to 570 nm, indicating a transition of the radical anion to another excited state (see inset of Figure 1a). Synchronously with the rise of the BDP•⁻ absorption (580 nm), yet another absorption band, centered at 680 nm, rises, presumably due to the anthracene radical cation (Ant•⁺), in line with previous reports. Global fitting of the PB decays at 380 nm and 400 nm and of the rise of the BDP•⁻ and Ant•⁺ bands (Figure 1b) yields time constants of 1.15 ps and 0.54 ps for the EnT and PeT processes, respectively. In the ns-µs TA experiments, a rise of an absorption band at 570 nm over 1 µs was observed, indicating the formation of long-lived states (Figure 1c). Previous reports on the TA spectra of BODIPYs support the assignment of this band to the BODIPY triplet state (T_BDP) absorption. The band at 570 nm was quenched and decayed faster in the presence of oxygen (Figure 1d); in contrast, the anthracene radical-cation absorption band at 680 nm was affected by oxygen significantly less. The T_BDP lifetime in the absence of O₂ was determined to be 41 µs. The observed transition from the bands originating in the CSS to the absorption of the triplet suggests that formation of the CSS is a prerequisite for populating T_BDP. The frontier-molecular-orbital diagram (Figure 2a) shows that the two highest occupied orbitals, π_Ant and π_BDP, located on the anthracene and BODIPY subunits respectively, are nearly degenerate. Density functional theory (DFT) calculations on these molecules (see the SI for computational details) confirm that in BAD1 PeT could take place from π_Ant to the singly occupied π_BDP, leading to a singlet charge-transfer state S_CSS that is 0.4 eV more stable than the S_BDP excited state.
Unlike the valence excited states, the CSS has a very low ferromagnetic exchange coupling integral, owing to the negligible overlap of the singly occupied orbitals π_Ant and π*_BDP located on mutually orthogonal molecular moieties; this leads to a very small singlet-triplet energy gap (S-T gap). Two pathways for triplet-state generation from the CSS may yield the lowest local T_BDP state (Figure 2b): spin-orbit charge-transfer intersystem crossing (SOCT-ISC), and radical-pair intersystem crossing (RP-ISC) followed by triplet charge recombination. As shown in the extensive work of Wasielewski and co-workers, SOCT-ISC prevails for systems with strong electronic coupling, requiring short distances between the subunits (4.3 Å in BAD1). On the other hand, owing to the small S-T gap in the radical-pair state, mixing of the S_CSS and T_CSS states is possible through, e.g., electron-nuclear hyperfine coupling. More detailed studies will be necessary to distinguish between the mechanisms governing spin interconversion in BADs. The observed PeT process is clearly manifested in the spectroscopic properties of the BADs. The fluorescence of the BODIPY is quenched in polar solvents, as evidenced by the negligible fluorescence quantum yields (Φ_f) observed, compared with the strong emission in non-polar solvents (Table S3). A broad emission band at 610 nm was observed in polar solvents; such red-shifted broad emission bands arising from charge-transfer excited states have been reported for various donor-acceptor systems. DFT calculations in vacuo show that the S_CSS state is approximately 0.2 eV higher in energy than the valence S_BDP state. The dipole moment of the S_CSS state was computed to be µ = 19 D in vacuo, much higher than that of the valence S_BDP state (5 D). Interactions of the CSS with a polar solvent stabilize the S_CSS state and change the relative energy ordering of the S_BDP and S_CSS states, making the PeT process favourable. The formation of the bicyclic acetal and tetraepoxide products from BAD2 is likely to take place via a 9,10-endoperoxide intermediate (Scheme 2). The rearrangement of endoperoxides into bisepoxides can be induced either thermally or photochemically; the process starts with homolytic cleavage of the peroxide O-O bond, followed by rearrangement to the more stable bisepoxides. Commonly such bisepoxides, containing a cyclohexadiene ring, cannot be isolated but only trapped with dienophiles; indeed, we found no traces of this intermediate in the reaction mixture. According to previous reports, the formation of a bicyclic acetal from a bisepoxide may take place via heterolytic cleavage of the epoxide C-C bond, leading to an ylide-type bipolar intermediate. This is then followed by C-O bond rupture of the second epoxide fragment, leading to rearomatization of the lateral ring and formation of the acetal bridge. The rearrangement competes with addition of a second ¹O₂ molecule to the diene moiety, leading to BAD2-TE. In contrast, the bisepoxide BAD1-BE is stable and showed no formation of rearrangement products. Its formation likely proceeds via the mechanism discussed above, involving O-O homolytic cleavage and further isomerization. The addition of ¹O₂ to the outer ring in this case is surprising, as the central 9,10-site is the most reactive according to frontier-molecular-orbital analysis. The influence of steric factors on the regioselectivity of endoperoxide formation has previously been reported for acenes with bulky substituents at the ortho-positions of the aryl groups.
Comparison with BAD2 shows that the unusual reactivity of BAD1 can be attributed to the effect of the methyl substituents in the 4-position of the BODIPY core. This can be seen in the XRD data, where the C-4 methyl substituents of BAD1 form a steric shield over the C-9 position of the anthracene unit. Introduction of methyl groups onto the BODIPY pyrrole rings shields the inner ring of the orthogonal anthracene residue, making the approach of a ¹O₂ molecule difficult. The different reactivity of the BADs towards ¹O₂ accounts for the variations in their fluorescence response (Figure 3b), since the cycloaddition to the anthracene moiety takes place considerably faster for BAD2. The rise of BAD2 fluorescence due to the cycloaddition reaction is evident even at 1 µM concentration, and it reaches intensities comparable to the emission of a strongly fluorescent reference BODIPY compound (Fig. S7). It was of special interest for us to investigate whether the sensitization process could be reproduced in live cells. For this purpose we generated appropriate water-soluble derivatives. Substitution of the fluorine atoms with N,N-dimethylaminopropyne-1 residues gave the corresponding BADs 3 and 4. Quaternization of the dimethylamino group with 1,3-propanesultone then gave BADs 5 and 6, bearing zwitterionic (betaine) fragments which imparted the desired aqueous solubility. To examine the fluorescence response of BADs 5 and 6 towards self-sensitized ¹O₂ in cells, human breast cancer (MDA-MB-468) cells were incubated with BADs 5 and 6 (1 µM) and then irradiated with broadband visible light (400-700 nm, 23.8 mW cm⁻²). Cells were irradiated for 0, 2.5 and 5 minutes and visualised by confocal fluorescence microscopy. Over the time course of irradiation an increase in fluorescence intensity was observed for BAD6 (Figure 4), indicating, first, that the chromophore had entered the cells rather than simply associating with the external cell membrane and, second, that the fluorescence increased in a manner similar to that observed for BAD2 in homogeneous solution. However, this behavior was not replicated in the case of BAD5, which showed no observable fluorescence on this timescale, even when irradiated with higher light doses. The lower fluorogenicity of BAD5 is in accord with the behaviour of the parent BAD1, which was shown to react with ¹O₂ considerably more slowly than BAD2. At higher concentrations of the BADs, evidence of morphological changes to the cells upon irradiation, most noticeably "blebbing" of the cell membrane, was observed (Fig. S12), indicating apoptotic behaviour. Cell viabilities after incubation with a range of concentrations (1-50 µM) of BADs 5 and 6, followed by light treatment (23.8 mW cm⁻²), were assayed by the MTT protocol. The results indicate that both water-soluble BADs induce a significant cytotoxic effect on the cells, whereas negligible cytotoxic effects were observed in the control group under otherwise identical conditions but without irradiation (Fig. S13). The median lethal dose (LD₅₀) of the BADs was found to be 4 µM; the lower dose of 1 µM was therefore selected for the imaging experiments. In conclusion, we have demonstrated that heavy atom-free donor-acceptor dyads can be used as ¹O₂ sensitizers, whereby the triplet excited states that play the key role in oxygen sensitization form by way of photoinduced electron transfer. Moreover, the described dyads are capable of forming strongly fluorescent species with self-sensitized ¹O₂ in biological media.
The fluorescent response allows visualization of ¹O₂ formation within the cells and, consequently, fine-tuning of the photon doses required to cause oxidative stress. These sensitizers may give rise to a promising new class of materials for photonic applications that depend on triplet excited-state generation. Studies to extend the scope of such systems are underway. Supporting Information: synthetic procedures, NMR and optical spectra, computational details, X-ray crystallographic data for BAD1, BAD1-BE, BAD2-BA and BAD2-TE in CIF format, fluorescence microscopy images, and cytotoxicity assay protocols. This material is available free of charge via the Internet at http://pubs.acs.org.
import React from 'react';
import Markdown from '../../components/Markdown';

export default () => {
  return (
    <Markdown
      content={`
~~~ javascript
document.addEventListener('DOMContentLoaded', function(){
  ...
});
~~~
      `}
    />
  );
};
Cloud Nine, a kind of e-cigarette liquid that people have been using to get high, is the latest nightmare drug making the rounds, in the process getting a bunch of TV news reporters extremely excited. A recent NBC Nightly News episode called Cloud Nine "legal, unregulated, and readily available at convenience stores" while informing viewers that the drug has sent almost two dozen young people to the hospital in Michigan. Despite the entertainment value of freaking out over the possibility that there's a new way to get loaded, these sorts of reports don't contain much actual information for parents—it's scary, and your kids are probably already addicted to it. What else do you need to know? Well, it might be good to note that it's a stretch to call it "absolutely deadly," as the segment above does, considering no one has died from it. Still from the NBC Nightly News report So what's the deal with this new drug? First, we should note that the term "Cloud Nine" is a little like the term "trail mix"—a name for a general category, not a single product. It caught on two years ago when the Cloud Nine label was being slapped on an herbal synthetic product, and it might have added relevance today, since e-cigarette fans compete to have the biggest clouds. They're not the same thing, though. Whatever the formula is that's been labeled "Cloud Nine" in Michigan lately, people are having a bad time on it, and some counties have responded by banning the drug. The Detroit Free Press received a list of the drug's effects from Westland Deputy Police Chief Todd Adams that included "agitation, paranoia, hallucinations, chest pain, increased pulse, high blood pressure, and suicidal thinking/behavior," which is an odd range of effects for a recreational drug to have. I personally would not be interested in such an experience. Thankfully, you can always turn to the internet for explanations of why the kids these days are putting it in their magic drug wands and shooting drug vapor into their young brains. A redditor who goes by Aircoft did extensive homework about Cloud Nine. He or she has been posting about Cloud Nine e-cigarette liquid for months, and judging from the Michigan-centric news links in Aircoft's posts, the Cloud Nine in question appears to be the same stuff that's making people sick. Aircoft has provided dosage suggestions for prospective users who want to smoke the stuff: "0.05ml-0.1ml (a few hits) of pure Cloud Nine," or alternatively: "mix 80 percent Cloud Nine and 20 percent flavored e-liquid, such as one of my favorites, 'Pluto' by Mister-E-Liquid," for improved flavor. Aircoft also speculates about what's in it at the molecular level: "The active ingredient is a chemical of the JWH-family, such as JWH-018. I also more recently am thinking it contains 2C-B as well." In other words, Aircoft finds the experience to be reminiscent of both cannabis and ecstasy analogues. Aircoft has even done some detective work on where the plain bottles with "Cloud Nine" on them come from. When a gas station stopped carrying it, he or she asked the attendant for the supplier's contact information. The reply was, "It's not like I can just give you the guy's number." Aircoft also points out that after Cloud Nine disappeared from some stores, it was replaced by a similar product dubbed "Hookah Relax," the liquid that was banned in one Michigan county at the same time as Cloud Nine.
For all the furor over them, these products might be the work of a single chemist who is constantly working on new ways to modify the chemical formula of recreational drugs in order to circumvent laws—standard practice for the makers of legal highs. One story about Cloud Nine by MLive quoted a government official talking about how kids put the stuff in sports drinks, but the article doesn't mention that this is a terrible, terrible idea. Aircoft tried drinking Cloud Nine, and the results sound terrifying: "It was comparable to a deep mushroom trip, and it involved throwing up and dizziness, along with a very intense body high and deep mind high." News articles about drugs like Cloud Nine often put reporters in an awkward position: generally, they just parrot whatever they're told by police and other authority figures, but often those people are either uninformed about how and why users are taking the drug, or they're committed to scaring as many people as possible. Anonymous internet users aren't a reliable source of information either, of course—in one post Aircoft says "Cloud 9 is great"—but they might know more about the composition and effects of new illicit drugs than anyone likely to be contacted by a respectable media outlet. One thing seems certain, however: even if you're the kind of person who likes drugs, you shouldn't be putting chemicals in your body if you don't know what they are, and you definitely shouldn't be drinking something that's designed to be smoked. Cloud Nine may not be deadly in the sense that it will kill you right away, but it still seems dangerous and stupid to ingest. If you absolutely must, at least do a little research first. Follow Mike Pearl on Twitter.
Do environmental quality and public spending on the environment promote life expectancy in China? Evidence from a nonlinear autoregressive distributed lag approach. Environmental quality has become a growing concern for Chinese society over the last two decades. The large contribution of different pollutants has severely affected environmental quality, which ultimately affects life expectancy in the country. Against this backdrop, the present study investigates the impact of environmental quality and public spending on the environment on life expectancy in China over the period 1999Q1-2017Q4. We employ the nonlinear autoregressive distributed lag (NARDL) approach for the empirical assessment. The outcomes of the study reveal the existence of a long-run relationship between environmental quality, public spending on the environment, and life expectancy in China. The empirical findings report that life expectancy reacts differently to positive and negative shocks to environmental quality in both the long and short run. Environmental quality and spending on the environment increase life expectancy; furthermore, population has a positive and significant association with life expectancy only in the short run, while in the long run it has no effect. Hence, the government needs to roll out policies to enhance environmental quality and ensure adequate funding for environmental preservation, to achieve both the longevity of society and the sustainability of the ecosystem.
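The asymmetry at the heart of the NARDL approach comes from splitting a regressor into partial sums of its positive and negative changes, which then enter the regression as separate variables. A minimal sketch of that decomposition follows; the variable name and data are illustrative, not the study's actual series:

import numpy as np

def partial_sums(x):
    """Decompose a series into cumulative positive and negative changes,
    as in the standard NARDL (Shin, Yu and Greenwood-Nimmo) setup."""
    dx = np.diff(np.asarray(x, dtype=float))
    pos = np.concatenate([[0.0], np.cumsum(np.maximum(dx, 0.0))])
    neg = np.concatenate([[0.0], np.cumsum(np.minimum(dx, 0.0))])
    return pos, neg

# Toy environmental-quality index: pos/neg get their own coefficients,
# letting positive and negative shocks affect life expectancy differently.
quality = [100, 98, 101, 97, 99]
pos, neg = partial_sums(quality)
print(pos)  # [0. 0. 3. 3. 5.]
print(neg)  # [ 0. -2. -2. -6. -6.]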
Engineering and mathematical modelling of a microbial swimmer based biosensor Our goal was to develop an Escherichia coli-based sensing and actuation system. We divided the genetic circuitry required for actuation and sensing between two strains of E. coli and linked the two strains via a cell-cell communication signal. We targeted a quorum-sensing (QS) signaling molecule to control the motility response of our actuator strain, and demonstrated that the actuator cells showed signaling-molecule-dependent motility. Further, we developed a mathematical model that describes our engineered actuator system, to provide insight into the key parameters controlling the behavior of the system. As a model sensing system, we built an isopropyl β-D-1-thiogalactopyranoside (IPTG) sensor in E. coli, designed to produce the QS signaling molecule in response to IPTG. We then demonstrated that the actuator cells respond to the signaling molecule produced by this sensor strain. The sensing and actuation system engineered here can be used to build synthetic networks where motility is tightly regulated and controlled by cell-cell communication.
Sigma receptors in the central nervous system have been considered to play an important role in the modulation of mental diseases and memory/learning; however, the physiological function of sigma receptors still remains unknown. To elucidate the physiological functions of sigma receptors in the modulation of neuronal activity, the effects of OPC-24439 (a sigma receptor ligand) on neuronal activity in hippocampal slices were studied with electrophysiological methods. Hippocampal slices (thickness ca. 450 µm) were prepared from male Wistar rats (4-7 weeks of age). In extracellular recordings, population spikes in the CA1 region evoked by stimulation of the Schaffer collateral/commissural fibers were suppressed by OPC-24439 (1-100 µM) in a dose-dependent manner. This inhibition was antagonized by simultaneously applied haloperidol at 1 µM. In intracellular recording experiments, OPC-24439 (100 µM) did not affect the resting membrane potentials of the recorded neurons. In addition, OPC-24439 had no effect on depolarization and firing induced by glutamate. These results indicate that sigma receptor activation causes suppression of neuronal activity in the hippocampus. This inhibition is probably mediated via suppression of ion channels that do not set the resting membrane potential on post-synaptic neurons, and/or via sigma receptors on pre-synaptic neurons or interneurons.
package com.uics.grab.entity;

import io.swagger.annotations.ApiModel;

import java.util.List;

/**
 * VRV monitoring metric statistics.
 * Example payload:
 * {"total":1,"rows":
 * [{"ID":1,"ClassName":"sdfsf","DeptName":"asfsadf","AlarmType":"sdfsdfsf","DeviceName":"sdfsf","IPAddress":"192.168.1.1","Status":"0","Dt":"2016-11-25T00:00:00"}]
 * }
 * <p>
 * Created by tom on 2016-12-07 14:34:58.
 */
@ApiModel("VRV monitoring metric statistics")
public class VrvPage {

    private int total;
    private List<VrvAlarmHistory> rows;

    public int getTotal() {
        return total;
    }

    public void setTotal(int total) {
        this.total = total;
    }

    public List<VrvAlarmHistory> getRows() {
        return rows;
    }

    public void setRows(List<VrvAlarmHistory> rows) {
        this.rows = rows;
    }
}
import ast

def main():
    # Read the list elements as space-separated values
    list1 = input("Enter the list elements: ").split()
    raw = input("Enter the data to be added: ")
    try:
        # Allow a list literal such as ['a', 'b'] to be entered and extended in
        var = ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        var = raw
    if isinstance(var, list):
        list1.extend(var)
    else:
        list1.append(var)
    print("After modifying:", list1)
    cnt = input("Enter which element you want to count: ")
    print("Count of", cnt, "=", list1.count(cnt))
    if cnt in list1:
        print("Index of", cnt, "=", list1.index(cnt))
    else:
        print(cnt, "is not in the list")

if __name__ == '__main__':
    main()
Effects of Data Envelopment Analysis on Performance Assessment: A Cognitive Approach This paper examines the Data Envelopment Analysis (DEA) methodology from a cognitive perspective. Specifically, it analyzes (a) the role of DEA scores as an overall efficiency measure and (b) to what extent the presence of DEA scores in a non-financial performance appraisal influences a posterior financial performance assessment. The study confirms that the efficiency score acts as a strong performance marker when deciding which decision-making units (DMUs) should be rewarded for their non-financial performance. Furthermore, it shows that the results of the non-financial performance evaluation may act as an anchor that significantly influences a posterior financial assessment. These insights have practical consequences for planning, reporting, and controlling processes that incorporate DEA efficiency scores.
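For readers unfamiliar with how a DEA efficiency score is produced, the sketch below solves the classic input-oriented CCR envelopment problem with an off-the-shelf LP solver. It is a minimal illustration under made-up data, not the formulation or dataset used in the paper:

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: inputs (m x n), Y: outputs (s x n); columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.hstack([-X[:, [o]], X])           # sum_j l_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j l_j y_rj >= y_ro
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0.0, None)] * n  # theta free, lambdas >= 0
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.fun

X = np.array([[2.0, 3.0, 6.0]])   # one input, three DMUs
Y = np.array([[2.0, 3.0, 4.0]])   # one output
print(ccr_efficiency(X, Y, 2))    # ~0.667: the third DMU is inefficient

A score of 1 marks a DMU on the efficient frontier; lower scores quantify how far a unit could proportionally shrink its inputs while keeping its outputs.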
import { Table, Column, Model, DataType } from 'sequelize-typescript';

export interface IAnnouncements {
  marker: string;
  message_id: string;
}

@Table
export class Announcements extends Model<IAnnouncements> implements IAnnouncements {
  @Column({
    type: DataType.STRING,
    allowNull: false,
    primaryKey: true,
  })
  marker!: string;

  @Column({
    type: DataType.STRING,
  })
  message_id!: string;
}
Deficiency in mitochondrial aldehyde dehydrogenase increases the risk for late-onset Alzheimer's disease in the Japanese population. Mitochondrial aldehyde dehydrogenase 2 (ALDH2) deficiency is caused by a mutant allele found in East Asian populations. To examine whether genetic constitutions affecting aldehyde metabolism influence the risk for late-onset Alzheimer's disease (LOAD), we performed a case-control study in the Japanese population on the deficiency in ALDH2 caused by the dominant-negative mutant allele of the ALDH2 gene (ALDH2*2). In a comparison of 447 patients with sex-, age-, and region-matched nondemented controls, the frequency of genotypes carrying the ALDH2*2 allele was significantly higher in the patients than in the controls (48.1% vs 37.4%, P = 0.001). Logistic regression analysis indicates that carriage of the ALDH2*2 allele is a risk factor for LOAD independent of the ε4 allele of the apolipoprotein E gene (APOE-ε4) (P = 0.002). Moreover, the odds ratio for LOAD in carriers of the ALDH2*2 allele was almost twice that in noncarriers, irrespective of APOE-ε4 status. Among patients homozygous for the APOE-ε4 allele, age at onset of LOAD was significantly lower in those with than in those without the ALDH2*2 allele. In addition, the dosage of the ALDH2*2 allele significantly affected age at onset in patients homozygous for the APOE-ε4 allele. These results indicate that ALDH2 deficiency is a risk factor for LOAD, acting synergistically with the APOE-ε4 allele.
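As a back-of-the-envelope check on the reported carrier frequencies, a crude odds ratio can be computed from a 2x2 table. The control-group size below is an assumption (the abstract says only that the controls were matched), so the numbers are purely illustrative and will not match the study's adjusted estimates:

import math

cases, controls = 447, 447           # control count assumed equal to cases
a = round(0.481 * cases)             # ALDH2*2 carriers among cases  (~215)
b = cases - a
c = round(0.374 * controls)          # carriers among controls       (~167)
d = controls - c

or_ = (a * d) / (b * c)              # crude odds ratio, ~1.55
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")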
//! This was extracted from the Chapter 13 exercises and moved into the core
//! library so it could be used in later chapters.
use std::fmt;
use std::iter::FromIterator;
use std::rc::Rc;

use rand::distributions::Uniform;
use rulinalg::matrix::{BaseMatrix, Matrix};

use crate::generate_random_vector;
use crate::tensor::{Dot, Expand, Tensor};

pub trait Layer {
    fn forward(&self, inputs: &[&Tensor]) -> Vec<Tensor>;

    fn parameters(&self) -> Vec<&Tensor> {
        vec![]
    }
}

#[derive(Debug)]
pub struct Linear {
    weights: Tensor,
    bias: Option<Tensor>,
}

impl Linear {
    pub fn new(n_inputs: usize, n_outputs: usize, bias: bool) -> Linear {
        let distribution = Uniform::new(0.0, 1.0);
        let weights = Tensor::new_const(Matrix::new(
            n_inputs,
            n_outputs,
            generate_random_vector(n_inputs * n_outputs, 0.5, 0.0, &distribution),
        ));

        let bias = if bias {
            Some(Tensor::new_const(Matrix::zeros(1, n_outputs)))
        } else {
            None
        };

        Linear { weights, bias }
    }
}

impl Layer for Linear {
    fn forward(&self, inputs: &[&Tensor]) -> Vec<Tensor> {
        let rows = inputs[0].0.borrow().data.rows();
        match &self.bias {
            None => vec![inputs[0].dot(&self.weights)],
            Some(bias) => vec![&inputs[0].dot(&self.weights) + &bias.expand(0, rows)],
        }
    }

    fn parameters(&self) -> Vec<&Tensor> {
        match &self.bias {
            None => vec![&self.weights],
            Some(bias) => vec![&self.weights, bias],
        }
    }
}

pub struct Sequential {
    layers: Vec<Box<dyn Layer>>,
}

impl fmt::Debug for Sequential {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "Sequential {{ }}")
    }
}

impl Sequential {
    pub fn new(layers: Vec<Box<dyn Layer>>) -> Self {
        Sequential { layers }
    }

    #[allow(dead_code)]
    fn add(&mut self, layer: Box<dyn Layer>) {
        self.layers.push(layer);
    }
}

impl Layer for Sequential {
    fn forward(&self, inputs: &[&Tensor]) -> Vec<Tensor> {
        // TODO: can this be avoided
        let mut input = Tensor(Rc::clone(&inputs[0].0));
        for layer in self.layers.iter() {
            input = layer.forward(&[&input]).remove(0);
        }
        vec![input]
    }

    fn parameters(&self) -> Vec<&Tensor> {
        self.layers
            .iter()
            .map(|l| l.parameters())
            .flat_map(|v| v.into_iter())
            .collect()
    }
}

#[derive(Debug)]
pub struct Embedding {
    pub weights: Tensor,
}

impl Embedding {
    pub fn new(vocab_size: usize, embedding_size: usize) -> Embedding {
        let distribution = Uniform::new(0.0, 1.0);
        Embedding {
            weights: Tensor::new_const(Matrix::new(
                vocab_size,
                embedding_size,
                generate_random_vector(
                    vocab_size * embedding_size,
                    1.0 / (embedding_size as f64),
                    -0.5 / (embedding_size as f64),
                    &distribution,
                ),
            )),
        }
    }

    pub fn from_weights(weights: Matrix<f64>) -> Embedding {
        Embedding {
            weights: Tensor::new_const(weights),
        }
    }
}

impl Clone for Embedding {
    fn clone(&self) -> Embedding {
        Embedding {
            weights: Tensor::new_const(self.weights.0.borrow().data.clone()),
        }
    }
}

impl Layer for Embedding {
    fn forward(&self, inputs: &[&Tensor]) -> Vec<Tensor> {
        let data = Vec::from_iter(
            inputs[0]
                .0
                .borrow()
                .data
                .row(0)
                .raw_slice()
                .iter()
                .map(|v| (*v as usize)),
        );
        vec![self.weights.index_select(data)]
    }

    fn parameters(&self) -> Vec<&Tensor> {
        vec![&self.weights]
    }
}

pub struct RNNCell {
    n_hidden: usize,
    w_ih: Linear,
    w_hh: Linear,
    w_ho: Linear,
    activation: Box<dyn Layer>,
}

impl fmt::Debug for RNNCell {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(
            f,
            "RNNCell {{ n_hidden: {:?}, w_ih: {:?}, w_hh: {:?}, w_ho: {:?} }}",
            self.n_hidden, self.w_ih, self.w_hh, self.w_ho
        )
    }
}

impl RNNCell {
    pub fn new(
        n_inputs: usize,
        n_hidden: usize,
        n_outputs: usize,
        activation: Box<dyn Layer>,
    ) -> RNNCell {
        let w_ih = Linear::new(n_inputs, n_hidden, true);
        let w_hh = Linear::new(n_hidden, n_hidden, true);
        let w_ho = Linear::new(n_hidden, n_outputs, true);
        RNNCell {
            n_hidden,
            w_ih,
            w_hh,
            w_ho,
            activation,
        }
    }

    pub fn create_start_state(&self, batch_size: usize) -> Tensor {
        Tensor::new_const(Matrix::zeros(batch_size, self.n_hidden))
    }
}

impl Layer for RNNCell {
    fn forward(&self, inputs: &[&Tensor]) -> Vec<Tensor> {
        let (input, hidden) = (inputs[0], inputs[1]);
        let state_part = self.w_hh.forward(&[hidden]);
        let input_part = self.w_ih.forward(&[input]);
        let mut new_state = self
            .activation
            .forward(&[&(&input_part[0] + &state_part[0])]);
        let mut output = self.w_ho.forward(&[&new_state[0]]);
        vec![output.remove(0), new_state.remove(0)]
    }

    fn parameters(&self) -> Vec<&Tensor> {
        let mut ans = self.w_ih.parameters();
        ans.append(&mut self.w_hh.parameters());
        ans.append(&mut self.w_ho.parameters());
        ans
    }
}

#[derive(Debug)]
pub struct LSTMCell {
    xf: Linear,
    xi: Linear,
    xo: Linear,
    xc: Linear,
    hf: Linear,
    hi: Linear,
    ho: Linear,
    hc: Linear,
    w_ho: Linear,
    n_hidden: usize,
}

impl LSTMCell {
    pub fn new(n_inputs: usize, n_hidden: usize, n_outputs: usize) -> LSTMCell {
        LSTMCell {
            xf: Linear::new(n_inputs, n_hidden, true),
            xi: Linear::new(n_inputs, n_hidden, true),
            xo: Linear::new(n_inputs, n_hidden, true),
            xc: Linear::new(n_inputs, n_hidden, true),
            hf: Linear::new(n_hidden, n_hidden, false),
            hi: Linear::new(n_hidden, n_hidden, false),
            ho: Linear::new(n_hidden, n_hidden, false),
            hc: Linear::new(n_hidden, n_hidden, false),
            w_ho: Linear::new(n_hidden, n_outputs, false),
            n_hidden,
        }
    }

    pub fn create_start_state(&self, batch_size: usize) -> (Tensor, Tensor) {
        let mut h = Matrix::zeros(batch_size, self.n_hidden);
        let mut c = Matrix::zeros(batch_size, self.n_hidden);
        for i in 0..batch_size {
            h[[i, 0]] = 1.0;
            c[[i, 0]] = 1.0;
        }
        (Tensor::new_const(h), Tensor::new_const(c))
    }
}

impl Layer for LSTMCell {
    #[allow(clippy::many_single_char_names)]
    fn forward(&self, inputs: &[&Tensor]) -> Vec<Tensor> {
        let (input, prev_hidden, prev_cell) = (inputs[0], inputs[1], inputs[2]);
        let f = (&self.xf.forward(&[input])[0] + &self.hf.forward(&[prev_hidden])[0]).sigmoid();
        let i = (&self.xi.forward(&[input])[0] + &self.hi.forward(&[prev_hidden])[0]).sigmoid();
        let o = (&self.xo.forward(&[input])[0] + &self.ho.forward(&[prev_hidden])[0]).sigmoid();
        let g = (&self.xc.forward(&[input])[0] + &self.hc.forward(&[prev_hidden])[0]).tanh();
        let c = &(&f * prev_cell) + &(&i * &g);
        let h = &o * &c.tanh();
        let output = self.w_ho.forward(&[&h]).remove(0);
        vec![output, h, c]
    }

    fn parameters(&self) -> Vec<&Tensor> {
        self.xf
            .parameters()
            .into_iter()
            .chain(self.xi.parameters().into_iter())
            .chain(self.xo.parameters().into_iter())
            .chain(self.xc.parameters().into_iter())
            .chain(self.hf.parameters().into_iter())
            .chain(self.hi.parameters().into_iter())
            .chain(self.ho.parameters().into_iter())
            .chain(self.hc.parameters().into_iter())
            .chain(self.w_ho.parameters().into_iter())
            .collect()
    }
}
Lawyer Jonathan Ficke isn't in the courtroom this week. That's because the Waukesha resident has been flown to Los Angeles to participate in the Writers of the Future writing workshop that leads up to the organization's annual awards ceremony on Sunday. Ficke will be collecting an award for his short story, The Howler on the Sales Room Floor. The story is published by Galaxy Press, in L. Ron Hubbard Presents Writers of the Future Volume 34. "I know it’s the kind of thing that requires a lot of hard work and an awful lot of good luck," he says. "I’ve been given a pretty good step down the pathway but there’s a lot of walking yet to do in order to come close to replacing my day job." The vision of the end of civilization in Emily St. John Mandel’s new novel would be chilling enough – a fast-moving plague from overseas wipes out nearly everyone it touches – even without the real-life Ebola outbreak killing people in Africa. Her novel, Station Eleven, jumps back and forth between the time leading up to the deadly flu outbreak, and the time after, in which as much as 99 percent of the population is killed. If there are any themes that fiction readers have warmed to in recent years, they would include Paris and bookshops. Sometimes, bookshops in Paris. But none of them have woven Milwaukee into that mix - until now. Wisconsin novelist Liam Callanan’s new novel features a Milwaukee woman married to a writer who suddenly goes missing. She and her two adolescent children go looking for him in a journey that leads them to buy a bookshop in Paris. Lawrence Baldassaro had been interviewing baseball players of Italian-American heritage for a while when a realization hit him. "Here I am," he recalled thinking, "the grandson of four Italian immigrants, I teach Italian, I love baseball - why don't I write about Italians in baseball? "It turned out that virtually nothing had been written about that subject," Baldassaro says.
package com.fincatto.documentofiscal.nfe310.classes;

import org.junit.Assert;
import org.junit.Test;

public class NFProdutoCompoeValorNotaTest {

    @Test
    public void deveObterProdutoCampoValorNotaApartirDoSeuCodigo() {
        Assert.assertEquals(NFProdutoCompoeValorNota.NAO, NFProdutoCompoeValorNota.valueOfCodigo("0"));
        Assert.assertEquals(NFProdutoCompoeValorNota.SIM, NFProdutoCompoeValorNota.valueOfCodigo("1"));
        Assert.assertNull(NFProdutoCompoeValorNota.valueOfCodigo("2"));
    }

    @Test
    public void deveRepresentarOCodigoCorretamente() {
        Assert.assertEquals("0", NFProdutoCompoeValorNota.NAO.getCodigo());
        Assert.assertEquals("1", NFProdutoCompoeValorNota.SIM.getCodigo());
    }

    @Test
    public void deveObterStringficadoCorretamente() {
        Assert.assertEquals("1 - Sim", NFProdutoCompoeValorNota.SIM.toString());
    }
}
The advent of cloud-based computing architectures has opened new possibilities for the rapid and scalable deployment of virtual Web stores, media outlets, and other on-line sites or services. In general, a cloud-based architecture deploys a set of hosted resources such as processors, operating systems, software and other components that can be combined or strung together to form virtual machines. A user or customer can request the instantiation of a virtual machine or set of machines from those resources from a central server or management system to perform intended tasks or applications. For example, a user may wish to set up and instantiate a virtual server from the cloud to create a storefront to market products or services on a temporary basis, for instance, to sell tickets to an upcoming sports or musical performance. The user can lease or subscribe to the set of resources needed to build and run the set of instantiated virtual machines on a comparatively short-term basis, such as hours or days, for their intended application. Another type of software entity that has found application in certain spaces is the software appliance, which generally speaking represents a relatively self-contained software installation, including a full or customized partial operating system installation combined with selected applications, in a single installation or update package. Currently, a network operator can manage a set of machines within a conventional network using known network management platforms, such as the commercially available Tivoli™ platform provided by IBM Corp., or other platforms. Current network management platforms do not, however, incorporate the ability to extend network management functions to machines within the network that are exposed to an external cloud, either by way of supplying resources to the cloud or consuming resources from the cloud. Currently, no mechanism exists for systems administrators to adapt to the special circumstances and parameters required to securely maintain a subset of machines within a managed network that are exposed to the cloud. Thus, there is a need in the art for methods and systems that extend security services to machines under network management that may communicate with or form part of an external cloud or clouds.
/* eslint-disable import/no-unused-modules */
export {
  TestParams,
  TestScenario,
  testScenarios,
  testParams,
  homeDir,
} from './parameters';
export { TestSuite, testSuites } from './suites';
export { default as runner } from './runner';
Linguistic signals and interpretative strategies: linguistic models in performance, with special reference to free indirect discourse. In this article I explore the consequences of the recent linguistic paradigm shift towards pragmatics, which has initiated a broader range of methods and areas of investigation within linguistics. Using accounts of free indirect discourse as a test case, I will attempt to illustrate how this paradigm shift has made it possible to further bridge the divide between the literary and the linguistic outlook on language.
A mobile remote lab system to monitor in situ thermal solar installations In this paper we describe the design and development of interconnected devices which allow monitoring in situ the performance of solar boilers. This mobile remote lab system comprises two large blocks of hardware: a mobile station located by the boiler, which is monitored and controlled remotely, and a fixed station located in the Laboratory of Energy for Sustainable Development of the Universidad Nacional de Rosario. The communication between the fixed and mobile devices is controlled by microcontrollers included in both stations and programmed in the C language. The project is being developed through three parallel lines of work: 1) design and development of the fixed and mobile hardware; 2) development of the firmware and software necessary to register and communicate data; 3) design and development of learning activities. This mobile remote lab will be useful to test the behavior of solar boilers in the place and environmental conditions where they are installed, so as to evaluate their performance and efficiency anywhere, and thereby to contribute to the implementation of norms for the certification of solar boilers. The data and results obtained from the development will also be used as inputs for the design of learning activities. INTRODUCTION The present world consumes enormous amounts of energy, especially non-renewable energy. The growth of society, mainly in the West, has been sustained by an energy matrix based on petroleum and its products, carbon and gas, and by a lack of concern about the exhaustion of natural resources or the ecological damage produced by their indiscriminate use. Thus, as man has been learning his reality through knowledge and the development of science and technology, environmental problems have increased. On the other hand, with the depletion of non-renewable resources, critical situations that affect production and sustainable development at the regional and world level come up. The transition from a petroleum-based society to one with varied and renewable sources of energy requires at least: the development of new technologies and the optimization of existing ones, so as to substitute traditional resources and to increase the efficiency of devices, artifacts and equipment; and the spread of information and the social awareness needed to lead to the necessary changes in the habits and behavior of society, so that it is capable of valuing energy resources and acknowledging the need to control their use. Assuming the existence of these social needs, and trying to provide a significant contribution to people studying engineering and to specialists in the field of renewable energies, the Facultad de Ciencias Exactas, Ingeniería y Agrimensura (FCEIA) of the Universidad Nacional de Rosario (UNR), through its Escuela de Posgrado y Formación Continua, offers a Master's Degree in Energy for a Sustainable Development. The aim is to train people with a university degree in the development and implementation of sustainable energy systems that reduce the environmental impacts of human activity while allowing a generation of wealth adequate to socio-cultural development. The degree recognizes the important European antecedents in the area, mainly the Master's degree in Cataluña bearing the same title; at the same time it tries to develop experimentation related to the devices, equipment and procedures used by the renewable energies.
In this context, the Master's program articulates its activity with the Laboratory of Energy for Sustainable Development. This Laboratory is a training area that promotes the creation of new knowledge and technologies in the shape of research, development and innovation projects, in which both scholarship holders and students get involved under the supervision of specialists and researchers in the area. With this idea in mind, and through the generation of socially relevant knowledge, technologies and services, we try to contribute to the design and implementation of a new energy model that reduces environmental impact and supports sustainable development. Sometimes the activities of this laboratory require the participation of specialists from other laboratories of the same Postgraduate School. This project in particular is carried out jointly by two of the laboratories of the Escuela de Posgrado y Formación Continua of the Facultad de Ciencias Exactas, Ingeniería y Agrimensura: the Laboratory of Energy for Sustainable Development and the Laboratory of Remote Laboratories. The latter deals both with the development of technologies for remote experimentation (through devices operated at a distance) and with didactic materials that implement those technologies in relevant practical activities. EFFICIENCY OF A SOLAR HEATER Solar energy occupies an outstanding place among the renewable energies. One of its most developed uses is as thermal energy to heat drinking water with flat solar collectors containing pipes through which water circulates. Those pipes are connected to storage tanks, forming the basis of the so-called solar heaters. In Argentina there are several suppliers of solar heaters; however, the country has no organized system for the certification and approval of such thermal solar heating equipment that would guarantee a minimum quality. After a strong impulse in the 1980s, when the IRAM 210002 standard was enforced for tests of flat-plate collectors, no standard could be enforced for tests of complete systems. That is why local manufacturers sell their products without norms that approve or certify them. In 2010 the Área Tecnológica Estratégica de Energías Renovables of the Instituto Nacional de Tecnología Industrial (INTI) proposed a measurement system for domestic solar hot-water equipment; it presupposes moving the equipment to test beds for measurement, testing and certification. The conversion efficiency of a solar heater depends on the following variables: solar radiation, ambient temperature, inlet water temperature, wind speed, and circulation flow. It is not possible to assign a single efficiency value to a solar collector; instead, a yield curve must be determined that shows how the collector performs under different environmental conditions. The yield curve is experimentally determined under controlled conditions of the mentioned parameters and in agreement with the norms. From a survey of the existing national and international standards, we chose the European norm UNE-EN 12975-2, Thermal solar systems and components. Solar collectors. Part 2: Test methods. It is known that the thermal behavior of equipment is conditioned by its dimensions, materials and technical capacity. The efficiency of the real conversion of a solar heater will also be affected by the weather conditions to which it is exposed.
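To make the variables just listed concrete: the instantaneous thermal efficiency such a test rig estimates is the ratio of the useful heat carried away by the water to the solar power incident on the collector. A minimal sketch follows; the function and parameter names, and the aperture area in the example, are illustrative assumptions rather than the project's actual firmware:

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def collector_efficiency(t_in, t_out, flow_kg_s, irradiance_w_m2, aperture_m2):
    """eta = m_dot * cp * (T_out - T_in) / (G * A)."""
    q_useful = flow_kg_s * CP_WATER * (t_out - t_in)  # W extracted by the water
    q_incident = irradiance_w_m2 * aperture_m2        # W arriving on the collector
    return q_useful / q_incident

# e.g. 0.01 kg/s heated from 20 C to 45 C under 800 W/m2 on a 2 m2 collector
print(collector_efficiency(20.0, 45.0, 0.01, 800.0, 2.0))  # ~0.65

Repeating this calculation across different irradiance and temperature conditions is what traces out the yield curve the norms call for.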
That is why we consider it necessary to develop a system that allows the behavior of such solar equipment to be monitored on site; such a system is described in this work. The development raises the challenge of incorporating communication technology and remote experimentation to determine the procedures and instructions which, at a distance, will allow the energy efficiency of solar heaters to be determined in the place and environmental conditions where they were installed. We consider this will not only bring knowledge and innovations involving new equipment and procedures, but will also enrich the experimental training of professionals in the subject through the development of authentic practices.

III. DESCRIPTION

The system is made up of two large blocks: a mobile station, placed at the side of the heater, that is controlled and monitored remotely, and a fixed station located at the Laboratory of Energy for Sustainable Development at the UNR. Both stations required hardware and software development, and the project is being carried out through three parallel lines of work. Figure 1 shows an outline of the remote station. The heater under test is fed hydraulically by a reserve tank. The heater outlet is connected to an opening-and-closing valve (V) controlled by the electronic testing system. At each end there is a connection holding a high-precision semiconductor-junction thermometer (T1 and T2) that measures the temperature of the inlet and outlet water. The outlet is connected to a flow meter (C) to measure the amount of water pumped out. In the collector plane there are also a solarimeter (R) to measure the incident solar radiation and an anemometer (A) to estimate wind speed. All of this is connected to a remote controlling plate that includes a data logger in charge of data acquisition and a module for transmitting analog and digital data.

A. The Remote Controlling Plate

The core of the mobile part is a PIC (Microchip) microcontroller connected via a Universal Asynchronous Receiver-Transmitter (USART) to a Global System for Mobile Communications (GSM) processing unit with its corresponding line chip. The controlling plate also has a global positioning system that additionally provides latitude, longitude and height above sea level, elements that condition the solar collector's performance. The microcontroller is programmed in the C18 language and can:
- receive the test configuration from the fixed station;
- read the sensor data (temperature, solar radiation, flow);
- control the extraction valve;
- perform initial processing of the data (e.g., averaging temperatures and counting volume);
- assemble the information packet;
- send data over the mobile network to the fixed station in Rosario, using the Simcom SIM900 module included in the remote plate and connected to the microcontroller.

Although at present the system uses the GSM network and the mobile station is powered from the mains, the project contemplates complete supply by a solar PV system, which would make it independent of the electric power network and allow field measurements anywhere.

B. The Fixed Station

The fixed station integrates three well-differentiated blocks (Figure 2). The first is the GSM transmitter-receiver, in charge of communication with the remote station. The second is the computing processing unit and database, which receives and processes the data from the remote station and stores them in a database.
This unit is based on a PC connected to the GSM block via USB. The third block is the web server, connected to the Internet. The web server:
- presents the interface through which the user requests the graphs to be viewed;
- communicates with the GSM transmitter-receiver to send information to the remote station;
- presents to the user the test results previously processed and stored in the database.

In this way, the owner, importer or developer of the solar heater can see through the Web, from any site connected to the Internet or by mobile phone, how the equipment is being tested in real time.

IV. CLOSING UP. FROM THE LAB DEVELOPMENT TO THE ACHIEVEMENT OF LEARNING OBJECTIVES

The Laboratory of Energy for Sustainable Development has two solar heaters for didactic use located on the roof of the Facultad de Ciencias Exactas, Ingeniería y Agrimensura of the Universidad Nacional de Rosario. One of them has vacuum-tube collectors, characterized by lower thermal losses and better performance than flat-plate collectors, though it covers a wider surface and is less robust. With this solar heater we are carrying out the first tests to start the last technical stage of the project, which will deal with field tests and the comparison of results with the commercial technical specifications. In this way, users, installers and companies that produce solar heaters can use the information to optimize the development of their equipment. The test results will also provide important data contributing to the implementation of norms to certify and approve solar heaters. These tests have been discussed in specific seminars coordinated for teachers and students. From the point of view of the organizers of the degree, this design activity, together with its evaluation and reflection in practice, constitutes a didactic strategy that creates opportunities for an enormously significant learning process; it is the sort of activity that experts in the field engage in. In this regard, the authenticity of an educational practice can be judged by the cultural activities the student shares, as well as by the type and level of the social activities promoted. In this context, we would like to point out the utility and functionality of what is learnt, and its sense and application, which promote the development of abilities and knowledge related to the profession. At the same time, the practice uses and transforms the physical and social environment, building a strong bond between the classroom and the community. The project has also been devised to create activities in the shape of experimental work that help achieve specific learning objectives in the subjects that deal with the exploitation of solar energy within an education for sustainable development. That is, it does not oppose teaching, the reading of textbooks, and demonstration; these are used, but in a broader sense, as tools for reflection. With this aim, the project team has carried out a survey among the authorities and teachers of those subjects so as to be able to devise new learning activities.
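To make the data flow described in Section III concrete, the sketch below mimics the mobile station's measurement cycle: sample the sensors, average the temperatures, total the pumped volume, and assemble a packet for GSM transmission. Every field name and value here is an illustrative assumption, not the project's actual firmware (which runs in C on a PIC microcontroller):

import json
import statistics
import time

def read_sample():
    # Stub standing in for the T1/T2 junction thermometers, the solarimeter,
    # the anemometer and the flow meter on the remote controlling plate.
    return {"t_in": 20.3, "t_out": 44.8, "irradiance": 812.0,
            "wind": 2.4, "flow_l": 0.5}

def assemble_packet(samples, lat, lon, alt):
    return json.dumps({
        "timestamp": int(time.time()),
        "gps": {"lat": lat, "lon": lon, "alt": alt},  # from the GPS module
        "t_in_avg": statistics.mean(s["t_in"] for s in samples),
        "t_out_avg": statistics.mean(s["t_out"] for s in samples),
        "irradiance_avg": statistics.mean(s["irradiance"] for s in samples),
        "volume_l": sum(s["flow_l"] for s in samples),
    })

samples = [read_sample() for _ in range(10)]
print(assemble_packet(samples, -32.95, -60.64, 25.0))  # Rosario-ish coordinates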
#!/usr/bin/env python
import setuptools

setuptools.setup(use_scm_version=True)
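use_scm_version=True asks setuptools_scm to derive the package version from the repository's tags, so that plugin must be available at build time. One legacy-style way to declare that in the same file is sketched below (modern projects more often declare it in pyproject.toml's [build-system] requires instead):

#!/usr/bin/env python
import setuptools

setuptools.setup(
    use_scm_version=True,
    setup_requires=["setuptools_scm"],  # fetched by setuptools at build time
)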
#include <iostream>
#include <cmath>
using namespace std;

int h, r;

int solve(void) // at least one, not more than 3!
{
    double H = h;
    double R = r;
    if (H < R / 2)
        return 1;
    else if (H >= R * 0.5 * sqrt(3.0))
        return 3;
    else
        return 2;
}

int main()
{
    cin >> r >> h;
    // Integer division: each full multiple of r in h contributes 2 to the answer
    int result = 2 * (h / r);
    h %= r;
    // The remaining height (now 0 <= h < r) adds between 1 and 3 more
    result += solve();
    cout << result;
    return 0;
}
In late 2010 and early 2011 George Zimmerman, the Hispanic Sanford, Fla., man who shot and killed 17-year-old black teen Trayvon Martin, publicly demanded discipline in a race-related beating case for at least two of the police officers who cleared him after the Feb. 26 altercation, according to records obtained by The Daily Caller. In a letter to Seminole County NAACP president Turner Clayton, a member of the Zimmerman family wrote that George was one of “very few” in Sanford who publicly condemned the “beating of the black homeless man Sherman Ware on Dec. 4, 2010, by the son of a Sanford police officer,” who is white. TheDC has confirmed the identity of the Zimmerman family member who wrote the letter but is withholding that person’s specific identity out of concern for the family’s safety. On Dec. 4, 2010, Justin Collison, the son of Sanford Police Department Lt. Chris Collison, was involved in a bar fight at The Wet Spot bar in Sanford. During the fight, which moved from indoors to outdoors, the younger Collison struck Ware. Ware suffered a concussion, and paramedics took him to the hospital shortly after police arrived on the scene. Collison was not arrested or charged, even though an onlooker had video evidence of his actions. No arrest was made and no action taken for weeks. Documents and emails now show police officers and officials from the office of the State Attorney operated with extreme caution because Collison’s father was a high-ranking law enforcement officer. In the final days of 2010, an Orlando television station aired the video footage of Justin Collison beating Ware. Collison turned himself in six days later, on Jan. 3, 2011. He agreed to pay for Ware’s medical bills and make donations to nonprofit organizations, including the NAACP. (RELATED: Full coverage of the Trayvon Martin shooting) After Justin Collison surrendered himself to authorities, the Sanford Police Department struggled to hold its officials accountable. A lengthy investigation conducted by the Seminole County Sheriff’s Office concluded that the police officials involved did not offer Justin Collison “preferential treatment.” Still, according to members of the Zimmerman family, George printed and distributed copies of fliers on bright fluorescent-colored paper demanding that the community “hold accountable” officers responsible for any misconduct. TheDC has obtained a copy of one of those fliers. “Do you know the individual that stepped up when no one else in the black community would?” the Zimmerman family member asked in the letter to the NAACP’s Clayton. “Do you know who spent tireless hours putting fliers on the cars of persons parked in the churches of the black community? Do you know who waited for the church‐goers to get out of church so that he could hand them fliers in an attempt to organize the black community against this horrible miscarriage of justice? Do you know who helped organize the City Hall meeting on January 8th, 2011 at Sanford City Hall??” “That person was GEORGE ZIMMERMAN,” the letter insisted. “Ironic isn’t it?” Every Sunday, according to his family, Zimmerman would stroll through Sanford’s black neighborhoods handing out the fliers demanding justice for Sherman Ware, and calling for the police to hold their own officials accountable. Zimmerman would also place the fliers on people’s cars outside churches. 
"I challenge you to stand together and to have our voices heard, and to hold accountable all of those officers, and officials whom let this atrocious attack pass unpunished until the media revealed it," one of the fliers reads in part. "This animal could have attacked anyone of us, our children or loved ones in his alcohol fueled rage."

The officers whom Zimmerman targeted for accountability in the Sherman Ware incident were all cleared by the Seminole County Sheriff's investigation, despite Zimmerman's repeated accusations that police gave kid-glove treatment to a white officer's son who beat a defenseless, homeless black man.

But 14 months later, at least two of the same officers investigated the shooting death of Trayvon Martin — and cleared Zimmerman — even though his voice was the loudest calling for their punishment in the Ware case.

One of those officers was Timothy Smith. According to a police incident report from the scene of the Feb. 26 shooting, Officer Smith handcuffed Zimmerman and transported him to the police station. Another was Sergeant Anthony Raimondo, who was on scene with Smith and other local officers.

At least one liberal media outlet — the self-described African-American news outlet NewsOne — has framed the story in a different light. On March 19, NewsOne argued that the Sherman Ware incident illustrates a pattern of mistreatment of black victims by the Sanford Police Department. "They [Sanford police] have a history of NOT arresting offenders who assault black men," the article's author declared.

[Embedded documents: Zimmerman's flier in support of the homeless black man Sherman Ware; the Feb. 23, 2011 conclusions of a Seminole County Sheriff's Office administrative investigation]
#pragma once

#define BNB_CLIENT_TOKEN <#Place your token here#>
# _*_ coding:utf-8 _*_
import random
import time
from random import randint

import faker
import requests
from lxml import etree

cookies_str = "XIN_anti_uid=3CEFA6D9-A446-B367-E80C-D7E5832CD906; path=/; domain=.www.xin.com; Expires=Sat, 02 Mar 2019 06:25:40 GMT;"

# Build the cookies dict from the raw cookie string.
cookies_dict = {cookie.split('=')[0]: cookie.split('=')[1] for cookie in cookies_str.split('; ')}


def getcookies_dict(cookies_str):
    return {cookie.split('=')[0]: cookie.split('=')[1] for cookie in cookies_str.split('; ')}


def get_header():
    # Generate a fresh Chrome user agent for every request.
    f = faker.Faker(locale='zh_CN')
    ua = f.chrome()
    return {
        'User-Agent': ua,
        'Referer': "https://www.xin.com/",
    }


def get_proxies():
    ips = ['172.16.58.3:4243', '192.168.127.12:4247', '172.16.31.10:4274', '172.16.31.10:2316',
           '172.16.31.10:4232', '172.16.17.32:3012', '192.168.3.11:4286', '172.16.17.32:1649',
           '192.168.3.11:2314', '192.168.3.11:4256']
    proxies = {
        'https': 'https://{}'.format(random.choice(ips))
    }
    print(proxies)
    return proxies


def get_request(url):
    print(url)
    try:
        response = requests.get(url=url, headers=get_header(), proxies=get_proxies())
        # response.encoding = 'utf-8'
        text = response.text
        print(response.status_code)
    except Exception:
        # Fall back to a dummy snippet so the downstream parser still works.
        text = '<img class="cd_m_info_mainimg" src="" onclick="uxl_track(\'w_vehicle_details/top_pic/carid/74027596\');">'
    # cookies_dict1 = requests.utils.dict_from_cookiejar(response.cookies)
    # print(cookies_dict1)
    # print(text)
    return text


def spider_list(text):
    root = etree.HTML(text)
    name_list = root.xpath('//div[@class="pad"]//h2/span/text()')
    url_list = root.xpath('//div[@class="pad"]//h2/span/@href')
    data = []
    for i in range(len(name_list)):
        # time.sleep(randint(1, 3))
        url = "https:{}".format(url_list[i])
        data.append(str({
            "name": name_list[i],
            "content_data": get_content(url)
        }))
    # print(data)
    # saveData(data)


def saveData(data):
    with open("data.data", "w", encoding="utf-8") as f:
        f.writelines(data)


def get_data(url_list):
    for i in url_list:
        get_content(i)


def get_content(url):
    root = etree.HTML(get_request(url))
    img_url = root.xpath("//img[@class='cd_m_info_mainimg']/@src")
    print(img_url)
    return {
        'img_url': img_url
    }


def spader_main():
    start_url = 'https://www.xin.com/beijing/i1/'
    text = get_request(start_url)
    spider_list(text)


if __name__ == '__main__':
    spader_main()
    # url = "https://www.xin.com/9qw5jr5dr5/che74027596.html?cityid=201"
    # start_url = 'https://www.xin.com/beijing/i1/'
    # # get_request(start_url)
    # print(get_content(url))
Servpro of Altoona is located at 2309 Union Ave in Altoona, Pennsylvania 16602. They can be contacted via phone at (814) 946-0119 for pricing, hours and directions. Servpro of Altoona specializes in Interior Work, Mold, and Wood.

When fire and water take control of your life, we help you take it back. SERVPRO of Altoona understands the stress and worry that comes with fire or water damage and the interruption it causes to your life and home. Our goal is to help minimize the disruption to your life and quickly make it "Like it never even happened". Posted by Ann Marie Anders on March 10, 2015.

These people came to our rescue. Mark and Brandt are so awesome. We highly recommend anyone who needs water damage restoration to consider ServPro. Posted by Christa McIntyre on January 15, 2015.

The following services are offered: Maintenance Services. The entry has been listed since Sep 9, 2010 and was last updated on Nov 14, 2013. There are 3 other Maintenance Services providers in Altoona.
import { Maybe, throwUnwrapErr } from '@utils/fp';
import Flamegraph from './Flamegraph';
import { BAR_HEIGHT } from './constants';
import TestData from './testData';

jest.mock('./Flamegraph_render');

type focusedNodeType = ConstructorParameters<typeof Flamegraph>[2];
type zoomType = ConstructorParameters<typeof Flamegraph>[5];

describe('Flamegraph', () => {
  let canvas: any;
  let flame: Flamegraph;
  const CANVAS_WIDTH = 600;
  const CANVAS_HEIGHT = 300;

  describe('isWithinBounds', () => {
    beforeEach(() => {
      canvas = document.createElement('canvas');
      canvas.width = CANVAS_WIDTH;
      canvas.height = CANVAS_HEIGHT;

      const fitMode = 'HEAD';
      const highlightQuery = '';
      const focusedNode: focusedNodeType = Maybe.nothing();
      const zoom = Maybe.of({ i: 2, j: 8 });

      flame = new Flamegraph(
        TestData.ComplexTree,
        canvas,
        focusedNode,
        fitMode,
        highlightQuery,
        zoom
      );
      flame.render();
    });

    it('handles within canvas', () => {
      expect(flame.isWithinBounds(0, 0)).toBe(true);
      expect(flame.isWithinBounds(CANVAS_WIDTH - 1, 0)).toBe(true);
      expect(flame.isWithinBounds(-1, 0)).toBe(false);
      expect(flame.isWithinBounds(0, -1)).toBe(false);
      expect(flame.isWithinBounds(-1, -1)).toBe(false);
    });

    it('handles within canvas but outside the flamegraph', () => {
      // This test is a bit difficult to verify visually;
      // you just have to know that the graph has a format such as
      //
      // | |   (level 3)
      // |_|   (level 4)
      //       (level 5)
      expect(flame.isWithinBounds(0, BAR_HEIGHT * 3 + 1)).toBe(true);
      expect(flame.isWithinBounds(0, BAR_HEIGHT * 4 + 1)).toBe(true);
      expect(flame.isWithinBounds(0, BAR_HEIGHT * 5 + 1)).toBe(false);
    });
  });

  describe('xyToBarData', () => {
    describe('normal', () => {
      beforeAll(() => {
        canvas = document.createElement('canvas');
        canvas.width = CANVAS_WIDTH;
        canvas.height = CANVAS_HEIGHT;

        const fitMode = 'HEAD';
        const highlightQuery = '';
        const zoom: zoomType = Maybe.nothing();
        const focusedNode: focusedNodeType = Maybe.nothing();

        flame = new Flamegraph(
          TestData.SimpleTree,
          canvas,
          focusedNode,
          fitMode,
          highlightQuery,
          zoom
        );
        flame.render();
      });

      it('works with the first bar (total)', () => {
        const got = flame.xyToBar(0, 0).unwrapOrElse(throwUnwrapErr);
        expect(got.x).toBe(0);
        expect(got.y).toBe(0);
        expect(got.width).toBeCloseTo(CANVAS_WIDTH);
      });

      it('works with a full bar (runtime.main)', () => {
        // 2nd line
        const got = flame
          .xyToBar(0, BAR_HEIGHT + 1)
          .unwrapOrElse(throwUnwrapErr);
        expect(got.x).toBe(0);
        expect(got.y).toBe(22);
        expect(got.width).toBeCloseTo(CANVAS_WIDTH);
      });

      it('works with (main.fastFunction)', () => {
        // 3rd line, 'fastFunction'
        const got = flame
          .xyToBar(1, BAR_HEIGHT * 2 + 1)
          .unwrapOrElse(throwUnwrapErr);
        expect(got.x).toBe(0);
        expect(got.y).toBe(44);
        expect(got.width).toBeCloseTo(129.95951417004048);
      });

      it('works with (main.slowFunction)', () => {
        // 3rd line, 'slowFunction'
        const got = flame
          .xyToBar(CANVAS_WIDTH - 1, BAR_HEIGHT * 2 + 1)
          .unwrapOrElse(throwUnwrapErr);
        expect(got.x).toBeCloseTo(131.78);
        expect(got.y).toBe(44);
        expect(got.width).toBeCloseTo(468.218);
      });

      describe('boundary testing', () => {
        const cases = [
          [0, 0],
          [CANVAS_WIDTH, 0],
          [1, BAR_HEIGHT],
          [CANVAS_WIDTH, BAR_HEIGHT],
          [CANVAS_WIDTH / 2, BAR_HEIGHT / 2],
        ];
        test.each(cases)(
          'given %p and %p as arguments, returns the total bar',
          (i: number, j: number) => {
            const got = flame.xyToBar(i, j).unwrapOrElse(throwUnwrapErr);
            expect(got).toMatchObject({
              i: 0,
              j: 0,
              x: 0,
              y: 0,
            });
            expect(got.width).toBeCloseTo(CANVAS_WIDTH);
          }
        );
      });
    });

    describe('focused', () => {
      describe('on the first row (runtime.main)', () => {
        beforeAll(() => {
          canvas = document.createElement('canvas');
          canvas.width = CANVAS_WIDTH;
          canvas.height = CANVAS_HEIGHT;

          const fitMode = 'HEAD';
          const highlightQuery = '';
          const zoom: zoomType = Maybe.nothing();
          const focusedNode = Maybe.just({ i: 1, j: 0 });

          flame = new Flamegraph(
            TestData.SimpleTree,
            canvas,
            focusedNode,
            fitMode,
            highlightQuery,
            zoom
          );
          flame.render();
        });

        it('works with the first bar (total)', () => {
          const got = flame.xyToBar(0, 0).unwrapOrElse(throwUnwrapErr);
          expect(got.x).toBe(0);
          expect(got.y).toBe(0);
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with a full bar (runtime.main)', () => {
          // 2nd line
          const got = flame
            .xyToBar(0, BAR_HEIGHT + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 1,
            j: 0,
            x: 0,
            y: 22,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with (main.fastFunction)', () => {
          // 3rd line, 'fastFunction'
          const got = flame
            .xyToBar(1, BAR_HEIGHT * 2 + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 2,
            j: 0,
            x: 0,
            y: 44,
          });
          expect(got.width).toBeCloseTo(129.95951417004048);
        });

        it('works with (main.slowFunction)', () => {
          // 3rd line, 'slowFunction'
          const got = flame
            .xyToBar(CANVAS_WIDTH - 1, BAR_HEIGHT * 2 + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 2,
            j: 8,
          });
          expect(got.x).toBeCloseTo(131.78);
          expect(got.y).toBe(44);
          expect(got.width).toBeCloseTo(468.218);
        });
      });

      describe('on main.slowFunction', () => {
        beforeAll(() => {
          canvas = document.createElement('canvas');
          canvas.width = CANVAS_WIDTH;
          canvas.height = CANVAS_HEIGHT;

          const fitMode = 'HEAD';
          const highlightQuery = '';
          const zoom: zoomType = Maybe.nothing();
          const focusedNode = Maybe.just({ i: 2, j: 8 });

          flame = new Flamegraph(
            TestData.SimpleTree,
            canvas,
            focusedNode,
            fitMode,
            highlightQuery,
            zoom
          );
          flame.render();
        });

        it('works with the first row (total)', () => {
          const got = flame.xyToBar(0, 0).unwrapOrElse(throwUnwrapErr);
          expect(got.x).toBe(0);
          expect(got.y).toBe(0);
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with itself as second row (main.slowFunction)', () => {
          // 2nd line
          const got = flame
            .xyToBar(1, BAR_HEIGHT + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 2,
            j: 8,
            x: 0,
            y: 22,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with its child as third row (main.work)', () => {
          // 3rd line
          const got = flame
            .xyToBar(1, BAR_HEIGHT * 2 + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 3,
            j: 8,
            x: 0,
            y: 44,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });
      });
    });

    describe('zoomed', () => {
      describe('on the first row (runtime.main)', () => {
        beforeAll(() => {
          canvas = document.createElement('canvas');
          canvas.width = CANVAS_WIDTH;
          canvas.height = CANVAS_HEIGHT;

          const fitMode = 'HEAD';
          const highlightQuery = '';
          const zoom: zoomType = Maybe.of({ i: 1, j: 0 });
          const focusedNode: focusedNodeType = Maybe.nothing();

          flame = new Flamegraph(
            TestData.SimpleTree,
            canvas,
            focusedNode,
            fitMode,
            highlightQuery,
            zoom
          );
          flame.render();
        });

        it('works with the first bar (total)', () => {
          const got = flame.xyToBar(0, 0).unwrapOrElse(throwUnwrapErr);
          expect(got.x).toBe(0);
          expect(got.y).toBe(0);
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with a full bar (runtime.main)', () => {
          // 2nd line
          const got = flame
            .xyToBar(0, BAR_HEIGHT + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 1,
            j: 0,
            x: 0,
            y: 22,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with (main.fastFunction)', () => {
          // 3rd line, 'fastFunction'
          const got = flame
            .xyToBar(1, BAR_HEIGHT * 2 + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 2,
            j: 0,
            x: 0,
            y: 44,
          });
          expect(got.width).toBeCloseTo(129.95951417004048);
        });

        it('works with (main.slowFunction)', () => {
          // 3rd line, 'slowFunction'
          const got = flame
            .xyToBar(CANVAS_WIDTH - 1, BAR_HEIGHT * 2 + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 2,
            j: 8,
          });
          expect(got.x).toBeCloseTo(131.78);
          expect(got.y).toBe(44);
          expect(got.width).toBeCloseTo(468.218);
        });
      });

      describe('on main.slowFunction', () => {
        beforeAll(() => {
          canvas = document.createElement('canvas');
          canvas.width = CANVAS_WIDTH;
          canvas.height = CANVAS_HEIGHT;

          const fitMode = 'HEAD';
          const highlightQuery = '';
          const zoom = Maybe.of({ i: 2, j: 8 });
          const focusedNode: focusedNodeType = Maybe.nothing();

          flame = new Flamegraph(
            TestData.SimpleTree,
            canvas,
            focusedNode,
            fitMode,
            highlightQuery,
            zoom
          );
          flame.render();
        });

        it('works with the first bar (total)', () => {
          const got = flame.xyToBar(0, 0).unwrapOrElse(throwUnwrapErr);
          expect(got.x).toBe(0);
          expect(got.y).toBe(0);
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with a full bar (runtime.main)', () => {
          // 2nd line
          const got = flame
            .xyToBar(0, BAR_HEIGHT + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 1,
            j: 0,
            x: 0,
            y: 22,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with (main.slowFunction)', () => {
          // 3rd line, 'slowFunction'
          const got = flame
            .xyToBar(1, BAR_HEIGHT * 2 + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 2,
            j: 8,
            x: 0,
            y: 44,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with main.work (child of main.slowFunction)', () => {
          // 4th line, 'main.work'
          // TODO why 2??
          const got = flame
            .xyToBar(1, BAR_HEIGHT * 3 + 2)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 3,
            j: 8,
            x: 0,
            y: 66,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });
      });
    });

    describe('focused+zoomed', () => {
      describe('focused on the first row (runtime.main), zoomed on the third row (main.slowFunction)', () => {
        beforeAll(() => {
          canvas = document.createElement('canvas');
          canvas.width = CANVAS_WIDTH;
          canvas.height = CANVAS_HEIGHT;

          const fitMode = 'HEAD';
          const highlightQuery = '';
          const zoom = Maybe.of({ i: 2, j: 8 });
          const focusedNode = Maybe.of({ i: 1, j: 0 });

          flame = new Flamegraph(
            TestData.SimpleTree,
            canvas,
            focusedNode,
            fitMode,
            highlightQuery,
            zoom
          );
          flame.render();
        });

        it('works with the first bar (total)', () => {
          const got = flame.xyToBar(0, 0).unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            x: 0,
            y: 0,
            i: 0,
            j: 0,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with a full bar (runtime.main)', () => {
          // 2nd line
          const got = flame
            .xyToBar(0, BAR_HEIGHT + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 1,
            j: 0,
            x: 0,
            y: 22,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with (main.slowFunction)', () => {
          // 3rd line, 'slowFunction'
          const got = flame
            .xyToBar(1, BAR_HEIGHT * 2 + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 2,
            j: 8,
            x: 0,
            y: 44,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });

        it('works with (main.work)', () => {
          // 4th line, 'main.work'
          const got = flame
            .xyToBar(1, BAR_HEIGHT * 3 + 1)
            .unwrapOrElse(throwUnwrapErr);
          expect(got).toMatchObject({
            i: 3,
            j: 8,
            x: 0,
            y: 66,
          });
          expect(got.width).toBeCloseTo(CANVAS_WIDTH);
        });
      });
    });
  });
});
Huynh Phu So's innate capacity for establishing Hoa Hao Buddhism in the Southwest region. Huynh Phu So is the founder of Hoa Hao Buddhism, who established the religion's doctrines and canon laws at a young age. The religion he founded now has nearly one million believers throughout the Southwest region, and it has also spread to other provinces and cities in the country. His teachings are remembered and applied by believers in their daily lives in order to build a compassionate, peaceful and prosperous society. Through bibliographic documents and fieldwork in the community, the article analyzes his genius in order to explain the birth of an endogenous religion with a large number of followers in the Southwest region today.
package no.nav.domain.pensjon.kjerne.skjema;

import javax.persistence.CascadeType;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.NamedQueries;
import javax.persistence.NamedQuery;
import javax.persistence.Table;

//import no.nav.domain.AbstractVersionedPersistentDomainObject;
import no.nav.domain.pensjon.kjerne.PenPerson;
import no.nav.domain.pensjon.kjerne.kodetabeller.KanalBprofCti;
import no.nav.domain.pensjon.kjerne.kodetabeller.KommunikasjonsformCti;
import no.nav.domain.pensjon.kjerne.kodetabeller.Land3TegnCti;
import no.nav.domain.pensjon.kjerne.kodetabeller.SprakCti;

import java.io.Serializable;

//@Entity
//@Table(name = "T_SKJEMA_PERS_OPPL")
//@NamedQueries({@NamedQuery(name = "SkjemaPersonopplysninger.findSkjemaPersonopplysningerByPenPerson", query = "select s from SkjemaPersonopplysninger s"
//        + " where s.penPerson = :penPerson")})
//public class SkjemaPersonopplysninger extends AbstractVersionedPersistentDomainObject {
public class SkjemaPersonopplysninger implements Serializable {

    private static final long serialVersionUID = 5993997913751167715L;

    //@Id
    //@GeneratedValue(strategy = GenerationType.AUTO)
    //@Column(name = "skjema_pers_oppl", nullable = false)
    private Long skjemaPersonopplysningerId;

    //@Column(name = "fornavn", nullable = true)
    private String fornavn;

    //@Column(name = "mellomnavn", nullable = true)
    private String mellomnavn;

    //@Column(name = "etternavn", nullable = true)
    private String etternavn;

    //@Column(name = "bosted_adresse1", nullable = true)
    private String bostedsadresse1;

    //@Column(name = "bosted_adresse2", nullable = true)
    private String bostedsadresse2;

    //@Column(name = "bosted_adresse3", nullable = true)
    private String bostedsadresse3;

    //@ManyToOne
    //@org.hibernate.annotations.Fetch(org.hibernate.annotations.FetchMode.SELECT)
    //@JoinColumn(name = "bostedsland", nullable = true)
    private Land3TegnCti land;

    //@Column(name = "utvandret", nullable = true)
    private Boolean utvandret;

    //@Column(name = "adresse_epost", nullable = true)
    private String epost;

    //@Column(name = "tlf_nr_mob", nullable = true)
    private String telefonnummerMobil;

    //@Column(name = "tlf_nr_arb", nullable = true)
    private String telefonnummerArbeid;

    //@Column(name = "tlf_nr_hjemme", nullable = true)
    private String telefonnummerHjemme;

    //@ManyToOne
    //@org.hibernate.annotations.Fetch(org.hibernate.annotations.FetchMode.SELECT)
    //@JoinColumn(name = "K_KOMMSJN_FORM", nullable = true)
    private KommunikasjonsformCti kanalPreferanse;

    //@ManyToOne
    //@org.hibernate.annotations.Fetch(org.hibernate.annotations.FetchMode.SELECT)
    //@JoinColumn(name = "k_kanal_bprof_t", nullable = true)
    private KanalBprofCti varslingskanalForetrukket;

    //@ManyToOne
    //@org.hibernate.annotations.Fetch(org.hibernate.annotations.FetchMode.SELECT)
    //@JoinColumn(name = "valgt_malform", nullable = true)
    private SprakCti valgtMalform;

    //@Column(name = "kontonummer_norge", nullable = true)
    private String kontonummerNorge;

    //@ManyToOne
    //@org.hibernate.annotations.Fetch(org.hibernate.annotations.FetchMode.SELECT)
    //@JoinColumn(name = "statsborger_i_land", nullable = false)
    private Land3TegnCti statsborgerskap;

    //@Column(name = "flyktning", nullable = false)
    private Boolean flyktning;

    // SIR 34210: Changed Fetchtype to eager to avoid lazyInitializationException.
    //@ManyToOne(fetch = javax.persistence.FetchType.EAGER, cascade = CascadeType.REFRESH)
    //@JoinColumn(name = "person_id", nullable = false)
    private PenPerson penPerson;

    public boolean livesInNorway() {
        return !utvandret;
    }

    public boolean livesAbroad() {
        return !livesInNorway();
    }

    /**
     * @return the bostedsadresse1
     */
    public String getBostedsadresse1() {
        return bostedsadresse1;
    }

    /**
     * @param bostedsadresse1 the bostedsadresse1 to set
     */
    public void setBostedsadresse1(String bostedsadresse1) {
        this.bostedsadresse1 = bostedsadresse1;
    }

    /**
     * @return the bostedsadresse2
     */
    public String getBostedsadresse2() {
        return bostedsadresse2;
    }

    /**
     * @param bostedsadresse2 the bostedsadresse2 to set
     */
    public void setBostedsadresse2(String bostedsadresse2) {
        this.bostedsadresse2 = bostedsadresse2;
    }

    /**
     * @return the bostedsadresse3
     */
    public String getBostedsadresse3() {
        return bostedsadresse3;
    }

    /**
     * @param bostedsadresse3 the bostedsadresse3 to set
     */
    public void setBostedsadresse3(String bostedsadresse3) {
        this.bostedsadresse3 = bostedsadresse3;
    }

    /**
     * @return the epost
     */
    public String getEpost() {
        return epost;
    }

    /**
     * @param epost the epost to set
     */
    public void setEpost(String epost) {
        this.epost = epost;
    }

    /**
     * @return the etternavn
     */
    public String getEtternavn() {
        return etternavn;
    }

    /**
     * @param etternavn the etternavn to set
     */
    public void setEtternavn(String etternavn) {
        this.etternavn = etternavn;
    }

    /**
     * @return the flyktning
     */
    public Boolean getFlyktning() {
        return flyktning;
    }

    /**
     * @param flyktning the flyktning to set
     */
    public void setFlyktning(Boolean flyktning) {
        this.flyktning = flyktning;
    }

    /**
     * @return the fornavn
     */
    public String getFornavn() {
        return fornavn;
    }

    /**
     * @param fornavn the fornavn to set
     */
    public void setFornavn(String fornavn) {
        this.fornavn = fornavn;
    }

    /**
     * @return the kontonummerNorge
     */
    public String getKontonummerNorge() {
        return kontonummerNorge;
    }

    /**
     * @param kontonummerNorge the kontonummerNorge to set
     */
    public void setKontonummerNorge(String kontonummerNorge) {
        this.kontonummerNorge = kontonummerNorge;
    }

    /**
     * @return the mellomnavn
     */
    public String getMellomnavn() {
        return mellomnavn;
    }

    /**
     * @param mellomnavn the mellomnavn to set
     */
    public void setMellomnavn(String mellomnavn) {
        this.mellomnavn = mellomnavn;
    }

    /**
     * @return the skjemaPersonopplysningerId
     */
    public Long getSkjemaPersonopplysningerId() {
        return skjemaPersonopplysningerId;
    }

    /**
     * @param skjemaPersonopplysningerId the skjemaPersonopplysningerId to set
     */
    public void setSkjemaPersonopplysningerId(Long skjemaPersonopplysningerId) {
        this.skjemaPersonopplysningerId = skjemaPersonopplysningerId;
    }

    /**
     * @return the statsborgerskap
     */
    public Land3TegnCti getStatsborgerskap() {
        return statsborgerskap;
    }

    /**
     * @param statsborgerskap the statsborgerskap to set
     */
    public void setStatsborgerskap(Land3TegnCti statsborgerskap) {
        this.statsborgerskap = statsborgerskap;
    }

    /**
     * @return the valgtMalform
     */
    public SprakCti getValgtMalform() {
        return valgtMalform;
    }

    /**
     * @param valgtMalform the valgtMalform to set
     */
    public void setValgtMalform(SprakCti valgtMalform) {
        this.valgtMalform = valgtMalform;
    }

    public PenPerson getPenPerson() {
        return penPerson;
    }

    public void setPenPerson(PenPerson penPerson) {
        this.penPerson = penPerson;
    }

    /**
     * @return the land
     */
    public Land3TegnCti getLand() {
        return land;
    }

    /**
     * @param land the land to set
     */
    public void setLand(Land3TegnCti land) {
        this.land = land;
    }

    /**
     * @return the utvandret
     */
    public Boolean getUtvandret() {
        return utvandret;
    }

    /**
     * @param utvandret the utvandret to set
     */
    public void setUtvandret(Boolean utvandret) {
        this.utvandret = utvandret;
    }

    /**
     * @return the kanalPreferanse
     */
    public KommunikasjonsformCti getKanalPreferanse() {
        return kanalPreferanse;
    }

    /**
     * @param kanalPreferanse the kanalPreferanse to set
     */
    public void setKanalPreferanse(KommunikasjonsformCti kanalPreferanse) {
        this.kanalPreferanse = kanalPreferanse;
    }

    /**
     * @return the arbeidtelefonnummer
     */
    public String getTelefonnummerArbeid() {
        return telefonnummerArbeid;
    }

    /**
     * @param telefonnummerArbeid the arbeidtelefonnummer to set
     */
    public void setTelefonnummerArbeid(String telefonnummerArbeid) {
        this.telefonnummerArbeid = telefonnummerArbeid;
    }

    /**
     * @return the hjemmetelefonnummer
     */
    public String getTelefonnummerHjemme() {
        return telefonnummerHjemme;
    }

    /**
     * @param telefonnummerHjemme the hjemmetelefonnummer to set
     */
    public void setTelefonnummerHjemme(String telefonnummerHjemme) {
        this.telefonnummerHjemme = telefonnummerHjemme;
    }

    /**
     * @return the mobiltelefonnummer
     */
    public String getTelefonnummerMobil() {
        return telefonnummerMobil;
    }

    /**
     * @param telefonnummerMobil the mobiltelefonnummer to set
     */
    public void setTelefonnummerMobil(String telefonnummerMobil) {
        this.telefonnummerMobil = telefonnummerMobil;
    }

    /**
     * @return the foretrukket varslingskanal
     */
    public KanalBprofCti getVarslingskanalForetrukket() {
        return varslingskanalForetrukket;
    }

    /**
     * @param varslingskanalForetrukket the foretrukket varslingskanal to set
     */
    public void setVarslingskanalForetrukket(KanalBprofCti varslingskanalForetrukket) {
        this.varslingskanalForetrukket = varslingskanalForetrukket;
    }
}
Dynamic programming for optimal stopping via pseudo-regression. We introduce new variants of classical regression-based algorithms for optimal stopping problems, in which the regression coefficients are computed by Monte Carlo approximation of the corresponding inner products rather than by minimizing the least-squares error functional. Coupled with new proposals for simulating the underlying samples, the approach is called pseudo-regression. A detailed convergence analysis is provided, and it is shown that the approach asymptotically reaches a pre-specified error tolerance at lower computational cost, hence has lower complexity. The method is justified by numerical examples.
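To make the idea concrete, here is a minimal NumPy sketch of one backward-induction step under illustrative assumptions that are not taken from the abstract's paper: a Bermudan put on geometric Brownian motion with a single early-exercise date, and normalized Hermite polynomials as the basis, which are orthonormal under the sampling measure, so the Gram matrix of inner products is the identity and the pseudo-regression coefficients reduce to plain Monte Carlo averages of the inner products <psi_k, y>.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative model parameters (assumptions, not from the paper).
N = 100_000                                  # Monte Carlo sample paths
r, sigma, T, s0, strike = 0.06, 0.2, 1.0, 100.0, 100.0
dt = T / 2                                   # one exercise date before maturity

# Simulate the underlying at t1 and t2 under GBM.
z1 = rng.standard_normal(N)
z2 = rng.standard_normal(N)
x1 = s0 * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z1)
x2 = x1 * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z2)

payoff = lambda s: np.maximum(strike - s, 0.0)


def basis(x):
    # Normalized Hermite polynomials in the standardized log-return u,
    # which is exactly N(0, 1) here, so E[psi psi^T] = I (orthonormality).
    u = (np.log(x / s0) - (r - 0.5 * sigma**2) * dt) / (sigma * np.sqrt(dt))
    return np.stack(
        [np.ones_like(u), u, (u**2 - 1.0) / np.sqrt(2.0), (u**3 - 3.0 * u) / np.sqrt(6.0)],
        axis=1,
    )


y = np.exp(-r * dt) * payoff(x2)   # discounted cashflow from t2
psi = basis(x1)

# Pseudo-regression: approximate the inner products <psi_k, y> by Monte
# Carlo averages; with an orthonormal basis no linear solve is needed.
beta_pseudo = psi.T @ y / N

# Classical least-squares regression, for comparison.
beta_ls, *_ = np.linalg.lstsq(psi, y, rcond=None)
print("pseudo-regression coefficients:", beta_pseudo)
print("least-squares coefficients:   ", beta_ls)

# One dynamic-programming step: exercise where intrinsic value beats the
# estimated continuation value.
cont = psi @ beta_pseudo
value = np.where(payoff(x1) > cont, payoff(x1), y)
print("price estimate:", np.exp(-r * dt) * value.mean())

The point of the comparison is that beta_pseudo requires no normal-equation solve; since the Gram matrix converges to the identity, the two estimators agree asymptotically, which is where the cost advantage for a given error tolerance comes from.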
package com.alipay.api.response;

import java.util.List;

import com.alipay.api.internal.mapping.ApiField;
import com.alipay.api.internal.mapping.ApiListField;
import com.alipay.api.domain.SearchBoxAccountModule;
import com.alipay.api.domain.SearchBoxBasicInfoModule;
import com.alipay.api.domain.SearchBoxKeyWordModule;
import com.alipay.api.domain.SearchBoxImageModule;
import com.alipay.api.domain.SearchBoxServiceModule;
import com.alipay.api.AlipayResponse;

/**
 * ALIPAY API: alipay.open.search.box.query response.
 *
 * @author auto create
 * @since 1.0, 2022-04-22 16:51:43
 */
public class AlipayOpenSearchBoxQueryResponse extends AlipayResponse {

    private static final long serialVersionUID = 1846684335798119238L;

    /**
     * Search box account module.
     */
    @ApiField("account_module")
    private SearchBoxAccountModule accountModule;

    /**
     * Search box basic info module.
     */
    @ApiField("basic_info_module")
    private SearchBoxBasicInfoModule basicInfoModule;

    /**
     * Search box configuration id.
     */
    @ApiField("box_id")
    private String boxId;

    /**
     * Search box configuration status: INITIAL (initial) / AUDIT (under review) / CANCEL (cancelled) /
     * ONLINE (published) / REJECT (rejected) / OFFLINE (unpublished) / EXPIRE (expired).
     */
    @ApiField("box_status")
    private String boxStatus;

    /**
     * Default search box trigger keywords, generated by the system and not modifiable.
     */
    @ApiListField("default_keywords")
    @ApiField("string")
    private List<String> defaultKeywords;

    /**
     * Search box keyword module.
     */
    @ApiField("keyword_module")
    private SearchBoxKeyWordModule keywordModule;

    /**
     * Most recently audited theme image.
     */
    @ApiField("latest_audit_image")
    private SearchBoxImageModule latestAuditImage;

    /**
     * Search box service module.
     */
    @ApiField("service_module")
    private SearchBoxServiceModule serviceModule;

    /**
     * Currently effective theme image.
     */
    @ApiField("valid_image")
    private SearchBoxImageModule validImage;

    public void setAccountModule(SearchBoxAccountModule accountModule) {
        this.accountModule = accountModule;
    }
    public SearchBoxAccountModule getAccountModule() {
        return this.accountModule;
    }

    public void setBasicInfoModule(SearchBoxBasicInfoModule basicInfoModule) {
        this.basicInfoModule = basicInfoModule;
    }
    public SearchBoxBasicInfoModule getBasicInfoModule() {
        return this.basicInfoModule;
    }

    public void setBoxId(String boxId) {
        this.boxId = boxId;
    }
    public String getBoxId() {
        return this.boxId;
    }

    public void setBoxStatus(String boxStatus) {
        this.boxStatus = boxStatus;
    }
    public String getBoxStatus() {
        return this.boxStatus;
    }

    public void setDefaultKeywords(List<String> defaultKeywords) {
        this.defaultKeywords = defaultKeywords;
    }
    public List<String> getDefaultKeywords() {
        return this.defaultKeywords;
    }

    public void setKeywordModule(SearchBoxKeyWordModule keywordModule) {
        this.keywordModule = keywordModule;
    }
    public SearchBoxKeyWordModule getKeywordModule() {
        return this.keywordModule;
    }

    public void setLatestAuditImage(SearchBoxImageModule latestAuditImage) {
        this.latestAuditImage = latestAuditImage;
    }
    public SearchBoxImageModule getLatestAuditImage() {
        return this.latestAuditImage;
    }

    public void setServiceModule(SearchBoxServiceModule serviceModule) {
        this.serviceModule = serviceModule;
    }
    public SearchBoxServiceModule getServiceModule() {
        return this.serviceModule;
    }

    public void setValidImage(SearchBoxImageModule validImage) {
        this.validImage = validImage;
    }
    public SearchBoxImageModule getValidImage() {
        return this.validImage;
    }
}
This invention relates in general to terminal headers mounted to printed circuit boards (PCBs) of PCB assemblies. In particular, this invention relates to improved male and female terminal headers configured for mounting on different PCBs and for mounting to each other, wherein electrical terminals are mounted within the male and female terminal headers before the terminal headers are mounted to the PCBs.

Two PCBs may be electrically connected to each other by connecting a male terminal header mounted on a first PCB to a female terminal header mounted on a second PCB. Conventionally, electrical terminals are positioned and held in desired locations on the PCB with an alignment tool during attachment to the PCB, such as with solder. The male and female terminal headers are typically then mounted to the electrical terminals after the electrical terminals have been soldered to the PCB and the alignment tool removed. The electrical terminals may become bent, misaligned, or otherwise damaged during the mounting of the male and female terminal headers. Also, the male and female terminal headers may be difficult to assemble if they are not aligned properly.

It is therefore desirable to provide improved male and female terminal headers that are easier to align with their corresponding electrical terminals, easier to mount to the PCB, and easier to align with and connect to each other. It is further desirable to provide an improved method of assembling male and female terminal headers onto PCBs that eliminates the need for an alignment tool.
/**
 * The thread that loads documents asynchronously.
 */
private class PageLoader implements Runnable {

    private Document doc;
    private PageStream in;
    private URL old;
    URL page;

    PageLoader(Document doc, InputStream in, URL old, URL page) {
        this.doc = doc;
        this.in = new PageStream(in);
        this.old = old;
        this.page = page;
    }

    public void run() {
        try {
            // Parse the stream into the document model.
            read(in, doc);
        } catch (IOException ex) {
            UIManager.getLookAndFeel().provideErrorFeedback(JEditorPane.this);
        } finally {
            // Always fire the "page" property change on the event dispatch thread.
            if (SwingUtilities.isEventDispatchThread()) {
                firePropertyChange("page", old, page);
            } else {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        firePropertyChange("page", old, page);
                    }
                });
            }
        }
    }

    void cancel() {
        in.cancel();
    }
}
A Color-Based Approach for Melanoma Skin Cancer Detection. Skin cancer cases have been rising continuously over the past few years. Broadly, skin cancer is of three types: Basal Cell Carcinoma, Squamous Cell Carcinoma, and Melanoma. Among these, melanoma is the most dangerous form of skin cancer, and its treatment is possible only if it is detected in its early stages. Early detection of melanoma is challenging; therefore, various systems have been developed to automate the process of melanoma skin cancer diagnosis. The features used to characterize the disease play a very important role in the diagnosis, and it is equally important to find the correct combination of features and machine learning techniques for classification. Here, a system for melanoma skin cancer detection is developed using the MED-NODE dataset of digital images. Raw images from the dataset contain various artifacts, so preprocessing is first applied to remove them. The Active Contour segmentation method is then used to extract the region of interest. Various color features are extracted from the segmented part, and the system's performance is evaluated with three classifiers (Naive Bayes, Decision Tree, and KNN). The system achieves an accuracy of 82.35% with the Decision Tree, which is greater than the other classifiers.
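For a sense of how such a pipeline fits together in code, the sketch below strings the same three stages together with scikit-image and scikit-learn: Gaussian smoothing as a stand-in for the preprocessing step, active-contour (snake) segmentation to extract the region of interest, and per-channel color statistics fed to a Decision Tree. The synthetic images, circular contour initialization, snake parameters, and this particular feature set are assumptions made to keep the example self-contained and runnable; they are not the exact choices of the paper.

import numpy as np
from skimage.color import rgb2gray
from skimage.draw import polygon
from skimage.filters import gaussian
from skimage.segmentation import active_contour
from sklearn.tree import DecisionTreeClassifier


def color_features(image, n_points=200, radius=40):
    """Segment the lesion with an active contour and return color features."""
    gray = gaussian(rgb2gray(image), sigma=3)  # preprocessing: smooth artifacts
    rows, cols = gray.shape
    # Initialize the snake as a circle around the image center (assumption).
    s = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([rows / 2 + radius * np.sin(s),
                            cols / 2 + radius * np.cos(s)])
    snake = active_contour(gray, init, alpha=0.015, beta=10, gamma=0.001)
    # Rasterize the closed contour into a boolean lesion mask.
    rr, cc = polygon(snake[:, 0], snake[:, 1], shape=gray.shape)
    mask = np.zeros_like(gray, dtype=bool)
    mask[rr, cc] = True
    lesion = image[mask]  # pixels inside the segmented region
    # Illustrative color features: per-channel mean and standard deviation.
    return np.concatenate([lesion.mean(axis=0), lesion.std(axis=0)])


# Toy data: dark blobs on a skin-colored background stand in for MED-NODE images.
rng = np.random.default_rng(0)


def toy_image(dark):
    img = np.ones((128, 128, 3)) * np.array([0.8, 0.6, 0.5])
    img += 0.02 * rng.standard_normal((128, 128, 3))
    rr, cc = np.ogrid[:128, :128]
    blob = (rr - 64) ** 2 + (cc - 64) ** 2 < 30 ** 2
    img[blob] = [0.25, 0.15, 0.1] if dark else [0.55, 0.35, 0.3]
    return np.clip(img, 0, 1)


X = np.array([color_features(toy_image(dark=(i % 2 == 0))) for i in range(20)])
y = np.array([i % 2 for i in range(20)])  # 1 = "melanoma-like" dark lesion

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))

The design choice worth noting is that the classifier never sees raw pixels; everything downstream depends on how well the segmentation isolates the lesion, which is why the paper treats segmentation and feature choice as separate, equally important steps.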
Tim Blake Nelson has joined the cast of CBS' drama pilot "Chaos." Nelson ("The Incredible Hulk") will play a trained psychologist-turned-Core Collector at the CIA who leads the ODS (Office of Disruptive Services) team. He is a tactical genius and can't understand why Rick was chosen for the team.
package info.ata4.minecraft.dragon.util;

import info.ata4.minecraft.dragon.DragonMounts;
import net.minecraft.util.ResourceLocation;
import net.minecraft.world.storage.loot.*;
import net.minecraft.world.storage.loot.conditions.LootCondition;
import net.minecraftforge.event.LootTableLoadEvent;
import net.minecraftforge.fml.common.eventhandler.SubscribeEvent;

public class LootHandler {

    /**
     * Adds a dragon egg loot pool to the loot table being loaded.
     */
    private static void addEggPool(LootTableLoadEvent evt, String eggName) {
        evt.getTable().addPool(new LootPool(
                new LootEntry[]{new LootEntryTable(
                        new ResourceLocation(DragonMounts.AID, "chests/" + eggName),
                        5, 0, new LootCondition[0], eggName)},
                new LootCondition[0],
                new RandomValueRange(1),
                new RandomValueRange(0, 1),
                eggName));
    }

    @SubscribeEvent
    public void lootLoad(LootTableLoadEvent evt) {
        if (evt.getName().equals(LootTableList.CHESTS_NETHER_BRIDGE)) {
            addEggPool(evt, "nether_egg");
        }
        if (evt.getName().equals(LootTableList.CHESTS_DESERT_PYRAMID)) {
            addEggPool(evt, "air_egg");
        }
        if (evt.getName().equals(LootTableList.CHESTS_END_CITY_TREASURE)) {
            addEggPool(evt, "ender_egg");
        }
        if (evt.getName().equals(LootTableList.CHESTS_IGLOO_CHEST)) {
            addEggPool(evt, "ice_egg");
        }
        if (evt.getName().equals(LootTableList.CHESTS_JUNGLE_TEMPLE)) {
            addEggPool(evt, "forest_egg");
        }
        if (evt.getName().equals(LootTableList.CHESTS_ABANDONED_MINESHAFT)) {
            addEggPool(evt, "ghost_egg");
        }
        if (evt.getName().equals(LootTableList.CHESTS_STRONGHOLD_CORRIDOR)
                || evt.getName().equals(LootTableList.CHESTS_STRONGHOLD_CROSSING)
                || evt.getName().equals(LootTableList.CHESTS_STRONGHOLD_LIBRARY)
                || evt.getName().equals(LootTableList.CHESTS_SIMPLE_DUNGEON)) {
            addEggPool(evt, "fire_egg");
        }
        if (evt.getName().equals(LootTableList.CHESTS_WOODLAND_MANSION)) {
            addEggPool(evt, "ghost_egg");
        }
        if (evt.getName().equals(LootTableList.ENTITIES_ELDER_GUARDIAN)) {
            addEggPool(evt, "water_egg");
        }
    }
}
Gremlin. I love you. What else is there to say? You're perfect and that's totally your signature color.
/* Copyright (c) 2017, ARM Limited and Contributors
 *
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge,
 * to any person obtaining a copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation the rights to
 * use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
 * and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
 * INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 */

#include "pipeline.hpp"
#include "device.hpp"
#include "format.hpp"
#include "message_codes.hpp"
#include "render_pass.hpp"
#include "shader_module.hpp"
#include "spirv_cross.hpp"
#include <algorithm>

using namespace spirv_cross;
using namespace std;

namespace MPD
{

void Pipeline::checkWorkGroupSize(const VkComputePipelineCreateInfo &createInfo)
{
    auto *module = baseDevice->get<ShaderModule>(createInfo.stage.module);
    const auto &cfg = this->getDevice()->getConfig();

    try
    {
        Compiler comp(module->getCode());
        comp.set_entry_point(createInfo.stage.pName);

        // Get the workgroup size.
        uint32_t x = comp.get_execution_mode_argument(spv::ExecutionModeLocalSize, 0);
        uint32_t y = comp.get_execution_mode_argument(spv::ExecutionModeLocalSize, 1);
        uint32_t z = comp.get_execution_mode_argument(spv::ExecutionModeLocalSize, 2);
        MPD_ASSERT(x > 0);
        MPD_ASSERT(y > 0);
        MPD_ASSERT(z > 0);

        uint32_t numThreads = x * y * z;
        const uint32_t quadSize = baseDevice->getConfig().threadGroupSize;

        if (cfg.msgComputeNoThreadGroupAlignment &&
            (numThreads == 1 || ((x > 1) && (x & (quadSize - 1))) || ((y > 1) && (y & (quadSize - 1))) ||
             ((z > 1) && (z & (quadSize - 1)))))
        {
            log(VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, MESSAGE_CODE_COMPUTE_NO_THREAD_GROUP_ALIGNMENT,
                "The work group size (%u, %u, %u) has dimensions which are not aligned to %u threads. "
                "Not aligning work group sizes to %u may leave threads idle on the shader core.",
                x, y, z, quadSize, quadSize);
        }

        if (cfg.msgComputeLargeWorkGroup && (x * y * z) > baseDevice->getConfig().maxEfficientWorkGroupThreads)
        {
            log(VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, MESSAGE_CODE_COMPUTE_LARGE_WORK_GROUP,
                "The work group size (%u, %u, %u) (%u threads) has more threads than advised. "
                "It is advised to not use more than %u threads per work group, especially when using barrier() and/or "
                "shared memory.",
                x, y, z, x * y * z, baseDevice->getConfig().maxEfficientWorkGroupThreads);
        }

        // Make some basic advice about compute work group sizes based on active resource types.
        auto activeVariables = comp.get_active_interface_variables();
        auto resources = comp.get_shader_resources(activeVariables);

        unsigned dimensions = 0;
        if (x > 1)
            dimensions++;
        if (y > 1)
            dimensions++;
        if (z > 1)
            dimensions++;
        // Here the dimension will really depend on the dispatch grid, but assume it's 1D.
        dimensions = max(dimensions, 1u);

        // If we're accessing images, we almost certainly want to have a 2D workgroup for cache reasons.
        // There are some false positives here. We could simply have a shader that does this within a 1D grid,
        // or we may have a linearly tiled image, but these cases are quite unlikely in practice.
        bool accesses_2d = false;
        const auto check_image = [&](const Resource &resource) {
            auto &type = comp.get_type(resource.base_type_id);
            switch (type.image.dim)
            {
            // These are 1D, so don't count these images.
            case spv::Dim1D:
            case spv::DimBuffer:
                break;

            default:
                accesses_2d = true;
                break;
            }
        };

        for (auto &image : resources.storage_images)
            check_image(image);
        for (auto &image : resources.sampled_images)
            check_image(image);
        for (auto &image : resources.separate_images)
            check_image(image);

        if (cfg.msgComputePoorSpatialLocality && accesses_2d && dimensions < 2)
        {
            log(VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, MESSAGE_CODE_COMPUTE_POOR_SPATIAL_LOCALITY,
                "The compute shader has a work group size of (%u, %u, %u), which suggests a 1D dispatch, "
                "but the shader is accessing 2D or 3D images. There might be poor spatial locality in this shader.",
                x, y, z);
        }
    }
    catch (const CompilerError &error)
    {
        log(VK_DEBUG_REPORT_WARNING_BIT_EXT, 0,
            "SPIRV-Cross failed to analyze shader: %s. No checks for this pipeline will be performed.", error.what());
    }
}

static bool accessChainIsStaticallyAddressable(const Compiler &comp, const SPIRType &type)
{
    // For any non-struct type, if there are no arrays, there is no access chain except for
    // OpVectorExtractDynamic or similar, which is fine; the Vulkan spec only prohibits divergent
    // array accesses into push constant space.
    if (type.basetype != SPIRType::Struct)
        return type.array.empty();

    // For structs, recurse through our members.
    for (auto &memb : type.member_types)
        if (!accessChainIsStaticallyAddressable(comp, comp.get_type(memb)))
            return false;

    return true;
}

void Pipeline::checkPushConstantsForStage(const VkPipelineShaderStageCreateInfo &stage)
{
    auto *module = baseDevice->get<ShaderModule>(stage.module);

    try
    {
        Compiler comp(module->getCode());
        comp.set_entry_point(stage.pName);

        // Heuristic:
        // If a shader accesses at least one uniform buffer on a member which is not an array type and
        // the shader does not use any push constant blocks, suggest that the shader could use push constants.
        // Arrays are not considered as they are generally needed for any kind of instancing/batching,
        // and push constants aren't possible there.
        auto activeVariables = comp.get_active_interface_variables();
        auto resources = comp.get_shader_resources(activeVariables);

        // If we have a push constant block, nothing to warn about.
        if (!resources.push_constant_buffers.empty())
            return;

        struct PotentialPushConstant
        {
            string blockName;
            string memberName;
            uint32_t uboID;
            uint32_t index;
            size_t offset;
            size_t range;
        };
        vector<PotentialPushConstant> potentials;
        uint32_t totalPushConstantSize = 0;

        // See if we find any access to UBO members which are not arrayed.
        for (auto &ubo : resources.uniform_buffers)
        {
            auto &type = comp.get_type(ubo.type_id);

            // Array of UBOs, not a push constant candidate.
            if (!type.array.empty())
                continue;

            // Type of the basic struct.
            auto &baseType = comp.get_type(ubo.base_type_id);

            auto ranges = comp.get_active_buffer_ranges(ubo.id);
            for (auto &range : ranges)
            {
                auto &memberType = comp.get_type(baseType.member_types[range.index]);

                // If a nested variant of this type can be statically addressed (no dynamic accesses
                // anywhere), this is a push constant candidate.
                if (accessChainIsStaticallyAddressable(comp, memberType))
                {
                    auto &blockName = ubo.name;
                    auto &memberName = comp.get_member_name(ubo.base_type_id, range.index);
                    potentials.push_back({
                        blockName.empty() ? "<stripped>" : blockName,
                        memberName.empty() ? "<stripped>" : memberName,
                        ubo.id,
                        range.index,
                        range.offset,
                        range.range,
                    });
                    totalPushConstantSize += range.range;
                }
            }
        }

        const auto &cfg = this->getDevice()->getConfig();

        if (cfg.msgPotentialPushConstant)
        {
            for (auto &potential : potentials)
            {
                module->log(VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, MESSAGE_CODE_POTENTIAL_PUSH_CONSTANT,
                            "Identified static access to a UBO block (%s, ID: %u) member (%s, index: %u, offset: %u, "
                            "range: %u). "
                            "This data should be considered for a push constant block which would enable more "
                            "efficient access to this data.",
                            potential.blockName.c_str(), potential.uboID, potential.memberName.c_str(),
                            potential.index, potential.offset, potential.range);
            }
        }

        if (cfg.msgPotentialPushConstant && totalPushConstantSize)
        {
            module->log(VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, MESSAGE_CODE_POTENTIAL_PUSH_CONSTANT,
                        "Identified a total of %u bytes of UBO data which could potentially be push constant.",
                        totalPushConstantSize);
        }
    }
    catch (const CompilerError &error)
    {
        log(VK_DEBUG_REPORT_WARNING_BIT_EXT, 0,
            "SPIRV-Cross failed to analyze shader: %s. No checks for this pipeline will be performed.", error.what());
    }
}

VkResult Pipeline::initCompute(VkPipeline pipeline_, const VkComputePipelineCreateInfo &createInfo)
{
    pipeline = pipeline_;
    type = Type::Compute;
    layout = baseDevice->get<PipelineLayout>(createInfo.layout);

    checkWorkGroupSize(createInfo);
    checkPushConstantsForStage(createInfo.stage);
    return VK_SUCCESS;
}

void Pipeline::checkInstancedVertexBuffer(const VkGraphicsPipelineCreateInfo &createInfo)
{
    auto &vertexInput = *createInfo.pVertexInputState;

    uint32_t count = 0;
    for (uint32_t i = 0; i < vertexInput.vertexBindingDescriptionCount; i++)
        if (vertexInput.pVertexBindingDescriptions[i].inputRate == VK_VERTEX_INPUT_RATE_INSTANCE)
            count++;

    const auto &cfg = this->getDevice()->getConfig();

    if (cfg.msgTooManyInstancedVertexBuffers && count > baseDevice->getConfig().maxInstancedVertexBuffers)
    {
        log(VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, MESSAGE_CODE_TOO_MANY_INSTANCED_VERTEX_BUFFERS,
            "The pipeline is using %u instanced vertex buffers (current limit: %u), but this can be inefficient on "
            "the GPU. "
            "If using instanced vertex attributes prefer interleaving them in a single buffer.",
            count, baseDevice->getConfig().maxInstancedVertexBuffers);
    }
}

void Pipeline::checkMultisampledBlending(const VkGraphicsPipelineCreateInfo &createInfo)
{
    if (!createInfo.pColorBlendState || !createInfo.pMultisampleState)
        return;

    if (createInfo.pMultisampleState->rasterizationSamples == VK_SAMPLE_COUNT_1_BIT)
        return;

    // For per-sample shading, we don't expect 1x shading rate anyways, so per-sample
    // blending is not really a problem.
    if (createInfo.pMultisampleState->sampleShadingEnable)
        return;

    auto *renderPass = baseDevice->get<RenderPass>(createInfo.renderPass);
    auto &info = renderPass->getCreateInfo();

    MPD_ASSERT(createInfo.subpass < info.subpassCount);
    auto &subpass = info.pSubpasses[createInfo.subpass];

    const auto &cfg = this->getDevice()->getConfig();

    for (uint32_t i = 0; i < createInfo.pColorBlendState->attachmentCount; i++)
    {
        auto &att = createInfo.pColorBlendState->pAttachments[i];
        MPD_ASSERT(i < subpass.colorAttachmentCount);
        uint32_t attachment = subpass.pColorAttachments[i].attachment;

        if (attachment != VK_ATTACHMENT_UNUSED && att.blendEnable && att.colorWriteMask)
        {
            MPD_ASSERT(attachment < info.attachmentCount);
            // works fine on PowerVR
            if (cfg.msgNotFullThroughputBlending &&
                !formatHasFullThroughputBlending(info.pAttachments[attachment].format))
            {
                log(VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT, MESSAGE_CODE_NOT_FULL_THROUGHPUT_BLENDING,
                    "Pipeline is multisampled and color attachment #%u makes use of a format which cannot be blended "
                    "at full throughput when using MSAA.",
                    i);
            }
        }
    }
}

VkResult Pipeline::initGraphics(VkPipeline pipeline_, const VkGraphicsPipelineCreateInfo &createInfo)
{
    pipeline = pipeline_;
    type = Type::Graphics;
    layout = baseDevice->get<PipelineLayout>(createInfo.layout);
    this->createInfo.graphics = createInfo;

    if (createInfo.pDepthStencilState)
    {
        depthStencilState = *createInfo.pDepthStencilState;
    }

    if (createInfo.pInputAssemblyState)
    {
        inputAssemblyState = *createInfo.pInputAssemblyState;
    }

    this->createInfo.graphics.pDepthStencilState = &depthStencilState;
    this->createInfo.graphics.pInputAssemblyState = &inputAssemblyState;

    if (createInfo.pColorBlendState)
    {
        colorBlendState = *createInfo.pColorBlendState;
        colorBlendAttachmentState.clear();
        for (uint32_t i = 0; i < colorBlendState.attachmentCount; i++)
            colorBlendAttachmentState.push_back(createInfo.pColorBlendState->pAttachments[i]);

        if (colorBlendAttachmentState.empty())
            colorBlendState.pAttachments = nullptr;
        else
            colorBlendState.pAttachments = colorBlendAttachmentState.data();

        this->createInfo.graphics.pColorBlendState = &colorBlendState;
    }

    checkInstancedVertexBuffer(createInfo);
    checkMultisampledBlending(createInfo);

    for (uint32_t i = 0; i < createInfo.stageCount; i++)
        checkPushConstantsForStage(createInfo.pStages[i]);

    return VK_SUCCESS;
}
} // namespace MPD
package it.cnr.isti.hpclab.example.search;

import eu.nicecode.simulator.Time;
import it.cnr.isti.hpclab.engine.RequestProcessingThread;
import it.cnr.isti.hpclab.request.Request;
import it.cnr.isti.hpclab.request.RunningRequest;

/**
 * A search engine-specific implementation of {@link it.cnr.isti.hpclab.request.RunningRequest}.
 *
 * @author <NAME>
 */
public class RunningQuery extends RunningRequest {

    public RunningQuery(Time time, RequestProcessingThread processor, Request originalQuery) {
        super(time, processor, originalQuery);
    }

    /**
     * Get the completion ratio of the query, i.e., how much of the query has been
     * processed with respect to the total amount of work required to complete it.
     *
     * @return the completion ratio
     */
    private double getCompletionRatio() {
        return ((double) executedTime) / requiredTime;
    }

    /**
     * Get the predicted processing cost of this query (i.e., the estimated number of
     * postings to score for completing its processing) according to some query
     * performance predictor.
     *
     * @return the predicted processing cost
     */
    public int getPredictedProcessingCost() {
        return ((Query) originalRequest).getPredictedProcessingCost((Shard) thread.getShardServer().getShard());
    }

    /**
     * Get the root-mean-squared error of the predicted processing cost of this query
     * (i.e., the estimated number of postings to score for completing its processing)
     * according to some query performance predictor.
     *
     * @return the RMSE of the predicted processing cost
     */
    public int getPredictedProcessingCostRMSE() {
        return ((Query) originalRequest).getPredictedProcessingCostRMSE((Shard) thread.getShardServer().getShard());
    }

    /**
     * Estimate how many postings have already been scored for this query, according to
     * its predicted processing cost and its current completion ratio.
     *
     * @return the estimated number of processed postings
     */
    public int getProcessedPostings() {
        return (int) Math.floor((getPredictedProcessingCost() + getPredictedProcessingCostRMSE()) * getCompletionRatio());
    }

    /**
     * Get the number of terms of this query.
     *
     * @return the number of terms
     */
    public int getNumberOfTerms() {
        return ((Query) originalRequest).getNumberOfTerms((Shard) thread.getShardServer().getShard());
    }
}
import torch
import numpy as np
import torch.distributions
import time

from ..utils import eval


def laplace_parameters(module, params, no_cov_mat=True, max_num_models=0):
    for name in list(module._parameters.keys()):
        if module._parameters[name] is None:
            print(module, name)
            continue
        data = module._parameters[name].data
        module._parameters.pop(name)
        module.register_buffer("%s_mean" % name, data.new(data.size()).zero_())
        module.register_buffer("%s_var" % name, data.new(data.size()).zero_())
        module.register_buffer(name, data.new(data.size()).zero_())

        if no_cov_mat is False:
            if int(torch.__version__.split(".")[1]) >= 4:
                module.register_buffer(
                    "%s_cov_mat_sqrt" % name,
                    torch.zeros(max_num_models, data.numel()).cuda(),
                )
            else:
                module.register_buffer(
                    "%s_cov_mat_sqrt" % name,
                    torch.autograd.Variable(
                        torch.zeros(max_num_models, data.numel()).cuda()
                    ),
                )

        params.append((module, name))


class Laplace(torch.nn.Module):
    def __init__(self, base, max_num_models=20, no_cov_mat=False, *args, **kwargs):
        super(Laplace, self).__init__()
        self.params = list()
        self.base = base(*args, **kwargs)
        self.max_num_models = max_num_models
        self.base.apply(
            lambda module: laplace_parameters(
                module=module,
                params=self.params,
                no_cov_mat=no_cov_mat,
                max_num_models=max_num_models,
            )
        )

    def forward(self, input):
        return self.base(input)

    def sample(self, scale=1.0, cov=False, require_grad=False):
        for module, name in self.params:
            mean = module.__getattr__("%s_mean" % name)
            var = module.__getattr__("%s_var" % name)
            if not cov:
                eps = mean.new(mean.size()).normal_()
                w = mean + scale * torch.sqrt(var) * eps
            else:
                cov_mat_sqrt = module.__getattr__("%s_cov_mat_sqrt" % name)
                # rank-deficient normal results
                eps = torch.zeros(cov_mat_sqrt.size(0), 1).normal_().cuda()
                # sqrt(max_num_models) scaling comes from covariance matrix
                w = mean + (
                    scale / ((self.max_num_models - 1) ** 0.5)
                ) * var * cov_mat_sqrt.t().matmul(eps).view_as(mean)
            if require_grad:
                w.requires_grad_()
            module.__setattr__(name, w)
            getattr(module, name)
        else:
            for module, name in self.params:
                mean = module.__getattr__("%s_mean" % name)
                var = module.__getattr__("%s_var" % name)

    def export_numpy_params(self):
        mean_list = []
        var_list = []
        for module, name in self.params:
            mean_list.append(module.__getattr__("%s_mean" % name).cpu().numpy().ravel())
            var_list.append(module.__getattr__("%s_var" % name).cpu().numpy().ravel())
        mean = np.concatenate(mean_list)
        var = np.concatenate(var_list)
        return mean, var

    def import_numpy_mean(self, w):
        k = 0
        for module, name in self.params:
            mean = module.__getattr__("%s_mean" % name)
            s = np.prod(mean.shape)
            mean.copy_(mean.new_tensor(w[k : k + s].reshape(mean.shape)))
            k += s

    def import_numpy_cov_mat_sqrt(self, w):
        k = 0
        for (module, name), sq in zip(self.params, w):
            cov_mat_sqrt = module.__getattr__("%s_cov_mat_sqrt" % name)
            cov_mat_sqrt.copy_(cov_mat_sqrt.new_tensor(sq.reshape(cov_mat_sqrt.shape)))

    def estimate_variance(self, loader, criterion, samples=1, tau=5e-4):
        fisher_diag = dict()
        for module, name in self.params:
            var = module.__getattr__("%s_var" % name)
            fisher_diag[(module, name)] = var.new(var.size()).zero_()

        self.sample(scale=0.0, require_grad=True)

        for s in range(samples):
            t_s = time.time()
            for input, target in loader:
                input = input.cuda(non_blocking=True)
                target = target.cuda(non_blocking=True)

                output = self(input)
                distribution = torch.distributions.Categorical(logits=output)
                y = distribution.sample()

                loss = criterion(output, y)
                loss.backward()

                for module, name in self.params:
                    grad = module.__getattr__(name).grad
                    fisher_diag[(module, name)].add_(torch.pow(grad, 2))
            t = time.time() - t_s
            print("%d/%d %.2f sec" % (s + 1, samples, t))

        for module, name in self.params:
            f = fisher_diag[(module, name)] / samples
            var = 1.0 / (f + tau)
            module.__getattr__("%s_var" % name).copy_(var)

    def scale_grid_search(
        self, loader, criterion, logscale_range=torch.arange(-10, 0, 0.5).cuda()
    ):
        all_losses = torch.zeros_like(logscale_range)
        t_s = time.time()
        for i, logscale in enumerate(logscale_range):
            print("forwards pass with ", logscale)
            current_scale = torch.exp(logscale)
            self.sample(scale=current_scale)
            result = eval(loader, self, criterion)
            all_losses[i] = result["loss"]

        min_index = torch.min(all_losses, dim=0)[1]
        scale = torch.exp(logscale_range[min_index]).item()
        t_s_final = time.time() - t_s
        print("estimating scale took %.2f sec" % (t_s_final))
        return scale
//
// Checks if accessing by ptr covers one unsplit block and substitutes the
// struct. Tracks the max offset of ptr until ptr goes to a function. If the
// function is read/write, then checks whether the max offset lies within the
// unsplit block. If it does, then substitutes the struct. Otherwise we cannot
// split the struct.
//
bool Substituter::processPTI(PtrToIntInst &PTI, const TypeToInstrMap &NewInstr,
                             InstsToSubstitute &InstToInst) {

  StructType &STy = *cast<StructType>(
      PTI.getPointerOperand()->getType()->getPointerElementType());

  uint64_t MaxPtrOffset{0};
  if (!processPTIsUses(PTI, MaxPtrOffset))
    return false;

  const ElemMapping &IdxMapping = Graph.getElemMappingFor(STy);
  const SElement &TheFirstNewElem = *IdxMapping.begin()->begin();
  IGC_ASSERT_MESSAGE(TheFirstNewElem.isUnwrapped() || !TheFirstNewElem.getIndex(),
                     "The first element of the original structure has to be "
                     "matched with the first element of the split structure.");

  Type *TheFirstElemTy = TheFirstNewElem.getTy();
  for (auto &&ElemEnum : enumerate(vc::make_flat_range(IdxMapping))) {
    const SElement &Elem = ElemEnum.value();
    if (!verifyElement(STy, Elem, TheFirstElemTy, ElemEnum.index()))
      return false;
    if (!MaxPtrOffset)
      break;
    const uint64_t SizeOfElem =
        vc::getTypeSize(Elem.retrieveElemTy(), &DL).inBytes();
    MaxPtrOffset = SizeOfElem > MaxPtrOffset ? 0 : MaxPtrOffset - SizeOfElem;
  }

  Instruction *ToInsert =
      findProperInstruction(getBaseTy(TheFirstElemTy), NewInstr);

  IRBuilder<> IRB{&PTI};
  Value *NewPTI =
      IRB.CreatePtrToInt(ToInsert, PTI.getType(), PTI.getName() + ".split");

  LLVM_DEBUG(dbgs() << "New Instruction has been created: " << *NewPTI << "\n");
  InstToInst.emplace_back(cast<Instruction>(&PTI), cast<Instruction>(NewPTI));
  return true;
}
import { atom, useRecoilValue, useSetRecoilState } from 'recoil';

import { usePerson } from './person';

export const aktivPeriodeState = atom<string | undefined>({
    key: 'aktivPeriodeState',
    default: undefined,
});

export const useSetAktivPeriode = () => useSetRecoilState(aktivPeriodeState);

export const useMaybeAktivPeriode = (): Tidslinjeperiode | undefined => {
    const person = usePerson();
    const periodeId = useRecoilValue(aktivPeriodeState);

    if (person && periodeId) {
        const { id, beregningId, unique } = decomposedId(periodeId);
        return (
            person.arbeidsgivere
                .flatMap(({ tidslinjeperioder }) => tidslinjeperioder)
                .flatMap((perioder) => perioder)
                .find(
                    (periode) =>
                        periode.id === id && periode.beregningId === beregningId && periode.unique === unique
                ) ?? defaultTidslinjeperiode(person)
        );
    }
    if (person) {
        return defaultTidslinjeperiode(person);
    } else return undefined;
};

export const useAktivPeriode = (): Tidslinjeperiode => {
    const aktivPeriode = useMaybeAktivPeriode();
    if (!aktivPeriode) {
        throw Error('Forventet aktiv periode men fant ingen');
    }
    return aktivPeriode;
};

export const useVedtaksperiode = (vedtaksperiodeId?: string): Vedtaksperiode | undefined =>
    usePerson()
        ?.arbeidsgivere.flatMap((a) => a.vedtaksperioder)
        .find((p) => p.id === vedtaksperiodeId) as Vedtaksperiode | undefined;

export const useOppgavereferanse = (beregningId: string): string | undefined => {
    const person = usePerson();
    const vedtaksperiode = person?.arbeidsgivere
        .flatMap((a) => a.vedtaksperioder)
        .find((p) => p.beregningIder?.includes(beregningId)) as Vedtaksperiode | undefined;
    return vedtaksperiode?.oppgavereferanse;
};

export const harOppgave = (tidslinjeperiode: Tidslinjeperiode) =>
    ['oppgaver', 'revurderes'].includes(tidslinjeperiode.tilstand) && !!tidslinjeperiode.oppgavereferanse;

// Picks the most recent complete period, preferring periods with an open
// task or revurdering.
const defaultTidslinjeperiode = (person: Person): Tidslinjeperiode | undefined => {
    const valgbarePerioder: Tidslinjeperiode[] = person.arbeidsgivere
        .flatMap((arb) => arb.tidslinjeperioder)
        .flatMap((perioder) => perioder)
        .filter((periode) => periode.fullstendig)
        .sort((a, b) => (a.opprettet.isAfter(b.opprettet) ? 1 : -1))
        .sort((a, b) => (a.fom.isBefore(b.fom) ? 1 : -1));
    return (
        valgbarePerioder.find((periode) => ['oppgaver', 'revurderes'].includes(periode.tilstand)) ??
        valgbarePerioder[0]
    );
};

// Period IDs have the form "<id>+<beregningId>+<unique>".
export const decomposedId = (periodeId: string) => {
    const res = periodeId.split('+');
    return {
        id: res[0],
        beregningId: res[1],
        unique: res[2],
    };
};
package io.vertx.ext.json.schema.common;

import io.vertx.ext.json.schema.Schema;

public abstract class BaseSingleSchemaValidator extends BaseMutableStateValidator {

  protected Schema schema;

  public BaseSingleSchemaValidator(MutableStateValidator parent) {
    super(parent);
  }

  @Override
  public boolean calculateIsSync() {
    return schema.isSync();
  }

  void setSchema(Schema schema) {
    this.schema = schema;
    this.initializeIsSync();
  }
}
Verse 1-9 — The prophet must conduct himself as one who expected to see his country ruined very shortly. In the prospect of sad times, he is to abstain from marriage, mourning for the dead, and pleasure. Those who would convince others of the truths of God, must make it appear by their self-denial, that they believe it themselves. Peace, inward and outward, family and public, is wholly the work of God, and from his loving-kindness and mercy. When He takes his peace from any people, distress must follow. There may be times when it is proper to avoid things otherwise our duty; and we should always sit loose to the pleasures and concerns of this life. Verse 10-13 — Here seems to be the language of those who quarrel at the word of God, and instead of humbling and condemning themselves, justify themselves, as though God did them wrong. A plain and full answer is given. They were more obstinate in sin than their fathers, walking every one after the devices of his heart. Since they will not hearken, they shall be hurried away into a far country, a land they know not. If they had God's favour, that would make even the land of their captivity pleasant. Verse 14-21 — The restoration from the Babylonish captivity would be remembered in place of the deliverance from Egypt; it also typified spiritual redemption, and the future deliverance of the church from antichristian oppression. But none of the sins of sinners can be hidden from God, or shall be overlooked by him. He will find out and raise up instruments of his wrath, that shall destroy the Jews, by fraud like fishers, by force like hunters. The prophet, rejoicing at the hope of mercy to come, addressed the Lord as his strength and refuge. The deliverance out of captivity shall be a figure of the great salvation to be wrought by the Messiah. The nations have often known the power of Jehovah in his wrath; but they shall know him as the strength of his people, and their refuge in time of trouble.
package tests import ( "context" "encoding/json" "fmt" "net/http" "net/http/httptest" "testing" "time" "github.com/pborman/uuid" "remoteschool/smarthead/internal/mid" "remoteschool/smarthead/internal/platform/auth" "remoteschool/smarthead/internal/platform/tests" "remoteschool/smarthead/internal/platform/web" "remoteschool/smarthead/internal/platform/web/weberror" "remoteschool/smarthead/internal/user" "remoteschool/smarthead/internal/user_account" "remoteschool/smarthead/internal/user_auth" ) type mockUser struct { *user.User password string } func mockUserCreateRequest() user.UserCreateRequest { return user.UserCreateRequest{ FirstName: "Lee", LastName: "Brown", Email: uuid.NewRandom().String() + "@geek<EMAIL>.<EMAIL>", Password: "<PASSWORD>", PasswordConfirm: "<PASSWORD>", } } // mockUser creates a new user for testing and associates it with the supplied account ID. func newMockUser(accountID string, role user_account.UserAccountRole) mockUser { req := mockUserCreateRequest() u, err := appCtx.UserRepo.Create(tests.Context(), auth.Claims{}, req, time.Now().UTC().AddDate(-1, -1, -1)) if err != nil { panic(err) } _, err = appCtx.UserAccountRepo.Create(tests.Context(), auth.Claims{}, user_account.UserAccountCreateRequest{ UserID: u.ID, AccountID: accountID, Roles: []user_account.UserAccountRole{role}, }, time.Now().UTC().AddDate(-1, -1, -1)) if err != nil { panic(err) } return mockUser{ User: u, password: <PASSWORD>, } } // TestUserCRUDAdmin tests all the user CRUD endpoints using an user with role admin. func TestUserCRUDAdmin(t *testing.T) { defer tests.Recover(t) tr := roleTests[auth.RoleAdmin] // Add claims to the context for the user. ctx := context.WithValue(tests.Context(), auth.Key, tr.Claims) // Test create. var created user.UserResponse { expectedStatus := http.StatusCreated req := mockUserCreateRequest() rt := requestTest{ fmt.Sprintf("Create %d w/role %s", expectedStatus, tr.Role), http.MethodPost, "/v1/users", req, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual user.UserResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } created = actual expectedMap := map[string]interface{}{ "updated_at": web.NewTimeResponse(ctx, actual.UpdatedAt.Value), "id": actual.ID, "email": req.Email, "timezone": actual.Timezone, "created_at": web.NewTimeResponse(ctx, actual.CreatedAt.Value), "first_name": req.FirstName, "last_name": req.LastName, "name": req.FirstName + " " + req.LastName, "gravatar": web.NewGravatarResponse(ctx, actual.Email), } var expected user.UserResponse if err := decodeMapToStruct(expectedMap, &expected); err != nil { t.Logf("\t\tGot error : %+v\nActual results to format expected : \n", err) printResultMap(ctx, w.Body.Bytes()) // used to help format expectedMap t.Fatalf("\t%s\tDecode expected failed.", tests.Failed) } if diff := cmpDiff(t, actual, expected); diff { if len(expectedMap) == 0 { printResultMap(ctx, w.Body.Bytes()) // used to help format expectedMap } t.Fatalf("\t%s\tReceived expected result.", tests.Failed) } t.Logf("\t%s\tReceived expected result.", tests.Success) // Only for user creation do we need to do this. 
_, err := appCtx.UserAccountRepo.Create(tests.Context(), auth.Claims{}, user_account.UserAccountCreateRequest{ UserID: actual.ID, AccountID: tr.Account.ID, Roles: []user_account.UserAccountRole{user_account.UserAccountRole_User}, }, time.Now().UTC().AddDate(-1, -1, -1)) if err != nil { t.Fatalf("\t%s\tLink user to account.", tests.Failed) } } // Test read. { expectedStatus := http.StatusOK rt := requestTest{ fmt.Sprintf("Read %d w/role %s", expectedStatus, tr.Role), http.MethodGet, fmt.Sprintf("/v1/users/%s", created.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual user.UserResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } if diff := cmpDiff(t, actual, created); diff { t.Fatalf("\t%s\tReceived expected result.", tests.Failed) } t.Logf("\t%s\tReceived expected result.", tests.Success) } // Test Read with random ID. { expectedStatus := http.StatusNotFound randID := uuid.NewRandom().String() rt := requestTest{ fmt.Sprintf("Read %d w/role %s using random ID", expectedStatus, tr.Role), http.MethodGet, fmt.Sprintf("/v1/users/%s", randID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: fmt.Sprintf("user %s not found: Entity not found", randID), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test Read with forbidden ID. { expectedStatus := http.StatusNotFound rt := requestTest{ fmt.Sprintf("Read %d w/role %s using forbidden ID", expectedStatus, tr.Role), http.MethodGet, fmt.Sprintf("/v1/users/%s", tr.ForbiddenUser.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: fmt.Sprintf("user %s not found: Entity not found", tr.ForbiddenUser.ID), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test update. 
{ expectedStatus := http.StatusNoContent newName := uuid.NewRandom().String() rt := requestTest{ fmt.Sprintf("Update %d w/role %s", expectedStatus, tr.Role), http.MethodPatch, "/v1/users", user.UserUpdateRequest{ ID: created.ID, FirstName: &newName, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) if len(w.Body.String()) != 0 { if diff := cmpDiff(t, w.Body.Bytes(), nil); diff { t.Fatalf("\t%s\tReceived expected empty.", tests.Failed) } } t.Logf("\t%s\tReceived expected empty.", tests.Success) } // Test update password. { expectedStatus := http.StatusNoContent newPass := uuid.NewRandom().String() rt := requestTest{ fmt.Sprintf("Update password %d w/role %s", expectedStatus, tr.Role), http.MethodPatch, "/v1/users/password", user.UserUpdatePasswordRequest{ ID: created.ID, Password: <PASSWORD>, PasswordConfirm: <PASSWORD>, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) if len(w.Body.String()) != 0 { if diff := cmpDiff(t, w.Body.Bytes(), nil); diff { t.Fatalf("\t%s\tReceived expected empty.", tests.Failed) } } t.Logf("\t%s\tReceived expected empty.", tests.Success) } // Test archive. { expectedStatus := http.StatusNoContent rt := requestTest{ fmt.Sprintf("Archive %d w/role %s", expectedStatus, tr.Role), http.MethodPatch, "/v1/users/archive", user.UserArchiveRequest{ ID: created.ID, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) if len(w.Body.String()) != 0 { if diff := cmpDiff(t, w.Body.Bytes(), nil); diff { t.Fatalf("\t%s\tReceived expected empty.", tests.Failed) } } t.Logf("\t%s\tReceived expected empty.", tests.Success) } // Test delete. { expectedStatus := http.StatusNoContent rt := requestTest{ fmt.Sprintf("Delete %d w/role %s", expectedStatus, tr.Role), http.MethodDelete, fmt.Sprintf("/v1/users/%s", created.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) if len(w.Body.String()) != 0 { if diff := cmpDiff(t, w.Body.Bytes(), nil); diff { t.Fatalf("\t%s\tReceived expected empty.", tests.Failed) } } t.Logf("\t%s\tReceived expected empty.", tests.Success) } // Test switch account. 
{ expectedStatus := http.StatusOK newAccount := newMockSignup().account rt := requestTest{ fmt.Sprintf("Switch account %d w/role %s", expectedStatus, tr.Role), http.MethodPatch, fmt.Sprintf("/v1/users/switch-account/%s", newAccount.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) _, err := appCtx.UserAccountRepo.Create(tests.Context(), auth.Claims{}, user_account.UserAccountCreateRequest{ UserID: tr.User.ID, AccountID: newAccount.ID, Roles: []user_account.UserAccountRole{user_account.UserAccountRole_User}, }, time.Now().UTC().AddDate(-1, -1, -1)) if err != nil { t.Fatalf("\t%s\tAdd user to account failed.", tests.Failed) } w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual map[string]interface{} if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } // This is just for response format validation, will verify account from claims. expected := map[string]interface{}{ "access_token": actual["access_token"], "token_type": actual["token_type"], "expiry": actual["expiry"], "ttl": actual["ttl"], "user_id": tr.User.ID, "account_id": newAccount.ID, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected result.", tests.Failed) } t.Logf("\t%s\tReceived expected result.", tests.Success) newClaims, err := authenticator.ParseClaims(actual["access_token"].(string)) if err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tParse claims failed.", tests.Failed) } else if newClaims.Audience != newAccount.ID { t.Logf("\t\tGot : %+v", newClaims.Audience) t.Logf("\t\tExpected : %+v", newAccount.ID) t.Fatalf("\t%s\tParse claims expected audience to match new account.", tests.Failed) } else if newClaims.Subject != tr.User.ID { t.Logf("\t\tGot : %+v", newClaims.Subject) t.Logf("\t\tExpected : %+v", tr.User.ID) t.Fatalf("\t%s\tParse claims expected Subject to match user.", tests.Failed) } t.Logf("\t%s\tParse claims valid.", tests.Success) } } // TestUserCRUDUser tests all the user CRUD endpoints using an user with role user. func TestUserCRUDUser(t *testing.T) { defer tests.Recover(t) tr := roleTests[auth.RoleUser] // Add claims to the context for the user. ctx := context.WithValue(tests.Context(), auth.Key, tr.Claims) // Test create. { expectedStatus := http.StatusForbidden req := mockUserCreateRequest() rt := requestTest{ fmt.Sprintf("Create %d w/role %s", expectedStatus, tr.Role), http.MethodPost, "/v1/users", req, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := mid.ErrorForbidden(ctx).(*weberror.Error).Response(ctx, false) expected.StackTrace = actual.StackTrace if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Since role doesn't support create, bypass auth to test other endpoints. 
created := newMockUser(tr.Account.ID, user_account.UserAccountRole_User).Response(ctx) // Test read. { expectedStatus := http.StatusOK rt := requestTest{ fmt.Sprintf("Read %d w/role %s", expectedStatus, tr.Role), http.MethodGet, fmt.Sprintf("/v1/users/%s", created.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual *user.UserResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } if diff := cmpDiff(t, actual, created); diff { t.Fatalf("\t%s\tReceived expected result.", tests.Failed) } t.Logf("\t%s\tReceived expected result.", tests.Success) } // Test Read with random ID. { expectedStatus := http.StatusNotFound randID := uuid.NewRandom().String() rt := requestTest{ fmt.Sprintf("Read %d w/role %s using random ID", expectedStatus, tr.Role), http.MethodGet, fmt.Sprintf("/v1/users/%s", randID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: fmt.Sprintf("user %s not found: Entity not found", randID), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test Read with forbidden ID. { expectedStatus := http.StatusNotFound rt := requestTest{ fmt.Sprintf("Read %d w/role %s using forbidden ID", expectedStatus, tr.Role), http.MethodGet, fmt.Sprintf("/v1/users/%s", tr.ForbiddenUser.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: fmt.Sprintf("user %s not found: Entity not found", tr.ForbiddenUser.ID), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test update. 
{ expectedStatus := http.StatusForbidden newName := uuid.NewRandom().String() rt := requestTest{ fmt.Sprintf("Update %d w/role %s", expectedStatus, tr.Role), http.MethodPatch, "/v1/users", user.UserUpdateRequest{ ID: created.ID, FirstName: &newName, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: user.ErrForbidden.Error(), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test update password. { expectedStatus := http.StatusForbidden newPass := uuid.NewRandom().String() rt := requestTest{ fmt.Sprintf("Update password %d w/role %s", expectedStatus, tr.Role), http.MethodPatch, "/v1/users/password", user.UserUpdatePasswordRequest{ ID: created.ID, Password: <PASSWORD>, PasswordConfirm: <PASSWORD>, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: user.ErrForbidden.Error(), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test archive. { expectedStatus := http.StatusForbidden rt := requestTest{ fmt.Sprintf("Archive %d w/role %s", expectedStatus, tr.Role), http.MethodPatch, "/v1/users/archive", user.UserArchiveRequest{ ID: created.ID, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := mid.ErrorForbidden(ctx).(*weberror.Error).Response(ctx, false) expected.StackTrace = actual.StackTrace if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test delete. 
{ expectedStatus := http.StatusForbidden rt := requestTest{ fmt.Sprintf("Delete %d w/role %s", expectedStatus, tr.Role), http.MethodDelete, fmt.Sprintf("/v1/users/%s", created.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := mid.ErrorForbidden(ctx).(*weberror.Error).Response(ctx, false) expected.StackTrace = actual.StackTrace if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test switch account. { expectedStatus := http.StatusOK newAccount := newMockSignup().account rt := requestTest{ fmt.Sprintf("Switch account %d w/role %s", expectedStatus, tr.Role), http.MethodPatch, fmt.Sprintf("/v1/users/switch-account/%s", newAccount.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) _, err := appCtx.UserAccountRepo.Create(tests.Context(), auth.Claims{}, user_account.UserAccountCreateRequest{ UserID: tr.User.ID, AccountID: newAccount.ID, Roles: []user_account.UserAccountRole{user_account.UserAccountRole_User}, }, time.Now().UTC().AddDate(-1, -1, -1)) if err != nil { t.Fatalf("\t%s\tAdd user to account failed.", tests.Failed) } w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual map[string]interface{} if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } // This is just for response format validation, will verify account from claims. expected := map[string]interface{}{ "access_token": actual["access_token"], "token_type": actual["token_type"], "expiry": actual["expiry"], "ttl": actual["ttl"], "user_id": tr.User.ID, "account_id": newAccount.ID, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected result.", tests.Failed) } t.Logf("\t%s\tReceived expected result.", tests.Success) newClaims, err := authenticator.ParseClaims(actual["access_token"].(string)) if err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tParse claims failed.", tests.Failed) } else if newClaims.Audience != newAccount.ID { t.Logf("\t\tGot : %+v", newClaims.Audience) t.Logf("\t\tExpected : %+v", newAccount.ID) t.Fatalf("\t%s\tParse claims expected audience to match new account.", tests.Failed) } else if newClaims.Subject != tr.User.ID { t.Logf("\t\tGot : %+v", newClaims.Subject) t.Logf("\t\tExpected : %+v", tr.User.ID) t.Fatalf("\t%s\tParse claims expected Subject to match user.", tests.Failed) } t.Logf("\t%s\tParse claims valid.", tests.Success) } } // TestUserCreate validates create user endpoint. func TestUserCreate(t *testing.T) { defer tests.Recover(t) tr := roleTests[auth.RoleAdmin] // Add claims to the context for the user. ctx := context.WithValue(tests.Context(), auth.Key, tr.Claims) // Test create with invalid data. 
{ expectedStatus := http.StatusBadRequest req := mockUserCreateRequest() req.Email = "invalid email address.com" rt := requestTest{ fmt.Sprintf("Create %d w/role %s using invalid data", expectedStatus, tr.Role), http.MethodPost, "/v1/users", req, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: "Field validation error", Fields: []weberror.FieldError{ //{Field: "email", Error: "Key: 'UserCreateRequest.email' Error:Field validation for 'email' failed on the 'email' tag"}, { Field: "email", Value: req.Email, Tag: "email", Error: "email must be a valid email address", Display: "email must be a valid email address", }, }, Details: actual.Details, StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } } // TestUserUpdate validates update user endpoint. func TestUserUpdate(t *testing.T) { defer tests.Recover(t) tr := roleTests[auth.RoleAdmin] // Add claims to the context for the user. ctx := context.WithValue(tests.Context(), auth.Key, tr.Claims) // Test update with invalid data. { expectedStatus := http.StatusBadRequest invalidEmail := "invalid email address" rt := requestTest{ fmt.Sprintf("Update %d w/role %s using invalid data", expectedStatus, tr.Role), http.MethodPatch, "/v1/users", user.UserUpdateRequest{ ID: tr.User.ID, Email: &invalidEmail, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: "Field validation error", Fields: []weberror.FieldError{ //{Field: "email", Error: "Key: 'UserUpdateRequest.email' Error:Field validation for 'email' failed on the 'email' tag"}, { Field: "email", Value: invalidEmail, Tag: "email", Error: "email must be a valid email address", Display: "email must be a valid email address", }, }, Details: actual.Details, StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } } // TestUserUpdatePassword validates update user password endpoint. func TestUserUpdatePassword(t *testing.T) { defer tests.Recover(t) tr := roleTests[auth.RoleAdmin] // Add claims to the context for the user. ctx := context.WithValue(tests.Context(), auth.Key, tr.Claims) // Since role doesn't support create, bypass auth to test other endpoints. created := newMockUser(tr.Account.ID, user_account.UserAccountRole_User).Response(ctx) // Test update user password with invalid data. 
{ expectedStatus := http.StatusBadRequest newPass := uuid.NewRandom().String() diffPass := "<PASSWORD>" rt := requestTest{ fmt.Sprintf("Update password %d w/role %s using invalid data", expectedStatus, tr.Role), http.MethodPatch, "/v1/users/password", user.UserUpdatePasswordRequest{ ID: created.ID, Password: <PASSWORD>, PasswordConfirm: <PASSWORD>, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: "Field validation error", Fields: []weberror.FieldError{ //{Field: "password_confirm", Error: "Key: 'UserUpdatePasswordRequest.password_confirm' Error:Field validation for 'password_confirm' failed on the 'eqfield' tag"}, { Field: "password_confirm", Value: diffPass, Tag: "eqfield", Error: "password_confirm must be equal to Password", Display: "password_confirm must be equal to Password", }, }, Details: actual.Details, StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } } // TestUserArchive validates archive user endpoint. func TestUserArchive(t *testing.T) { defer tests.Recover(t) tr := roleTests[auth.RoleAdmin] // Add claims to the context for the user. ctx := context.WithValue(tests.Context(), auth.Key, tr.Claims) // Test archive user with invalid data. { expectedStatus := http.StatusBadRequest invalidId := "a" rt := requestTest{ fmt.Sprintf("Archive %d w/role %s using invalid data", expectedStatus, tr.Role), http.MethodPatch, "/v1/users/archive", user.UserArchiveRequest{ ID: invalidId, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: "Field validation error", Fields: []weberror.FieldError{ //{Field: "id", Error: "Key: 'UserArchiveRequest.id' Error:Field validation for 'id' failed on the 'uuid' tag"}, { Field: "id", Value: invalidId, Tag: "uuid", Error: "id must be a valid UUID", Display: "id must be a valid UUID", }, }, Details: actual.Details, StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test archive user with forbidden ID. 
{ expectedStatus := http.StatusForbidden rt := requestTest{ fmt.Sprintf("Archive %d w/role %s using forbidden ID", expectedStatus, tr.Role), http.MethodPatch, "/v1/users/archive", user.UserArchiveRequest{ ID: tr.ForbiddenUser.ID, }, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: user.ErrForbidden.Error(), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } } // TestUserDelete validates delete user endpoint. func TestUserDelete(t *testing.T) { defer tests.Recover(t) tr := roleTests[auth.RoleAdmin] // Add claims to the context for the user. ctx := context.WithValue(tests.Context(), auth.Key, tr.Claims) // Test delete user with invalid data. { expectedStatus := http.StatusBadRequest invalidId := "345345" rt := requestTest{ fmt.Sprintf("Delete %d w/role %s using invalid data", expectedStatus, tr.Role), http.MethodDelete, fmt.Sprintf("/v1/users/%s", invalidId), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: "Field validation error", Fields: []weberror.FieldError{ //{Field: "id", Error: "Key: 'id' Error:Field validation for 'id' failed on the 'uuid' tag"}, { Field: "id", Value: invalidId, Tag: "uuid", Error: "id must be a valid UUID", Display: "id must be a valid UUID", }, }, Details: actual.Details, StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test delete user with forbidden ID. 
{ expectedStatus := http.StatusForbidden rt := requestTest{ fmt.Sprintf("Delete %d w/role %s using forbidden ID", expectedStatus, tr.Role), http.MethodDelete, fmt.Sprintf("/v1/users/%s", tr.ForbiddenUser.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: user.ErrForbidden.Error(), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } } // TestUserSwitchAccount validates user switch account endpoint. func TestUserSwitchAccount(t *testing.T) { defer tests.Recover(t) tr := roleTests[auth.RoleAdmin] // Add claims to the context for the user. ctx := context.WithValue(tests.Context(), auth.Key, tr.Claims) // Test user switch account with invalid data. { expectedStatus := http.StatusBadRequest invalidAccountId := "sf" rt := requestTest{ fmt.Sprintf("Switch account %d w/role %s using invalid data", expectedStatus, tr.Role), http.MethodPatch, "/v1/users/switch-account/" + invalidAccountId, nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: "Field validation error", Fields: []weberror.FieldError{ { Field: "account_id", Value: invalidAccountId, Tag: "uuid", Error: "account_id must be a valid UUID", Display: "account_id must be a valid UUID", }, }, Details: actual.Details, StackTrace: actual.StackTrace, } if diff := cmpDiff(t, expected, actual); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test user switch account with forbidden ID. 
{ expectedStatus := http.StatusUnauthorized rt := requestTest{ fmt.Sprintf("Switch account %d w/role %s using forbidden ID", expectedStatus, tr.Role), http.MethodPatch, fmt.Sprintf("/v1/users/switch-account/%s", tr.ForbiddenAccount.ID), nil, tr.Token, tr.Claims, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, ctx) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: user_auth.ErrAuthenticationFailure.Error(), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } } // TestUserToken validates user token endpoint. func TestUserToken(t *testing.T) { defer tests.Recover(t) // Test user token with empty credentials. { expectedStatus := http.StatusBadRequest rt := requestTest{ fmt.Sprintf("Token %d using empty request", expectedStatus), http.MethodPost, "/v1/oauth/token", nil, user_auth.Token{}, auth.Claims{}, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) w, ok := executeRequestTest(t, rt, tests.Context()) if !ok { t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: "Field validation error", Fields: []weberror.FieldError{ { Field: "username", Value: "", Tag: "required", Error: "username is a required field", Display: "username is a required field", }, { Field: "password", Value: "", Tag: "required", Error: "password is a required field", Display: "password is a required field", }, }, Details: actual.Details, StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test user token with invalid email. 
{ expectedStatus := http.StatusBadRequest rt := requestTest{ fmt.Sprintf("Token %d using invalid email", expectedStatus), http.MethodPost, "/v1/oauth/token", nil, user_auth.Token{}, auth.Claims{}, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) r := httptest.NewRequest(rt.method, rt.url, nil) invalidEmail := "invalid email.com" r.SetBasicAuth(invalidEmail, "<PASSWORD>") w := httptest.NewRecorder() r.Header.Set("Content-Type", web.MIMEApplicationJSONCharsetUTF8) a.ServeHTTP(w, r) if w.Code != expectedStatus { t.Logf("\t\tBody : %s\n", w.Body.String()) t.Logf("\t\tShould receive a status code of %d for the response : %v", rt.statusCode, w.Code) t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: "Field validation error", Fields: []weberror.FieldError{ { Field: "email", Value: invalidEmail, Tag: "email", Error: "email must be a valid email address", Display: "email must be a valid email address", }, }, Details: actual.Details, StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } // Test user token with invalid password. { for _, tr := range roleTests { expectedStatus := http.StatusUnauthorized rt := requestTest{ fmt.Sprintf("Token %d w/role %s using invalid password", expectedStatus, tr.Role), http.MethodPost, "/v1/oauth/token", nil, user_auth.Token{}, auth.Claims{}, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) r := httptest.NewRequest(rt.method, rt.url, nil) r.SetBasicAuth(tr.User.Email, "<PASSWORD>") w := httptest.NewRecorder() r.Header.Set("Content-Type", web.MIMEApplicationJSONCharsetUTF8) a.ServeHTTP(w, r) if w.Code != expectedStatus { t.Logf("\t\tBody : %s\n", w.Body.String()) t.Logf("\t\tShould receive a status code of %d for the response : %v", rt.statusCode, w.Code) t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual weberror.ErrorResponse if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } expected := weberror.ErrorResponse{ StatusCode: expectedStatus, Error: http.StatusText(expectedStatus), Details: user_auth.ErrAuthenticationFailure.Error(), StackTrace: actual.StackTrace, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected error.", tests.Failed) } t.Logf("\t%s\tReceived expected error.", tests.Success) } } // Test user token with valid email and password. 
{ for _, tr := range roleTests { expectedStatus := http.StatusOK rt := requestTest{ fmt.Sprintf("Token %d w/role %s using valid credentials", expectedStatus, tr.Role), http.MethodPost, "/v1/oauth/token?account_id=" + tr.Account.ID, nil, user_auth.Token{}, auth.Claims{}, expectedStatus, nil, } t.Logf("\tTest: %s - %s %s", rt.name, rt.method, rt.url) r := httptest.NewRequest(rt.method, rt.url, nil) r.SetBasicAuth(tr.User.Email, tr.User.password) w := httptest.NewRecorder() r.Header.Set("Content-Type", web.MIMEApplicationJSONCharsetUTF8) a.ServeHTTP(w, r) if w.Code != expectedStatus { t.Logf("\t\tBody : %s\n", w.Body.String()) t.Logf("\t\tShould receive a status code of %d for the response : %v", rt.statusCode, w.Code) t.Fatalf("\t%s\tExecute request failed.", tests.Failed) } t.Logf("\t%s\tReceived valid status code of %d.", tests.Success, w.Code) var actual map[string]interface{} if err := json.Unmarshal(w.Body.Bytes(), &actual); err != nil { t.Logf("\t\tGot error : %+v", err) t.Fatalf("\t%s\tDecode response body failed.", tests.Failed) } // This is just for response format validation, will verify account from claims. expected := map[string]interface{}{ "access_token": actual["access_token"], "token_type": actual["token_type"], "expiry": actual["expiry"], "ttl": actual["ttl"], "user_id": tr.User.ID, "account_id": tr.Account.ID, } if diff := cmpDiff(t, actual, expected); diff { t.Fatalf("\t%s\tReceived expected result.", tests.Failed) } t.Logf("\t%s\tReceived expected result.", tests.Success) } } }
Practical seed-recovery for the PCG Pseudo-Random Number Generator

The Permuted Congruential Generators (PCG) are popular conventional (non-cryptographic) pseudo-random generators designed in 2014. They are used by default in the NumPy scientific computing package. Even though they are not of cryptographic strength, their designer stated that predicting their output should nevertheless be "challenging". In this article, we present a practical algorithm that recovers all the hidden parameters and reconstructs the successive internal states of the generator. This enables us to predict the next random numbers and to output the seeds of the generator. We have successfully executed the reconstruction algorithm using 512 bytes of challenge input; in the worst case, the process takes 20,000 CPU hours. This reconstruction algorithm makes use of cryptanalytic techniques, both symmetric and lattice-based. In particular, the most computationally expensive part is a guess-and-determine procedure that solves about 2^52 instances of the Closest Vector Problem on a very small lattice.
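For context, a minimal sketch of one common PCG variant (PCG-XSH-RR with 64-bit state and 32-bit output) shows what such an attack must invert: a linear congruential state update whose output is masked by a state-dependent permutation. This is an illustrative reimplementation, not the authors' attack code, and the seeding shown here is simplified relative to the reference implementation:

MASK64 = (1 << 64) - 1

class PCG32:
    """Sketch of PCG-XSH-RR 64/32: LCG state plus permuted output."""

    MULT = 6364136223846793005

    def __init__(self, state, inc):
        # The state and the (odd) increment are the hidden quantities a
        # seed-recovery attack reconstructs.
        self.state = state & MASK64
        self.inc = (inc | 1) & MASK64

    def next32(self):
        old = self.state
        # Linear congruential update (easy to invert on its own).
        self.state = (old * self.MULT + self.inc) & MASK64
        # Output permutation: xorshift-high, then a data-dependent rotation;
        # this is what hides the internal state from direct observation.
        xorshifted = ((old >> 18) ^ old) >> 27 & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

# usage: rng = PCG32(some_seed, some_increment); x = rng.next32()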
/**
 * @file disp7seg.cpp
 * @author <NAME> (<EMAIL>)
 * @brief 7-segment display driver
 * @version 0.1
 * @date 2020-01-09
 *
 * @copyright Copyright (c) 2020
 *
 */
#include "disp7seg.h"

unsigned char DDRA, PORTA;

/**
 * @brief Initializes the pins used to control the display
 */
void vDisp7SegInit() {
    // Configure the control pins of the 7-segment displays as digital outputs
    pinMode(DISPLAY_1, OUTPUT);
    pinMode(DISPLAY_2, OUTPUT);
    pinMode(DISPLAY_3, OUTPUT);
    pinMode(DISPLAY_4, OUTPUT);

    DDRA = 0xFF;  // All PORTA pins as outputs
    PORTA = 0xFF; // Initialize all PORTA pins high
}

/**
 * @brief Looks up the code corresponding to the digit to be sent to the display
 * @param num Digit to be sent to the display
 * @return unsigned char Code corresponding to the desired digit, in hexadecimal
 */
unsigned char ucDisplay(char num) {
    // The segmentos array holds, in hexadecimal, the codes of the digits
    // 0 to 9 for a 7-segment display
    unsigned char segmentos[] = {
        0x3F, // 0
        0x06, // 1
        0x5B, // 2
        0x4F, // 3
        0x66, // 4
        0x6D, // 5
        0x7D, // 6
        0x07, // 7
        0x7F, // 8
        0x67  // 9
    };
    // For common-anode displays, the value is returned negated
#ifdef CATODO_COMUM
    return segmentos[num];
#else
    return ~segmentos[num];
#endif
}

/**
 * @brief Computes the digit to be shown on each display
 *
 * @param valor Integer value to be shown
 * @param disp Which of the 4 displays
 * @return unsigned char Digit to be sent to the display
 */
unsigned char ucObtemValorDisplay(int16_t valor, char disp) {
    unsigned char digito;
    switch (disp) {
        case 1:
            digito = valor / 1000;
            break;
        case 2:
            digito = (valor % 1000) / 100;
            break;
        case 3:
            digito = (valor % 100) / 10;
            break;
        case 4:
            digito = valor % 10;
            break;
    }
    return digito;
}

/**
 * @brief Multiplexes the displays
 *
 * @param valor Value to be shown
 * @param disp Which display
 */
void vEscreveNoDisplay(unsigned char valor, char disp) {
    // State machine for updating the display
    switch (disp) {
        case ESCREVE_DISPLAY_1: {
            digitalWrite(DISPLAY_1, LIGA);
            digitalWrite(DISPLAY_2, DESLIGA);
            digitalWrite(DISPLAY_3, DESLIGA);
            digitalWrite(DISPLAY_4, DESLIGA);
            PORTA = ucDisplay(valor);
            break;
        }
        case ESCREVE_DISPLAY_2: {
            digitalWrite(DISPLAY_1, DESLIGA);
            digitalWrite(DISPLAY_2, LIGA);
            digitalWrite(DISPLAY_3, DESLIGA);
            digitalWrite(DISPLAY_4, DESLIGA);
            PORTA = ucDisplay(valor);
            break;
        }
        case ESCREVE_DISPLAY_3: {
            digitalWrite(DISPLAY_1, DESLIGA);
            digitalWrite(DISPLAY_2, DESLIGA);
            digitalWrite(DISPLAY_3, LIGA);
            digitalWrite(DISPLAY_4, DESLIGA);
            PORTA = ucDisplay(valor);
            break;
        }
        case ESCREVE_DISPLAY_4: {
            digitalWrite(DISPLAY_1, DESLIGA);
            digitalWrite(DISPLAY_2, DESLIGA);
            digitalWrite(DISPLAY_3, DESLIGA);
            digitalWrite(DISPLAY_4, LIGA);
            PORTA = ucDisplay(valor);
            break;
        }
    }
} // end vEscreveNoDisplay
Deep learning with convolutional neural networks for EEG decoding and visualization

Abstract Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets' decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc.

A.2 FBCSP implementation

As in many previous studies, we used regularized linear discriminant analysis (RLDA) as the classifier, with shrinkage regularization (Ledoit and Wolf, 2004). To decode multiple classes, we used one-vs-one majority weighted voting: we trained an RLDA classifier for each pair of classes, summed the classifier outputs (scaled to be in the same range) across classes, and picked the class with the highest sum. FBCSP is typically used with feature selection, since a few spatial filters from a few frequency bands often suffice to reach good accuracies, and using many or even all spatial filters often leads to overfitting. We use a classical measure for preselecting spatial filters, the ratio of the corresponding power features for both classes extracted by each spatial filter. Additionally, we performed a feature selection step on the final filter bank features by selecting features using an inner cross-validation on the training set; see the published code for details. In the present study, we designed two filter banks adapted to the two datasets to capture the most discriminative motor-related band power information. In preliminary experiments on the training set, overlapping frequency bands led to higher accuracies, as also proposed by Sun et al. As the bandwidth of physiological EEG power modulations typically increases in higher frequency ranges (Buzsáki and Draguhn, 2004), we used frequency bands of 6 Hz width and 3 Hz overlap at frequencies up to 13 Hz, and bands of 8 Hz width and 4 Hz overlap in the range above 10 Hz.
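A short Python sketch of this band layout; the widths, overlaps and range boundaries are as stated above, while the 4 Hz start frequency is an assumption for illustration:

def filterbank_bands(f_max, f_start=4):
    """Band edges for a filter bank like the one described above.

    6 Hz wide bands with 3 Hz overlap (step 3 Hz) up to 13 Hz, then
    8 Hz wide bands with 4 Hz overlap (step 4 Hz) up to f_max
    (38 Hz or 122 Hz, depending on the dataset).
    """
    bands = []
    f = f_start
    while f + 6 <= 13:          # low range: width 6 Hz, step 3 Hz
        bands.append((f, f + 6))
        f += 3
    f = 10
    while f + 8 <= f_max:       # high range: width 8 Hz, step 4 Hz
        bands.append((f, f + 8))
        f += 4
    return bands

# e.g. filterbank_bands(38) ->
# [(4, 10), (7, 13), (10, 18), (14, 22), (18, 26), (22, 30), (26, 34), (30, 38)]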
Frequencies above 38 Hz only improved accuracies on one of our datasets, the so-called High-Gamma Dataset (see Section 2.7, where we also describe the likely reason for this difference, namely that the recording procedure for the High-Gamma Dataset, but not for the BCI competition datasets, was specifically optimized for the high frequency range). Hence the upper limit of the used frequencies was set at 38 Hz for the BCI competition datasets, while the upper limit for the High-Gamma Dataset was set to 122 Hz, close to the Nyquist frequency, thus allowing FBCSP to also use information from the gamma band. As a sanity check, we compared the accuracies of our FBCSP implementation to those published in the literature for the same BCI competition IV dataset 2a, showing very similar performance: 67.59% for our implementation vs 67.01% for their implementation on average across subjects (p>0.7, Wilcoxon signed-rank test; see Result 1 for more detailed results). This underlines that our FBCSP implementation, including our feature selection and filter bank design, was indeed a suitable baseline for the evaluation of our ConvNet decoding accuracies.

A.3 Residual network architecture

In total, the ResNet has 31 convolutional layers, a depth at which ConvNets without residual blocks started to show convergence problems in the original ResNet paper. In layers where the number of channels is increased, we padded the incoming feature map with zeros to match the new channel dimensionality for the shortcut, as in option A of the original paper.

Table S2: Residual network architecture hyperparameters (columns: Layer/Block, Number of Kernels, Kernel Size, Output Size). The table lists the number of kernels, kernel size and output size for all subparts of the network; output size is always time x height x channels and is only shown where it changes from the previous block. Note that channels here refers to input channels of a network layer, not to EEG channels; EEG channels are in the height dimension. The second convolution and all residual blocks used ELU nonlinearities. Note that in the end we had seven outputs, i.e., predictions for the four classes, in the time dimension (7x1x4 final output size). In practice, when using cropped training as explained in Section 2.5.4, we even had 424 predictions and used the mean of these to predict the trial.

A.4 Optimization and early stopping

Adam is a variant of stochastic gradient descent designed to work well with high-dimensional parameters, which makes it suitable for optimizing the large number of parameters of a ConvNet (Kingma and Ba, 2014). The early stopping strategy that we use throughout this study was developed in the computer vision community.

A.5.1 EEG spectral power topographies

To visualize the class-specific EEG spectral power modulations, we computed band-specific envelope-class correlations in the alpha, beta and gamma bands for all classes of the High-Gamma Dataset. The group-averaged topographies of these maps could be readily compared to our input-feature unit-output network correlation maps since, similar to the power-class correlation map described in Section 2.6.2, we computed correlations of the moving average of the squared envelope with the actual class labels, using the receptive field size of the final layer as the moving average window size. Since this is a ConvNet-independent visualization, we did not subtract any values of an untrained ConvNet. We show the resulting scalp maps for the four classes and did not average over them.
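A minimal PyTorch sketch of the option-A (zero-padding) shortcut mentioned above, for illustration only; the function name and the assumed tensor layout are not taken from the paper's code:

import torch
import torch.nn.functional as F

def shortcut_option_a(x: torch.Tensor, out_channels: int) -> torch.Tensor:
    """Identity shortcut that zero-pads the channel dimension (option A).

    x is assumed to have shape (batch, channels, time, height); when a
    residual block widens the channel dimension, the shortcut is padded
    with zeros so it can be added to the block's output.
    """
    extra = out_channels - x.size(1)
    # F.pad pads the last dimensions first: (h_left, h_right, t_left,
    # t_right, c_before, c_after) for a 4-D tensor.
    return F.pad(x, (0, 0, 0, 0, 0, extra))

# usage sketch: y = residual_branch(x) + shortcut_option_a(x, y.size(1))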
Note that these computations were only used for the power topographies shown in Figure 14 and did not enter the decoding analyses described in the preceding sections.

A.6 Dataset details

The BCI competition IV dataset 2a is a 22-electrode EEG motor-imagery dataset with 9 subjects and 2 sessions, each with 288 four-second trials of imagined movements per subject (movements of the left hand, the right hand, the feet, and the tongue). The training set consists of the 288 trials of the first session, the test set of the 288 trials of the second session.

The BCI competition IV dataset 2b is a 3-electrode EEG motor-imagery dataset with 9 subjects and 5 sessions of imagined movements of the left or the right hand; the last 3 sessions included online feedback. The training set consists of the approx. 400 trials of the first 3 sessions (408.9±13.7, mean±std), the test set of the approx. 320 trials (315.6±12.6, mean±std) of the last two sessions.

Our "High-Gamma Dataset" is a 128-electrode dataset (of which we later used only 44 sensors covering the motor cortex; see Section 2.7.1), obtained from 14 healthy subjects (6 female, 2 left-handed, age 27.2±3.6, mean±std) with roughly 1000 (963.1±150.9, mean±std) four-second trials of executed movements divided into 13 runs per subject. The four classes were movements of the left hand, movements of the right hand, movements of both feet, and rest (no movement, but the same type of visual cue as for the other classes). The training set consists of the approx. 880 trials of all runs except the last two, the test set of the approx. 160 trials of the last 2 runs.

This dataset was acquired in an EEG lab optimized for the non-invasive detection of high-frequency, movement-related EEG components. Such high-frequency components, in the range of approx. 60 to above 100 Hz, are typically increased during movement execution and may contain useful movement-related information. Among the measures taken in this lab was full optical decoupling: all devices are battery powered and communicate via optic fibers. Subjects sat in a comfortable armchair in the dimly lit Faraday cabin. The contact impedance from electrodes to skin was typically reduced to below 5 kOhm using electrolyte gel (SUPER-VISC, EASYCAP GmbH, Herrsching, GER) and blunt cannulas. Visual cues were presented on a monitor outside the cabin, visible through the shielded window. The distance between the display and the subjects' eyes was approx. 1 m. A fixation point was attached at the center of the screen. The subjects were instructed to relax, fixate the fixation point, and keep as still as possible during the motor execution task. Blinking and swallowing were restricted to the inter-trial intervals. The electromagnetic shielding, combined with the comfortable armchair, the dimly lit Faraday cabin, and the relatively long 3-4 second inter-trial intervals (see below), was used to minimize artifacts produced by the subjects during the trials.

The tasks were as follows. Depending on the direction of a gray arrow shown on a black background, the subjects had to repetitively clench their toes (downward arrow), perform sequential finger-tapping of their left (leftward arrow) or right (rightward arrow) hand, or relax (upward arrow). The movements were selected to require little proximal muscular activity while still being complex enough to keep the subjects involved. Within the 4-s trials, the subjects performed the repetitive movements at their own pace, which had to be maintained as long as the arrow was showing.
Per run, 80 arrows were displayed for 4 s each, with a randomized inter-trial interval of 3 to 4 s. The order of presentation was pseudo-randomized, with all four arrows being shown every four trials. Ideally, 13 runs were performed to collect 260 trials of each movement and rest. The stimuli were presented and the data recorded with BCI2000. The experiment was approved by the ethical committee of the University of Freiburg.

The Mixed Imagery Dataset (MID) was obtained from 4 healthy subjects (3 female, all right-handed, age 26.75±5.9, mean±std) with a varying number of trials (S1: 675, S2: 2172, S3: 698, S4: 464) of imagined movements (right hand and feet), mental rotation, and mental word generation. All details were the same as for the High-Gamma Dataset, except that a 64-electrode subset was used for recording, that recordings were not performed in the electromagnetically shielded cabin (thus possibly better approximating the conditions of real-world BCI usage), and that trials varied in duration between 1 and 7 seconds. The dataset was analyzed by cutting out time windows of 2 seconds with 1.5-second overlap from all trials longer than 2 seconds (S1: 6074 windows, S2: 21339, S3: 6197, S4: 4220), and both methods were evaluated using the accuracy of the predictions for all the 2-second windows of the last two runs of roughly 130 trials (S1: 129, S2: 160, S3: 124, S4: 123).

A.7 EEG preprocessing

We resampled the High-Gamma Dataset to 250 Hz, i.e., the same sampling rate as the BCI competition datasets, to be able to use the same ConvNet hyperparameter settings for both datasets. To ensure that the ConvNets only had access to the same frequency range as the CSPs, we low-pass filtered the BCI competition datasets to below 38 Hz. In the case of the 4-f_end Hz analysis, we high-pass filtered the signal as described in Section 2.7.1 (for the BCI competition datasets, we band-pass filtered to 4-38 Hz, so the previous low-pass filtering step was merged with the high-pass filtering step).

Afterwards, for both datasets, for the ConvNets, we performed electrode-wise exponential moving standardization with a decay factor of 0.999: we computed exponential moving means and variances for each channel and used these to standardize the continuous data. Formally,

x'_t = (x_t - m_t) / sqrt(v_t), with m_t = 0.999 m_{t-1} + 0.001 x_t and v_t = 0.999 v_{t-1} + 0.001 (x_t - m_t)^2,

where x'_t and x_t are the standardized and the original signal for one electrode at time t, respectively, and m_t and v_t are the exponential moving mean and variance. As starting values for these recursive formulas, we set the first 1000 mean values m_t and the first 1000 variance values v_t to the mean and the variance of the first 1000 samples, which were always completely inside the training set (so we never used future test data in our preprocessing). Some form of standardization is a commonly used procedure for ConvNets; exponential moving standardization has the advantage that it is also applicable in an online BCI. For FBCSP, this standardization always worsened accuracies in preliminary experiments, so we did not use it there. We also did not use the standardization for our visualizations, to ensure that it does not make the visualizations harder to interpret.

Overall, this minimal preprocessing without any manual feature extraction ensured that our end-to-end pipeline could in principle be applied to a large number of brain-signal decoding tasks. We also only minimally cleaned the datasets, removing extreme high-amplitude recording artifacts: our cleaning method only removed trials in which at least one channel had a value outside ±800 µV.
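For illustration, the standardization just described can be written in a few lines of numpy. This is a plain re-implementation sketch, not the published braindecode code; the small epsilon floor on the standard deviation is an addition of this sketch to avoid division by zero and is not part of the formulas above.

import numpy as np

def exp_moving_standardize(data, factor_new=1e-3, init_n=1000, eps=1e-4):
    # data: (n_channels, n_samples) continuous signal; returns a standardized copy
    out = np.empty_like(data, dtype=float)
    mean = data[:, :init_n].mean(axis=1, keepdims=True)
    var = data[:, :init_n].var(axis=1, keepdims=True)
    # the first init_n values use the fixed initial statistics, as in the text
    out[:, :init_n] = (data[:, :init_n] - mean) / np.maximum(np.sqrt(var), eps)
    m, v = mean[:, 0], var[:, 0]
    for t in range(init_n, data.shape[1]):
        x = data[:, t]
        m = (1 - factor_new) * m + factor_new * x           # moving mean
        v = (1 - factor_new) * v + factor_new * (x - m) ** 2  # moving variance
        out[:, t] = (x - m) / np.maximum(np.sqrt(v), eps)
    return out

# e.g. on random placeholder data (44 channels, 20 s at 250 Hz)
signal = np.random.default_rng(0).standard_normal((44, 5000))
standardized = exp_moving_standardize(signal)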
We kept trials with lower-amplitude artifacts, as we assumed these trials might still contain useful brain-signal information. As described in Sections 2.6 and 3.5, we used visualizations of the features learned by the ConvNets to verify that they learned to classify brain signals and not artifacts. Furthermore, for the High-Gamma Dataset, we used only those sensors covering the motor cortex: all central electrodes, except the Cz electrode, which served as the recording reference. Interestingly, using all electrodes led to worse accuracies for both the ConvNets and FBCSP, which may be a useful insight for the design of future movement-related decoding/BCI studies. Any further data restriction (trial- or channel-based cleaning) never led to accuracy increases for either of the two methods when averaged across all subjects. For the visualizations, we used all electrodes and common average re-referencing to investigate spatial distributions over the entire scalp.

A.8 Software implementation and hardware

We performed the ConvNet experiments on GeForce GTX Titan Black GPUs with 6 GB memory. The machines had Intel(R) Xeon(R) E5-2650 v2 CPUs @ 2.60 GHz with 32 cores (which were never fully used, as most computations were performed on the GPU) and 128 GB RAM. FBCSP was computed on an Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60 GHz with 16 cores and 64 GB RAM. We implemented our ConvNets using the Lasagne framework; the preprocessing of the data and FBCSP were implemented with the Wyrm library. The code used in this study is available under https://github.com/robintibor/braindecode/.
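As a side note on the channel handling described above, both the common average re-referencing used for the visualizations and the restriction to motor-cortex sensors are simple array operations. The sketch below is illustrative only; the channel-name matching is a made-up placeholder, not the montage handling of the published code.

import numpy as np

def common_average_reference(data):
    # data: (n_channels, n_samples); subtract the instantaneous mean over channels
    return data - data.mean(axis=0, keepdims=True)

def select_motor_channels(data, ch_names):
    # keep central electrodes (names starting with C or FC here, a rough
    # placeholder for the study's 44-sensor motor selection), dropping the
    # Cz recording reference
    keep = [i for i, n in enumerate(ch_names)
            if (n.startswith("C") or n.startswith("FC")) and n != "Cz"]
    return data[keep], [ch_names[i] for i in keep]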
# -*- coding: utf-8 -*-
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
# vi: set ft=python sts=4 ts=4 sw=4 et:

import logging
from itertools import islice
from pathlib import Path
from shutil import copyfile

import numpy as np
import networkx as nx
import nipype.pipeline.engine as pe

from ..interface import LoadResult
from ..utils import b32digest
from ..io import DictListFile, cacheobj, uncacheobj
from ..resource import get as getresource
from .constants import constants

max_chunk_size = 50  # subjects


class DontRunRunner:
    """Nipype plugin stand-in that builds the execution graph without running it."""

    plugin_args = dict()

    def run(self, *args, **kwargs):
        pass


def init_execgraph(workdir, workflow, n_chunks=None, subject_chunks=None):
    """Create (or load from cache) the execution graph for a workflow and
    split it into per-subject chunks plus one group/model-level graph."""
    logger = logging.getLogger("halfpipe")

    uuid = workflow.uuid
    uuidstr = str(uuid)[:8]

    # create or load execgraph
    execgraph = uncacheobj(workdir, "execgraph", uuid)
    if execgraph is None:
        logger.info(f"Initializing new execgraph {uuidstr}")
        execgraph = workflow.run(plugin=DontRunRunner())
        execgraph.uuid = uuid
        logger.info(f"Finished execgraph {uuidstr}")
        cacheobj(workdir, "execgraph", execgraph, uuid=uuid)

    # init reports
    reports_directory = Path(workdir) / "reports"
    reports_directory.mkdir(parents=True, exist_ok=True)

    indexhtml_path = reports_directory / "index.html"
    copyfile(getresource("index.html"), indexhtml_path)

    for ftype in ["imgs", "vals", "preproc", "error"]:
        report_fname = reports_directory / f"report{ftype}.js"
        with DictListFile.cached(report_fname) as dlf:
            dlf.is_dirty = True  # force write

    # split workflow: group nodes by the subject they belong to
    subjectworkflows = dict()
    for node in execgraph.nodes():
        subjectname = None
        hierarchy = node._hierarchy.split(".")
        if hierarchy[1] in ["fmriprep_wf", "reports_wf", "settings_wf", "features_wf"]:
            subjectname = hierarchy[2]
        if subjectname is not None:
            if subjectname not in subjectworkflows:
                subjectworkflows[subjectname] = set()
            subjectworkflows[subjectname].add(node)

    if n_chunks is None:
        n_chunks = -(-len(subjectworkflows) // max_chunk_size)  # ceiling division

    if subject_chunks:
        n_chunks = len(subjectworkflows)

    typestr = f"execgraph.{n_chunks:d}_chunks"
    execgraphs = uncacheobj(workdir, typestr, uuid, typedisplaystr="execgraph split")
    if execgraphs is not None:
        return execgraphs

    logger.info(f"Initializing execgraph split with {n_chunks} chunks")
    execgraphs = []
    chunks = np.array_split(np.arange(len(subjectworkflows)), n_chunks)
    partitioniter = iter(subjectworkflows.values())
    for chunk in chunks:
        nodes = set.union(
            *islice(partitioniter, len(chunk))
        )  # take len(chunk) subjects and create union
        execgraphs.append(execgraph.subgraph(nodes).copy())

    # make safe load: replace cross-boundary inputs of model-level nodes with
    # LoadResult nodes so subject-level results can be loaded from disk
    subjectlevelnodes = set.union(*subjectworkflows.values())
    modelnodes = set(execgraph.nodes()) - subjectlevelnodes
    modeldir = Path(workdir) / constants.workflowdir / "models_wf"
    modeldir.mkdir(parents=True, exist_ok=True)

    newnodes = dict()
    for (v, u, c) in nx.edge_boundary(execgraph.reverse(), modelnodes, data=True):
        u.keep = True  # don't allow results to be deleted
        newu = newnodes.get(u.fullname)
        if newu is None:
            udigest = b32digest(u.fullname)[:4]
            newu = pe.Node(LoadResult(u), name=f"load_result_{udigest}", base_dir=modeldir)
            newu.config = u.config
            newnodes[u.fullname] = newu
        execgraph.add_edge(newu, v, attr_dict=c)
        newuresultfile = Path(newu.output_dir()) / f"result_{newu.name}.pklz"
        for outattr, inattr in c["connect"]:
            newu.needed_outputs = [*newu.needed_outputs, outattr]
            v.input_source[inattr] = (newuresultfile, outattr)

    execgraph.remove_nodes_from(subjectlevelnodes)
    execgraphs.append(execgraph)
    assert len(execgraphs) == n_chunks + 1

    for execgraph in execgraphs:
        execgraph.uuid = uuid

    logger.info("Finished execgraph split")
    cacheobj(workdir, typestr, execgraphs, uuid=uuid)
    return execgraphs
package org.openstreetmap.atlas.checks.validation.linear.edges;

import java.util.HashMap;
import java.util.Map;

import org.junit.Assert;
import org.junit.Test;
import org.openstreetmap.atlas.checks.configuration.ConfigurationResolver;
import org.openstreetmap.atlas.checks.flag.CheckFlag;
import org.openstreetmap.atlas.geography.Location;
import org.openstreetmap.atlas.geography.Segment;
import org.openstreetmap.atlas.geography.atlas.Atlas;
import org.openstreetmap.atlas.geography.atlas.packed.PackedAtlasBuilder;
import org.openstreetmap.atlas.utilities.collections.Iterables;
import org.openstreetmap.atlas.utilities.random.RandomTagsSupplier;

/**
 * @author cuthbertm
 */
public class RoadLinkCheckTest
{
    @Test
    public void checkTest()
    {
        final PackedAtlasBuilder builder = new PackedAtlasBuilder();
        final Map<String, String> tags = RandomTagsSupplier.randomTags(5);
        final Map<String, String> edgeTags = new HashMap<>();
        edgeTags.put("highway", "primary");
        final Map<String, String> invalidLinkTags = new HashMap<>();
        invalidLinkTags.put("highway", "tertiary_link");
        final Map<String, String> correctLinkTags = new HashMap<>();
        correctLinkTags.put("highway", "primary_link");

        // add nodes
        builder.addNode(0, Location.TEST_6, tags);
        builder.addNode(1, Location.TEST_1, tags);
        builder.addNode(2, Location.TEST_7, tags);
        builder.addNode(3, Location.COLOSSEUM, tags);

        // Add edges - one testing distance, the second testing class
        builder.addEdge(1, new Segment(Location.TEST_6, Location.TEST_1), invalidLinkTags);
        builder.addEdge(2, new Segment(Location.TEST_1, Location.TEST_7), edgeTags);
        builder.addEdge(3, new Segment(Location.TEST_7, Location.COLOSSEUM), correctLinkTags);

        final Atlas atlas = builder.get();
        final Iterable<CheckFlag> flags = new RoadLinkCheck(ConfigurationResolver
                .resourceConfiguration("RoadLinkCheckTest.json", this.getClass())).flags(atlas);
        Assert.assertEquals(1, Iterables.size(flags));
    }
}
// // File: controlled_vars.cpp // Object: Generates the controlled_vars.h header file // // Copyright: Copyright (c) 2005-2012 Made to Order Software Corp. // All Rights Reserved. // // http://snapwebsites.org/ // <EMAIL> // // Permission is hereby granted, free of charge, to any person obtaining a copy // of this software and associated documentation files (the "Software"), to deal // in the Software without restriction, including without limitation the rights // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell // copies of the Software, and to permit persons to whom the Software is // furnished to do so, subject to the following conditions: // // The above copyright notice and this permission notice shall be included in // all copies or substantial portions of the Software. // // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN // THE SOFTWARE. // // // Usage: // // This tool can be compiled with a very simple make controller_vars // if you can manually define the config.h (from config.h.in) which // is quite simple. Otherwise use cmake as in: // // mkdir BUILD // (cd BUILD; cmake ..) // make -C BUILD // // Then run it to get the mo_controlled.h file created as in: // // controlled_vars >controlled_vars.h // // The result is a set of lengthy template header files of basic types to be // used with boundaries. Since these are templates, 99.9% of the code goes // away when the compilation is done. 
// // OS detection in the controlled_vars.h file is done with macros: // (see http://sourceforge.net/p/predef/wiki/OperatingSystems/ // for a complete list) // // Operating System Macro // // Mac OS/X __APPLE__ // Visual Studio _MSC_VER // #include "config.h" #include <stdlib.h> #include <stdio.h> #ifdef HAVE_UNISTD_H #include <unistd.h> #endif #include <string.h> #include <stdint.h> int no_bool_constructors = 0; // current output file FILE *out = nullptr; namespace { uint32_t FLAG_TYPE_INT = 0x00000001; uint32_t FLAG_TYPE_FLOAT = 0x00000002; } struct TYPES { char const * name; char const * short_name; char const * long_name; uint32_t flags; char const * condition; }; typedef struct TYPES types_t; types_t const g_types[] = { { "bool", "bool", "int32_t", FLAG_TYPE_INT, 0 }, /* this generates quite many problems as operator input */ { "char", "char", "int32_t", FLAG_TYPE_INT, 0 }, { "signed char", "schar", "int32_t", FLAG_TYPE_INT, 0 }, { "unsigned char", "uchar", "int32_t", FLAG_TYPE_INT, 0 }, /* in C++, wchar_t is a basic type, not a typedef; however, some compilers still allow changing the default and wchar_t then becomes a typedef */ { "wchar_t", "wchar", "int32_t", FLAG_TYPE_INT, "#if !defined(_MSC_VER) || (defined(_WCHAR_T_DEFINED) && defined(_NATIVE_WCHAR_T_DEFINED))" }, { "int16_t", "int16", "int32_t", FLAG_TYPE_INT, 0 }, { "uint16_t", "uint16", "int32_t", FLAG_TYPE_INT, 0 }, { "int32_t", "int32", "int32_t", FLAG_TYPE_INT, 0 }, { "uint32_t", "uint32", "int32_t", FLAG_TYPE_INT, 0 }, { "long", "plain_long", "int64_t", FLAG_TYPE_INT, "#if UINT_MAX == ULONG_MAX" }, { "unsigned long", "plain_ulong", "uint64_t", FLAG_TYPE_INT, "#if UINT_MAX == ULONG_MAX" }, { "int64_t", "int64", "int64_t", FLAG_TYPE_INT, 0 }, { "uint64_t", "uint64", "uint64_t", FLAG_TYPE_INT, 0 }, { "float", "float", "double", FLAG_TYPE_FLOAT, 0 }, { "double", "double", "double", FLAG_TYPE_FLOAT, 0 }, /* "long double" would be problematic here */ { "long double", "longdouble", "long double", FLAG_TYPE_FLOAT, 0 }, // The following were for Apple computers with a PPC //{ "size_t", "size", "uint64_t", FLAG_TYPE_INT, "#ifdef __APPLE__" }, //{ "time_t", "time", "int64_t", FLAG_TYPE_INT, "#ifdef __APPLE__" } }; #define TYPES_ALL (sizeof(g_types) / sizeof(g_types[0])) types_t const g_ptr_types[] = { { "signed char", "schar", "int32_t", FLAG_TYPE_INT, 0 }, { "unsigned char", "uchar", "int32_t", FLAG_TYPE_INT, 0 }, /* in C++, wchar_t is a basic type, not a typedef; however, some compilers still allow changing the default and wchar_t then becomes a typedef */ { "wchar_t", "wchar", "int32_t", FLAG_TYPE_INT, "#if !defined(_MSC_VER) || (defined(_WCHAR_T_DEFINED) && defined(_NATIVE_WCHAR_T_DEFINED))" }, { "int16_t", "int16", "int32_t", FLAG_TYPE_INT, 0 }, { "uint16_t", "uint16", "int32_t", FLAG_TYPE_INT, 0 }, { "int32_t", "int32", "int64_t", FLAG_TYPE_INT, 0 }, { "uint32_t", "uint32", "int64_t", FLAG_TYPE_INT, 0 }, { "long", "plain_long", "int64_t", FLAG_TYPE_INT, "#if UINT_MAX == ULONG_MAX" }, { "unsigned long", "plain_ulong", "uint64_t", FLAG_TYPE_INT, "#if UINT_MAX == ULONG_MAX" }, { "int64_t", "int64", "int64_t", FLAG_TYPE_INT, 0 }, { "uint64_t", "uint64", "uint64_t", FLAG_TYPE_INT, 0 }, { "size_t", "size", "uint64_t", FLAG_TYPE_INT, "#ifdef __APPLE__" }, }; #define PTR_TYPES_ALL (sizeof(g_ptr_types) / sizeof(g_ptr_types[0])) namespace { uint32_t FLAG_HAS_VOID = 0x00000001; uint32_t FLAG_HAS_DOINIT = 0x00000002; uint32_t FLAG_HAS_INITFLG = 0x00000004; uint32_t FLAG_HAS_DEFAULT = 0x00000008; uint32_t FLAG_HAS_LIMITS = 0x00000010; 
uint32_t FLAG_HAS_FLOAT = 0x00000020; uint32_t FLAG_HAS_DEBUG_ALREADY = 0x00000040; uint32_t FLAG_HAS_ENUM = 0x00000080; uint32_t FLAG_HAS_RETURN_T = 0x00010000; uint32_t FLAG_HAS_RETURN_BOOL = 0x00020000; uint32_t FLAG_HAS_NOINIT = 0x00040000; uint32_t FLAG_HAS_LIMITED = 0x00080000; uint32_t FLAG_HAS_NOFLOAT = 0x00100000; uint32_t FLAG_HAS_PTR = 0x00200000; uint32_t FLAG_HAS_RETURN_PRIMARY = 0x00400000; uint32_t FLAG_HAS_REFERENCE = 0x00800000; uint32_t FLAG_HAS_CONST = 0x01000000; } struct OP_T { char const * name; uint32_t flags; }; typedef struct OP_T op_t; op_t const g_generic_operators[] = { { "=", FLAG_HAS_NOINIT | FLAG_HAS_LIMITED }, { "*=", FLAG_HAS_LIMITED }, { "/=", FLAG_HAS_LIMITED }, { "%=", FLAG_HAS_LIMITED | FLAG_HAS_NOFLOAT }, { "+=", FLAG_HAS_LIMITED }, { "-=", FLAG_HAS_LIMITED }, { "<<=", FLAG_HAS_LIMITED | FLAG_HAS_NOFLOAT }, { ">>=", FLAG_HAS_LIMITED | FLAG_HAS_NOFLOAT }, { "&=", FLAG_HAS_LIMITED | FLAG_HAS_NOFLOAT }, { "|=", FLAG_HAS_LIMITED | FLAG_HAS_NOFLOAT }, { "^=", FLAG_HAS_LIMITED | FLAG_HAS_NOFLOAT }, { "*", FLAG_HAS_RETURN_T | FLAG_HAS_CONST }, { "/", FLAG_HAS_RETURN_T | FLAG_HAS_CONST }, { "%", FLAG_HAS_RETURN_T | FLAG_HAS_CONST | FLAG_HAS_NOFLOAT }, { "+", FLAG_HAS_RETURN_T | FLAG_HAS_CONST }, { "-", FLAG_HAS_RETURN_T | FLAG_HAS_CONST }, { "<<", FLAG_HAS_RETURN_T | FLAG_HAS_CONST | FLAG_HAS_NOFLOAT }, { ">>", FLAG_HAS_RETURN_T | FLAG_HAS_CONST | FLAG_HAS_NOFLOAT }, { "&", FLAG_HAS_RETURN_T | FLAG_HAS_CONST | FLAG_HAS_NOFLOAT }, { "|", FLAG_HAS_RETURN_T | FLAG_HAS_CONST | FLAG_HAS_NOFLOAT }, { "^", FLAG_HAS_RETURN_T | FLAG_HAS_CONST | FLAG_HAS_NOFLOAT }, { "==", FLAG_HAS_RETURN_BOOL | FLAG_HAS_CONST }, { "!=", FLAG_HAS_RETURN_BOOL | FLAG_HAS_CONST }, { "<", FLAG_HAS_RETURN_BOOL | FLAG_HAS_CONST }, { "<=", FLAG_HAS_RETURN_BOOL | FLAG_HAS_CONST }, { ">", FLAG_HAS_RETURN_BOOL | FLAG_HAS_CONST }, { ">=", FLAG_HAS_RETURN_BOOL | FLAG_HAS_CONST } }; #define GENERIC_OPERATORS_MAX (sizeof(g_generic_operators) / sizeof(g_generic_operators[0])) op_t const g_generic_ptr_operators[] = { { "=", FLAG_HAS_NOINIT | FLAG_HAS_PTR }, { "+=", FLAG_HAS_RETURN_PRIMARY }, { "-=", FLAG_HAS_RETURN_PRIMARY }, { "+", FLAG_HAS_RETURN_PRIMARY }, { "-", FLAG_HAS_RETURN_PRIMARY }, { "==", FLAG_HAS_RETURN_BOOL | FLAG_HAS_PTR }, { "!=", FLAG_HAS_RETURN_BOOL | FLAG_HAS_PTR }, { "<", FLAG_HAS_RETURN_BOOL | FLAG_HAS_PTR }, { "<=", FLAG_HAS_RETURN_BOOL | FLAG_HAS_PTR }, { ">", FLAG_HAS_RETURN_BOOL | FLAG_HAS_PTR }, { ">=", FLAG_HAS_RETURN_BOOL | FLAG_HAS_PTR } }; #define GENERIC_PTR_OPERATORS_MAX (sizeof(g_generic_ptr_operators) / sizeof(g_generic_ptr_operators[0])) void create_operator(const char *name, const char *op, const char *type, long flags, char const *long_type) { const char *right; int direct; fprintf(out, "\t"); if((flags & FLAG_HAS_RETURN_BOOL) != 0) { fprintf(out, "bool"); direct = 1; } else if((flags & FLAG_HAS_ENUM) != 0 && long_type) { fprintf(out, "%s", long_type); direct = 1; } else if((flags & FLAG_HAS_RETURN_T) != 0) { fprintf(out, "T"); direct = 1; } else if((flags & FLAG_HAS_RETURN_PRIMARY) != 0) { fprintf(out, "primary_type_t"); direct = 1; } else { fprintf(out, "%s_init&", name); direct = 0; } fprintf(out, " operator %s (", op); if(type == 0) { fprintf(out, "%s_init const& n", name); right = "n.f_value"; } else { fprintf(out, "%s v", type); right = "v"; } fprintf(out, ")%s {", (flags & FLAG_HAS_CONST) != 0 ? 
" const" : ""); if((flags & FLAG_HAS_INITFLG) != 0) { if((flags & FLAG_HAS_NOINIT) == 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } else { fprintf(out, " f_initialized = true;"); } if(type == 0) { fprintf(out, " if(!n.f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } } if((flags & FLAG_HAS_LIMITS) != 0 && (flags & FLAG_HAS_LIMITED) != 0) { char buf[4]; int i; const char *fmt; for(i = 0; op[i + 1] != '\0'; ++i) { buf[i] = op[i]; } buf[i] = '\0'; if(i == 0) { // op == "=" // the first %s is set to "" (i.e. ignored) fmt = "%s%s"; } else { fmt = "f_value %s %s"; } if(direct) { fprintf(out, " return f_value = check("); #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wformat-nonliteral" fprintf(out, fmt, buf, right); #pragma GCC diagnostic pop fprintf(out, ");"); } else { fprintf(out, " f_value = check("); #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wformat-nonliteral" fprintf(out, fmt, buf, right); #pragma GCC diagnostic pop fprintf(out, ");"); fprintf(out, " return *this;"); } } else { if(direct) { fprintf(out, " return f_value %s %s;", op, right); } else { fprintf(out, " f_value %s %s;", op, right); fprintf(out, " return *this;"); } } fprintf(out, " }\n"); } void create_ptr_operator(const char *name, const char *op, const char *type, long flags) { const char *right; int direct; fprintf(out, "\t"); if((flags & FLAG_HAS_RETURN_BOOL) != 0) { fprintf(out, "bool"); direct = 1; } else if((flags & FLAG_HAS_RETURN_T) != 0) { fprintf(out, "T"); direct = 1; } else if((flags & FLAG_HAS_RETURN_PRIMARY) != 0) { fprintf(out, "primary_type_t"); direct = 1; } else { fprintf(out, "%s_init&", name); direct = 0; } fprintf(out, " operator %s (", op); if(type == 0) { fprintf(out, "const %s_init& n", name); right = "n.f_ptr"; } else { fprintf(out, "%s v", type); right = "v"; } fprintf(out, ") {"); if((flags & FLAG_HAS_INITFLG) != 0) { if((flags & FLAG_HAS_NOINIT) == 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } else { fprintf(out, " f_initialized = true;"); } if(type == 0) { fprintf(out, " if(!n.f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } } if(direct) { fprintf(out, " return f_ptr %s %s;", op, right); } else { fprintf(out, " f_ptr %s %s;", op, right); fprintf(out, " return *this;"); } fprintf(out, " }\n"); } void create_ptr_operator_for_ptr(const char *name, const char *op, const char *type, long flags) { const char *right; int direct; fprintf(out, "\t"); if(*op == 'r') { fprintf(out, "void"); direct = 2; } else if((flags & FLAG_HAS_RETURN_BOOL) != 0) { fprintf(out, "bool"); direct = 1; } else if((flags & FLAG_HAS_RETURN_T) != 0) { fprintf(out, "T"); direct = 1; } else if((flags & FLAG_HAS_RETURN_PRIMARY) != 0) { fprintf(out, "primary_type_t"); direct = 1; } else { fprintf(out, "%s_init&", name); direct = 0; } if(*op == 'r') { fprintf(out, " reset("); op = "="; } else { fprintf(out, " operator %s (", op); } if(type == 0) { fprintf(out, "const %s_init%sp", name, ((flags & FLAG_HAS_REFERENCE) != 0 ? 
"& " : " *")); if((flags & FLAG_HAS_REFERENCE) != 0) { right = "p.f_ptr"; } else { right = "p->f_ptr"; } } else { fprintf(out, "%s p", type); if((flags & FLAG_HAS_REFERENCE) != 0) { right = "&p"; } else { right = "p"; } } fprintf(out, ") {"); if((flags & FLAG_HAS_INITFLG) != 0) { if((flags & FLAG_HAS_NOINIT) == 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } if(type == 0) { fprintf(out, " if(!p%sf_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");", (flags & FLAG_HAS_REFERENCE) == 0 ? "->" : "."); } } if(type == 0) { // this is a bit extra since we're testing the input and // not the data of this object fprintf(out, " if(%sp == 0) throw controlled_vars_error_null_pointer(\"dereferencing a null pointer\");", (flags & FLAG_HAS_REFERENCE) == 0 ? "" : "&"); } if((flags & FLAG_HAS_INITFLG) != 0 && (flags & FLAG_HAS_NOINIT) != 0) { fprintf(out, " f_initialized = true;"); } switch(direct) { default: //case 0: fprintf(out, " f_ptr %s %s;", op, right); fprintf(out, " return *this;"); break; case 1: fprintf(out, " return f_ptr %s %s;", op, right); break; case 2: fprintf(out, " f_ptr %s %s;", op, right); break; } fprintf(out, " }\n"); } void create_all_operators(const char *name, long flags) { const op_t *op; unsigned long o, t, f; for(o = 0; o < GENERIC_OPERATORS_MAX; ++o) { op = g_generic_operators + o; f = flags | op->flags; // test to avoid the auto_init& operator %= (auto_init& v); // and other integer only operators. if((f & FLAG_HAS_FLOAT) == 0 || (f & FLAG_HAS_NOFLOAT) == 0) { create_operator(name, op->name, 0, f, nullptr); } /* IMPORTANT: * Here we were skipping the type bool, now there is a * command line option and by default we do not skip it. */ for(t = (no_bool_constructors == 1 ? 1 : 0); t < TYPES_ALL; ++t) { // test to avoid all the operators that are not float compatible // (i.e. bitwise operators, modulo) if((f & FLAG_HAS_NOFLOAT) == 0 || ((f & FLAG_HAS_FLOAT) == 0 && (g_types[t].flags & FLAG_TYPE_FLOAT) == 0)) { if(g_types[t].condition) { fprintf(out, "%s\n", g_types[t].condition); } create_operator(name, op->name, g_types[t].name, f, nullptr); if(g_types[t].condition) { fprintf(out, "#endif\n"); } } } } } void create_all_enum_operators(const char *name, long flags) { const op_t *op; unsigned long o, t, f; for(o = 0; o < GENERIC_OPERATORS_MAX; ++o) { op = g_generic_operators + o; f = flags | op->flags; // test to avoid the auto_init& operator %= (auto_init& v); // and other integer only operators. if(((f & FLAG_HAS_FLOAT) == 0 || (f & FLAG_HAS_NOFLOAT) == 0) && (f & FLAG_HAS_LIMITED) == 0) { create_operator(name, op->name, 0, f, "int32_t"); } /* IMPORTANT: * Here we were skipping the type bool, now there is a * command line option and by default we do not skip it * except for comparison tests which are in conflict * with testing with the enumeration type, somehow. */ t = no_bool_constructors == 1 || strcmp(op->name, "==") == 0 || strcmp(op->name, "!=") == 0 || strcmp(op->name, "<") == 0 || strcmp(op->name, "<=") == 0 || strcmp(op->name, ">") == 0 || strcmp(op->name, ">=") == 0 ? 1 : 0; for(; t < TYPES_ALL; ++t) { // test to avoid all the operators that are not float compatible // (i.e. 
bitwise operators, modulo) if(((f & FLAG_HAS_NOFLOAT) == 0 || ((f & FLAG_HAS_FLOAT) == 0 && (g_types[t].flags & FLAG_TYPE_FLOAT) == 0)) && (f & FLAG_HAS_LIMITED) == 0) { if(g_types[t].condition) { fprintf(out, "%s\n", g_types[t].condition); } create_operator(name, op->name, g_types[t].name, f, g_types[t].long_name); if(g_types[t].condition) { fprintf(out, "#endif\n"); } } } } create_operator(name, "==", "T", f | FLAG_HAS_CONST, nullptr); create_operator(name, "!=", "T", f | FLAG_HAS_CONST, nullptr); create_operator(name, "<", "T", f | FLAG_HAS_CONST, nullptr); create_operator(name, "<=", "T", f | FLAG_HAS_CONST, nullptr); create_operator(name, ">", "T", f | FLAG_HAS_CONST, nullptr); create_operator(name, ">=", "T", f | FLAG_HAS_CONST, nullptr); // our create_operator does not support the following so we do it // here as is: fprintf(out, "template<class Q = T>\n"); fprintf(out, "typename std::enable_if<!std::is_fundamental<Q>::value, Q>::type operator == (bool v) const { return f_value == v; }\n"); fprintf(out, "template<class Q = T>\n"); fprintf(out, "typename std::enable_if<!std::is_fundamental<Q>::value, Q>::type operator >= (bool v) const { return f_value == v; }\n"); fprintf(out, "template<class Q = T>\n"); fprintf(out, "typename std::enable_if<!std::is_fundamental<Q>::value, Q>::type operator > (bool v) const { return f_value == v; }\n"); fprintf(out, "template<class Q = T>\n"); fprintf(out, "typename std::enable_if<!std::is_fundamental<Q>::value, Q>::type operator <= (bool v) const { return f_value == v; }\n"); fprintf(out, "template<class Q = T>\n"); fprintf(out, "typename std::enable_if<!std::is_fundamental<Q>::value, Q>::type operator < (bool v) const { return f_value == v; }\n"); fprintf(out, "template<class Q = T>\n"); fprintf(out, "typename std::enable_if<!std::is_fundamental<Q>::value, Q>::type operator != (bool v) const { return f_value == v; }\n"); } void create_unary_operators(const char *name, long flags) { int i; const char *s; // NOTE: max i can be either 2 or 4 // at this time, we don't want to have the T * operators // instead we'll have a set of ptr() functions for(i = 0; i < 2; ++i) { fprintf(out, "\toperator T%s ()%s {", i & 2 ? " *" : "", i & 1 ? "" : " const"); // NOTE: we want to change the following test for T * // but it requires a reference!!! // (also, we use ptr() instead for now) if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return %sf_value;", i & 2 ? "&" : ""); fprintf(out, " }\n"); } // C++ casts can be annoying to write so make a value() function available too fprintf(out, "\tT value() const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return f_value; }\n"); for(i = 0; i < 2; ++i) { s = i & 1 ? "" : "const "; fprintf(out, "\t%sT * ptr() %s{", s, s); // NOTE: we want to change the following test for T * // but it requires a reference!!! if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return &f_value; }\n"); } fprintf(out, "\tbool operator ! 
() const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return !f_value; }\n"); const char *op = "~+-"; if(flags & FLAG_HAS_FLOAT) { op = "+-"; } int max = strlen(op); for(i = 0; i < max; ++i) { fprintf(out, "\tT operator %c () const {", op[i]); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return %cf_value; }\n", op[i]); } const char *limits; if((flags & FLAG_HAS_LIMITS) != 0) { limits = ", min, max"; } else { limits = ""; } // NOTE: operator ++/-- () -> ++/--var // operator ++/-- (int) -> var++/-- for(i = 0; i < 4; ++i) { fprintf(out, "\t%s_init%s operator %s (%s) {", name, i & 1 ? "" : "&", i & 2 ? "--" : "++", i & 1 ? "int" : ""); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } if(i & 1) { fprintf(out, " %s_init<T%s> result(*this);", name, limits); } if((flags & FLAG_HAS_LIMITS) != 0) { // in this case we only need to check against one bound if(i & 2) { fprintf(out, " if(f_value <= min)"); } else { fprintf(out, " if(f_value >= max)"); } fprintf(out, " throw controlled_vars_error_out_of_bounds(\"%s would render value out of bounds\");", i & 2 ? "--" : "++"); fprintf(out, " %sf_value;", i & 2 ? "--" : "++"); } else { fprintf(out, " %sf_value;", i & 2 ? "--" : "++"); } if(i & 1) { fprintf(out, " return result;"); } else { fprintf(out, " return *this;"); } fprintf(out, " }\n"); } } void create_unary_enum_operators(const char *name, long flags) { int i; const char *s; static_cast<void>(name); // NOTE: max i can be either 2 or 4 // at this time, we don't want to have the T * operators // instead we'll have a set of ptr() functions for(i = 0; i < 2; ++i) { fprintf(out, "\toperator T%s ()%s {", i & 2 ? " *" : "", i & 1 ? "" : " const"); // NOTE: we want to change the following test for T * // but it requires a reference!!! // (also, we use ptr() instead for now) if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return %sf_value;", i & 2 ? "&" : ""); fprintf(out, " }\n"); } // C++ casts can be annoying to write so make a value() function available too fprintf(out, "\tT value() const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return f_value; }\n"); for(i = 0; i < 2; ++i) { s = i & 1 ? "" : "const "; fprintf(out, "\t%sT * ptr() %s{", s, s); // NOTE: we want to change the following test for T * // but it requires a reference!!! if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return &f_value; }\n"); } // This does not work with 'operator T () const' //fprintf(out, "\toperator bool () const {"); //if((flags & FLAG_HAS_INITFLG) != 0) { // fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); //} //fprintf(out, " return f_value != static_cast<T>(0); }\n"); fprintf(out, "\tbool operator ! 
() const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return !f_value; }\n"); const char *op = "~+-"; if(flags & FLAG_HAS_FLOAT) { op = "+-"; } int max = strlen(op); for(i = 0; i < max; ++i) { fprintf(out, "\tint operator %c () const {", op[i]); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return %cf_value; }\n", op[i]); } } void create_unary_ptr_operators(const char *name, long flags) { int i; const char *s; for(i = 0; i < 2; ++i) { fprintf(out, "\toperator primary_type_t ()%s {", i & 1 ? "" : " const"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return %sf_ptr;", i & 2 ? "&" : ""); fprintf(out, " }\n"); } // C++ casts can be annoying to write so make a value() function available too fprintf(out, "\tprimary_type_t value() const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return f_ptr; }\n"); for(i = 0; i < 2; ++i) { s = i & 1 ? "" : "const "; fprintf(out, "\tT *get() %s {", s); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return f_ptr; }\n"); fprintf(out, "\tprimary_type_t *ptr() %s {", s); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return &f_ptr; }\n"); fprintf(out, "\tT *operator -> () %s {", s); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " if(f_ptr == 0) throw controlled_vars_error_null_pointer(\"dereferencing a null pointer\");"); fprintf(out, " return f_ptr; }\n"); fprintf(out, "\t%s T& operator * () %s {",s, s); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " if(f_ptr == 0) throw controlled_vars_error_null_pointer(\"dereferencing a null pointer\");"); fprintf(out, " return *f_ptr; }\n"); fprintf(out, "\t%s T& operator [] (int index) %s {",s, s); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " if(f_ptr == 0) throw controlled_vars_error_null_pointer(\"dereferencing a null pointer\");"); // unfortunately we cannot check bounds as these were not indicated to us fprintf(out, " return f_ptr[index]; }\n"); } fprintf(out, "\tvoid swap(%s_init& p) {", name); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " primary_type_t n(f_ptr); f_ptr = p.f_ptr; p.f_ptr = n; }\n"); fprintf(out, "\toperator bool () const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return f_ptr != 0; }\n"); fprintf(out, "\tbool operator ! 
() const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } fprintf(out, " return f_ptr == 0; }\n"); // NOTE: operator ++/-- () -> ++/--var // operator ++/-- (int) -> var++/-- for(i = 0; i < 4; ++i) { fprintf(out, "\t%s_init%s operator %s (%s) {", name, i & 1 ? "" : "&", i & 2 ? "--" : "++", i & 1 ? "int" : ""); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " if(!f_initialized) throw controlled_vars_error_not_initialized(\"uninitialized variable\");"); } if(i & 1) { fprintf(out, " %s_init<T> result(*this);", name); } fprintf(out, " %sf_ptr;", i & 2 ? "--" : "++"); if(i & 1) { fprintf(out, " return result;"); } else { fprintf(out, " return *this;"); } fprintf(out, " }\n"); } } void create_all_ptr_operators(const char *name, long flags) { const op_t *op; unsigned long o, t, f; // if no default, then the default reset uses null() fprintf(out, "\tvoid reset() {%s f_ptr = %s; }\n", (flags & FLAG_HAS_INITFLG) == 0 ? "" : " f_initialized = true;", (flags & FLAG_HAS_DEFAULT) != 0 ? "init_value::DEFAULT_VALUE()" : "null()"); create_ptr_operator_for_ptr(name, "reset", "T&", flags | FLAG_HAS_REFERENCE | FLAG_HAS_NOINIT); create_ptr_operator_for_ptr(name, "reset", "primary_type_t", flags | FLAG_HAS_NOINIT); create_ptr_operator_for_ptr(name, "reset", 0, flags | FLAG_HAS_REFERENCE | FLAG_HAS_NOINIT); create_ptr_operator_for_ptr(name, "reset", 0, flags | FLAG_HAS_NOINIT); for(o = 0; o < GENERIC_PTR_OPERATORS_MAX; ++o) { op = g_generic_ptr_operators + o; f = flags | op->flags; if((f & FLAG_HAS_PTR) != 0) { create_ptr_operator_for_ptr(name, op->name, "T&", f | FLAG_HAS_REFERENCE); create_ptr_operator_for_ptr(name, op->name, "primary_type_t", f); create_ptr_operator_for_ptr(name, op->name, 0, f | FLAG_HAS_REFERENCE); create_ptr_operator_for_ptr(name, op->name, 0, f); } else { /* IMPORTANT: * Here we were skipping the type bool, now there is a * command line option and by default we don't skip it. */ for(t = 0; t < PTR_TYPES_ALL; ++t) { // test to avoid all the operators that are not float compatible // (i.e. bitwise operators, modulo) if(g_ptr_types[t].condition) { fprintf(out, "%s\n", g_ptr_types[t].condition); } create_ptr_operator(name, op->name, g_ptr_types[t].name, f); if(g_ptr_types[t].condition) { fprintf(out, "#endif\n"); } } } } } void create_typedef(const char *name, const char *short_name) { const char *t; unsigned int idx; // here we include the size_t and time_t types (these were removed though) // UPDATE: We do not include bool because now it is managed as an // enumeration instead for(idx = 1; idx < TYPES_ALL; ++idx) { t = g_types[idx].name; if(g_types[idx].flags & FLAG_TYPE_FLOAT) { // skip integer types if(strcmp(name, "auto") == 0 || strcmp(name, "ptr_auto") == 0) { continue; } } else { // skip floating point types if(strcmp(name, "fauto") == 0) { continue; } } if(g_types[idx].condition) { fprintf(out, "%s\n", g_types[idx].condition); } fprintf(out, "typedef %s_init<%s> %s%s_t;\n", name, g_types[idx].name, short_name, g_types[idx].short_name); if(g_types[idx].condition) { fprintf(out, "#endif\n"); } } } void create_class(const char *name, const char *short_name, long flags) { unsigned int idx; char const *init; char const *limits; if((flags & FLAG_HAS_LIMITS) != 0) { // we'd need to check that min <= max which should be possible // (actually BOOST does it...) 
limits = ", T min, T max"; } else { limits = ""; } fprintf(out, "/** \\brief Documentation available online.\n"); fprintf(out, " * Please go to http://snapwebsites.org/project/controlled-vars\n"); fprintf(out, " */\n"); if((flags & FLAG_HAS_DEFAULT) != 0) { fprintf(out, "template<class T%s, T init_value = 0>", limits); if((flags & FLAG_HAS_LIMITS) != 0) { // the init_value should be checked using a static test // (which is possible, BOOST does it, but good luck to // replicate that work in a couple lines of code!) init = " f_value = check(init_value);"; } else { init = " f_value = init_value;"; } } else { fprintf(out, "template<class T%s>", limits); if((flags & FLAG_HAS_LIMITS) != 0) { // here we can use the min value if zero is not part of the range init = " f_value = 0.0 >= min && 0.0 <= max ? 0.0 : min;"; } else { init = " f_value = 0.0;"; } } fprintf(out, " class %s_init {\n", name); fprintf(out, "public:\n"); fprintf(out, "\ttypedef T primary_type_t;\n"); // Define the default value if((flags & FLAG_HAS_DEFAULT) != 0) { fprintf(out, "\tstatic T const DEFAULT_VALUE = init_value;\n"); } // Define the limits if((flags & FLAG_HAS_LIMITS) != 0) { fprintf(out, "\tstatic primary_type_t const MIN_BOUND = min;\n"); fprintf(out, "\tstatic primary_type_t const MAX_BOUND = max;\n"); fprintf(out, "\tCONTROLLED_VARS_STATIC_ASSERT(min <= max);\n"); if((flags & FLAG_HAS_DEFAULT) != 0) { fprintf(out, "\tCONTROLLED_VARS_STATIC_ASSERT(init_value >= min && init_value <= max);\n"); } // a function to check the limits fprintf(out, "\ttemplate<class L> T check(L v) {\n"); fprintf(out, "#ifdef CONTROLLED_VARS_LIMITED\n"); fprintf(out, "#ifdef __GNUC__\n"); fprintf(out, "#pragma GCC diagnostic push\n"); fprintf(out, "#pragma GCC diagnostic ignored \"-Wlogical-op\"\n"); fprintf(out, "#endif\n"); fprintf(out, "\t\tif(v < min || v > max)"); fprintf(out, " throw controlled_vars_error_out_of_bounds(\"value out of bounds\");\n"); fprintf(out, "#ifdef __GNUC__\n"); fprintf(out, "#pragma GCC diagnostic pop\n"); fprintf(out, "#endif\n"); fprintf(out, "#endif\n"); fprintf(out, "\t\treturn static_cast<primary_type_t>(v);\n"); fprintf(out, "\t}\n"); } // Constructors if((flags & FLAG_HAS_VOID) != 0) { fprintf(out, "\t%s_init() {%s%s }\n", name, (flags & FLAG_HAS_DOINIT) != 0 ? init : "", (flags & FLAG_HAS_INITFLG) != 0 ? " f_initialized = false;" : ""); } // in older versions of g++ we did not want the bool // type in constructors; it works fine in newer versions though // use the --no-bool-constructor option to revert to the // old behavior // // old command: (use idx = 1 to skip the bool type) // we don't want the bool type in the constructors... // it creates some problems // here we exclude the bool type for(idx = (no_bool_constructors == 1 ? 1 : 0); idx < TYPES_ALL; ++idx) { if(g_types[idx].condition) { fprintf(out, "%s\n", g_types[idx].condition); } fprintf(out, "\t%s_init(%s v) {", name, g_types[idx].name); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " f_initialized = true;"); } // The static cast is nice to have with cl which otherwise generates // warnings about values being truncated all over the place. 
fprintf(out, " f_value ="); if((flags & FLAG_HAS_LIMITS) != 0) { fprintf(out, " check(v); }\n"); } else { fprintf(out, " static_cast<primary_type_t>(v); }\n"); } if(g_types[idx].condition) { fprintf(out, "#endif\n"); } } // Unary operators create_unary_operators(name, flags); // Binary Operators create_all_operators(name, flags); if((flags & FLAG_HAS_DEBUG_ALREADY) == 0) { fprintf(out, "#ifdef CONTROLLED_VARS_DEBUG\n"); } fprintf(out, "\tbool is_initialized() const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " return f_initialized;"); } else { fprintf(out, " return true;"); } fprintf(out, " }\n"); if((flags & FLAG_HAS_DEBUG_ALREADY) == 0) { fprintf(out, "#endif\n"); } fprintf(out, "private:\n"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, "\tbool f_initialized;\n"); } fprintf(out, "\tT f_value;\n"); fprintf(out, "};\n"); if((flags & FLAG_HAS_LIMITS) == 0) { create_typedef(name, short_name); } } void create_class_enum(const char *name, long flags) { char const *init; char const *limits; flags |= FLAG_HAS_ENUM; if((flags & FLAG_HAS_FLOAT) != 0) { fprintf(stderr, "internal error: create_class_enum() called with FLAG_HAS_FLOAT.\n"); exit(1); } if((flags & FLAG_HAS_LIMITS) != 0) { // we'd need to check that min <= max which should be possible // (actually BOOST does it...) limits = ", T min, T max"; } else { limits = ""; } fprintf(out, "/** \\brief Documentation available online.\n"); fprintf(out, " * Please go to http://snapwebsites.org/project/controlled-vars\n"); fprintf(out, " */\n"); if((flags & FLAG_HAS_DEFAULT) != 0) { if((flags & FLAG_HAS_ENUM) != 0) { // we allow an "auto-init" of enumerations although // really we probably should not allow those at all // because all enumerations should be limited fprintf(out, "template<class T%s, T init_value = static_cast<T>(0)>", limits); } else { fprintf(out, "template<class T%s, T init_value = static_cast<T>(0)>", limits); } if((flags & FLAG_HAS_LIMITS) != 0) { // the init_value should be checked using a static test // (which is possible, BOOST does it, but good luck to // replicate that work in a couple lines of code!) init = " f_value = check(init_value);"; } else { init = " f_value = init_value;"; } } else { fprintf(out, "template<class T%s>", limits); if((flags & FLAG_HAS_LIMITS) != 0) { // here we can use the min value if zero is not part of the range init = " f_value = 0.0 >= min && 0.0 <= max ? 
0.0 : min;"; } else { init = " f_value = 0.0;"; } } fprintf(out, " class %s_init {\n", name); fprintf(out, "public:\n"); fprintf(out, "\ttypedef T primary_type_t;\n"); // Define the default value if((flags & FLAG_HAS_DEFAULT) != 0) { fprintf(out, "\tstatic T const DEFAULT_VALUE = init_value;\n"); } // Define the limits if((flags & FLAG_HAS_LIMITS) != 0) { fprintf(out, "\tstatic primary_type_t const MIN_BOUND = min;\n"); fprintf(out, "\tstatic primary_type_t const MAX_BOUND = max;\n"); fprintf(out, "\tCONTROLLED_VARS_STATIC_ASSERT(min <= max);\n"); if((flags & FLAG_HAS_DEFAULT) != 0) { fprintf(out, "\tCONTROLLED_VARS_STATIC_ASSERT(init_value >= min && init_value <= max);\n"); } // a function to check the limits fprintf(out, "\tT check(T v) {\n"); fprintf(out, "#ifdef CONTROLLED_VARS_LIMITED\n"); fprintf(out, "#ifdef __GNUC__\n"); fprintf(out, "#pragma GCC diagnostic push\n"); fprintf(out, "#pragma GCC diagnostic ignored \"-Wlogical-op\"\n"); fprintf(out, "#endif\n"); fprintf(out, "\t\tif(v < min || v > max)"); fprintf(out, " throw controlled_vars_error_out_of_bounds(\"value out of bounds\");\n"); fprintf(out, "#ifdef __GNUC__\n"); fprintf(out, "#pragma GCC diagnostic pop\n"); fprintf(out, "#endif\n"); fprintf(out, "#endif\n"); fprintf(out, "\t\treturn v;\n"); fprintf(out, "\t}\n"); } // Constructors if((flags & FLAG_HAS_VOID) != 0) { fprintf(out, "\t%s_init() {%s%s }\n", name, (flags & FLAG_HAS_DOINIT) != 0 ? init : "", (flags & FLAG_HAS_INITFLG) != 0 ? " f_initialized = false;" : ""); } // create only one constructor for enumerations, but the correct // one! fprintf(out, "\t%s_init(T v) {", name); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " f_initialized = true;"); } fprintf(out, " f_value ="); if((flags & FLAG_HAS_LIMITS) != 0) { fprintf(out, " check(v); }\n"); } else { fprintf(out, " v; }\n"); } // create only one assignment operator fprintf(out, "\t%s_init& operator = (T v) {", name); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " f_initialized = true;"); } fprintf(out, " f_value ="); if((flags & FLAG_HAS_LIMITS) != 0) { fprintf(out, " check(v); return *this; }\n"); } else { fprintf(out, " v; return *this; }\n"); } // Unary operators create_unary_enum_operators(name, flags); // Binary Operators create_all_enum_operators(name, flags); if((flags & FLAG_HAS_DEBUG_ALREADY) == 0) { fprintf(out, "#ifdef CONTROLLED_VARS_DEBUG\n"); } fprintf(out, "\tbool is_initialized() const {"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, " return f_initialized;"); } else { fprintf(out, " return true;"); } fprintf(out, " }\n"); if((flags & FLAG_HAS_DEBUG_ALREADY) == 0) { fprintf(out, "#endif\n"); } fprintf(out, "private:\n"); if((flags & FLAG_HAS_INITFLG) != 0) { fprintf(out, "\tbool f_initialized;\n"); } fprintf(out, "\tT f_value;\n"); fprintf(out, "};\n"); //if((flags & FLAG_HAS_LIMITS) == 0) { // create_typedef(name, short_name); //} } //create_class_ptr("ptr_auto", "zp", FLAG_HAS_VOID | FLAG_HAS_DOINIT | FLAG_HAS_DEFAULT) //create_class_ptr("ptr_need", "mp", 0); //create_class_ptr("ptr_no", "np", FLAG_HAS_VOID | FLAG_HAS_INITFLG | FLAG_HAS_DEBUG_ALREADY); void create_class_ptr(const char *name, const char *short_name, long flags) { unsigned int idx; const char *init; fprintf(out, "/** \\brief Documentation available online.\n"); fprintf(out, " * Please go to http://snapwebsites.org/project/controlled-vars\n"); fprintf(out, " */\n"); if((flags & FLAG_HAS_DEFAULT) != 0) { fprintf(out, "template<class T> class trait_%s_null { public: static T *DEFAULT_VALUE() { return 0; } };\n", 
            name);
        fprintf(out, "template<class T, typename init_value = trait_%s_null<T> >", name);
        init = " f_ptr = DEFAULT_VALUE();";
    }
    else {
        fprintf(out, "template<class T>");
        init = " f_ptr = 0;";
    }
    fprintf(out, " class %s_init {\n", name);
    fprintf(out, "public:\n");
    fprintf(out, "\ttypedef T *primary_type_t;\n");

    // Define the default value
    if((flags & FLAG_HAS_DEFAULT) != 0) {
        fprintf(out, "\tstatic T *DEFAULT_VALUE() { return init_value::DEFAULT_VALUE(); }\n");
    }
    fprintf(out, "\tstatic T *null() { return 0; }\n");

    // Constructors
    if((flags & FLAG_HAS_VOID) != 0) {
        fprintf(out, "\t%s_init() {%s%s }\n",
            name,
            (flags & FLAG_HAS_DOINIT) != 0 ? init : "",
            (flags & FLAG_HAS_INITFLG) != 0 ? " f_initialized = false;" : "");
    }

    // for pointers, the different constructor signatures are:
    //   T pointer
    //   T reference
    //   class by pointer
    //   class by reference
    for(idx = 0; idx < 4; ++idx) {
        int mode = idx % 2; // 0 - pointer, 1 - reference
        int type = idx / 2; // 0 - T, 1 - class
        static const char *ptr_modes[] = { " *", "& " };
        fprintf(out, "\t%s_init(%s%s%s%sp) {",
            name,
            (type == 0 ? "" : "const "),
            (type == 0 ? "T" : name),
            (type == 0 ? "" : "_init"),
            ptr_modes[mode]);
        if((flags & FLAG_HAS_INITFLG) != 0) {
            fprintf(out, " f_initialized = true;");
        }
        if(type == 0) {
            fprintf(out, " f_ptr = %sp; }\n", mode == 0 ? "" : "&");
        }
        else {
            switch(mode) {
            default: //case 0:
                fprintf(out, " f_ptr = p == 0 ? 0 : p->f_ptr; }\n");
                break;
            case 1:
                fprintf(out, " f_ptr = &p == 0 ? 0 : p.f_ptr; }\n");
                break;
            }
        }
    }

    // Unary operators
    create_unary_ptr_operators(name, flags);

    // Binary operators
    create_all_ptr_operators(name, flags);

    if((flags & FLAG_HAS_DEBUG_ALREADY) == 0) {
        fprintf(out, "#ifdef CONTROLLED_VARS_DEBUG\n");
    }
    fprintf(out, "\tbool is_initialized() const {");
    if((flags & FLAG_HAS_INITFLG) != 0) {
        fprintf(out, " return f_initialized;");
    }
    else {
        fprintf(out, " return true;");
    }
    fprintf(out, " }\n");
    if((flags & FLAG_HAS_DEBUG_ALREADY) == 0) {
        fprintf(out, "#endif\n");
    }
    fprintf(out, "private:\n");
    if((flags & FLAG_HAS_INITFLG) != 0) {
        fprintf(out, "\tbool f_initialized;\n");
    }
    fprintf(out, "\tprimary_type_t f_ptr;\n");
    fprintf(out, "};\n");
    create_typedef(name, short_name);
}

void create_direct_typedef(const char *short_name)
{
    unsigned int idx;

    // here we include the bool, size_t and time_t types
    // UPDATE: I removed the bool because it is handled as an enumeration
    for(idx = 1; idx < TYPES_ALL; ++idx) {
        if(g_types[idx].condition) {
            fprintf(out, "%s\n", g_types[idx].condition);
        }
        fprintf(out, "typedef %s %s%s_t;\n", g_types[idx].name, short_name, g_types[idx].short_name);
        if(g_types[idx].condition) {
            fprintf(out, "#endif\n");
        }
    }
}

void create_file(const char *filename)
{
    if(out != nullptr) {
        fclose(out);
    }
    out = fopen(filename, "w");
    if(out == nullptr) {
        fprintf(stderr, "error: controlled_vars: cannot create file \"%s\"\n", filename);
        exit(1);
    }
}

namespace {
    uint32_t PRINT_FLAG_INCLUDE_STDEXCEPT     = 0x0001;
    uint32_t PRINT_FLAG_INCLUDE_INIT          = 0x0002;
    uint32_t PRINT_FLAG_INCLUDE_EXCEPTION     = 0x0004;
    uint32_t PRINT_FLAG_NO_NAMESPACE          = 0x0008;
    uint32_t PRINT_FLAG_INCLUDE_STATIC_ASSERT = 0x0010;
    uint32_t PRINT_FLAG_ENUM                  = 0x0020;
}

void print_header(const char *filename, const char *upper, int flags)
{
    fprintf(out, "// WARNING: do not edit; this is an auto-generated\n");
    fprintf(out, "// WARNING: file; please, use the generator named\n");
    fprintf(out, "// WARNING: controlled_vars to re-generate\n");
    fprintf(out, "//\n");
    fprintf(out, "// File: %s\n", filename);
    fprintf(out, "// Object: Help you by constraining basic types like classes.\n");
    fprintf(out, "//\n");
    fprintf(out, "// Copyright: Copyright (c) 2005-2012 Made to Order Software Corp.\n");
    fprintf(out, "// All Rights Reserved.\n");
    fprintf(out, "//\n");
    fprintf(out, "// http://snapwebsites.org/\n");
    fprintf(out, "// <EMAIL>\n");
    fprintf(out, "//\n");
    fprintf(out, "// Permission is hereby granted, free of charge, to any person obtaining a copy\n");
    fprintf(out, "// of this software and associated documentation files (the \"Software\"), to deal\n");
    fprintf(out, "// in the Software without restriction, including without limitation the rights\n");
    fprintf(out, "// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n");
    fprintf(out, "// copies of the Software, and to permit persons to whom the Software is\n");
    fprintf(out, "// furnished to do so, subject to the following conditions:\n");
    fprintf(out, "//\n");
    fprintf(out, "// The above copyright notice and this permission notice shall be included in\n");
    fprintf(out, "// all copies or substantial portions of the Software.\n");
    fprintf(out, "//\n");
    fprintf(out, "// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n");
    fprintf(out, "// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n");
    fprintf(out, "// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n");
    fprintf(out, "// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n");
    fprintf(out, "// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n");
    fprintf(out, "// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n");
    fprintf(out, "// THE SOFTWARE.\n");
    fprintf(out, "//\n");
    fprintf(out, "#ifndef CONTROLLED_VARS_%s%s_H\n", upper, (flags & PRINT_FLAG_INCLUDE_INIT) != 0 ? "_INIT" : "");
    fprintf(out, "#define CONTROLLED_VARS_%s%s_H\n", upper, (flags & PRINT_FLAG_INCLUDE_INIT) != 0 ? "_INIT" : "");
    fprintf(out, "#ifdef _MSC_VER\n");
    fprintf(out, "#pragma warning(push)\n");
    fprintf(out, "#pragma warning(disable: 4005 4018 4244 4800)\n");
    fprintf(out, "#if _MSC_VER > 1000\n");
    fprintf(out, "#pragma once\n");
    fprintf(out, "#endif\n");
    fprintf(out, "#elif defined(__GNUC__)\n");
    fprintf(out, "#if (__GNUC__ == 3 && __GNUC_MINOR__ >= 4) || (__GNUC__ >= 4)\n");
    fprintf(out, "#pragma once\n");
    fprintf(out, "#pragma GCC diagnostic push\n");
    fprintf(out, "#pragma GCC diagnostic ignored \"-Wsign-compare\"\n");
    fprintf(out, "#endif\n");
    fprintf(out, "#endif\n");
    if((flags & PRINT_FLAG_NO_NAMESPACE) == 0) {
        if((flags & PRINT_FLAG_INCLUDE_EXCEPTION) != 0) {
            fprintf(out, "#include \"controlled_vars_exceptions.h\"\n");
        }
        else {
            fprintf(out, "#include <limits.h>\n");
            fprintf(out, "#include <sys/types.h>\n");
            //fprintf(out, "#ifndef BOOST_CSTDINT_HPP\n");
            fprintf(out, "#include <stdint.h>\n");
            //fprintf(out, "#endif\n");
        }
        if((flags & PRINT_FLAG_INCLUDE_STATIC_ASSERT) != 0) {
            fprintf(out, "#include \"controlled_vars_static_assert.h\"\n");
        }
        if((flags & PRINT_FLAG_INCLUDE_STDEXCEPT) != 0) {
            fprintf(out, "#include <stdexcept>\n");
        }
        if((flags & PRINT_FLAG_ENUM) != 0) {
            fprintf(out, "#include <type_traits>\n");
        }
        fprintf(out, "namespace controlled_vars {\n");
    }
}

void print_footer(int flags)
{
    if((flags & PRINT_FLAG_NO_NAMESPACE) == 0) {
        fprintf(out, "} // namespace controlled_vars\n");
    }
    fprintf(out, "#ifdef _MSC_VER\n");
    fprintf(out, "#pragma warning(pop)\n");
    fprintf(out, "#elif defined(__GNUC__)\n");
    fprintf(out, "#pragma GCC diagnostic pop\n");
    fprintf(out, "#endif\n");
    fprintf(out, "#endif\n");
}

typedef void (*print_func)();

void print_exceptions()
{
    fprintf(out, "class controlled_vars_error : public std::logic_error {\n");
    fprintf(out, "public:\n");
    fprintf(out, "\texplicit controlled_vars_error(const std::string& what_msg) : logic_error(what_msg) {}\n");
    fprintf(out, "};\n");
    fprintf(out, "class controlled_vars_error_not_initialized : public controlled_vars_error {\n");
    fprintf(out, "public:\n");
    fprintf(out, "\texplicit controlled_vars_error_not_initialized(const std::string& what_msg) : controlled_vars_error(what_msg) {}\n");
    fprintf(out, "};\n");
    fprintf(out, "class controlled_vars_error_out_of_bounds : public controlled_vars_error {\n");
    fprintf(out, "public:\n");
    fprintf(out, "\texplicit controlled_vars_error_out_of_bounds(const std::string& what_msg) : controlled_vars_error(what_msg) {}\n");
    fprintf(out, "};\n");
    fprintf(out, "class controlled_vars_error_null_pointer : public controlled_vars_error {\n");
    fprintf(out, "public:\n");
    fprintf(out, "\texplicit controlled_vars_error_null_pointer(const std::string& what_msg) : controlled_vars_error(what_msg) {}\n");
    fprintf(out, "};\n");
}

void print_static_assert()
{
    fprintf(out, "// The following is 100%% coming from boost/static_assert.hpp\n");
    fprintf(out, "// At this time we only support MSC and GNUC\n");
    fprintf(out, "#if defined(_MSC_VER)||defined(__GNUC__)\n");
    fprintf(out, "#define CONTROLLED_VARS_JOIN(X,Y) CONTROLLED_VARS_DO_JOIN(X,Y)\n");
    fprintf(out, "#define CONTROLLED_VARS_DO_JOIN(X,Y) CONTROLLED_VARS_DO_JOIN2(X,Y)\n");
    fprintf(out, "#define CONTROLLED_VARS_DO_JOIN2(X,Y) X##Y\n");
    fprintf(out, "template<bool x> struct STATIC_ASSERTION_FAILURE;\n");
    fprintf(out, "template<> struct STATIC_ASSERTION_FAILURE<true>{enum{value=1};};\n");
    fprintf(out, "template<int x> struct static_assert_test{};\n");
    fprintf(out, "#if defined(__GNUC__)&&((__GNUC__>3)||((__GNUC__==3)&&(__GNUC_MINOR__>=4)))\n");
    fprintf(out, "#define CONTROLLED_VARS_STATIC_ASSERT_BOOL_CAST(x) ((x)==0?false:true)\n");
    fprintf(out, "#else\n");
    fprintf(out, "#define CONTROLLED_VARS_STATIC_ASSERT_BOOL_CAST(x) (bool)(x)\n");
    fprintf(out, "#endif\n");
    fprintf(out, "#ifdef _MSC_VER\n");
    fprintf(out, "#define CONTROLLED_VARS_STATIC_ASSERT(B) "
        "typedef ::controlled_vars::static_assert_test<"
        "sizeof(::controlled_vars::STATIC_ASSERTION_FAILURE<CONTROLLED_VARS_STATIC_ASSERT_BOOL_CAST(B)>)>"
        "CONTROLLED_VARS_JOIN(controlled_vars_static_assert_typedef_,__COUNTER__)\n");
    fprintf(out, "#else\n");
    fprintf(out, "#define CONTROLLED_VARS_STATIC_ASSERT(B) "
        "typedef ::controlled_vars::static_assert_test<"
        "sizeof(::controlled_vars::STATIC_ASSERTION_FAILURE<CONTROLLED_VARS_STATIC_ASSERT_BOOL_CAST(B)>)>"
        "CONTROLLED_VARS_JOIN(controlled_vars_static_assert_typedef_,__LINE__)\n");
    fprintf(out, "#endif\n");
    fprintf(out, "#else\n");
    fprintf(out, "#define CONTROLLED_VARS_STATIC_ASSERT(B)\n");
    fprintf(out, "#endif\n");
}

void print_auto()
{
    create_class("auto", "z", FLAG_HAS_VOID | FLAG_HAS_DOINIT | FLAG_HAS_DEFAULT);
}

void print_auto_enum()
{
    create_class_enum("auto_enum", FLAG_HAS_VOID | FLAG_HAS_DOINIT | FLAG_HAS_DEFAULT);
    fprintf(out, "typedef auto_enum_init<bool, false> fbool_t;\n");
    fprintf(out, "typedef fbool_t zbool_t;\n");
    fprintf(out, "typedef auto_enum_init<bool, true> tbool_t;\n");
}

void print_limited_auto()
{
    create_class("limited_auto", "lz", FLAG_HAS_VOID | FLAG_HAS_DOINIT | FLAG_HAS_DEFAULT | FLAG_HAS_LIMITS);
}

void print_limited_auto_enum()
{
    create_class_enum("limited_auto_enum", FLAG_HAS_VOID | FLAG_HAS_DOINIT | FLAG_HAS_DEFAULT | FLAG_HAS_LIMITS);
    fprintf(out, "typedef limited_auto_enum_init<bool, false, true, false> flbool_t;\n");
    fprintf(out, "typedef flbool_t zlbool_t;\n");
    fprintf(out, "typedef limited_auto_enum_init<bool, false, true, true> tlbool_t;\n");
}

void print_ptr_auto()
{
    create_class_ptr("ptr_auto", "zp", FLAG_HAS_VOID | FLAG_HAS_DOINIT | FLAG_HAS_DEFAULT);
}

void print_fauto()
{
    create_class("fauto", "z", FLAG_HAS_VOID | FLAG_HAS_DOINIT | FLAG_HAS_FLOAT);
}

void print_limited_fauto()
{
    create_class("limited_fauto", "lz", FLAG_HAS_VOID | FLAG_HAS_DOINIT | FLAG_HAS_FLOAT | FLAG_HAS_LIMITS);
}

void print_need()
{
    create_class("need", "m", 0);
}

void print_need_enum()
{
    create_class_enum("need_enum", 0);
    fprintf(out, "typedef need_enum_init<bool> mbool_t;\n");
}

void print_limited_need()
{
    create_class("limited_need", "lm", FLAG_HAS_LIMITS);
}

void print_limited_need_enum()
{
    create_class_enum("limited_need_enum", FLAG_HAS_LIMITS);
    fprintf(out, "typedef limited_need_enum_init<bool, false, true> mlbool_t;\n");
}

void print_ptr_need()
{
    create_class_ptr("ptr_need", "mp", 0);
}

void print_no_init()
{
    fprintf(out, "#ifdef CONTROLLED_VARS_DEBUG\n");
    create_class("no", "r", FLAG_HAS_VOID | FLAG_HAS_INITFLG | FLAG_HAS_DEBUG_ALREADY);
    fprintf(out, "#else\n");
    create_direct_typedef("r");
    fprintf(out, "#endif\n");
}

void print_no_init_enum()
{
    // Anything here?
    fprintf(out, "#ifdef CONTROLLED_VARS_DEBUG\n");
    create_class_enum("no_enum", FLAG_HAS_VOID | FLAG_HAS_INITFLG | FLAG_HAS_DEBUG_ALREADY);
    fprintf(out, "typedef no_enum_init<bool> rbool_t;\n");
    fprintf(out, "#else\n");
    fprintf(out, "typedef bool rbool_t;\n");
    fprintf(out, "#endif\n");
}

void print_limited_no_init()
{
    fprintf(out, "#ifdef CONTROLLED_VARS_DEBUG\n");
    create_class("limited_no", "r", FLAG_HAS_VOID | FLAG_HAS_INITFLG | FLAG_HAS_LIMITS | FLAG_HAS_DEBUG_ALREADY);
    //fprintf(out, "#else\n");
    // in non-debug, this is essentially the same template, but we
    // expect the users to declare their types "properly" (i.e. using
    // a typedef whenever CONTROLLED_VARS_DEBUG is not defined.)
    //create_direct_typedef("rl");
    fprintf(out, "#endif\n");
}

void print_limited_no_init_enum()
{
    fprintf(out, "#ifdef CONTROLLED_VARS_DEBUG\n");
    create_class_enum("limited_no_enum", FLAG_HAS_VOID | FLAG_HAS_INITFLG | FLAG_HAS_LIMITS | FLAG_HAS_DEBUG_ALREADY);
    fprintf(out, "typedef limited_no_enum_init<bool, false, true> rlbool_t;\n");
    fprintf(out, "#else\n");
    fprintf(out, "typedef bool rlbool_t;\n");
    fprintf(out, "#endif\n");
}

void print_ptr_no_init()
{
    create_class_ptr("ptr_no", "rp", FLAG_HAS_VOID | FLAG_HAS_INITFLG | FLAG_HAS_DEBUG_ALREADY);
}

void print_file(const char *name, int flags, print_func func)
{
    char filename[256], upper[256];

    // create an uppercase version of the name
    const char *n = name;
    char *u = upper;
    while(*n != '\0') {
        *u++ = *n++ & 0x5F;
    }
    *u = '\0';
    //printf("Working on \"%s\"\n", upper);

    // create the output file
    sprintf(filename, "controlled_vars_%s%s.h", name, (flags & PRINT_FLAG_INCLUDE_INIT) != 0 ? "_init" : "");
    create_file(filename);

    // print out the header
    print_header(filename, upper, flags);

    // print out the contents
    (*func)();

    // print closure
    print_footer(flags);
}

void print_include_all()
{
    create_file("controlled_vars.h");
    print_header("controlled_vars.h", "", PRINT_FLAG_NO_NAMESPACE);
    // we don't have to include the exception header,
    // it will be included by several of the following headers
    fprintf(out, "#include \"controlled_vars_auto_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_auto_enum_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_limited_auto_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_limited_auto_enum_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_fauto_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_limited_fauto_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_need_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_need_enum_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_limited_need_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_limited_need_enum_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_no_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_no_enum_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_limited_no_init.h\"\n");
    fprintf(out, "#include \"controlled_vars_limited_no_enum_init.h\"\n");
    print_footer(PRINT_FLAG_NO_NAMESPACE);
}

int main(int argc, char *argv[])
{
    for(int i(1); i < argc; ++i) {
        if(strcmp(argv[i], "--no-bool-constructors") == 0) {
            no_bool_constructors = 1;
        }
    }
    print_file("exceptions", PRINT_FLAG_INCLUDE_STDEXCEPT, print_exceptions);
    print_file("static_assert", 0, print_static_assert);
    print_file("auto", PRINT_FLAG_INCLUDE_INIT, print_auto);
    print_file("auto_enum", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_ENUM, print_auto_enum);
    print_file("limited_auto", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_INCLUDE_EXCEPTION | PRINT_FLAG_INCLUDE_STATIC_ASSERT, print_limited_auto);
    print_file("limited_auto_enum", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_ENUM | PRINT_FLAG_INCLUDE_EXCEPTION | PRINT_FLAG_INCLUDE_STATIC_ASSERT, print_limited_auto_enum);
    print_file("ptr_auto", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_INCLUDE_EXCEPTION, print_ptr_auto);
    print_file("fauto", PRINT_FLAG_INCLUDE_INIT, print_fauto);
    print_file("limited_fauto", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_INCLUDE_EXCEPTION | PRINT_FLAG_INCLUDE_STATIC_ASSERT, print_limited_fauto);
    print_file("need", PRINT_FLAG_INCLUDE_INIT, print_need);
    print_file("need_enum", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_ENUM, print_need_enum);
    print_file("limited_need", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_INCLUDE_EXCEPTION | PRINT_FLAG_INCLUDE_STATIC_ASSERT, print_limited_need);
    print_file("limited_need_enum", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_ENUM | PRINT_FLAG_INCLUDE_EXCEPTION | PRINT_FLAG_INCLUDE_STATIC_ASSERT, print_limited_need_enum);
    print_file("ptr_need", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_INCLUDE_EXCEPTION, print_ptr_need);
    print_file("no", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_INCLUDE_EXCEPTION, print_no_init);
    print_file("no_enum", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_ENUM | PRINT_FLAG_INCLUDE_EXCEPTION, print_no_init_enum);
    print_file("limited_no", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_INCLUDE_EXCEPTION | PRINT_FLAG_INCLUDE_STATIC_ASSERT, print_limited_no_init);
    print_file("limited_no_enum", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_ENUM | PRINT_FLAG_INCLUDE_EXCEPTION | PRINT_FLAG_INCLUDE_STATIC_ASSERT, print_limited_no_init_enum);
    print_file("ptr_no", PRINT_FLAG_INCLUDE_INIT | PRINT_FLAG_INCLUDE_EXCEPTION, print_ptr_no_init);
    print_include_all();
    return 0;
}
Nsolv is nearing the end of one of its pilot projects north of Fort McMurray. The project, which began in Fort McKay in 2014, set out to extract heavy oil using a solvent vapour. The Calgary-based company reports that during the three-year run more than 125,000 barrels were produced while generating little greenhouse gas and using no water. Nsolv says the pilot proved the technology works and offers a solution to the challenges of heavy oil extraction, helping producers stay under the Alberta government's 100-megatonne carbon cap. In a release, Nsolv's CEO says the technology extracts heavy oil while protecting the environment and provides economic benefits to industry and government, even when oil prices are low. The project is now entering its final stages and will provide shutdown data for commercial-scale projects. A complete shutdown is expected by mid-2017. -Photo courtesy of the Nsolv website
// A simple client for xtrachat
import java.net.*;
import java.util.*;
import java.io.*;

public class Client {
    Socket client_socket;
    PrintWriter writer;

    public static void main(String[] args) {
        Client c = new Client();
        System.out.println("Client started...");
        System.out.println("Making a request to localhost:1337");
        c.send();
    }

    void send() {
        try {
            client_socket = new Socket("127.0.0.1", 1337);
            System.out.println("Connection established");
            writer = new PrintWriter(client_socket.getOutputStream());
            // Create the scanner once, outside the loop, instead of
            // allocating a new one on every iteration
            Scanner in = new Scanner(System.in);
            while (true) {
                System.out.print("Send: ");
                String text = in.nextLine();
                // println() already appends a newline; appending "\n" as well
                // would send an extra blank line to the server
                writer.println(text);
                writer.flush();
                // Exit the loop when the user enters "exit"
                if (text.equals("exit")) {
                    break;
                }
            }
            client_socket.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
#include <stdio.h>
#include <unistd.h>

#ifndef ZTS
# define USE_SIGNAL
#endif

#ifdef USE_SIGNAL
# include <signal.h>
#endif

#include "php_spx.h"
#include "ext/standard/info.h"

#include "spx_thread.h"
#include "spx_config.h"
#include "spx_php.h"
#include "spx_utils.h"
#include "spx_metric.h"
#include "spx_resource_stats.h"
#include "spx_profiler_tracer.h"
#include "spx_profiler_sampler.h"
#include "spx_reporter_fp.h"
#include "spx_reporter_full.h"
#include "spx_reporter_trace.h"

typedef struct {
    void (*init) (void);
    void (*shutdown) (void);
} execution_handler_t;

static SPX_THREAD_TLS struct {
    int cli_sapi;
    spx_config_t config;
    execution_handler_t * execution_handler;
    struct {
#ifdef USE_SIGNAL
        struct {
            int handler_set;
            struct {
                struct sigaction sigint;
                struct sigaction sigterm;
            } prev_handler;
            volatile sig_atomic_t handler_called;
            volatile sig_atomic_t probing;
            volatile sig_atomic_t stop;
            int signo;
        } sig_handling;
#endif
        spx_profiler_reporter_t * reporter;
        spx_profiler_t * profiler;
    } profiling_handler;
} context;

ZEND_BEGIN_MODULE_GLOBALS(spx)
    const char * data_dir;
    zend_bool http_enabled;
    const char * http_key;
    const char * http_ip_var;
    const char * http_ip_whitelist;
    const char * http_ui_assets_dir;
    const char * http_ui_uri_prefix;
ZEND_END_MODULE_GLOBALS(spx)

ZEND_DECLARE_MODULE_GLOBALS(spx)

#ifdef ZTS
# define SPX_G(v) TSRMG(spx_globals_id, zend_spx_globals *, v)
#else
# define SPX_G(v) (spx_globals.v)
#endif

PHP_INI_BEGIN()
    STD_PHP_INI_ENTRY("spx.data_dir", "/tmp/spx", PHP_INI_SYSTEM, OnUpdateString, data_dir, zend_spx_globals, spx_globals)
    STD_PHP_INI_ENTRY("spx.http_enabled", "0", PHP_INI_SYSTEM, OnUpdateBool, http_enabled, zend_spx_globals, spx_globals)
    STD_PHP_INI_ENTRY("spx.http_key", "", PHP_INI_SYSTEM, OnUpdateString, http_key, zend_spx_globals, spx_globals)
    STD_PHP_INI_ENTRY("spx.http_ip_var", "REMOTE_ADDR", PHP_INI_SYSTEM, OnUpdateString, http_ip_var, zend_spx_globals, spx_globals)
    STD_PHP_INI_ENTRY("spx.http_ip_whitelist", "", PHP_INI_SYSTEM, OnUpdateString, http_ip_whitelist, zend_spx_globals, spx_globals)
    STD_PHP_INI_ENTRY("spx.http_ui_assets_dir", SPX_HTTP_UI_ASSETS_DIR, PHP_INI_SYSTEM, OnUpdateString, http_ui_assets_dir, zend_spx_globals, spx_globals)
    STD_PHP_INI_ENTRY("spx.http_ui_uri_prefix", "/_spx", PHP_INI_SYSTEM, OnUpdateString, http_ui_uri_prefix, zend_spx_globals, spx_globals)
PHP_INI_END()

static PHP_MINIT_FUNCTION(spx);
static PHP_MSHUTDOWN_FUNCTION(spx);
static PHP_RINIT_FUNCTION(spx);
static PHP_RSHUTDOWN_FUNCTION(spx);
static PHP_MINFO_FUNCTION(spx);

static int check_access(void);

static void profiling_handler_init(void);
static void profiling_handler_shutdown(void);
static void profiling_handler_ex_set_context(void);
static void profiling_handler_ex_unset_context(void);
static void profiling_handler_ex_hook_before(void);
static void profiling_handler_ex_hook_after(void);

#ifdef USE_SIGNAL
static void profiling_handler_sig_terminate(void);
static void profiling_handler_sig_handler(int signo);
static void profiling_handler_sig_set_handler(void);
static void profiling_handler_sig_unset_handler(void);
#endif

static void http_ui_handler_init(void);
static void http_ui_handler_shutdown(void);
static int http_ui_handler_data(const char * data_dir, const char *relative_path);
static void http_ui_handler_list_metadata_files_callback(const char * file_name, size_t count);
static int http_ui_handler_output_file(const char * file_name);

static void read_stream_content(FILE * stream, size_t (*callback) (const void * ptr, size_t len));

static execution_handler_t profiling_handler = {
    profiling_handler_init,
    profiling_handler_shutdown
};

static execution_handler_t http_ui_handler = {
    http_ui_handler_init,
    http_ui_handler_shutdown
};

static zend_function_entry spx_functions[] = {
    /* empty */
    {NULL, NULL, NULL, 0, 0}
};

zend_module_entry spx_module_entry = {
    STANDARD_MODULE_HEADER,
    PHP_SPX_EXTNAME,
    spx_functions,
    PHP_MINIT(spx),
    PHP_MSHUTDOWN(spx),
    PHP_RINIT(spx),
    PHP_RSHUTDOWN(spx),
    PHP_MINFO(spx),
    PHP_SPX_VERSION,
    PHP_MODULE_GLOBALS(spx),
    NULL,
    NULL,
    NULL,
    STANDARD_MODULE_PROPERTIES_EX
};

#ifdef COMPILE_DL_SPX
ZEND_GET_MODULE(spx)
#endif

static PHP_MINIT_FUNCTION(spx)
{
#ifdef ZTS
    spx_php_global_hooks_set();
#endif
    REGISTER_INI_ENTRIES();
    return SUCCESS;
}

static PHP_MSHUTDOWN_FUNCTION(spx)
{
#ifdef ZTS
    spx_php_global_hooks_unset();
#endif
    UNREGISTER_INI_ENTRIES();
    return SUCCESS;
}

static PHP_RINIT_FUNCTION(spx)
{
#ifdef ZTS
    spx_php_global_hooks_disable();
#endif
    context.execution_handler = NULL;
    context.cli_sapi = spx_php_is_cli_sapi();
    if (context.cli_sapi) {
        spx_config_get(&context.config, context.cli_sapi, SPX_CONFIG_SOURCE_ENV, -1);
    } else {
        spx_config_get(
            &context.config,
            context.cli_sapi,
            SPX_CONFIG_SOURCE_HTTP_COOKIE,
            SPX_CONFIG_SOURCE_HTTP_HEADER,
            SPX_CONFIG_SOURCE_HTTP_QUERY_STRING,
            -1
        );
    }
    int web_ui_url = 0;
    if (!context.cli_sapi) {
        const char * request_uri = spx_php_global_array_get("_SERVER", "REQUEST_URI");
        if (request_uri) {
            web_ui_url = spx_utils_str_starts_with(request_uri, SPX_G(http_ui_uri_prefix));
        }
    }
    if (!web_ui_url && !context.config.enabled) {
        return SUCCESS;
    }
    if (!check_access()) {
        return SUCCESS;
    }
    if (web_ui_url) {
        context.execution_handler = &http_ui_handler;
    } else if (context.config.enabled) {
        context.execution_handler = &profiling_handler;
    }
    if (context.execution_handler) {
        context.execution_handler->init();
    }
    return SUCCESS;
}

static PHP_RSHUTDOWN_FUNCTION(spx)
{
    if (context.execution_handler) {
        context.execution_handler->shutdown();
    }
#ifdef ZTS
    spx_php_global_hooks_disable();
#endif
    return SUCCESS;
}

static PHP_MINFO_FUNCTION(spx)
{
    php_info_print_table_start();
    php_info_print_table_row(2, PHP_SPX_EXTNAME " Support", "enabled");
    php_info_print_table_row(2, PHP_SPX_EXTNAME " Version", PHP_SPX_VERSION);
    php_info_print_table_end();
    DISPLAY_INI_ENTRIES();
}

static int check_access(void)
{
    TSRMLS_FETCH();
    if (context.cli_sapi) {
        /* CLI SAPI -> granted */
        return 1;
    }
    if (!SPX_G(http_enabled)) {
        /* HTTP profiling explicitly turned off -> not granted */
        return 0;
    }
    if (!SPX_G(http_key) || SPX_G(http_key)[0] == 0) {
        /* empty spx.http_key (server config) -> not granted */
        spx_php_log_notice("access not granted: http_key is empty");
        return 0;
    }
    if (!context.config.key || context.config.key[0] == 0) {
        /* empty SPX_KEY (client config) -> not granted */
        spx_php_log_notice("access not granted: client key is empty");
        return 0;
    }
    if (0 != strcmp(SPX_G(http_key), context.config.key)) {
        /* server / client key mismatch -> not granted */
        spx_php_log_notice(
            "access not granted: server (\"%s\") & client (\"%s\") key mismatch",
            SPX_G(http_key),
            context.config.key
        );
        return 0;
    }
    if (!SPX_G(http_ip_var) || SPX_G(http_ip_var)[0] == 0) {
        /* empty client ip server var name -> not granted */
        spx_php_log_notice("access not granted: http_ip_var is empty");
        return 0;
    }
    const char * ip_str = spx_php_global_array_get("_SERVER", SPX_G(http_ip_var));
    if (!ip_str || ip_str[0] == 0) {
        /* empty client ip -> not granted */
        spx_php_log_notice(
            "access not granted: $_SERVER[\"%s\"] is empty",
            SPX_G(http_ip_var)
        );
        return 0;
    }
    const char * authorized_ips_str = SPX_G(http_ip_whitelist);
    if (!authorized_ips_str || authorized_ips_str[0] == 0) {
        /* empty ip white list -> not granted */
        spx_php_log_notice("access not granted: IP white list is empty");
        return 0;
    }
    SPX_UTILS_TOKENIZE_STRING(authorized_ips_str, ',', authorized_ip_str, 32, {
        if (0 == strcmp(ip_str, authorized_ip_str)) {
            /* ip authorized (OK, as well as all previous checks) -> granted */
            spx_php_log_notice(
                "access granted: \"%s\" IP with \"%s\" key",
                ip_str,
                context.config.key
            );
            return 1;
        }
    });
    spx_php_log_notice(
        "access not granted: \"%s\" IP is not in white list (\"%s\")",
        ip_str,
        authorized_ips_str
    );
    /* no matching ip in white list -> not granted */
    return 0;
}

static void profiling_handler_init(void)
{
    TSRMLS_FETCH();
#ifdef USE_SIGNAL
    context.profiling_handler.sig_handling.handler_set = 0;
    context.profiling_handler.sig_handling.probing = 0;
    context.profiling_handler.sig_handling.stop = 0;
    context.profiling_handler.sig_handling.handler_called = 0;
    context.profiling_handler.sig_handling.signo = -1;
#endif
    profiling_handler_ex_set_context();
    context.profiling_handler.reporter = NULL;
    context.profiling_handler.profiler = NULL;
    switch (context.config.report) {
        default:
        case SPX_CONFIG_REPORT_FULL:
            context.profiling_handler.reporter = spx_reporter_full_create(SPX_G(data_dir));
            break;
        case SPX_CONFIG_REPORT_FLAT_PROFILE:
            context.profiling_handler.reporter = spx_reporter_fp_create(
                context.config.fp_focus,
                context.config.fp_inc,
                context.config.fp_rel,
                context.config.fp_limit,
                context.config.fp_live
            );
            break;
        case SPX_CONFIG_REPORT_TRACE:
            context.profiling_handler.reporter = spx_reporter_trace_create(
                context.config.trace_file,
                context.config.trace_safe
            );
            break;
    }
    if (!context.profiling_handler.reporter) {
        goto error;
    }
    context.profiling_handler.profiler = spx_profiler_tracer_create(
        context.config.max_depth,
        context.config.enabled_metrics,
        context.profiling_handler.reporter
    );
    if (!context.profiling_handler.profiler) {
        goto error;
    }
    if (context.config.sampling_period > 0) {
        spx_profiler_t * sampling_profiler = spx_profiler_sampler_create(
            context.profiling_handler.profiler,
            context.config.sampling_period
        );
        if (!sampling_profiler) {
            goto error;
        }
        context.profiling_handler.profiler = sampling_profiler;
    }
    return;
error:
    profiling_handler_shutdown();
}

static void profiling_handler_shutdown(void)
{
    spx_php_execution_finalize();
    if (context.profiling_handler.profiler) {
        context.profiling_handler.profiler->finalize(context.profiling_handler.profiler);
        context.profiling_handler.profiler->destroy(context.profiling_handler.profiler);
        context.profiling_handler.profiler = NULL;
    }
    if (context.profiling_handler.reporter) {
        spx_profiler_reporter_destroy(context.profiling_handler.reporter);
        context.profiling_handler.reporter = NULL;
    }
    profiling_handler_ex_unset_context();
}

static void profiling_handler_ex_set_context(void)
{
#ifndef ZTS
    spx_php_global_hooks_set();
#endif
    spx_php_execution_init();
    spx_php_execution_hook(
        profiling_handler_ex_hook_before,
        profiling_handler_ex_hook_after,
        0
    );
    if (context.config.builtins) {
        spx_php_execution_hook(
            profiling_handler_ex_hook_before,
            profiling_handler_ex_hook_after,
            1
        );
    }
    spx_resource_stats_init();
#ifdef USE_SIGNAL
    if (context.cli_sapi) {
        profiling_handler_sig_set_handler();
    }
#endif
}

static void profiling_handler_ex_unset_context(void)
{
#ifdef USE_SIGNAL
    if (context.cli_sapi) {
        profiling_handler_sig_unset_handler();
    }
#endif
    spx_resource_stats_shutdown();
    spx_php_execution_shutdown();
#ifndef ZTS
    spx_php_global_hooks_unset();
#endif
}

static void profiling_handler_ex_hook_before(void)
{
#ifdef USE_SIGNAL
    context.profiling_handler.sig_handling.probing = 1;
#endif
    spx_php_function_t function;
    spx_php_current_function(&function);
    context.profiling_handler.profiler->call_start(context.profiling_handler.profiler, &function);
#ifdef USE_SIGNAL
    context.profiling_handler.sig_handling.probing = 0;
    if (context.profiling_handler.sig_handling.stop) {
        profiling_handler_sig_terminate();
    }
#endif
}

static void profiling_handler_ex_hook_after(void)
{
#ifdef USE_SIGNAL
    context.profiling_handler.sig_handling.probing = 1;
#endif
    context.profiling_handler.profiler->call_end(context.profiling_handler.profiler);
#ifdef USE_SIGNAL
    context.profiling_handler.sig_handling.probing = 0;
    if (context.profiling_handler.sig_handling.stop) {
        profiling_handler_sig_terminate();
    }
#endif
}

#ifdef USE_SIGNAL
static void profiling_handler_sig_terminate(void)
{
    profiling_handler_shutdown();
    _exit(
        context.profiling_handler.sig_handling.signo < 0
            ? EXIT_SUCCESS
            : 128 + context.profiling_handler.sig_handling.signo
    );
}

static void profiling_handler_sig_handler(int signo)
{
    context.profiling_handler.sig_handling.handler_called++;
    if (context.profiling_handler.sig_handling.handler_called > 1) {
        return;
    }
    context.profiling_handler.sig_handling.signo = signo;
    if (context.profiling_handler.sig_handling.probing) {
        context.profiling_handler.sig_handling.stop = 1;
        return;
    }
    profiling_handler_sig_terminate();
}

static void profiling_handler_sig_set_handler(void)
{
    struct sigaction act;
    act.sa_handler = profiling_handler_sig_handler;
    act.sa_flags = 0;
    sigaction(SIGINT, &act, &context.profiling_handler.sig_handling.prev_handler.sigint);
    sigaction(SIGTERM, &act, &context.profiling_handler.sig_handling.prev_handler.sigterm);
    context.profiling_handler.sig_handling.handler_set = 1;
}

static void profiling_handler_sig_unset_handler(void)
{
    if (!context.profiling_handler.sig_handling.handler_set) {
        return;
    }
    sigaction(SIGINT, &context.profiling_handler.sig_handling.prev_handler.sigint, NULL);
    sigaction(SIGTERM, &context.profiling_handler.sig_handling.prev_handler.sigterm, NULL);
    context.profiling_handler.sig_handling.handler_set = 0;
}
#endif /* defined(USE_SIGNAL) */

static void http_ui_handler_init(void)
{
#ifndef ZTS
    spx_php_global_hooks_set();
#endif
    spx_php_execution_init();
    spx_php_execution_disable();
}

static void http_ui_handler_shutdown(void)
{
    TSRMLS_FETCH();
    const char * request_uri = spx_php_global_array_get("_SERVER", "REQUEST_URI");
    if (!request_uri) {
        goto error_404;
    }
    const char * prefix_pos = strstr(request_uri, SPX_G(http_ui_uri_prefix));
    if (prefix_pos != request_uri) {
        goto error_404;
    }
    char relative_path[512];
    strncpy(relative_path, request_uri + strlen(SPX_G(http_ui_uri_prefix)), sizeof(relative_path));
    char * query_string = strchr(relative_path, '?');
    if (relative_path[0] != '/') {
        spx_php_output_add_header_line("HTTP/1.1 301 Moved Permanently");
        spx_php_output_add_header_linef(
            "Location: %s/index.html%s",
            SPX_G(http_ui_uri_prefix),
            query_string ? query_string : ""
        );
        spx_php_output_send_headers();
        goto finish;
    }
    if (query_string) {
        *query_string = 0;
    }
    if (0 == strcmp(relative_path, "/")) {
        strncpy(relative_path, "/index.html", sizeof(relative_path));
    }
    if (0 == http_ui_handler_data(SPX_G(data_dir), relative_path)) {
        goto finish;
    }
    char local_file_name[512];
    snprintf(
        local_file_name,
        sizeof(local_file_name),
        "%s%s",
        SPX_G(http_ui_assets_dir),
        relative_path
    );
    if (0 == http_ui_handler_output_file(local_file_name)) {
        goto finish;
    }
error_404:
    spx_php_output_add_header_line("HTTP/1.1 404 Not Found");
    spx_php_output_add_header_line("Content-Type: text/plain");
    spx_php_output_send_headers();
    spx_php_output_direct_print("File not found.\n");
finish:
    spx_php_execution_shutdown();
#ifndef ZTS
    spx_php_global_hooks_unset();
#endif
}

static int http_ui_handler_data(const char * data_dir, const char *relative_path)
{
    if (0 == strcmp(relative_path, "/data/metrics")) {
        spx_php_output_add_header_line("HTTP/1.1 200 OK");
        spx_php_output_add_header_line("Content-Type: application/json");
        spx_php_output_send_headers();
        spx_php_output_direct_print("{\"results\": [\n");
        SPX_METRIC_FOREACH(i, {
            if (i > 0) {
                spx_php_output_direct_print(",");
            }
            spx_php_output_direct_print("{");
            spx_php_output_direct_printf("\"key\": \"%s\",", spx_metrics_info[i].key);
            spx_php_output_direct_printf("\"short_name\": \"%s\",", spx_metrics_info[i].short_name);
            spx_php_output_direct_printf("\"name\": \"%s\",", spx_metrics_info[i].name);
            spx_php_output_direct_print("\"type\": \"");
            switch (spx_metrics_info[i].type) {
                case SPX_FMT_TIME:
                    spx_php_output_direct_print("time");
                    break;
                case SPX_FMT_MEMORY:
                    spx_php_output_direct_print("memory");
                    break;
                case SPX_FMT_QUANTITY:
                    spx_php_output_direct_print("quantity");
                    break;
                default:
                    ;
            }
            spx_php_output_direct_print("\",");
            spx_php_output_direct_printf("\"releasable\": %d", spx_metrics_info[i].releasable);
            spx_php_output_direct_print("}\n");
        });
        spx_php_output_direct_print("]}\n");
        return 0;
    }
    if (0 == strcmp(relative_path, "/data/reports/metadata")) {
        spx_php_output_add_header_line("HTTP/1.1 200 OK");
        spx_php_output_add_header_line("Content-Type: application/json");
        spx_php_output_send_headers();
        spx_php_output_direct_print("{\"results\": [\n");
        spx_reporter_full_metadata_list_files(
            data_dir,
            http_ui_handler_list_metadata_files_callback
        );
        spx_php_output_direct_print("]}\n");
        return 0;
    }
    const char * get_report_metadata_uri = "/data/reports/metadata/";
    if (spx_utils_str_starts_with(relative_path, get_report_metadata_uri)) {
        char file_name[512];
        spx_reporter_full_metadata_get_file_name(
            data_dir,
            relative_path + strlen(get_report_metadata_uri),
            file_name,
            sizeof(file_name)
        );
        return http_ui_handler_output_file(file_name);
    }
    const char * get_report_uri = "/data/reports/get/";
    if (spx_utils_str_starts_with(relative_path, get_report_uri)) {
        char file_name[512];
        spx_reporter_full_get_file_name(
            data_dir,
            relative_path + strlen(get_report_uri),
            file_name,
            sizeof(file_name)
        );
        return http_ui_handler_output_file(file_name);
    }
    return -1;
}

static void http_ui_handler_list_metadata_files_callback(const char * file_name, size_t count)
{
    if (count > 0) {
        spx_php_output_direct_print(",");
    }
    FILE * fp = fopen(file_name, "r");
    if (!fp) {
        return;
    }
    read_stream_content(fp, spx_php_output_direct_write);
    fclose(fp);
}

static int http_ui_handler_output_file(const char * file_name)
{
    FILE * fp = fopen(file_name, "rb");
    if (!fp) {
        return -1;
    }
    char suffix[32];
    int suffix_offset = strlen(file_name) - (sizeof(suffix) - 1);
    strncpy(
        suffix,
        file_name + (suffix_offset < 0 ? 0 : suffix_offset),
        sizeof(suffix)
    );
    suffix[sizeof(suffix) - 1] = 0;
    const int compressed = spx_utils_str_ends_with(suffix, ".gz");
    if (compressed) {
        *strrchr(suffix, '.') = 0;
    }
    const char * content_type = "application/octet-stream";
    if (spx_utils_str_ends_with(suffix, ".html")) {
        content_type = "text/html; charset=utf-8";
    } else if (spx_utils_str_ends_with(suffix, ".css")) {
        content_type = "text/css";
    } else if (spx_utils_str_ends_with(suffix, ".js")) {
        content_type = "application/javascript";
    } else if (spx_utils_str_ends_with(suffix, ".json")) {
        content_type = "application/json";
    }
    spx_php_output_add_header_line("HTTP/1.1 200 OK");
    spx_php_output_add_header_linef("Content-Type: %s", content_type);
    if (compressed) {
        spx_php_output_add_header_line("Content-Encoding: gzip");
    }
    fseek(fp, 0L, SEEK_END);
    spx_php_output_add_header_linef("Content-Length: %ld", ftell(fp));
    rewind(fp);
    spx_php_output_send_headers();
    read_stream_content(fp, spx_php_output_direct_write);
    fclose(fp);
    return 0;
}

static void read_stream_content(FILE * stream, size_t (*callback) (const void * ptr, size_t len))
{
    char buf[8 * 1024];
    while (1) {
        size_t read = fread(buf, 1, sizeof(buf), stream);
        callback(buf, read);
        if (read < sizeof(buf)) {
            break;
        }
    }
}
Study of kinetics in aqueous solution of phenol in the presence of micelles by ultrasonic methods Ultrasonic absorption and velocity measurements were carried out for an aqueous solution of phenol in the presence of sodium dodecyl sulfate (SDS) as functions of both concentration and frequency at 25 °C. A single relaxational excess absorption was found in the frequency range 6.5-220 MHz. From the concentration dependences of the relaxation frequency and of the amplitude of the relaxational absorption, the absorption was attributed to a perturbation of an equilibrium associated with the interaction between phenol and water molecules.
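For reference, a "single relaxational excess absorption" of this kind is conventionally fitted with the standard Debye-type single-relaxation equation (a textbook form, not quoted from the paper):

\frac{\alpha}{f^{2}} = \frac{A}{1 + (f/f_{r})^{2}} + B

where \alpha is the absorption coefficient, f the measurement frequency, f_{r} the relaxation frequency, A the relaxation amplitude, and B the background (high-frequency) absorption; f_{r} and A are the two quantities whose concentration dependences the abstract refers to.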
Load-Prediction Parallelization for Computer Simulation of Electrocardiogram Based on GPU This paper introduces a GPU-based parallel algorithm for computer simulation of the electrocardiogram (ECG) using a 3-dimensional (3D) whole-heart model. The heart model comprises approximately 50,000 discrete elements (cell models) inside a torso model represented by 344 nodal points and 684 triangular meshes. Since the computational burden of simulating ECGs is considerable, we employ a GPU to accelerate the calculation. However, GPUs use a SIMD architecture that is poorly suited to branching, so program branches limit the GPU's effective computing throughput. To address this problem, we present a GPU-based algorithm that eliminates branches in the computation and optimizes the calculation of electric potentials through load-prediction. The new parallel algorithm speeds up the ECG calculation by a factor of 6.18 compared with the former algorithm. This study demonstrates an effective GPU-based algorithm for parallel computing in biomedical simulation studies.
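The abstract does not give the paper's load-prediction code, but the branch-elimination idea it builds on can be sketched in a few lines: replace a per-element if/else with arithmetic predication so that every SIMD lane executes the same instruction stream. The function and variable names below are illustrative only, not taken from the paper.

#include <cstddef>
#include <vector>

// Branchy version: SIMD/GPU lanes diverge whenever 'gate' differs per element.
void update_branchy(std::vector<double>& v, const std::vector<int>& gate,
                    double fast_rate, double slow_rate)
{
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (gate[i]) {
            v[i] += fast_rate;
        } else {
            v[i] += slow_rate;
        }
    }
}

// Predicated version: the same result computed without a branch,
// so all lanes execute an identical instruction stream.
void update_predicated(std::vector<double>& v, const std::vector<int>& gate,
                       double fast_rate, double slow_rate)
{
    for (std::size_t i = 0; i < v.size(); ++i) {
        const double p = static_cast<double>(gate[i] != 0); // 0.0 or 1.0
        v[i] += p * fast_rate + (1.0 - p) * slow_rate;
    }
}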
Marine radar derived current vector mapping at a planned commercial tidal stream turbine array in the Pentland Firth, U.K. A marine radar was deployed on a remote clifftop overlooking a 4.8 km radius area of the Inner Sound of Stroma in the Pentland Firth for three months during spring 2013. The area viewed by the radar includes The Crown Estate lease areas for MeyGen Ltd (Inner Sound of Stroma) and Scottish Power Renewables (Ness of Duncansby), although the data analysis has focussed solely on the MeyGen area. Data were post-processed to extract current vector maps based on determining the Doppler shift imparted to sea-surface waves by the tidal current. Comparisons between current time series from two Acoustic Doppler Current Profiler (ADCP) surveys and the radar-derived data are presented and show excellent correlation. The quality of the data has enabled tidal analyses to be performed and spatial variations in tidal current constituents to be mapped.
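For context, radar-derived current mapping of this kind typically rests on the Doppler-shifted linear dispersion relation for surface gravity waves (a standard result, not quoted from the paper):

\omega = \sqrt{g k \tanh(k h)} + \mathbf{k} \cdot \mathbf{U}

where \omega is the observed wave frequency, \mathbf{k} the wavenumber vector (magnitude k), h the water depth, g gravitational acceleration, and \mathbf{U} the near-surface current; fitting the \mathbf{k} \cdot \mathbf{U} term across many wave components in the radar image sequence yields the current vector at each grid cell.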