Intravenous lacosamide and phenytoin for the treatment of acute exacerbations of trigeminal neuralgia: A retrospective analysis of 144 cases

Abstract

Background: Scant evidence is available on the use of intravenous pain treatment in acute exacerbations of trigeminal neuralgia. The aim of this descriptive study was to evaluate the effectiveness and safety of intravenous lacosamide and phenytoin in the treatment of acute trigeminal neuralgia pain.

Methods: We reviewed patients who attended the emergency department of a tertiary hospital between 2012 and 2020 for exacerbations of trigeminal neuralgia pain and were treated with either intravenous phenytoin or lacosamide for the first time. Primary endpoints were pain relief and adverse effects during the hospital stay. A comparative analysis between the two treatment groups was performed.

Results: We studied 144 episodes in 121 patients (median age 61 years, 66.1% women). Trigeminal neuralgia etiology was secondary in 9.9%. Pain relief was observed in 77.8% of the 63 patients receiving lacosamide infusions, and adverse effects in 1.6%. Pain relief was observed in 72.8% of the 81 phenytoin infusions, and adverse effects in 12.3%, all mild. No difference was observed in pain relief between groups, but the proportion of adverse effects differed significantly (p = 0.023). Statistically significant differences were also detected in readmissions within six months, time to readmission, and pain relief status at the first follow-up visit.

Conclusion: Intravenous lacosamide and phenytoin can be effective and safe treatments for acute pain in trigeminal neuralgia. According to our series, lacosamide might be better tolerated than phenytoin and lead to fewer readmissions and more sustained pain relief.

Introduction

Trigeminal neuralgia (TN) is defined by the third edition of the International Classification of Headache Disorders (ICHD-3) (1) and the first edition of the International Classification of Orofacial Pain (ICOP) (2) as paroxysmal, short-lasting, unilateral, electric shock-like pain, triggered by innocuous stimuli and limited to one or more divisions of the trigeminal nerve. It is categorized as classical TN, in which neurovascular compression is confirmed by magnetic resonance imaging (MRI); secondary TN, caused by multiple sclerosis, space-occupying lesions or other causes; or idiopathic TN, when no cause is identified (1,3). Despite the number of drugs proposed to treat TN, mainly sodium channel blockers, only carbamazepine and oxcarbazepine offer enough evidence to be considered first-line treatments (4). Nevertheless, pain is refractory in a large number of cases and treatment-related adverse events are common (5). Pain relapses in TN are a frequent reason for consulting the emergency department, and to date evidence on effective drugs in the acute phase is scant, especially for rapid-acting or intravenous compounds, which are preferred when the oral route is not tolerated due to intense pain (6). Only phenytoin and lidocaine are recommended as intravenous drugs in acute exacerbations of pain, but the quality of evidence is low (7,8). To date, only one randomized, double-blind, placebo-controlled trial using intravenous medication in TN exacerbations has been published, supporting the use of intravenous lidocaine for the reduction of pain (9). Other lidocaine preparations have also been studied, including nasal spray (10), eye drops (11), nerve blocks (12), and application to trigger points in the oral mucosa (13), all of which showed low evidence of effectiveness.
Lately, there has been growing interest in new drugs that act by blocking voltage-gated sodium channels (14,15), similar to carbamazepine or phenytoin, but with novel mechanisms of action that may decrease the very common side effects of these drugs (16,17). For instance, lacosamide blocks voltage-gated sodium channels in a slow-inactivating manner (18). This compound, initially designed as an antiepileptic drug, has recently shown effectiveness in the treatment of neuropathic pain (19,20), and its effectiveness as an adjunctive treatment in TN has been suggested in small case series (21-23). Its efficacy as an intravenous treatment in pain crises has scarcely been studied, but a few cases have been described in which intravenous administration improved pain in the acute phase (24). Considering the small but mounting evidence for lacosamide in acute TN, our aim was to analyze its use in real-life situations in the emergency room and compare it with the use of phenytoin. For this purpose, we retrospectively evaluated the effectiveness and safety of intravenous lacosamide and intravenous phenytoin in a series of patients attending a tertiary hospital due to acute exacerbations of TN pain.

Methods

We retrospectively reviewed consecutive patients who attended the emergency department of a tertiary reference hospital (Bellvitge University Hospital, Barcelona, Spain) between 2012 and 2020 and who were coded in the system as trigeminal neuralgia. We reviewed clinical records, nurse medication sheets, subsequent emergency admissions and consecutive follow-up visits for at least six months for each patient. Two neurologists trained in headache and facial pain reviewed the records, splitting the dataset review between them (AM-V and ST). Patients were included if they met diagnostic criteria for TN according to the International Classification of Headache Disorders, third edition (1), reported facial pain exacerbation as the reason for admission to the emergency room, and received intravenous lacosamide or phenytoin for the first time during their visit to the emergency department. Patients were excluded if they had already received that specific intravenous medication before, or if not all of the required variables had been registered, including a minimum follow-up of six months after discharge. Variables collected for each patient were: age, gender, TN etiology, time since diagnosis, ongoing treatment, time of admission to the emergency department, treatment choice (lacosamide or phenytoin), dose and start time of infusion, use of adjuvant medication, pain relief status (defined as no pain reported by the patient, absence of further rescue medication after the infusion, and hospital discharge less than 10 hours after receiving treatment), adverse effects, time to discharge, need for hospital admission, readmissions in the next six months and time to readmission if the treatment was prescribed at discharge, pain relief status at the next follow-up visit if the medication was prescribed at discharge, and time until surgical treatment if ever needed. Primary endpoints were pain relief (see definition above) and adverse effects in each treatment group. Secondary endpoints were time to discharge after the infusion, need for emergency readmission and time to readmission if the treatment was prescribed at discharge, and improvement of pain control at the next outpatient visit if the treatment was prescribed at discharge.
Additionally, a comparative analysis between the two treatment groups was performed for demographic and clinical variables and for the primary and secondary endpoints. Lacosamide or phenytoin treatment and doses were determined by the attending neurologist, based on clinical criteria and patient comorbidities. Prior systemic and neurological examinations and electrocardiograms were obtained for all patients. Treatment was administered by infusion pump, with continuous electrocardiographic monitoring or serial electrocardiograms. Infusion time for both drugs was between 15 and 40 minutes depending on dose, as per the hospital emergency protocol. The study was approved by the Ethical Committee of the Hospital Universitari de Bellvitge with reference EOM028/21. The confidential information of the patients was handled in accordance with Spanish regulations.

Statistics

Primary and secondary endpoints were assessed using a descriptive analysis. Categorical variables were presented as absolute frequencies. Demographic and clinical variables were presented as median and range or mean and standard deviation according to the distribution. For the secondary comparative analysis, chi-squared tests and Student's t-tests were used to describe clinical and sociodemographic differences between groups when the distribution was normal. Otherwise, non-parametric tests were used (Fisher's exact test). Kaplan-Meier survival analyses were performed to study the time to readmission in each group when treatment was prescribed at discharge. All tests used 95% confidence intervals and a 5% significance level. Statistical analyses were performed in SPSS v.22 (SPSS Inc, Chicago, USA).

Results

An initial registry of 896 episodes in a total of 599 patients treated in the emergency department was obtained. After a review of the individual cases, 144 episodes in a total of 121 patients were finally included (see Figure 1). Each episode was a single lacosamide or phenytoin first-time infusion; as such, two different episodes were included for 23 patients: in one episode they received first-time intravenous phenytoin, and in the other they received first-time intravenous lacosamide. The median age of the patients was 61 years (range 26-91), and 66.1% were women. TN etiology was secondary in 9.9% (5 multiple sclerosis, 5 tumors and 2 post-surgical). Diagnosis of TN had already been established in 80.2% of the patients at the time of attendance, and the median time since diagnosis was four years. Of the patients without a previous TN diagnosis, 83.3% underwent a subsequent MRI, revealing secondary etiologies in 16.7% of them. Patients were stratified into two groups based on whether they received lacosamide or phenytoin. Demographic and clinical variables for each group are shown in Table 1. Mean infusion dose was 180 mg (range 50-400) for lacosamide and 757 mg (range 100-1500) for phenytoin. All patients had a follow-up of six months or more, except for one patient who was not subsequently evaluated due to death (caused by a massive middle cerebral artery aneurysmal hemorrhage, and therefore not attributable to TN or the associated medication). Pain relief was achieved in 49 of the 63 patients who received lacosamide (77.8%) and in 59 of the 81 patients who received phenytoin (72.8%). Immediate adverse effects were detected in one patient in the lacosamide group (1.6%) and 10 patients in the phenytoin group (12.3%). Secondary endpoint results are shown in Table 2. Reported adverse effects were mild in all cases.
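As a hedged illustration of the between-group comparison described in the Statistics section, the sketch below recomputes the adverse-effect contrast from the counts reported above (1 of 63 lacosamide infusions vs 10 of 81 phenytoin infusions). It assumes Python with scipy rather than the SPSS v.22 software actually used, and it is not the authors' analysis code.

# Hedged sketch (not from the study): adverse-effect proportions by treatment group.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: lacosamide, phenytoin; columns: adverse effect yes / no.
table = [[1, 62],    # 1 of 63 lacosamide infusions with adverse effects
         [10, 71]]   # 10 of 81 phenytoin infusions with adverse effects

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

# With small expected counts, Fisher's exact test is the safer choice, mirroring
# the paper's use of non-parametric tests when normality assumptions fail.
print(f"chi-squared p = {p_chi2:.3f}; Fisher exact p = {p_fisher:.3f}; OR = {odds_ratio:.2f}")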
In the lacosamide group, the single adverse effect reported was sleepiness, in one patient. In the phenytoin group, 10 patients reported the following adverse effects: dizziness (5), infusion pain (2), cutaneous rash (2), paresthesia (1) and itchiness (1); 6 patients presented more than one symptom. No difference in dosage was detected in patients who presented adverse events (850 mg for phenytoin and 200 mg for the single lacosamide infusion) when compared to those who did not (744 mg for phenytoin and 179.8 mg for lacosamide). Similarly, the proportion of patients who received adjuvant medication was not different in the subgroup with adverse effects (72.7%) compared to those without (65.4%) (p = 0.242), nor was the proportion of ongoing treatment with carbamazepine or derivatives (62.4% for patients without adverse events vs 54.5% for patients with adverse events, p = 0.266). A secondary analysis was performed to compare the two treatment groups. Demographic variables and clinical characteristics of TN did not differ between groups. Pain relief was similar, but the proportion of adverse effects was significantly higher in the phenytoin group (1 of 63 patients with lacosamide vs 10 of 81 patients with phenytoin, p = 0.023). Statistically significant differences were also found when comparing the secondary endpoints of readmissions in the following six months if the treatment was prescribed at discharge (25% for lacosamide vs 68.4% for phenytoin, p = 0.002), time to readmission if the treatment was prescribed at discharge (Figure 2), and pain relief status at the first follow-up visit.

Discussion

Treatment of acute pain exacerbations in TN is still controversial. Although many antiepileptic or anesthetic drugs have been proposed, evidence-based studies are scarce, and many neurologists prescribe treatments based mostly on their own clinical experience (7). In this study, a retrospective analysis shows how lacosamide and phenytoin administered intravenously as rescue treatment can be effective and safe options for controlling pain. Phenytoin, or its prodrug fosphenytoin, has been proposed as a treatment for neuropathic pain for years, and its intravenous preparation has aroused special interest (25). Nonetheless, only small case series have been published in patients with TN (26-28). A retrospective case series has been published recently, in which 65 intravenous phenytoin infusions in 39 patients were effective as acute rescue treatment in 89.2% of cases, with mild adverse effects reported in 15.4% (29). These results are consistent with our study. As for lacosamide, its potential effect in alleviating neuropathic pain, its intravenous preparation and its generally good tolerance have made it a modern and attractive option for the treatment of TN pain. However, few studies have evaluated its real effectiveness and safety: only small case series on oral lacosamide as adjunctive treatment (21-23) and one case report on intravenous administration (24) have been published. Our study suggests that both lacosamide and phenytoin are useful options for acute pain exacerbations in TN, with similar effectiveness rates of around 75%. It is important to highlight that most patients in both groups were already receiving regular treatment with carbamazepine or one of its derivatives, so our series comprised cases of refractory TN. This reinforces the potential role of these treatments as rescue medications in refractory cases when the standard oral treatments fail to work properly.
We should point out that more patients in the lacosamide group were receiving ongoing treatment with carbamazepine or derivatives than in the phenytoin group (70% vs 55%, a difference that was not statistically significant), which might have conferred better outcomes for the follow-up variables in the lacosamide group. It should also be noted that 23 patients received both drugs at two different emergency admissions. Specifically, 19 of these 23 patients first received phenytoin and 4 first received lacosamide, with pain relief proportions of 78.9% and 100%, respectively. Of those who did not respond to phenytoin, 50% responded to a lacosamide infusion at their second admission. Curiously, among patients who responded to the first treatment, only around half responded when the drug was changed at their second admission (50% pain relief with lacosamide after phenytoin and 40% pain relief with phenytoin after lacosamide). This led us to suggest not changing the rescue medication if it was effective at its first infusion. Adverse events in our series were relatively rare, and those reported for phenytoin were mild and in line with previous series (29). No adverse effects other than sleepiness in one patient were reported for lacosamide. However, it must not be forgotten that potentially severe complications have been described, especially atrioventricular block (30), the risk of which may be increased when these drugs are combined with other sodium channel blockers. A potential bias in our study is that only immediate adverse events were registered, so delayed reactions occurring after patient discharge may have been missed and the proportion of adverse effects underestimated. In fact, previous incidence studies suggest that approximately 25% of patients treated with lacosamide report dizziness, though its time of onset is usually after three months of initiation (31). Another source of bias may be dosing. There is no established dose for these drugs in the treatment of TN, so large differences were recorded between infusions in each group, making it difficult to stratify cases by dose for a controlled analysis (see doses for each group in Table 1). However, in our sample, higher doses were not related to the presence of adverse events, as mean treatment doses were similar when comparing patients who reported adverse reactions and those who did not. Also, no difference was found in the proportion of adjuvant medication received or in the proportion of ongoing treatment with carbamazepine or derivatives between these groups, which does not suggest a synergistic drug effect that could explain the adverse events. Other limitations of this study are intrinsic to a retrospective analysis. We attempted to control for the possibility of diagnostic bias by reviewing each clinical record and checking that the case met the diagnostic criteria. Furthermore, most patients had received a previous diagnosis of TN before their emergency admission. Similarly, time to discharge is associated with multiple confounders, such as workload in the emergency department. We tried to control for this by measuring time from drug infusion and not from arrival, assuming that unmeasurable confounders would be similarly present in both groups. It is also difficult to quantify pain in a retrospective manner, as no specific pain scales or questionnaires were used.
To minimize this bias, we defined pain relief based on the available objective data, including hospital discharge within less than 10 hours, time to discharge after infusion, and need for further treatment. Finally, there was a difference between treatments in prescription at discharge. Lacosamide was prescribed in 57% of cases, while phenytoin was prescribed in only 24%. This constitutes a notable size difference between groups when analyzing readmissions in patients who were prescribed treatment (36 for lacosamide vs 19 for phenytoin); however, this difference does not affect our primary endpoints, as these relate only to the emergency episode and not to the follow-up. In spite of this, a survival analysis was performed in both groups, and the proportion of patients with no readmissions at the six-month follow-up was significantly higher for lacosamide. This is probably due to the better long-term tolerance of lacosamide fostering better adherence. To summarize, we provide evidence of the potential role of lacosamide and phenytoin in trigeminal neuralgia exacerbations. Our results pave the way for further prospective studies or randomized controlled trials, which are needed to confirm these findings and our main hypothesis. In this line, and according to our results, a therapeutic proposal could begin by applying 150 to 200 mg of lacosamide or 750 to 1000 mg of phenytoin in a 30- to 40-minute infusion, depending on patient weight and comorbidities, with continuous cardiac monitoring and strict surveillance of adverse effects and pain control. Based on our results, lacosamide should be preferred over phenytoin due to its better adverse-effect profile.

Conclusion

Intravenous lacosamide and phenytoin can be effective, safe treatments for acute pain in trigeminal neuralgia. According to our series, lacosamide might be better tolerated than phenytoin.

Article highlights

• Intravenous lacosamide and phenytoin can be effective and safe treatments for acute pain in trigeminal neuralgia.
• Lacosamide can be an effective option as intravenous treatment of pain in trigeminal neuralgia, with a low proportion of adverse effects.

Ethics approval and consent to participate

The study was approved by the Ethical Committee of the Hospital Universitari de Bellvitge with reference EOM028/21. The confidential information of the patients was handled in accordance with Spanish regulations.

Availability of data and materials

The datasets used and/or analyzed during this study are available from the corresponding author on reasonable request.
import { FormModal } from "components/ui/form";
import { myContext } from "context";
import React from "react";
import tw from "twin.macro";
import styled from "@emotion/styled";

const Title = styled.h2`
  ${tw`flex-auto font-sans text-2xl font-semibold text-yellow-100 capitalize`}
  color: #fcf8c9;
`;

const Price = styled.p`
  ${tw`flex-auto px-4 font-sans font-semibold text-right text-yellow-100`}
  color: #fcf8c9;
`;

const Spantop = tw.span`text-base`;
const SpanBottom = tw.span`text-base font-semibold`;

const Button = styled.button`
  ${tw`flex-1 px-4 py-2 font-bold rounded`}
  color: #101010;
  background-color: #d1c414;
  &:hover {
    background-color: #fff23d;
  }
`;

const Bar = styled.section`
  ${tw`flex items-center px-4 py-6 m-2`};
  grid-area: 1 / 1 / 2 / 2;
  background-color: #101010;
`;

// Price bar with a title, a "from <price> USD" label and an Order button
// that opens the shared FormModal via the context's showModal handler.
export const PriceBar = ({
  title,
  price,
}: {
  title: string;
  price: number;
}): JSX.Element => {
  return (
    <myContext.Consumer>
      {(context) => (
        <Bar>
          <Title>{title}</Title>
          <Price>
            <Spantop>from</Spantop>
            <br />
            <SpanBottom>{price} USD</SpanBottom>
          </Price>
          <Button onClick={context.showModal}>Order</Button>
          <FormModal />
        </Bar>
      )}
    </myContext.Consumer>
  );
};
"""
Modified Newsgroups dataset where hold out certain classes for validation and testing
"""
from dataclasses import dataclass

from datasets import load_from_disk, DatasetDict

from .base import NewsgroupsDataArgs, NewsgroupsDataModule


@dataclass
class NewsgroupsHeldoutArgs(NewsgroupsDataArgs):
    """Data arguments for Newsgroups ZSLDM.

    Args:
        val_names (tuple): val classes to holdout from training
        test_names (tuple): test classes to hold out from training
        gzsl (bool): if set to true, val and test classes will be self.classes,
            otherwise it will be self.val/test_classes
        eval_train (bool): if set to true, eval on train classes
    """

    val_names: tuple = (
        "alt.atheism",
        "comp.sys.mac.hardware",
        "rec.motorcycles",
        "sci.electronics",
    )
    test_names: tuple = (
        "comp.os.ms-windows.misc",
        "rec.sport.hockey",
        "sci.space",
        "talk.politics.guns",
    )
    gzsl: bool = False
    eval_train: bool = False

    # filled in for you in __post_init__
    heldout_classes: tuple = None

    def __post_init__(self):
        super().__post_init__()
        self.heldout_classes = tuple(list(self.val_names) + list(self.test_names))
        self.train_classes = tuple(
            [x for x in self.classes if x not in self.heldout_classes]
        )
        self.val_classes = (
            tuple([x for x in self.classes if x not in self.test_names])
            if self.gzsl
            else self.val_names
        )
        self.test_classes = tuple() if self.gzsl else self.test_names
        if self.run_test:
            self.val_names = self.test_names
            self.val_classes = self.test_classes
        if self.eval_train:
            self.val_names = self.train_classes
            self.val_classes = self.train_classes


class NewsgroupsHeldoutDM(NewsgroupsDataModule):
    def __init__(self, args: NewsgroupsHeldoutArgs, *margs, **kwargs):
        super().__init__(args, *margs, **kwargs)

    def setup(self, stage=None):
        self.setup_labels()

        # input dataset
        dataset = DatasetDict()
        loaded_dataset = load_from_disk(self.args.dataset_cache_path)
        dataset["train"] = loaded_dataset["train"].filter(
            lambda x: x["labels"] in self.train_classlabel.names
        )
        dataset["train"] = dataset["train"].map(
            lambda x: {"labels": self.train_classlabel.str2int(x["labels"])}
        )

        # we can pull from the larger train set, because the val/test classes are different
        split = "test" if self.args.run_test and self.args.eval_train else "train"
        dataset["val"] = loaded_dataset[split].filter(
            lambda x: x["labels"] in self.args.val_names
        )
        dataset["val"] = dataset["val"].map(
            lambda x: {"labels": self.val_classlabel.str2int(x["labels"])}
        )

        dataset.set_format(
            type="torch",
            columns=["input_ids", "token_type_ids", "attention_mask", "labels"],
        )
        self.dataset = dataset
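A minimal, self-contained sketch of how the held-out split in __post_init__ above behaves. It does not import the module; the `classes` tuple below is hypothetical and merely stands in for the class list that the NewsgroupsDataArgs base class is assumed to provide.

# Hypothetical illustration of the split arithmetic in __post_init__ (not part of the module).
classes = (
    "alt.atheism", "comp.sys.mac.hardware", "rec.motorcycles", "sci.electronics",
    "comp.os.ms-windows.misc", "rec.sport.hockey", "sci.space", "talk.politics.guns",
    "misc.forsale",  # hypothetical extra class that is never held out
)
val_names = ("alt.atheism", "comp.sys.mac.hardware", "rec.motorcycles", "sci.electronics")
test_names = ("comp.os.ms-windows.misc", "rec.sport.hockey", "sci.space", "talk.politics.guns")
gzsl = False

heldout_classes = tuple(list(val_names) + list(test_names))
train_classes = tuple(x for x in classes if x not in heldout_classes)
val_classes = tuple(x for x in classes if x not in test_names) if gzsl else val_names
test_classes = tuple() if gzsl else test_names

print(train_classes)  # ('misc.forsale',) -- only classes never held out remain for training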
package com.thoughtworks.exam.examination.domain.model.examination;

import com.thoughtworks.exam.examination.common.ddd.annotation.ValueObject;
import com.thoughtworks.exam.examination.common.ddd.core.AbstractId;
import lombok.EqualsAndHashCode;

import java.util.UUID;

@ValueObject
@EqualsAndHashCode(callSuper = true)
public class ExaminationId extends AbstractId {
    public ExaminationId(final String value) {
        super(value);
    }

    public static ExaminationId generate() {
        return new ExaminationId("exam-" + UUID.randomUUID().toString());
    }

    @Override
    public String toString() {
        return getValue();
    }
}
def solve():
    n = int(input())
    prev = input()
    n -= 1
    count = 1
    while n:
        string = input()
        if string[0] == prev[1]:
            # chain continues: count this string and advance the reference string
            count += 1
            prev = string
        n -= 1
    print(count)


solve()
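A worked example inferred from the reconstructed code above (the original problem statement is not part of this text): for the input `3`, `ab`, `bc`, `cd`, each string's first character matches the second character of the previously counted string, so `count` is incremented twice and the program prints `3`.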
/*
 * Copyright (c) 2016, Alliance for Open Media. All rights reserved
 *
 * This source code is subject to the terms of the BSD 2 Clause License and
 * the Alliance for Open Media Patent License 1.0. If the BSD 2 Clause License
 * was not distributed with this source code in the LICENSE file, you can
 * obtain it at www.aomedia.org/license/software. If the Alliance for Open
 * Media Patent License 1.0 was not distributed with this source code in the
 * PATENTS file, you can obtain it at www.aomedia.org/license/patent.
 */

#include <math.h>
#include <stdlib.h>
#include <string.h>

#include "third_party/googletest/src/googletest/include/gtest/gtest.h"

#include "./av1_rtcd.h"
#include "./aom_dsp_rtcd.h"
#include "test/acm_random.h"
#include "test/av1_txfm_test.h"
#include "test/clear_system_state.h"
#include "test/register_state_check.h"
#include "test/util.h"
#include "av1/common/blockd.h"
#include "av1/common/scan.h"
#include "aom/aom_integer.h"
#include "aom_dsp/inv_txfm.h"

using libaom_test::ACMRandom;

namespace {

typedef void (*IdctFunc)(const tran_low_t *in, tran_low_t *out);

typedef std::tr1::tuple<IdctFunc, int, int> IdctParam;

class AV1InvTxfm : public ::testing::TestWithParam<IdctParam> {
 public:
  virtual void SetUp() {
    inv_txfm_ = GET_PARAM(0);
    txfm_size_ = GET_PARAM(1);
    max_error_ = GET_PARAM(2);
  }

  void RunInvAccuracyCheck() {
    ACMRandom rnd(ACMRandom::DeterministicSeed());
    const int count_test_block = 5000;
    for (int ti = 0; ti < count_test_block; ++ti) {
      tran_low_t input[64];
      double ref_input[64];
      for (int ni = 0; ni < txfm_size_; ++ni) {
        input[ni] = rnd.Rand8() - rnd.Rand8();
        ref_input[ni] = static_cast<double>(input[ni]);
      }

      tran_low_t output[64];
      inv_txfm_(input, output);

      double ref_output[64];
      libaom_test::reference_idct_1d(ref_input, ref_output, txfm_size_);

      for (int ni = 0; ni < txfm_size_; ++ni) {
        EXPECT_LE(
            abs(output[ni] - static_cast<tran_low_t>(round(ref_output[ni]))),
            max_error_);
      }
    }
  }

 private:
  double max_error_;
  int txfm_size_;
  IdctFunc inv_txfm_;
};

TEST_P(AV1InvTxfm, RunInvAccuracyCheck) { RunInvAccuracyCheck(); }

INSTANTIATE_TEST_CASE_P(C, AV1InvTxfm,
                        ::testing::Values(IdctParam(&aom_idct4_c, 4, 1)));

}  // namespace
# Check the "AcCepted" conditions: the string must start with 'A', contain exactly
# one 'C' between the third character and the second-to-last character, and every
# remaining character must be lowercase.
s = input()
s_list = list(s)
C_num = -1
if s_list[0] != "A":
    print("WA")
else:
    nagasa = len(s_list)
    num = 0
    for i in range(2, nagasa - 1):
        if s_list[i] == "C":
            num += 1
            C_num = i
    if num != 1:
        print("WA")
    else:
        # remove the leading 'A' and the single 'C' (index shifts by one after the first del)
        del s_list[0]
        del s_list[C_num - 1]
        mojiretu = "".join(s_list)
        if mojiretu.islower():
            print("AC")
        else:
            print("WA")
/** * @author Daniel Siviter * @since v1.0 [13 Nov 2019] */ @EnableWeld @Timeout(60_0) public abstract class AbstractTest extends TestContainer { protected final AtomicInteger msgSeqNum = new AtomicInteger(); protected Server server; @BeforeEach void before() throws DeploymentException, IOException { try (var s = new ServerSocket(0)) { setDefaultPort(s.getLocalPort()); // TestContainer doesn't update port for client } this.server = startServer(TestConfig.class); } @AfterEach void after() { this.server.stop(); } // --- Static Methods --- /** * * @param <M> * @param message * @return */ protected <M extends Message> M defaults(M message) { return defaults(message, "client", "server"); } /** * * @param <M> * @param message * @param senderCompId * @param targetCompId * @return */ protected <M extends Message> M defaults(M message, String senderCompId, String targetCompId) { var header = message.getHeader(); header.setField(new MsgSeqNum(this.msgSeqNum.incrementAndGet())); header.setField(new SendingTime()); header.setField(new SenderCompID(senderCompId)); header.setField(new TargetCompID(targetCompId)); header.setField(new ApplVerID(ApplVerID.FIX50)); return message; } /** * * @param subprotocols * @return */ public static ClientEndpointConfig clientConfig(String... subprotocols) { return ClientEndpointConfig.Builder.create().preferredSubprotocols(FixEndpoint.subprotocols(subprotocols)) .decoders(List.of(Encoding.class)) .encoders(List.of(Encoding.class)) .build(); } // --- Inner Classes --- /** * */ public static class TestConfig implements ServerApplicationConfig { @Override public Set<ServerEndpointConfig> getEndpointConfigs(Set<Class<? extends Endpoint>> endpointClasses) { return Set.of(FixEndpoint.config("/fix", List.of(FixVersions.FIX50))); } @Override public Set<Class<?>> getAnnotatedEndpointClasses(Set<Class<?>> scanned) { return emptySet(); } } /** * */ public static class Encoding implements Decoder.Text<Message>, Encoder.Text<Message> { private final MessageFactory messageFactory = new DefaultMessageFactory(ApplVerID.FIX50); @Override public void init(EndpointConfig config) { } @Override public void destroy() { } @Override public Message decode(String s) throws DecodeException { try { return MessageUtils.parse(messageFactory, null, s); } catch (InvalidMessage e) { throw new DecodeException(s, e.getLocalizedMessage(), e); } } @Override public boolean willDecode(String s) { return true; } @Override public String encode(Message object) throws EncodeException { return object.toString(); } } }
<gh_stars>10-100 package pythonresource import ( "bytes" "encoding/binary" "errors" "fmt" "strings" spooky "github.com/dgryski/go-spooky" "github.com/kiteco/kiteco/kite-go/lang/python/pythonimports" "github.com/kiteco/kiteco/kite-go/lang/python/pythonresource/internal/resources/symgraph" "github.com/kiteco/kiteco/kite-go/lang/python/pythonresource/internal/toplevel" "github.com/kiteco/kiteco/kite-go/lang/python/pythonresource/keytypes" "github.com/kiteco/kiteco/kite-golib/kitectx" ) const maxRecursionDepth = 10 // maximum recursion depth for canonicalization // recursionDepthError implements error type recursionDepthError struct{} // Error implements error func (e recursionDepthError) Error() string { return "recursion depth limit (10) exceeded" } // DistLoadError is an error produced when the resource group for a distribution is not loaded type DistLoadError keytypes.Distribution // Error implements error func (e DistLoadError) Error() string { return fmt.Sprintf("could not load resource group for distribution %s", keytypes.Distribution(e)) } // Pkgs returns a list of all indexed top-level importable packages func (rm *manager) Pkgs() []string { var pkgs []string for pkg := range rm.index { pkgs = append(pkgs, pkg) } return pkgs } // DistsForPkg returns all distributions that expose pkg as a top-level importable name func (rm *manager) DistsForPkg(pkg string) []keytypes.Distribution { return rm.index[pkg] } // Symbol wraps a keytypes.Symbol to track the canonical symbol and validation status type Symbol struct { Symbol keytypes.Symbol canonical keytypes.Symbol ref symgraph.Ref } // Symbol: // path.Hash (uint64) | number of path parts | []parts | dist-string func writeSymbol(symbol keytypes.Symbol, buf *bytes.Buffer) error { // write hash err := binary.Write(buf, binary.BigEndian, symbol.Path.Hash) if err != nil { return err } // write number of parts var partCount = int64(len(symbol.Path.Parts)) err = binary.Write(buf, binary.BigEndian, partCount) if err != nil { return err } // write parts for _, p := range symbol.Path.Parts { buf.WriteString(p) buf.WriteByte(byte('\n')) } // write dist buf.WriteString(symbol.Dist.String()) buf.WriteByte(byte('\n')) return nil } func readSymbol(buf *bytes.Buffer) (keytypes.Symbol, error) { symbol := keytypes.Symbol{} // hash err := binary.Read(buf, binary.BigEndian, &symbol.Path.Hash) if err != nil { return symbol, err } // number of parts var partCount int64 err = binary.Read(buf, binary.BigEndian, &partCount) if err != nil { return symbol, err } // parts if partCount == 0 { symbol.Path.Parts = nil } else { parts := make([]string, partCount) for i := 0; i < int(partCount); i++ { v, err := buf.ReadString('\n') if err != nil { return symbol, err } parts[i] = v[0 : len(v)-1] } symbol.Path.Parts = parts } // dist-string dist, err := buf.ReadString(byte('\n')) if err != nil { return symbol, err } symbol.Dist, err = keytypes.ParseDistribution(dist[0 : len(dist)-1]) return symbol, err } func writeRef(ref symgraph.Ref, buf *bytes.Buffer) error { err := binary.Write(buf, binary.BigEndian, int64(ref.Internal)) if err != nil { return err } buf.WriteString(ref.TopLevel + "\n") return nil } func readRef(buf *bytes.Buffer) (symgraph.Ref, error) { ref := symgraph.Ref{} var internalValue int64 err := binary.Read(buf, binary.BigEndian, &internalValue) if err != nil { return ref, err } ref.Internal = int(internalValue) topLevel, err := buf.ReadString('\n') if err != nil { return ref, err } ref.TopLevel = topLevel[:len(topLevel)-1] return ref, err } // MarshalBinary 
implements gob encoding to be able to use this in the remote Python resourcemanager func (s Symbol) MarshalBinary() ([]byte, error) { shortFormat := s.Symbol.Dist == s.canonical.Dist && s.Symbol.Path.Hash == s.canonical.Path.Hash var b bytes.Buffer if shortFormat { b.WriteByte(byte('a')) err := writeSymbol(s.Symbol, &b) if err != nil { return nil, err } } else { b.WriteByte(byte('b')) err := writeSymbol(s.Symbol, &b) if err != nil { return nil, err } err = writeSymbol(s.canonical, &b) if err != nil { return nil, err } } err := writeRef(s.ref, &b) if err != nil { return nil, err } return b.Bytes(), nil } // UnmarshalBinary implements gob decoding for Symbol. Note: s has to be pointer-type to make this work func (s *Symbol) UnmarshalBinary(data []byte) error { b := bytes.NewBuffer(data) markerByte, err := b.ReadByte() if err != nil { return err } typeMarker := int32(markerByte) if typeMarker == 'a' { sym, err := readSymbol(b) if err != nil { return err } s.Symbol = sym s.canonical = sym } else if typeMarker == 'b' { s.Symbol, err = readSymbol(b) if err != nil { return err } s.canonical, err = readSymbol(b) if err != nil { return err } } else { return errors.New("invalid symbol marker type") } ref, err := readRef(b) if err != nil { return err } s.ref = ref return nil } // String implements fmt.Stringer func (s Symbol) String() string { if s.Symbol.Dist == s.canonical.Dist && s.Symbol.Path.Hash == s.canonical.Path.Hash { return s.canonical.String() } return fmt.Sprintf("Symbol(%s -> %s)", s.Symbol, s.canonical) } // Hash for the distribution and path func (s Symbol) Hash() pythonimports.Hash { parts := strings.Join([]string{ s.Symbol.Dist.Name, s.Symbol.Dist.Version, s.Symbol.Path.String(), }, ":") return pythonimports.Hash(spooky.Hash64([]byte(parts))) } // Less re-exposes the arbitrary but fixed ordering on the underlying keytypes.Symbol func (s Symbol) Less(other Symbol) bool { return s.Symbol.Less(other.Symbol) } // Dist returns the symbol's distribution func (s Symbol) Dist() keytypes.Distribution { return s.Symbol.Dist } // Nil tests whether the Symbol represents a "nil" (invalid) value func (s Symbol) Nil() bool { return s.Symbol.Path.Empty() && s.Symbol.Dist == keytypes.Distribution{} } // Equals tests for equality with another symbol func (s Symbol) Equals(other Symbol) bool { return s.Symbol.Dist == other.Symbol.Dist && s.Symbol.Path.Hash == other.Symbol.Path.Hash } // Path returns the symbol's path func (s Symbol) Path() pythonimports.DottedPath { return s.Symbol.Path.Copy() } // PathHead returns the first component ("top level") of the symbol's path func (s Symbol) PathHead() string { return s.Symbol.Path.Head() } // PathHash returns a hash of the symbol's path func (s Symbol) PathHash() pythonimports.Hash { return s.Symbol.Path.Hash } // PathString returns the symbol's path as a string func (s Symbol) PathString() string { return s.Symbol.Path.String() } // PathLast is a more efficient version of Path().Last() func (s Symbol) PathLast() string { return s.Symbol.Path.Last() } // Canonical returns the canonical symbol func (s Symbol) Canonical() Symbol { return Symbol{ Symbol: s.canonical, canonical: s.canonical, ref: s.ref, } } // Distribution returns the distribution for the given symbol func (s Symbol) Distribution() keytypes.Distribution { return s.Symbol.Dist } // PathSymbol returns the canonicalized Symbol from the "least" matching distribution for the given path func (rm *manager) PathSymbol(path pythonimports.DottedPath) (Symbol, error) { syms, err := 
rm.PathSymbols(kitectx.TODO(), path) if err != nil { return Symbol{}, err } return syms[0], nil } // PathSymbols returns a slice of Symbols for each matching distributions (in distribution order) for the given path // if the returned error is nil, the returned slice must be non-empty func (rm *manager) PathSymbols(ctx kitectx.Context, path pythonimports.DottedPath) ([]Symbol, error) { var syms []Symbol err := ctx.WithCallLimit(maxRecursionDepth, func(ctx kitectx.CallContext) error { var err error syms, err = rm.pathSymbols(ctx, path) return err }) return syms, err } // NewSymbol validates the given distribution and path, and returns a valid Symbol func (rm *manager) NewSymbol(dist keytypes.Distribution, path pythonimports.DottedPath) (Symbol, error) { var sym Symbol err := kitectx.TODO().WithCallLimit(maxRecursionDepth, func(ctx kitectx.CallContext) error { var err error sym, err = rm.canonicalize(ctx, keytypes.Symbol{Dist: dist, Path: path}) return err }) return sym, err } // pathSymbols and canonicalize are mutually recursive functions that validate a symbol by following external // references in the Symbol graph. The returned Symbol tracks both the queried symbol path as well as its // canonicalization. type noDistributionsMatchPathError pythonimports.DottedPath func (e noDistributionsMatchPathError) Error() string { return fmt.Sprintf("no distributions match path %s", pythonimports.DottedPath(e)) } type pathNotFoundError struct { cause error path pythonimports.DottedPath } func (e pathNotFoundError) Error() string { return fmt.Sprintf("no distributions match path %s: %s", e.path, e.cause) } // pathSymbols validates a Symbol from a DottedPath, where the distribution is unknown; // it returns a slice of matching symbols, ordered by the argument path's matching distributions func (rm *manager) pathSymbols(ctx kitectx.CallContext, path pythonimports.DottedPath) ([]Symbol, error) { if ctx.AtCallLimit() { return nil, recursionDepthError{} } // get the matching distributions (in sorted order) dists := rm.DistsForPkg(path.Head()) if len(dists) == 0 { return nil, noDistributionsMatchPathError(path) } // find all possible symbols that match var err error var syms []Symbol for _, dist := range dists { var sym Symbol if sym, err = rm.canonicalize(ctx, keytypes.Symbol{Dist: dist, Path: path}); err == nil { syms = append(syms, sym) } } // if there are none, return immediately if len(syms) == 0 { return nil, pathNotFoundError{cause: err, path: path} } return syms, nil } // canonicalize validates a Symbol from a Distribution & DottedPath func (rm *manager) canonicalize(ctx kitectx.CallContext, sym keytypes.Symbol) (Symbol, error) { if ctx.AtCallLimit() { return Symbol{}, recursionDepthError{} } // if the resource group is inaccessible, we're screwed if !rm.resourceGroupLoadable(sym.Dist) { return Symbol{}, DistLoadError(sym.Dist) } // check if the toplevel actually exists in the provided package toplevel := sym.Path.Head() var found bool for _, dist := range rm.DistsForPkg(toplevel) { if dist == sym.Dist { found = true break } } if !found { return Symbol{}, symgraph.TopLevelNotFound(toplevel) } // if we're just looking for the toplevel, don't bother with the resource group since we know it exists if len(sym.Path.Parts) == 1 { return Symbol{ Symbol: sym, canonical: sym, ref: symgraph.Ref{TopLevel: toplevel, Internal: 0}, }, nil } // otherwise, we need the graph rg := rm.loadResourceGroup(sym.Dist, "canonicalize") if rg == nil { // this should theoretically never happen, since we check 
resourceGroupLoadable above return Symbol{}, DistLoadError(sym.Dist) } ref, err := rg.SymbolGraph.Lookup(sym.Path) if extErr, ok := err.(symgraph.ExternalEncountered); ok { // TODO(naman) should canonicalize return a slice of symbols? there are performance tradeoffs resSyms, err := rm.pathSymbols(ctx.Call(), extErr.WithRest()) if err != nil { return Symbol{}, err } resSym := resSyms[0] // reset the `symbol` to what the input actually was resSym.Symbol = sym return resSym, nil } else if err != nil { return Symbol{}, err } // otherwise, we found a canonical path in the same distribution return Symbol{ Symbol: sym, canonical: keytypes.Symbol{ Dist: sym.Dist, Path: rg.SymbolGraph.Canonical(ref), }, ref: ref, }, nil } // Kind returns the keytypes.Kind of a Symbol func (rm *manager) Kind(s Symbol) keytypes.Kind { if rm.topLevelData(s) != nil { return keytypes.ModuleKind } rg := rm.resourceGroup(s.canonical.Dist) if rg == nil { return keytypes.NoneKind // TODO(naman) } return rg.SymbolGraph.Kind(s.ref) } // Type resolves the type Symbol for s func (rm *manager) Type(s Symbol) (Symbol, error) { if rm.topLevelData(s) != nil { // top-levels don't currently have types in the graph return Symbol{}, errors.New("no types for top-levels") } rg := rm.resourceGroup(s.canonical.Dist) if rg == nil { return Symbol{}, DistLoadError(s.canonical.Dist) } ref, err := rg.SymbolGraph.Type(s.ref) if extErr, ok := err.(symgraph.ExternalEncountered); ok { // TODO(naman) we may want the non-canonical path here to be s.sym.Path.WithTail("__class__") if a __class__ attribute exists // in that case, we should actually update pkgexploration to not skip __class__ attributes return rm.PathSymbol(extErr.WithRest()) } else if err != nil { return Symbol{}, err } // again, the non-canonical path may be better set to WithTail("__class__") sym := keytypes.Symbol{ Dist: s.canonical.Dist, Path: rg.SymbolGraph.Canonical(ref), } return Symbol{ Symbol: sym, canonical: sym, ref: ref, }, nil } // Bases resolves the base class Symbols for s, skipping any unresolvable base classes func (rm *manager) Bases(s Symbol) []Symbol { if rm.topLevelData(s) != nil { // modules don't have base classes return nil } rg := rm.resourceGroup(s.canonical.Dist) if rg == nil { return nil // TODO(naman) } var bases []Symbol numBases := rg.SymbolGraph.NumBases(s.ref) for i := 0; i < numBases; i++ { ref, err := rg.SymbolGraph.GetBase(s.ref, i) if extErr, ok := err.(symgraph.ExternalEncountered); ok { base, err := rm.PathSymbol(extErr.WithRest()) if err == nil { bases = append(bases, base) } else { // TODO(naman) } continue } else if err != nil { // TODO(naman) continue } sym := keytypes.Symbol{ Dist: s.canonical.Dist, Path: rg.SymbolGraph.Canonical(ref), } bases = append(bases, Symbol{ Symbol: sym, canonical: sym, ref: ref, }) } return bases } // Children returns a list of strings identifying children of the given Symbol func (rm *manager) Children(s Symbol) ([]string, error) { rg := rm.loadResourceGroup(s.canonical.Dist, "Children") if rg == nil { return nil, DistLoadError(s.canonical.Dist) } return rg.SymbolGraph.Children(s.ref), nil } // ChildSymbol computes a validated child Symbol specified by the string argument func (rm *manager) ChildSymbol(s Symbol, c string) (Symbol, error) { rg := rm.loadResourceGroup(s.canonical.Dist, "ChildSymbol") if rg == nil { return Symbol{}, DistLoadError(s.canonical.Dist) } childSymbol := keytypes.Symbol{ Dist: s.Symbol.Dist, Path: s.Symbol.Path.WithTail(c), } ref, err := rg.SymbolGraph.Child(s.ref, c) if extErr, ok := 
err.(symgraph.ExternalEncountered); ok { sym, err := rm.PathSymbol(extErr.WithRest()) if err != nil { return Symbol{}, err } sym.Symbol = childSymbol return sym, nil } else if err != nil { return Symbol{}, err } return Symbol{ Symbol: childSymbol, canonical: keytypes.Symbol{ Dist: s.canonical.Dist, Path: rg.SymbolGraph.Canonical(ref), }, ref: ref, }, nil } // - // CanonicalSymbols returns a slice of canonical symbols for the given distribution func (rm *manager) CanonicalSymbols(dist keytypes.Distribution) ([]Symbol, error) { rg := rm.loadResourceGroup(dist, "CanonicalSymbols") if rg == nil { return nil, DistLoadError(dist) } var out []Symbol for toplevel, nodes := range *rg.SymbolGraph { for i, n := range nodes { sym := keytypes.Symbol{ Dist: dist, Path: n.Canonical.Cast(), } out = append(out, Symbol{ Symbol: sym, canonical: sym, ref: symgraph.Ref{ TopLevel: toplevel, Internal: i, }, }) } } return out, nil } // TopLevels returns a slice of toplevel packages for the given distribution func (rm *manager) TopLevels(dist keytypes.Distribution) ([]string, error) { rg := rm.resourceGroup(dist) if rg == nil { return nil, DistLoadError(dist) } var out []string for tl := range *rg.SymbolGraph { out = append(out, tl) } return out, nil } func (rm *manager) topLevelData(sym Symbol) *toplevel.Entity { if rm.toplevel == nil || len(sym.Symbol.Path.Parts) != 1 { return nil } res, ok := rm.toplevel[toplevel.DistributionTopLevel{ Distribution: sym.Symbol.Dist, TopLevel: sym.Symbol.Path.Head(), }] if !ok { return nil } return &res } // - // MustInternalGraph is for internal (builder) use only; it may panic func (rm *manager) MustInternalGraph(dist keytypes.Distribution) symgraph.Graph { rg := rm.resourceGroup(dist) if rg == nil { panic(DistLoadError(dist).Error()) } return *rg.SymbolGraph }
<reponame>templateK/beam-migrate {-# OPTIONS_GHC -fno-warn-orphans #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE CPP #-} -- | Instances that allow us to use Haskell as a backend syntax. This allows us -- to use migrations defined a la 'Database.Beam.Migrate.SQL' to generate a beam -- schema. -- -- Mainly of interest to backends. -- -- Unfortunately, we define some orphan 'Hashable' instances that aren't defined -- for us in @haskell-src-exts@. module Database.Beam.Haskell.Syntax where import Database.Beam import Database.Beam.Backend.SQL import Database.Beam.Backend.SQL.AST import Database.Beam.Backend.SQL.Builder import Database.Beam.Migrate.Checks (HasDataTypeCreatedCheck(..)) import Database.Beam.Migrate.SQL.SQL92 import Database.Beam.Migrate.SQL.Types import Database.Beam.Migrate.Serialization import Data.Char (toLower, toUpper) import Data.Hashable import Data.List (find, nub) import qualified Data.Map as M import Data.Maybe import qualified Data.Set as S import Data.String import qualified Data.Text as T #if !MIN_VERSION_base(4, 11, 0) import Data.Semigroup #endif import qualified Language.Haskell.Exts as Hs import Text.PrettyPrint (render) newtype HsDbField = HsDbField { buildHsDbField :: Hs.Type () -> Hs.Type () } data HsConstraintDefinition = HsConstraintDefinition { hsConstraintDefinitionConstraint :: HsExpr } deriving (Show, Eq, Generic) instance Hashable HsConstraintDefinition instance Sql92DisplaySyntax HsConstraintDefinition where displaySyntax = show newtype HsEntityName = HsEntityName { getHsEntityName :: String } deriving (Show, Eq, Ord, IsString) data HsImport = HsImportAll | HsImportSome (S.Set (Hs.ImportSpec ())) deriving (Show, Eq, Generic) instance Hashable HsImport instance Semigroup HsImport where (<>) = mappend instance Monoid HsImport where mempty = HsImportSome mempty mappend HsImportAll _ = HsImportAll mappend _ HsImportAll = HsImportAll mappend (HsImportSome a) (HsImportSome b) = HsImportSome (a <> b) importSome :: T.Text -> [ Hs.ImportSpec () ] -> HsImports importSome modNm names = HsImports (M.singleton (Hs.ModuleName () (T.unpack modNm)) (HsImportSome (S.fromList names))) importTyNamed :: T.Text -> Hs.ImportSpec () importTyNamed = importVarNamed -- nm = Hs.IAbs () (Hs.TypeNamespace ()) (Hs.Ident () (T.unpack nm)) importVarNamed :: T.Text -> Hs.ImportSpec () importVarNamed nm = Hs.IVar () (Hs.Ident () (T.unpack nm)) newtype HsImports = HsImports (M.Map (Hs.ModuleName ()) HsImport) deriving (Show, Eq) instance Hashable HsImports where hashWithSalt s (HsImports a) = hashWithSalt s (M.assocs a) instance Semigroup HsImports where (<>) = mappend instance Monoid HsImports where mempty = HsImports mempty mappend (HsImports a) (HsImports b) = HsImports (M.unionWith mappend a b) data HsDataType = HsDataType { hsDataTypeMigration :: HsExpr , hsDataTypeType :: HsType , hsDataTypeSerialized :: BeamSerializedDataType } deriving (Eq, Show, Generic) instance Hashable HsDataType where hashWithSalt salt (HsDataType mig ty _) = hashWithSalt salt (mig, ty) instance Sql92DisplaySyntax HsDataType where displaySyntax = show instance HasDataTypeCreatedCheck HsDataType where dataTypeHasBeenCreated _ _ = True -- TODO make this more robust data HsType = HsType { hsTypeSyntax :: Hs.Type () , hsTypeImports :: HsImports } deriving (Show, Eq, Generic) instance Hashable HsType data HsExpr = HsExpr { hsExprSyntax :: Hs.Exp () , hsExprImports :: HsImports , hsExprConstraints :: [ Hs.Asst () ] , hsExprTypeVariables :: S.Set (Hs.Name ()) } deriving (Show, Eq, Generic) instance 
Hashable HsExpr data HsColumnSchema = HsColumnSchema { mkHsColumnSchema :: T.Text -> HsExpr , hsColumnSchemaType :: HsType } instance Show HsColumnSchema where show (HsColumnSchema mk _) = show (mk "fieldNm") instance Eq HsColumnSchema where HsColumnSchema a aTy == HsColumnSchema b bTy = a "fieldNm" == b "fieldNm" && aTy == bTy instance Hashable HsColumnSchema where hashWithSalt s (HsColumnSchema mk ty) = hashWithSalt s (mk "fieldNm", ty) instance Sql92DisplaySyntax HsColumnSchema where displaySyntax = show data HsDecl = HsDecl { hsDeclSyntax :: Hs.Decl () , hsDeclImports :: HsImports , hsDeclExports :: [ Hs.ExportSpec () ] } data HsAction = HsAction { hsSyntaxMigration :: [ (Maybe (Hs.Pat ()), HsExpr) ] , hsSyntaxEntities :: [ HsEntity ] } instance Semigroup HsAction where (<>) = mappend instance Monoid HsAction where mempty = HsAction [] [] mappend (HsAction ma ea) (HsAction mb eb) = HsAction (ma <> mb) (ea <> eb) newtype HsBackendConstraint = HsBackendConstraint { buildHsBackendConstraint :: Hs.Type () -> Hs.Asst () } data HsBeamBackend f = HsBeamBackendSingle HsType f | HsBeamBackendConstrained [ HsBackendConstraint ] | HsBeamBackendNone instance Semigroup (HsBeamBackend f) where (<>) = mappend instance Monoid (HsBeamBackend f) where mempty = HsBeamBackendConstrained [] mappend (HsBeamBackendSingle aTy aExp) (HsBeamBackendSingle bTy _) | aTy == bTy = HsBeamBackendSingle aTy aExp | otherwise = HsBeamBackendNone mappend a@HsBeamBackendSingle {} _ = a mappend _ b@HsBeamBackendSingle {} = b mappend HsBeamBackendNone _ = HsBeamBackendNone mappend _ HsBeamBackendNone = HsBeamBackendNone mappend (HsBeamBackendConstrained a) (HsBeamBackendConstrained b) = HsBeamBackendConstrained (a <> b) data HsEntity = HsEntity { hsEntityBackend :: HsBeamBackend HsExpr , hsEntityName :: HsEntityName , hsEntityDecls :: [ HsDecl ] , hsEntityDbDecl :: HsDbField , hsEntityExp :: HsExpr } newtype HsFieldLookup = HsFieldLookup { hsFieldLookup :: T.Text -> Maybe (T.Text, Hs.Type ()) } newtype HsTableConstraint = HsTableConstraint (T.Text -> HsFieldLookup -> HsTableConstraintDecls) data HsTableConstraintDecls = HsTableConstraintDecls { hsTableConstraintInstance :: [ Hs.InstDecl () ] , hsTableConstraintDecls :: [ HsDecl ] } instance Semigroup HsTableConstraintDecls where (<>) = mappend instance Monoid HsTableConstraintDecls where mempty = HsTableConstraintDecls [] [] mappend (HsTableConstraintDecls ai ad) (HsTableConstraintDecls bi bd) = HsTableConstraintDecls (ai <> bi) (ad <> bd) data HsModule = HsModule { hsModuleName :: String , hsModuleEntities :: [ HsEntity ] , hsModuleMigration :: [ (Maybe (Hs.Pat ()), HsExpr) ] } hsActionsToModule :: String -> [ HsAction ] -> HsModule hsActionsToModule modNm actions = let HsAction ms es = mconcat actions in HsModule modNm es ms unqual :: String -> Hs.QName () unqual = Hs.UnQual () . 
Hs.Ident () entityDbFieldName :: HsEntity -> String entityDbFieldName entity = "_" ++ getHsEntityName (hsEntityName entity) derivingDecl :: [Hs.InstRule ()] -> Hs.Deriving () derivingDecl = #if MIN_VERSION_haskell_src_exts(1,20,0) Hs.Deriving () Nothing #else Hs.Deriving () #endif dataDecl :: Hs.DeclHead () -> [Hs.QualConDecl ()] -> Maybe (Hs.Deriving ()) -> Hs.Decl () dataDecl declHead cons deriving_ = #if MIN_VERSION_haskell_src_exts(1,20,0) Hs.DataDecl () (Hs.DataType ()) Nothing declHead cons (maybeToList deriving_) #else Hs.DataDecl () (Hs.DataType ()) Nothing declHead cons deriving_ #endif insDataDecl :: Hs.Type () -> [Hs.QualConDecl ()] -> Maybe (Hs.Deriving ()) -> Hs.InstDecl () insDataDecl declHead cons deriving_ = #if MIN_VERSION_haskell_src_exts(1,20,0) Hs.InsData () (Hs.DataType ()) declHead cons (maybeToList deriving_) #else Hs.InsData () (Hs.DataType ()) declHead cons deriving_ #endif databaseTypeDecl :: [ HsEntity ] -> Hs.Decl () databaseTypeDecl entities = dataDecl declHead [ conDecl ] (Just deriving_) where declHead = Hs.DHApp () (Hs.DHead () (Hs.Ident () "Db")) (Hs.UnkindedVar () (Hs.Ident () "entity")) conDecl = Hs.QualConDecl () Nothing Nothing (Hs.RecDecl () (Hs.Ident () "Db") (mkField <$> entities)) deriving_ = derivingDecl [ Hs.IRule () Nothing Nothing $ Hs.IHCon () $ Hs.UnQual () $ Hs.Ident () "Generic" ] mkField entity = Hs.FieldDecl () [ Hs.Ident () (entityDbFieldName entity) ] (buildHsDbField (hsEntityDbDecl entity) $ Hs.TyVar () (Hs.Ident () "entity")) migrationTypeDecl :: HsBeamBackend HsExpr -> [Hs.Type ()] -> Hs.Decl () migrationTypeDecl be inputs = Hs.TypeSig () [Hs.Ident () "migration"] migrationType where (beAssts, beVar) = case be of HsBeamBackendNone -> error "No backend matches" HsBeamBackendSingle ty _ -> ([], hsTypeSyntax ty) HsBeamBackendConstrained cs -> ( map (flip buildHsBackendConstraint beVar) cs , tyVarNamed "be" ) resultType = tyApp (tyConNamed "Migration") [ beVar , tyApp (tyConNamed "CheckedDatabaseSettings") [ beVar , tyConNamed "Db" ] ] migrationUnconstrainedType | [] <- inputs = resultType | otherwise = functionTy (tyTuple inputs) resultType constraints = nub beAssts migrationType | [] <- constraints = migrationUnconstrainedType | [c] <- constraints = Hs.TyForall () Nothing (Just (Hs.CxSingle () c)) migrationUnconstrainedType | otherwise = Hs.TyForall () Nothing (Just (Hs.CxTuple () constraints)) migrationUnconstrainedType migrationDecl :: HsBeamBackend HsExpr -> [Hs.Exp ()] -> [ (Maybe (Hs.Pat ()), HsExpr) ] -> [HsEntity] -> Hs.Decl () migrationDecl _ _ migrations entities = Hs.FunBind () [ Hs.Match () (Hs.Ident () "migration") [] (Hs.UnGuardedRhs () body) Nothing ] where body = Hs.Do () (map (\(pat, expr) -> let expr' = hsExprSyntax expr in case pat of Nothing -> Hs.Qualifier () expr' Just pat' -> Hs.Generator () pat' expr') migrations ++ [Hs.Qualifier () (hsExprSyntax finalReturn)]) finalReturn = hsApp (hsVar "pure") [ hsRecCon "Db" (map (\e -> (fromString (entityDbFieldName e), hsEntityExp e)) entities) ] dbTypeDecl :: HsBeamBackend HsExpr -> Hs.Decl () dbTypeDecl be = Hs.TypeSig () [ Hs.Ident () "db" ] dbType where unconstrainedDbType = tyApp (tyConNamed "DatabaseSettings") [ beVar, tyConNamed "Db" ] dbType | [] <- constraints, [] <- bindings = unconstrainedDbType | [] <- constraints = Hs.TyForall () (Just bindings) Nothing unconstrainedDbType | [c] <- constraints = Hs.TyForall () (Just bindings) (Just (Hs.CxSingle () c)) unconstrainedDbType | otherwise = Hs.TyForall () (Just bindings) (Just (Hs.CxTuple () constraints)) 
unconstrainedDbType constraints = nub beAssts (bindings, beAssts, beVar) = case be of HsBeamBackendNone -> error "No backend matches" HsBeamBackendSingle ty _ -> (standardBindings, [], hsTypeSyntax ty) HsBeamBackendConstrained cs -> ( tyVarBind "be":standardBindings , map (flip buildHsBackendConstraint beVar) cs , tyVarNamed "be" ) standardBindings = [] tyVarBind nm = Hs.UnkindedVar () (Hs.Ident () nm) dbDecl :: HsBeamBackend HsExpr -> [HsExpr] -> Hs.Decl () dbDecl backend params = Hs.FunBind () [ Hs.Match () (Hs.Ident () "db") [] (Hs.UnGuardedRhs () body) Nothing ] where backendVar = case backend of HsBeamBackendNone -> error "No syntax matches" HsBeamBackendSingle ty _ -> hsTypeSyntax ty HsBeamBackendConstrained _ -> tyVarNamed "be" body = hsExprSyntax $ hsApp (hsVar "unCheckDatabase") [ hsApp (hsVarFrom "runMigrationSilenced" "Database.Beam.Migrate") [ hsApp (hsVisibleTyApp (hsVar "migration") backendVar) $ case params of [] -> [] _ -> [ hsTuple params ] ] ] renderHsSchema :: HsModule -> Either String String renderHsSchema (HsModule modNm entities migrations) = let hsMod = Hs.Module () (Just modHead) modPragmas imports decls modHead = Hs.ModuleHead () (Hs.ModuleName () modNm) Nothing (Just modExports) modExports = Hs.ExportSpecList () (commonExports ++ foldMap (foldMap hsDeclExports . hsEntityDecls) entities) commonExports = [ Hs.EVar () (unqual "db") , Hs.EVar () (unqual "migration") , Hs.EThingWith () (Hs.EWildcard () 0) (unqual "Db") [] ] modPragmas = [ Hs.LanguagePragma () [ Hs.Ident () "StandaloneDeriving" , Hs.Ident () "GADTs" , Hs.Ident () "ScopedTypeVariables" , Hs.Ident () "FlexibleContexts" , Hs.Ident () "FlexibleInstances" , Hs.Ident () "DeriveGeneric" , Hs.Ident () "TypeSynonymInstances" , Hs.Ident () "ExplicitNamespaces" , Hs.Ident () "TypeApplications" , Hs.Ident () "TypeFamilies" , Hs.Ident () "OverloadedStrings" ] ] HsImports importedModules = foldMap (\e -> foldMap hsDeclImports (hsEntityDecls e) <> hsExprImports (hsEntityExp e)) entities <> foldMap (hsExprImports . snd) migrations <> importSome "Database.Beam.Migrate" [ importTyNamed "CheckedDatabaseSettings", importTyNamed "Migration" , importTyNamed "BeamMigrateSqlBackend" , importVarNamed "runMigrationSilenced" , importVarNamed "unCheckDatabase" ] imports = commonImports <> map (\(modName, spec) -> case spec of HsImportAll -> Hs.ImportDecl () modName False False False Nothing Nothing Nothing HsImportSome nms -> let importList = Hs.ImportSpecList () False (S.toList nms) in Hs.ImportDecl () modName False False False Nothing Nothing (Just importList) ) (M.assocs importedModules) commonImports = [ Hs.ImportDecl () (Hs.ModuleName () "Database.Beam") False False False Nothing Nothing Nothing , Hs.ImportDecl () (Hs.ModuleName () "Control.Applicative") False False False Nothing Nothing Nothing ] backend = foldMap hsEntityBackend entities backendHs = case backend of HsBeamBackendNone -> error "Can't instantiate Database instance: No backend matches" HsBeamBackendSingle ty _ -> hsTypeSyntax ty HsBeamBackendConstrained {} -> tyVarNamed "be" -- TODO constraints decls = foldMap (map hsDeclSyntax . 
hsEntityDecls) entities ++ [ databaseTypeDecl entities , migrationTypeDecl backend [] , migrationDecl backend [] migrations entities , hsInstance "Database" [ backendHs, tyConNamed "Db" ] [] , dbTypeDecl backend , dbDecl backend [] ] in Right (render (Hs.prettyPrim hsMod)) -- * DDL Syntax definitions data HsNone = HsNone deriving (Show, Eq, Ord, Generic) instance Hashable HsNone instance Semigroup HsNone where (<>) = mappend instance Monoid HsNone where mempty = HsNone mappend _ _ = HsNone data HsMigrateBackend = HsMigrateBackend instance BeamMigrateOnlySqlBackend HsMigrateBackend type instance BeamSqlBackendSyntax HsMigrateBackend = HsAction hsMkTableName :: (Char -> Char) -> TableName -> String hsMkTableName toNameCase (TableName sch nm) = case sch of Nothing -> case T.unpack nm of [] -> error "No name for table" x:xs -> toNameCase x:xs Just schNm -> case T.unpack schNm of [] -> error "Empty schema name" x:xs -> toNameCase x:xs ++ "_" ++ T.unpack nm hsTableVarName, hsTableTypeName :: TableName -> String hsTableVarName = hsMkTableName toLower hsTableTypeName = hsMkTableName toUpper instance IsSql92DdlCommandSyntax HsAction where type Sql92DdlCommandCreateTableSyntax HsAction = HsAction type Sql92DdlCommandAlterTableSyntax HsAction = HsAction type Sql92DdlCommandDropTableSyntax HsAction = HsAction createTableCmd = id dropTableCmd = id alterTableCmd = id instance IsSql92AlterTableSyntax HsAction where type Sql92AlterTableTableNameSyntax HsAction = TableName type Sql92AlterTableAlterTableActionSyntax HsAction = HsNone alterTableSyntax _ _ = error "alterTableSyntax" instance IsSql92AlterTableActionSyntax HsNone where type Sql92AlterTableColumnSchemaSyntax HsNone = HsColumnSchema type Sql92AlterTableAlterColumnActionSyntax HsNone = HsNone alterColumnSyntax _ _ = HsNone addColumnSyntax _ _ = HsNone dropColumnSyntax _ = HsNone renameTableToSyntax _ = HsNone renameColumnToSyntax _ _ = HsNone instance IsSql92AlterColumnActionSyntax HsNone where setNullSyntax = HsNone setNotNullSyntax = HsNone instance IsSql92DropTableSyntax HsAction where type Sql92DropTableTableNameSyntax HsAction = TableName dropTableSyntax nm = HsAction [ (Nothing, dropTable) ] [] where dropTable = hsApp (hsVar "dropTable") [ hsVar (fromString (hsTableVarName nm)) ] instance IsSql92CreateTableSyntax HsAction where type Sql92CreateTableTableNameSyntax HsAction = TableName type Sql92CreateTableOptionsSyntax HsAction = HsNone type Sql92CreateTableTableConstraintSyntax HsAction = HsTableConstraint type Sql92CreateTableColumnSchemaSyntax HsAction = HsColumnSchema createTableSyntax _ nm fields cs = HsAction [ ( Just (Hs.PVar () (Hs.Ident () varName)) , migration ) ] [ entity ] where (varName, tyName, tyConName) = ( hsTableVarName nm, hsTableTypeName nm ++ "T", hsTableTypeName nm ) mkHsFieldName fieldNm = "_" ++ varName ++ case T.unpack fieldNm of [] -> error "empty field name" (x:xs) -> toUpper x:xs HsTableConstraintDecls tableInstanceDecls constraintDecls = foldMap (\(HsTableConstraint mkConstraint) -> mkConstraint (fromString tyConName) fieldLookup) cs fieldLookup = HsFieldLookup $ \fieldNm -> fmap (\(fieldNm', ty') -> (fromString (mkHsFieldName fieldNm'), ty')) $ find ( (== fieldNm) . 
fst ) tyConFields migration = hsApp (hsVarFrom "createTable" "Database.Beam.Migrate") [ hsStr (fromString (hsTableVarName nm)) , hsApp (hsTyCon (fromString tyConName)) (map (\(fieldNm, ty) -> mkHsColumnSchema ty fieldNm) fields) ] entity = HsEntity { hsEntityBackend = HsBeamBackendConstrained [ beamMigrateSqlBackend ] , hsEntityName = HsEntityName varName , hsEntityDecls = [ HsDecl tblDecl imports [ Hs.EThingWith () (Hs.EWildcard () 0) (unqual tyName) [] ] , HsDecl tblBeamable imports [] , HsDecl tblPun imports [ Hs.EVar () (unqual tyConName) ] , HsDecl tblShowInstance imports [] , HsDecl tblEqInstance imports [] , HsDecl tblInstanceDecl imports [] ] ++ constraintDecls , hsEntityDbDecl = HsDbField (\f -> tyApp f [ tyApp (tyConNamed "TableEntity") [tyConNamed tyName] ]) , hsEntityExp = hsVar (fromString varName) } imports = foldMap (\(_, ty) -> hsTypeImports (hsColumnSchemaType ty)) fields tblDecl = dataDecl tblDeclHead [ tblConDecl ] (Just deriving_) tblDeclHead = Hs.DHApp () (Hs.DHead () (Hs.Ident () tyName)) (Hs.UnkindedVar () (Hs.Ident () "f")) tblConDecl = Hs.QualConDecl () Nothing Nothing (Hs.RecDecl () (Hs.Ident () tyConName) tyConFieldDecls) tyConFieldDecls = map (\(fieldNm, ty) -> Hs.FieldDecl () [ Hs.Ident () (mkHsFieldName fieldNm) ] ty) tyConFields tyConFields = map (\(fieldNm, ty) -> ( fieldNm , tyApp (tyConNamed "Columnar") [ tyVarNamed "f" , hsTypeSyntax (hsColumnSchemaType ty) ])) fields deriving_ = derivingDecl [ inst "Generic" ] tblBeamable = hsInstance "Beamable" [ tyConNamed tyName ] [] tblPun = Hs.TypeDecl () (Hs.DHead () (Hs.Ident () tyConName)) (tyApp (tyConNamed tyName) [ tyConNamed "Identity" ]) tblEqInstance = hsDerivingInstance "Eq" [ tyConNamed tyConName ] tblShowInstance = hsDerivingInstance "Show" [ tyConNamed tyConName] tblInstanceDecl = hsInstance "Table" [ tyConNamed tyName ] tableInstanceDecls instance IsSql92ColumnSchemaSyntax HsColumnSchema where type Sql92ColumnSchemaColumnConstraintDefinitionSyntax HsColumnSchema = HsConstraintDefinition type Sql92ColumnSchemaColumnTypeSyntax HsColumnSchema = HsDataType type Sql92ColumnSchemaExpressionSyntax HsColumnSchema = HsExpr columnSchemaSyntax dataType _ cs _ = HsColumnSchema (\nm -> fieldExpr nm) (modTy $ hsDataTypeType dataType) where notNullable = any ((==notNullConstraintSyntax) . 
hsConstraintDefinitionConstraint) cs modTy t = if notNullable then t else t { hsTypeSyntax = tyApp (tyConNamed "Maybe") [ hsTypeSyntax t ] } modDataTy e = if notNullable then e else hsApp (hsVarFrom "maybeType" "Database.Beam.Migrate") [e] fieldExpr nm = hsApp (hsVarFrom "field" "Database.Beam.Migrate") ([ hsStr nm , modDataTy (hsDataTypeMigration dataType) ] ++ map hsConstraintDefinitionConstraint cs) instance IsSql92TableConstraintSyntax HsTableConstraint where primaryKeyConstraintSyntax fields = HsTableConstraint $ \tblNm tblFields -> let primaryKeyDataDecl = insDataDecl primaryKeyType [ primaryKeyConDecl ] (Just primaryKeyDeriving) tableTypeNm = tblNm <> "T" tableTypeKeyNm = tblNm <> "Key" (fieldRecordNames, fieldTys) = unzip (fromMaybe (error "fieldTys") (mapM (hsFieldLookup tblFields) fields)) primaryKeyType = tyApp (tyConNamed "PrimaryKey") [ tyConNamed (T.unpack tableTypeNm), tyVarNamed "f" ] primaryKeyConDecl = Hs.QualConDecl () Nothing Nothing (Hs.ConDecl () (Hs.Ident () (T.unpack tableTypeKeyNm)) fieldTys) primaryKeyDeriving = derivingDecl [ inst "Generic" ] primaryKeyTypeDecl = Hs.TypeDecl () (Hs.DHead () (Hs.Ident () (T.unpack tableTypeKeyNm))) (tyApp (tyConNamed "PrimaryKey") [ tyConNamed (T.unpack tableTypeNm) , tyConNamed "Identity" ]) primaryKeyFunDecl = Hs.InsDecl () (Hs.FunBind () [Hs.Match () (Hs.Ident () "primaryKey") [] (Hs.UnGuardedRhs () primaryKeyFunBody) Nothing]) primaryKeyFunBody = hsExprSyntax $ hsApApp (hsVar tableTypeKeyNm) (map hsVar fieldRecordNames) decl d = HsDecl d mempty mempty in HsTableConstraintDecls [ primaryKeyDataDecl , primaryKeyFunDecl ] (HsDecl primaryKeyTypeDecl mempty [ Hs.EVar () (unqual (T.unpack tableTypeKeyNm)) ]: map decl [ hsInstance "Beamable" [ tyParens (tyApp (tyConNamed "PrimaryKey") [ tyConNamed (T.unpack tableTypeNm) ]) ] [] , hsDerivingInstance "Eq" [ tyConNamed (T.unpack tableTypeKeyNm) ] , hsDerivingInstance "Show" [ tyConNamed (T.unpack tableTypeKeyNm) ] ]) instance IsSql92ColumnConstraintDefinitionSyntax HsConstraintDefinition where type Sql92ColumnConstraintDefinitionAttributesSyntax HsConstraintDefinition = HsNone type Sql92ColumnConstraintDefinitionConstraintSyntax HsConstraintDefinition = HsExpr constraintDefinitionSyntax Nothing expr Nothing = HsConstraintDefinition expr constraintDefinitionSyntax _ _ _ = error "constraintDefinitionSyntax{HsExpr}" instance Sql92SerializableConstraintDefinitionSyntax HsConstraintDefinition where serializeConstraint _ = "unknown-constrainst" instance IsSql92MatchTypeSyntax HsNone where fullMatchSyntax = HsNone partialMatchSyntax = HsNone instance IsSql92ReferentialActionSyntax HsNone where referentialActionCascadeSyntax = HsNone referentialActionNoActionSyntax = HsNone referentialActionSetDefaultSyntax = HsNone referentialActionSetNullSyntax = HsNone instance IsSql92ExtractFieldSyntax HsExpr where secondsField = hsVar "secondsField" minutesField = hsVar "minutesField" hourField = hsVar "hourField" yearField = hsVar "yearField" monthField = hsVar "monthField" dayField = hsVar "dayField" instance IsSql92ExpressionSyntax HsExpr where type Sql92ExpressionFieldNameSyntax HsExpr = HsExpr type Sql92ExpressionSelectSyntax HsExpr = SqlSyntaxBuilder type Sql92ExpressionValueSyntax HsExpr = HsExpr type Sql92ExpressionQuantifierSyntax HsExpr = HsExpr type Sql92ExpressionExtractFieldSyntax HsExpr = HsExpr type Sql92ExpressionCastTargetSyntax HsExpr = HsDataType valueE = hsApp (hsVar "valueE") . 
pure rowE = error "rowE" currentTimestampE = hsVar "currentTimestampE" defaultE = hsVar "defaultE" coalesceE = hsApp (hsVar "coalesceE") fieldE = hsApp (hsVar "fieldE") . pure betweenE a b c = hsApp (hsVar "betweenE") [a, b, c] andE a b = hsApp (hsVar "andE") [a, b] orE a b = hsApp (hsVar "orE") [a, b] addE a b = hsApp (hsVar "addE") [a, b] subE a b = hsApp (hsVar "subE") [a, b] mulE a b = hsApp (hsVar "mulE") [a, b] divE a b = hsApp (hsVar "divE") [a, b] modE a b = hsApp (hsVar "modE") [a, b] likeE a b = hsApp (hsVar "likeE") [a, b] overlapsE a b = hsApp (hsVar "overlapsE") [a, b] positionE a b = hsApp (hsVar "positionE") [a, b] notE = hsApp (hsVar "notE") . pure negateE = hsApp (hsVar "negateE") . pure absE = hsApp (hsVar "absE") . pure charLengthE = hsApp (hsVar "charLengthE") . pure octetLengthE = hsApp (hsVar "octetLengthE") . pure bitLengthE = hsApp (hsVar "bitLengthE") . pure lowerE = hsApp (hsVar "lowerE") . pure upperE = hsApp (hsVar "upperE") . pure trimE = hsApp (hsVar "trimE") . pure existsE = error "existsE" uniqueE = error "uniqueE" subqueryE = error "subqueryE" caseE = error "caseE" nullIfE a b = hsApp (hsVar "nullIfE") [a, b] castE = error "castE" extractE = error "extractE" isNullE = hsApp (hsVar "isNullE") . pure isNotNullE = hsApp (hsVar "isNotNullE") . pure isTrueE = hsApp (hsVar "isTrueE") . pure isFalseE = hsApp (hsVar "isFalseE") . pure isNotTrueE = hsApp (hsVar "isNotTrueE") . pure isNotFalseE = hsApp (hsVar "isNotFalseE") . pure isUnknownE = hsApp (hsVar "isUnknownE") . pure isNotUnknownE = hsApp (hsVar "isNotUnknownE") . pure eqE q a b = hsApp (hsVar "eqE") [hsMaybe q, a, b] neqE q a b = hsApp (hsVar "neqE") [hsMaybe q, a, b] gtE q a b = hsApp (hsVar "gtE") [hsMaybe q, a, b] ltE q a b = hsApp (hsVar "ltE") [hsMaybe q, a, b] geE q a b = hsApp (hsVar "geE") [hsMaybe q, a, b] leE q a b = hsApp (hsVar "leE") [hsMaybe q, a, b] inE a b = hsApp (hsVar "inE") [a, hsList b] instance IsSql92QuantifierSyntax HsExpr where quantifyOverAll = hsVar "quantifyOverAll" quantifyOverAny = hsVar "quantifyOverAny" instance IsSql92ColumnConstraintSyntax HsExpr where type Sql92ColumnConstraintExpressionSyntax HsExpr = HsExpr type Sql92ColumnConstraintMatchTypeSyntax HsExpr = HsNone type Sql92ColumnConstraintReferentialActionSyntax HsExpr = HsNone notNullConstraintSyntax = hsVarFrom "notNull" "Database.Beam.Migrate" uniqueColumnConstraintSyntax = hsVar "unique" checkColumnConstraintSyntax = error "checkColumnConstraintSyntax" primaryKeyColumnConstraintSyntax = error "primaryKeyColumnConstraintSyntax" referencesConstraintSyntax = error "referencesConstraintSyntax" instance IsSql92ConstraintAttributesSyntax HsNone where initiallyDeferredAttributeSyntax = HsNone initiallyImmediateAttributeSyntax = HsNone notDeferrableAttributeSyntax = HsNone deferrableAttributeSyntax = HsNone instance HasSqlValueSyntax HsExpr Int where sqlValueSyntax = hsInt instance HasSqlValueSyntax HsExpr Bool where sqlValueSyntax True = hsVar "True" sqlValueSyntax False = hsVar "False" instance IsSql92FieldNameSyntax HsExpr where qualifiedField tbl nm = hsApp (hsVar "qualifiedField") [ hsStr tbl, hsStr nm ] unqualifiedField nm = hsApp (hsVar "unqualifiedField") [ hsStr nm ] hsErrorType :: String -> HsDataType hsErrorType msg = HsDataType (hsApp (hsVar "error") [ hsStr ("Unknown type: " <> fromString msg) ]) (HsType (tyConNamed "Void") (importSome "Data.Void" [ importTyNamed "Void" ])) (BeamSerializedDataType "hsErrorType") instance IsSql92DataTypeSyntax HsDataType where intType = HsDataType (hsVarFrom "int" 
"Database.Beam.Migrate") (HsType (tyConNamed "Int") mempty) intType smallIntType = HsDataType (hsVarFrom "smallint" "Database.Beam.Migrate") (HsType (tyConNamed "Int16") (importSome "Data.Int" [ importTyNamed "Int16" ])) intType doubleType = HsDataType (hsVarFrom "double" "Database.Beam.Migrate") (HsType (tyConNamed "Double") mempty) doubleType floatType width = HsDataType (hsApp (hsVarFrom "float" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> width) ]) (HsType (tyConNamed "Scientific") (importSome "Data.Scientific" [ importTyNamed "Scientific" ])) (floatType width) realType = HsDataType (hsVarFrom "real" "Database.Beam.Migrate") (HsType (tyConNamed "Double") mempty) realType charType _ Just {} = error "char collation" charType width Nothing = HsDataType (hsApp (hsVarFrom "char" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> width) ]) (HsType (tyConNamed "Text") (importSome "Data.Text" [ importTyNamed "Text" ])) (charType width Nothing) varCharType _ Just {} = error "varchar collation" varCharType width Nothing = HsDataType (hsApp (hsVarFrom "varchar" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> width) ]) (HsType (tyConNamed "Text") (importSome "Data.Text" [ importTyNamed "Text" ])) (varCharType width Nothing) nationalCharType width = HsDataType (hsApp (hsVarFrom "nationalChar" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> width) ]) (HsType (tyConNamed "Text") (importSome "Data.Text" [ importTyNamed "Text" ])) (nationalCharType width) nationalVarCharType width = HsDataType (hsApp (hsVarFrom "nationalVarchar" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> width) ]) (HsType (tyConNamed "Text") (importSome "Data.Text" [ importTyNamed "Text" ])) (nationalVarCharType width) bitType width = HsDataType (hsApp (hsVarFrom "bit" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> width) ]) (HsType (tyConNamed "SqlBits") mempty) (bitType width) varBitType width = HsDataType (hsApp (hsVarFrom "varbit" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> width) ]) (HsType (tyConNamed "SqlBits") mempty) (varBitType width) dateType = HsDataType (hsVarFrom "date" "Database.Beam.Migrate") (HsType (tyConNamed "Day") (importSome "Data.Time" [ importTyNamed "Day" ])) dateType timeType p False = HsDataType (hsApp (hsVarFrom "time" "Database.Beam.Migrate") [ hsMaybe Nothing ] ) (HsType (tyConNamed "TimeOfDay") (importSome "Data.Time" [ importTyNamed "TimeOfDay" ])) (timeType p False) timeType _ _ = error "timeType" domainType _ = error "domainType" timestampType Nothing True = HsDataType (hsVarFrom "timestamptz" "Database.Beam.Migrate") (HsType (tyConNamed "LocalTime") (importSome "Data.Time" [ importTyNamed "LocalTime" ])) (timestampType Nothing True) timestampType Nothing False = HsDataType (hsVarFrom "timestamp" "Database.Beam.Migrate") (HsType (tyConNamed "LocalTime") (importSome "Data.Time" [ importTyNamed "LocalTime" ])) (timestampType Nothing False) timestampType _ _ = error "timestampType with prec" numericType precDec = HsDataType (hsApp (hsVarFrom "numeric" "Database.Beam.Migrate") [ hsMaybe (fmap (\(prec, dec) -> hsTuple [ hsInt prec, hsMaybe (fmap hsInt dec) ]) precDec) ]) (HsType (tyConNamed "Scientific") (importSome "Data.Scientific" [ importTyNamed "Scientific" ])) (numericType precDec) decimalType = numericType instance IsSql99DataTypeSyntax HsDataType where characterLargeObjectType = HsDataType (hsVarFrom "characterLargeObject" "Database.Beam.Migrate") (HsType (tyConNamed "Text") (importSome "Data.Text" [ importTyNamed "Text" ])) characterLargeObjectType binaryLargeObjectType = HsDataType (hsVarFrom 
"binaryLargeObject" "Database.Beam.Migrate") (HsType (tyConNamed "ByteString") (importSome "Data.ByteString" [ importTyNamed "ByteString" ])) binaryLargeObjectType booleanType = HsDataType (hsVarFrom "boolean" "Database.Beam.Migrate") (HsType (tyConNamed "Bool") mempty) booleanType arrayType (HsDataType migType (HsType typeExpr typeImports) serialized) len = HsDataType (hsApp (hsVarFrom "array" "Database.Beam.Migrate") [ migType, hsInt len ]) (HsType (tyApp (tyConNamed "Vector") [typeExpr]) (typeImports <> importSome "Data.Vector" [ importTyNamed "Vector" ])) (arrayType serialized len) rowType _ = error "row types" instance IsSql2003BinaryAndVarBinaryDataTypeSyntax HsDataType where binaryType prec = HsDataType (hsApp (hsVarFrom "binary" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> prec) ]) (HsType (tyConNamed "Integer") mempty) (binaryType prec) varBinaryType prec = HsDataType (hsApp (hsVarFrom "varbinary" "Database.Beam.Migrate") [ hsMaybe (hsInt <$> prec) ]) (HsType (tyConNamed "Integer") mempty) (varBinaryType prec) instance IsSql2008BigIntDataTypeSyntax HsDataType where bigIntType = HsDataType (hsVarFrom "bigint" "Database.Beam.Migrate") (HsType (tyConNamed "Int64") (importSome "Data.Int" [ importTyNamed "Int64" ])) bigIntType instance Sql92SerializableDataTypeSyntax HsDataType where serializeDataType = fromBeamSerializedDataType . hsDataTypeSerialized -- * HsSyntax utilities tyParens :: Hs.Type () -> Hs.Type () tyParens = Hs.TyParen () functionTy :: Hs.Type () -> Hs.Type () -> Hs.Type () functionTy = Hs.TyFun () tyTuple :: [ Hs.Type () ] -> Hs.Type () tyTuple = Hs.TyTuple () Hs.Boxed tyApp :: Hs.Type () -> [ Hs.Type () ] -> Hs.Type () tyApp fn args = foldl (Hs.TyApp ()) fn args tyConNamed :: String -> Hs.Type () tyConNamed nm = Hs.TyCon () (Hs.UnQual () (Hs.Ident () nm)) tyVarNamed :: String -> Hs.Type () tyVarNamed nm = Hs.TyVar () (Hs.Ident () nm) combineHsExpr :: (Hs.Exp () -> Hs.Exp () -> Hs.Exp ()) -> HsExpr -> HsExpr -> HsExpr combineHsExpr f a b = HsExpr (f (hsExprSyntax a) (hsExprSyntax b)) (hsExprImports a <> hsExprImports b) (hsExprConstraints a <> hsExprConstraints b) (hsExprTypeVariables a <> hsExprTypeVariables b) hsApp :: HsExpr -> [HsExpr] -> HsExpr hsApp fn args = foldl hsDoApp fn args where hsDoApp = combineHsExpr (Hs.App ()) hsVisibleTyApp :: HsExpr -> Hs.Type () -> HsExpr hsVisibleTyApp e t = e { hsExprSyntax = Hs.App () (hsExprSyntax e) (Hs.TypeApp () t) } hsApApp :: HsExpr -> [HsExpr] -> HsExpr hsApApp fn [] = hsApp (hsVar "pure") [ fn ] hsApApp fn (x:xs) = foldl mkAp (mkFmap fn x) xs where mkFmap = combineHsExpr (\a b -> Hs.InfixApp () a fmapOp b) mkAp = combineHsExpr (\a b -> Hs.InfixApp () a apOp b) fmapOp = hsOp "<$>" apOp = hsOp "<*>" hsStr :: T.Text -> HsExpr hsStr t = HsExpr (Hs.Lit () (Hs.String () s s)) mempty mempty mempty where s = T.unpack t hsRecCon :: T.Text -> [ (T.Text, HsExpr) ] -> HsExpr hsRecCon nm fs = foldl (combineHsExpr const) (HsExpr e mempty mempty mempty) (map snd fs) where e = Hs.RecConstr () (Hs.UnQual () (Hs.Ident () (T.unpack nm))) (map (\(fieldNm, e') -> Hs.FieldUpdate () (Hs.UnQual () (Hs.Ident () (T.unpack fieldNm))) (hsExprSyntax e')) fs) hsMaybe :: Maybe HsExpr -> HsExpr hsMaybe Nothing = hsTyCon "Nothing" hsMaybe (Just e) = hsApp (hsTyCon "Just") [e] hsVar :: T.Text -> HsExpr hsVar nm = HsExpr (Hs.Var () (Hs.UnQual () (Hs.Ident () (T.unpack nm)))) mempty mempty mempty hsVarFrom :: T.Text -> T.Text -> HsExpr hsVarFrom nm modNm = HsExpr (Hs.Var () (Hs.UnQual () (Hs.Ident () (T.unpack nm)))) (importSome modNm [ importVarNamed nm]) 
mempty mempty hsTyCon :: T.Text -> HsExpr hsTyCon nm = HsExpr (Hs.Con () (Hs.UnQual () (Hs.Ident () (T.unpack nm)))) mempty mempty mempty hsInt :: (Integral a, Show a) => a -> HsExpr hsInt i = HsExpr (Hs.Lit () (Hs.Int () (fromIntegral i) (show i))) mempty mempty mempty hsOp :: T.Text -> Hs.QOp () hsOp nm = Hs.QVarOp () (Hs.UnQual () (Hs.Symbol () (T.unpack nm))) hsInstance :: T.Text -> [ Hs.Type () ] -> [ Hs.InstDecl () ] -> Hs.Decl () hsInstance classNm params decls = Hs.InstDecl () Nothing (Hs.IRule () Nothing Nothing instHead) $ case decls of [] -> Nothing _ -> Just decls where instHead = foldl (Hs.IHApp ()) (Hs.IHCon () (Hs.UnQual () (Hs.Ident () (T.unpack classNm)))) params hsDerivingInstance :: T.Text -> [ Hs.Type () ] -> Hs.Decl () hsDerivingInstance classNm params = #if MIN_VERSION_haskell_src_exts(1,20,0) Hs.DerivDecl () Nothing Nothing (Hs.IRule () Nothing Nothing instHead) #else Hs.DerivDecl () Nothing (Hs.IRule () Nothing Nothing instHead) #endif where instHead = foldl (Hs.IHApp ()) (Hs.IHCon () (Hs.UnQual () (Hs.Ident () (T.unpack classNm)))) params hsList, hsTuple :: [ HsExpr ] -> HsExpr hsList = foldl (combineHsExpr addList) (HsExpr (Hs.List () []) mempty mempty mempty) where addList (Hs.List () ts) t = Hs.List () (ts ++ [t]) addList _ _ = error "addList" hsTuple = foldl (combineHsExpr addTuple) (HsExpr (Hs.Tuple () Hs.Boxed []) mempty mempty mempty) where addTuple (Hs.Tuple () boxed ts) t = Hs.Tuple () boxed (ts ++ [t]) addTuple _ _ = error "addTuple" inst :: String -> Hs.InstRule () inst = Hs.IRule () Nothing Nothing . Hs.IHCon () . Hs.UnQual () . Hs.Ident () beamMigrateSqlBackend :: HsBackendConstraint beamMigrateSqlBackend = HsBackendConstraint $ \beTy -> Hs.ClassA () (Hs.UnQual () (Hs.Ident () "BeamMigrateSqlBackend")) [ beTy ] -- * Orphans instance Hashable (Hs.Exp ()) instance Hashable (Hs.QName ()) instance Hashable (Hs.ModuleName ()) instance Hashable (Hs.IPName ()) instance Hashable (Hs.Asst ()) instance Hashable (Hs.Literal ()) instance Hashable (Hs.Name ()) instance Hashable (Hs.Type ()) instance Hashable (Hs.QOp ()) instance Hashable (Hs.TyVarBind ()) #if !MIN_VERSION_haskell_src_exts(1, 21, 0) instance Hashable (Hs.Kind ()) #endif instance Hashable (Hs.Context ()) instance Hashable (Hs.SpecialCon ()) instance Hashable (Hs.Pat ()) instance Hashable (Hs.Sign ()) instance Hashable Hs.Boxed instance Hashable (Hs.Promoted ()) instance Hashable (Hs.Binds ()) instance Hashable (Hs.Splice ()) instance Hashable (Hs.PatField ()) instance Hashable (Hs.Decl ()) instance Hashable (Hs.DeclHead ()) instance Hashable (Hs.IPBind ()) instance Hashable (Hs.RPat ()) instance Hashable (Hs.Stmt ()) instance Hashable (Hs.RPatOp ()) instance Hashable (Hs.XName ()) instance Hashable (Hs.ResultSig ()) instance Hashable (Hs.Alt ()) instance Hashable (Hs.Unpackedness ()) instance Hashable (Hs.InjectivityInfo ()) instance Hashable (Hs.PXAttr ()) instance Hashable (Hs.Rhs ()) instance Hashable (Hs.FieldUpdate ()) instance Hashable (Hs.TypeEqn ()) instance Hashable (Hs.QualStmt ()) instance Hashable (Hs.DataOrNew ()) instance Hashable (Hs.Bracket ()) instance Hashable (Hs.QualConDecl ()) instance Hashable (Hs.XAttr ()) instance Hashable (Hs.ConDecl ()) instance Hashable (Hs.Deriving ()) instance Hashable (Hs.InstRule ()) instance Hashable (Hs.FieldDecl ()) instance Hashable (Hs.GadtDecl ()) instance Hashable (Hs.InstHead ()) instance Hashable (Hs.FunDep ()) instance Hashable (Hs.ClassDecl ()) instance Hashable (Hs.Overlap ()) instance Hashable (Hs.InstDecl ()) instance Hashable (Hs.Assoc 
()) instance Hashable (Hs.Op ()) instance Hashable (Hs.Match ()) instance Hashable (Hs.PatternSynDirection ()) instance Hashable (Hs.CallConv ()) instance Hashable (Hs.Safety ()) instance Hashable (Hs.Rule ()) instance Hashable (Hs.Activation ()) instance Hashable (Hs.RuleVar ()) instance Hashable (Hs.Annotation ()) instance Hashable (Hs.BooleanFormula ()) instance Hashable (Hs.Role ()) instance Hashable (Hs.GuardedRhs ()) instance Hashable (Hs.BangType ()) instance Hashable (Hs.ImportSpec ()) instance Hashable (Hs.Namespace ()) instance Hashable (Hs.CName ()) #if MIN_VERSION_haskell_src_exts(1,20,0) instance Hashable (Hs.DerivStrategy ()) instance Hashable (Hs.MaybePromotedName ()) #endif instance Hashable a => Hashable (S.Set a) where hashWithSalt s a = hashWithSalt s (S.toList a)
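-- A minimal usage sketch (illustrative only; not part of the original module).
-- It assumes an 'HsModule' value has already been assembled by the surrounding
-- beam-migrate tooling; 'renderHsSchema' then either fails with a message or
-- yields the generated Haskell source as a String.
--
--   writeSchemaModule :: FilePath -> HsModule -> IO ()
--   writeSchemaModule out hsMod =
--     case renderHsSchema hsMod of
--       Left err  -> fail ("could not render schema: " ++ err)
--       Right src -> writeFile out src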
// app/src/lib/constants.ts
export const DEFAULT_SELECTED_COIN = "btc";
/*
 * Copyright 1993-2010 NVIDIA Corporation. All rights reserved.
 *
 * NVIDIA Corporation and its licensors retain all intellectual property and
 * proprietary rights in and to this software and related documentation.
 * Any use, reproduction, disclosure, or distribution of this software
 * and related documentation without an express license agreement from
 * NVIDIA Corporation is strictly prohibited.
 *
 * Please refer to the applicable NVIDIA end user license agreement (EULA)
 * associated with this source code for terms and conditions that govern
 * your use of this NVIDIA software.
 *
 */
#include <stdio.h>
//#include <oclUtils.h>
#include "Scan.h"
#include <string>

using namespace std;

Scan::Scan(cl_context GPUContext, cl_command_queue CommandQue, unsigned int numElements, const char* path) :
    cxGPUContext(GPUContext),
    cqCommandQueue(CommandQue),
    mNumElements(numElements)
{
    cl_int ciErrNum;

    if (numElements > MAX_WORKGROUP_INCLUSIVE_SCAN_SIZE)
    {
        d_Buffer = clCreateBuffer(cxGPUContext, CL_MEM_READ_WRITE,
                                  numElements / MAX_WORKGROUP_INCLUSIVE_SCAN_SIZE * sizeof(cl_uint),
                                  NULL, &ciErrNum);
        ////oclCheckError(ciErrNum, CL_SUCCESS);
    }

    //shrLog("Create and build Scan program\n");
    size_t szKernelLength;                   // Byte size of kernel code
    const char* SourceFile = "Scan_b.cl";    // IAN: FIX CODE AND REMOVE HARDCODING (path to cl files)

    // cScan should contain the source code
    string paths(CL_SORT_SOURCE_DIR);
    paths = paths + "/" + SourceFile;
    const char* pathr = paths.c_str();
    FILE* fd = fopen(pathr, "r");
    if (fd == NULL)
    {
        // guard added: the original read from an unchecked FILE* and crashed if the file was missing
        fprintf(stderr, "could not open kernel source %s\n", pathr);
        return;
    }
    char* cScan = new char[20000];
    int nb = fread(cScan, 1, 10000, fd);   // kernel source is assumed to fit in 10000 bytes
    fclose(fd);
    szKernelLength = nb;
    //printf("cScan: %s\n", cScan);
    //printf("cScan= %s\n", cScan);
    ////oclCheckErrorEX(cScan == NULL, false, NULL);

    printf("about to create sort program\n");
    cpProgram = clCreateProgramWithSource(cxGPUContext, 1, (const char **)&cScan, &szKernelLength, &ciErrNum);
    ////oclCheckError(ciErrNum, CL_SUCCESS);

    printf("about to build sort program\n");
    ciErrNum = clBuildProgram(cpProgram, 0, NULL, "-cl-fast-relaxed-math", NULL, NULL);
    if (ciErrNum != CL_SUCCESS)
    {
        printf("checking errors for sort\n");
        // write out standard error, Build Log and PTX, then cleanup and exit
        ////shrLogEx(LOGBOTH | ERRORMSG, ciErrNum, STDERROR);
        ////oclLogBuildInfo(cpProgram, oclGetFirstDev(cxGPUContext));
        //oclLogPtx(cpProgram, oclGetFirstDev(cxGPUContext), "Scan.ptx");
        //printf("error: %s", oclErrorString(ciErrNum));
        //oclCheckError(ciErrNum, CL_SUCCESS);
    }

    ckScanExclusiveLocal1 = clCreateKernel(cpProgram, "scanExclusiveLocal1", &ciErrNum);
    //oclCheckError(ciErrNum, CL_SUCCESS);
    ckScanExclusiveLocal2 = clCreateKernel(cpProgram, "scanExclusiveLocal2", &ciErrNum);
    //oclCheckError(ciErrNum, CL_SUCCESS);
    ckUniformUpdate = clCreateKernel(cpProgram, "uniformUpdate", &ciErrNum);
    //oclCheckError(ciErrNum, CL_SUCCESS);

    delete[] cScan;   // was free(cScan): memory obtained with new[] must be released with delete[]
}

Scan::~Scan()
{
    cl_int ciErrNum;
    ciErrNum  = clReleaseKernel(ckScanExclusiveLocal1);
    ciErrNum |= clReleaseKernel(ckScanExclusiveLocal2);
    ciErrNum |= clReleaseKernel(ckUniformUpdate);
    if (mNumElements > MAX_WORKGROUP_INCLUSIVE_SCAN_SIZE)
    {
        ciErrNum |= clReleaseMemObject(d_Buffer);
    }
    ciErrNum |= clReleaseProgram(cpProgram);
    //oclCheckErrorEX(ciErrNum, CL_SUCCESS, NULL);
}

// main exclusive scan routine
void Scan::scanExclusiveLarge(cl_mem d_Dst, cl_mem d_Src, unsigned int batchSize, unsigned int arrayLength)
{
    //Check power-of-two factorization
    unsigned int log2L;
    unsigned int factorizationRemainder = factorRadix2(log2L, arrayLength);
    //oclCheckError( factorizationRemainder == 1, shrTRUE);

    //Check supported size range
    //printf("arrayLength= %d\n", arrayLength);
    //printf("MIN/MAX: %d, %d\n", MIN_LARGE_ARRAY_SIZE, MAX_LARGE_ARRAY_SIZE);
    //oclCheckError( (arrayLength >= MIN_LARGE_ARRAY_SIZE) && (arrayLength <= MAX_LARGE_ARRAY_SIZE), shrTRUE );

    //Check total batch size limit
    //oclCheckError( (batchSize * arrayLength) <= MAX_BATCH_ELEMENTS, shrTRUE );

    scanExclusiveLocal1(d_Dst, d_Src, (batchSize * arrayLength) / (4 * WORKGROUP_SIZE), 4 * WORKGROUP_SIZE);

    scanExclusiveLocal2(d_Buffer, d_Dst, d_Src, batchSize, arrayLength / (4 * WORKGROUP_SIZE));

    uniformUpdate(d_Dst, d_Buffer, (batchSize * arrayLength) / (4 * WORKGROUP_SIZE));
}

void Scan::scanExclusiveLocal1(cl_mem d_Dst, cl_mem d_Src, unsigned int n, unsigned int size)
{
    cl_int ciErrNum;
    size_t localWorkSize, globalWorkSize;

    ciErrNum  = clSetKernelArg(ckScanExclusiveLocal1, 0, sizeof(cl_mem), (void *)&d_Dst);
    ciErrNum |= clSetKernelArg(ckScanExclusiveLocal1, 1, sizeof(cl_mem), (void *)&d_Src);
    ciErrNum |= clSetKernelArg(ckScanExclusiveLocal1, 2, 2 * WORKGROUP_SIZE * sizeof(unsigned int), NULL);
    ciErrNum |= clSetKernelArg(ckScanExclusiveLocal1, 3, sizeof(unsigned int), (void *)&size);
    //oclCheckError(ciErrNum, CL_SUCCESS);

    localWorkSize = WORKGROUP_SIZE;
    globalWorkSize = (n * size) / 4;

    ciErrNum = clEnqueueNDRangeKernel(cqCommandQueue, ckScanExclusiveLocal1, 1, NULL, &globalWorkSize, &localWorkSize, 0, NULL, NULL);
    //oclCheckError(ciErrNum, CL_SUCCESS);
}

void Scan::scanExclusiveLocal2(cl_mem d_Buffer, cl_mem d_Dst, cl_mem d_Src, unsigned int n, unsigned int size)
{
    cl_int ciErrNum;
    size_t localWorkSize, globalWorkSize;

    unsigned int elements = n * size;
    ciErrNum  = clSetKernelArg(ckScanExclusiveLocal2, 0, sizeof(cl_mem), (void *)&d_Buffer);
    ciErrNum |= clSetKernelArg(ckScanExclusiveLocal2, 1, sizeof(cl_mem), (void *)&d_Dst);
    ciErrNum |= clSetKernelArg(ckScanExclusiveLocal2, 2, sizeof(cl_mem), (void *)&d_Src);
    ciErrNum |= clSetKernelArg(ckScanExclusiveLocal2, 3, 2 * WORKGROUP_SIZE * sizeof(unsigned int), NULL);
    ciErrNum |= clSetKernelArg(ckScanExclusiveLocal2, 4, sizeof(unsigned int), (void *)&elements);
    ciErrNum |= clSetKernelArg(ckScanExclusiveLocal2, 5, sizeof(unsigned int), (void *)&size);
    //oclCheckError(ciErrNum, CL_SUCCESS);

    localWorkSize = WORKGROUP_SIZE;
    globalWorkSize = iSnapUp(elements, WORKGROUP_SIZE);

    ciErrNum = clEnqueueNDRangeKernel(cqCommandQueue, ckScanExclusiveLocal2, 1, NULL, &globalWorkSize, &localWorkSize, 0, NULL, NULL);
    ////oclCheckError(ciErrNum, CL_SUCCESS);
}

void Scan::uniformUpdate(cl_mem d_Dst, cl_mem d_Buffer, unsigned int n)
{
    cl_int ciErrNum;
    size_t localWorkSize, globalWorkSize;

    ciErrNum  = clSetKernelArg(ckUniformUpdate, 0, sizeof(cl_mem), (void *)&d_Dst);
    ciErrNum |= clSetKernelArg(ckUniformUpdate, 1, sizeof(cl_mem), (void *)&d_Buffer);
    //oclCheckError(ciErrNum, CL_SUCCESS);

    localWorkSize = WORKGROUP_SIZE;
    globalWorkSize = n * WORKGROUP_SIZE;

    ciErrNum = clEnqueueNDRangeKernel(cqCommandQueue, ckUniformUpdate, 1, NULL, &globalWorkSize, &localWorkSize, 0, NULL, NULL);
    //oclCheckError(ciErrNum, CL_SUCCESS);
}
/** * Return the html version used in document. * @return version code */ public short apparentVersion() { switch (this.doctype) { case Dict.VERS_UNKNOWN : return htmlVersion(); case Dict.VERS_HTML20 : if (TidyUtils.toBoolean(this.versions & Dict.VERS_HTML20)) { return Dict.VERS_HTML20; } break; case Dict.VERS_HTML32 : if (TidyUtils.toBoolean(this.versions & Dict.VERS_HTML32)) { return Dict.VERS_HTML32; } break; case Dict.VERS_HTML40_STRICT : if (TidyUtils.toBoolean(this.versions & Dict.VERS_HTML40_STRICT)) { return Dict.VERS_HTML40_STRICT; } break; case Dict.VERS_HTML40_LOOSE : if (TidyUtils.toBoolean(this.versions & Dict.VERS_HTML40_LOOSE)) { return Dict.VERS_HTML40_LOOSE; } break; case Dict.VERS_FRAMESET : if (TidyUtils.toBoolean(this.versions & Dict.VERS_FRAMESET)) { return Dict.VERS_FRAMESET; } break; case Dict.VERS_XHTML11 : if (TidyUtils.toBoolean(this.versions & Dict.VERS_XHTML11)) { return Dict.VERS_XHTML11; } break; default : break; } this.lines = 1; this.columns = 1; report.warning(this, null, null, Report.INCONSISTENT_VERSION); return this.htmlVersion(); }
# imports required by this module (added for completeness)
import logging

import numpy as np
import tensorflow as tf


class Box:
    """The wrapper of all layers in the model."""

    def __init__(self, flow, models: list, n_epochs=10, batch_sz=1,
                 loss_fn=tf.losses.mean_squared_error):
        self.flow = flow
        self.models = models
        self.n_epochs = n_epochs
        self.batch_sz = batch_sz
        self.loss_fn = loss_fn

        for model in self.models:
            if hasattr(model, 'batch_sz'):
                model.batch_sz = self.batch_sz

    def train(self, x_tr, y_tr, optimizer=tf.train.AdamOptimizer(),
              loss_fn=tf.losses.mean_squared_error):
        """Train the model with training set, x and y.

        Parameters
        ----------
        x_tr : tf.Tensor or np.ndarray, training data
        y_tr : tf.Tensor or np.ndarray, training labels
        optimizer : Default value = tf.train.AdamOptimizer()
        loss_fn : Default value = tf.losses.mean_squared_error
        """
        # convert them into tensors
        x_tr = tf.convert_to_tensor(x_tr)
        y_tr = np.array(y_tr, dtype=np.float32)
        if y_tr.ndim == 1 or y_tr.shape[1] == 1:
            y_tr = np.transpose([y_tr.flatten()])
        y_tr = tf.convert_to_tensor(y_tr)

        # make them into a dataset object
        ds = tf.data.Dataset.from_tensor_slices((x_tr, y_tr)).shuffle(y_tr.shape[0])
        ds = ds.apply(tf.contrib.data.batch_and_drop_remainder(self.batch_sz))

        # loop through the epochs
        for epoch in range(self.n_epochs):
            total_loss = 0  # initialize the total loss at the beginning to be 0

            # loop through the batches
            for (batch, (xs, ys)) in enumerate(ds):
                # the loss at the beginning of the batch is zero
                loss = 0

                with tf.GradientTape() as tape:  # for descent
                    # the flow function takes xs and models to make a prediction
                    ys_hat = self.flow(xs, self.models)
                    loss += self.loss_fn(ys_hat, ys)

                total_loss += loss

                variables = []
                for model in self.models:
                    variables += model.variables

                gradients = tape.gradient(loss, variables)
                optimizer.apply_gradients(zip(gradients, variables),
                                          tf.train.get_or_create_global_step())

                if batch % 10 == 0:
                    logging.info("epoch %s batch %s loss %s"
                                 % (epoch, batch, np.asscalar(loss.numpy())))

    def predict(self, x_te):
        """Make predictions on the x of test set.

        Parameters
        ----------
        x_te : tf.Tensor or np.ndarray, test data
        """
        # this is necessary in order to go through all the samples in the test set
        for model in self.models:
            if hasattr(model, 'batch_sz'):
                model.batch_sz = 1

        ys_hat_all = np.array([])
        x_te = tf.convert_to_tensor(x_te)
        ds_te = tf.data.Dataset.from_tensor_slices(x_te)
        # ds_te = ds_te.apply(tf.contrib.data.batch_and_drop_remainder(self.batch_sz))
        ds_te = ds_te.apply(tf.contrib.data.batch_and_drop_remainder(1))

        for xs in ds_te:
            ys_hat = self.flow(xs, self.models)
            ys_hat_all = np.concatenate([ys_hat_all, ys_hat.numpy().flatten()], axis=0)

        return ys_hat_all

    def save_weights(self, file_path):
        """Save the model. Note that it is necessary to also save the shape of the input.

        Parameters
        ----------
        file_path : str
        """
        import os
        os.system('rm -rf ' + file_path)
        os.system('mkdir ' + file_path)
        for idx, model in enumerate(self.models):
            model.save_weights('%s/%s.h5' % (file_path, idx))

    def load_weights(self, file_path):
        """Restore the model.

        Parameters
        ----------
        file_path : str
        """
        # enumerate() is needed here to unpack (index, model) pairs;
        # the original iterated over self.models directly, which fails at runtime
        for idx, model in enumerate(self.models):
            model.load_weights('%s/%s.h5' % (file_path, idx))
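# A minimal, self-contained usage sketch of Box (illustrative only; not part of
# the original class). It assumes a TF 1.x eager environment, uses a single
# hypothetical Keras Dense layer as the model list, and defines `flow` as the
# simple chaining function Box expects of its `flow` argument.
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()  # Box assumes TF 1.x eager mode


def flow(xs, models):
    # apply each model in turn to the batch
    for model in models:
        xs = model(xs)
    return xs


dense = tf.keras.layers.Dense(1)
box = Box(flow, models=[dense], n_epochs=2, batch_sz=4)

x = np.random.rand(32, 3).astype(np.float32)
y = x.sum(axis=1)  # a toy regression target

box.train(x, y)
print(box.predict(x)[:5])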
from modules import zerosevenscraper
from modules import twelvescraper
from termcolor import colored

if __name__ == "__main__":
    year = int(input('Enter handbook year: '))
    if 2007 < year < 2012:
        # 2008-2011 handbooks use the older scraper
        # (the original referenced an undefined `oldscraper`; zerosevenscraper is the imported module)
        scraper = zerosevenscraper.Scraper(year)
        scraper.setup()
        scraper.export_as_csv(f'{year}.csv')
    elif 2012 <= year < 2016:
        # 2012-2015 handbooks use the newer scraper
        scraper = twelvescraper.Scraper(year)
        scraper.setup()
        scraper.export_as_csv(f'{year}.csv')
    else:
        print(colored('Year has to be between 2008 and 2015', 'red'))
<filename>src/electron/models/prime/regions/magmoorCaverns.ts<gh_stars>1-10 import { RegionObject } from '../../region'; import { PrimeItem } from '../../../enums/primeItem'; import { PrimeLocation } from '../../../enums/primeLocation'; import { PointOfNoReturnItems } from '../../../enums/pointOfNoReturnItems'; import { PrimeItemCollection } from '../itemCollection'; import { PrimeRandomizerSettings } from '../randomizerSettings'; import { Elevator } from '../../../enums/elevator'; export function magmoorCaverns(): RegionObject[] { const regions: RegionObject[] = [ { name: 'Lava Lake', locations: { [PrimeLocation.LAVA_LAKE]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { if (settings.tricks.lavaLakeItemSuitless) { return items.hasMissiles() && items.has(PrimeItem.SPACE_JUMP_BOOTS); } return items.hasMissiles() && (settings.tricks.lavaLakeItemOnlyMissiles || items.has(PrimeItem.GRAPPLE_BEAM) || items.has(PrimeItem.SPACE_JUMP_BOOTS)) } }, exits: { 'Triclops Pit': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { if (settings.tricks.lavaLakeItemSuitless) { return items.hasSuit(settings) && items.canLayBombs(); } return items.canLayBombs(); }, [Elevator.MAGMOOR_NORTH]: () => true } }, { name: '<NAME>', locations: { [PrimeLocation.TRICLOPS_PIT]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const missileReqs = (settings.tricks.triclopsPitItemWithCharge && items.has(PrimeItem.CHARGE_BEAM)) || items.hasMissiles(); const sjReqs = settings.tricks.triclopsPitItemWithoutSpaceJump || items.has(PrimeItem.SPACE_JUMP_BOOTS); const xrayReqs = settings.tricks.removeXrayReqs || items.has(PrimeItem.XRAY_VISOR); return missileReqs && sjReqs && xrayReqs; }, [PrimeLocation.STORAGE_CAVERN]: (items: PrimeItemCollection) => items.has(PrimeItem.MORPH_BALL) }, exits: { 'Monitor Station': () => true, 'Lava Lake': (items: PrimeItemCollection) => items.canLayBombs() } }, { name: 'Monitor Station', locations: { [PrimeLocation.TRANSPORT_TUNNEL_A]: (items: PrimeItemCollection) => items.canLayBombs() }, exits: { 'Triclops Pit': () => true, 'Shore Tunnel': () => true, 'Warrior Shrine': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { if (settings.tricks.warriorShrineMinimumReqs) { return true; } const boostReqs = settings.tricks.warriorShrineWithoutBoost || items.canBoost(); return boostReqs && items.has(PrimeItem.SPACE_JUMP_BOOTS); }, [Elevator.MAGMOOR_WEST]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const canBoost = settings.tricks.boostThroughBombTunnels && items.canBoost(); return canBoost || items.canLayBombs(); } }, }, { name: '<NAME>', locations: { [PrimeLocation.WARRIOR_SHRINE]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => settings.tricks.warriorShrineMinimumReqs || items.has(PrimeItem.SPACE_JUMP_BOOTS) }, exits: { 'Fiery Shores (Warrior Shrine Tunnel)': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { if (settings.pointOfNoReturnItems === PointOfNoReturnItems.ALLOW_ALL) { return items.canLayPowerBombs(); } return items.canLayBombs() && items.canLayPowerBombs(); }, 'Monitor Station': () => true } }, { name: 'Fiery Shores (Warrior Shrine Tunnel)', locations: { [PrimeLocation.FIERY_SHORES_WARRIOR_SHRINE_TUNNEL]: () => true, }, exits: { 'Fiery Shores (Shore Tunnel Side)': (items: PrimeItemCollection) => items.canLayBombs() } }, { name: 'Shore Tunnel', locations: {}, exits: { 'Shore Tunnel (Lava Pit)': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { 
if (settings.pointOfNoReturnItems !== PointOfNoReturnItems.DO_NOT_ALLOW) { return items.canLayPowerBombs(); } return items.canLayPowerBombs() && items.has(PrimeItem.SPACE_JUMP_BOOTS); }, 'Fiery Shores (Shore Tunnel Side)': () => true, 'Monitor Station': () => true } }, { name: 'Shore Tunnel (Lava Pit)', locations: { [PrimeLocation.SHORE_TUNNEL]: () => true }, exits: { 'Shore Tunnel': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => items.has(PrimeItem.SPACE_JUMP_BOOTS) || (settings.tricks.shoreTunnelEscapeWithoutSpaceJump && items.canLayBombs()) } }, { name: 'Fiery Shores (Shore Tunnel Side)', exits: { 'Fiery Shores (Tallon Elevator Side)': (items: PrimeItemCollection) => items.canLayBombs() || items.has(PrimeItem.GRAPPLE_BEAM), 'Shore Tunnel': () => true } }, { name: 'Fiery Shores (Tallon Elevator Side)', locations: { [PrimeLocation.FIERY_SHORES_MORPH_TRACK]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => items.canLayBombs() || (items.has(PrimeItem.SPACE_JUMP_BOOTS) && settings.tricks.fieryShoresItemSj) }, exits: { 'Fiery Shores (Shore Tunnel Side)': (items: PrimeItemCollection) => items.canLayBombs() || items.has(PrimeItem.GRAPPLE_BEAM), [Elevator.MAGMOOR_EAST]: (items: PrimeItemCollection) => items.has(PrimeItem.MORPH_BALL) } }, { name: '<NAME>', exits: { 'Geothermal Core': (items: PrimeItemCollection) => items.has(PrimeItem.WAVE_BEAM) && items.has(PrimeItem.SPACE_JUMP_BOOTS), [Elevator.MAGMOOR_EAST]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { if (settings.tricks.crossTwinFiresTunnelSuitless) { return items.has(PrimeItem.SPACE_JUMP_BOOTS) && items.hasCount(PrimeItem.ENERGY_TANK, 2); } const spiderReqs = settings.tricks.crossTwinFiresTunnelWithoutSpider || items.canSpider(); return spiderReqs && items.hasSuit(settings) && items.has(PrimeItem.SPACE_JUMP_BOOTS); } } }, { name: 'Geothermal Core', exits: { 'Plasma Processing': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const baseReqs = items.canLayBombs() && items.canBoost() && items.has(PrimeItem.SPACE_JUMP_BOOTS) && items.has(PrimeItem.ICE_BEAM); const grappleSpiderReqs = settings.tricks.plasmaProcessingItemWithoutGrappleSpider || (items.canSpider() && items.has(PrimeItem.GRAPPLE_BEAM)); if (settings.pointOfNoReturnItems !== PointOfNoReturnItems.DO_NOT_ALLOW) { return grappleSpiderReqs && baseReqs; } return items.has(PrimeItem.PLASMA_BEAM) && grappleSpiderReqs && baseReqs; }, 'Magmoor Workstation': (items: PrimeItemCollection) => items.has(PrimeItem.WAVE_BEAM) && items.has(PrimeItem.SPACE_JUMP_BOOTS), 'Twin Fires': (items: PrimeItemCollection) => items.has(PrimeItem.WAVE_BEAM) && items.has(PrimeItem.SPACE_JUMP_BOOTS) } }, { name: 'Plasma Processing', locations: { [PrimeLocation.PLASMA_PROCESSING]: () => true }, exits: { 'Geothermal Core': (items: PrimeItemCollection) => items.has(PrimeItem.SPACE_JUMP_BOOTS) && items.has(PrimeItem.PLASMA_BEAM) } }, { name: 'Magmoor Workstation', locations: { [PrimeLocation.MAGMOOR_WORKSTATION]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const thermalReqs = settings.tricks.removeThermalReqs || items.has(PrimeItem.THERMAL_VISOR); const morphAndSpaceOrBombs = items.canLayBombs() || (items.has(PrimeItem.MORPH_BALL) && items.has(PrimeItem.SPACE_JUMP_BOOTS)); return thermalReqs && morphAndSpaceOrBombs && items.has(PrimeItem.WAVE_BEAM) && items.has(PrimeItem.SCAN_VISOR); } }, exits: { [Elevator.MAGMOOR_SOUTH_PHENDRANA]: (items: PrimeItemCollection) => items.has(PrimeItem.WAVE_BEAM) && 
items.has(PrimeItem.SPACE_JUMP_BOOTS), [Elevator.MAGMOOR_SOUTH_MINES]: (items: PrimeItemCollection) => items.canLayPowerBombs() && items.has(PrimeItem.ICE_BEAM) && items.has(PrimeItem.SPACE_JUMP_BOOTS), 'Geothermal Core': (items: PrimeItemCollection) => items.has(PrimeItem.WAVE_BEAM) && items.has(PrimeItem.SPACE_JUMP_BOOTS), // OOB only 'Plasma Processing': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const baseReqs = settings.tricks.plasmaProcessingFromMagmoorWorkstationOob && items.canWallcrawl(settings) && items.has(PrimeItem.ICE_BEAM); if (settings.pointOfNoReturnItems !== PointOfNoReturnItems.DO_NOT_ALLOW) { return baseReqs; } return items.has(PrimeItem.PLASMA_BEAM) && baseReqs; } } }, { name: Elevator.MAGMOOR_NORTH, exits: { [Elevator.CHOZO_NORTH]: () => true, 'Lava Lake': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => settings.tricks.lavaLakeItemSuitless || items.hasSuit(settings) } }, { name: Elevator.MAGMOOR_WEST, exits: { [Elevator.PHENDRANA_NORTH]: () => true, // Suitless Magmoor check [Elevator.MAGMOOR_EAST]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { let minimumEnergyTanks: number; if (!(settings.tricks.suitlessMagmoorRun || settings.tricks.suitlessMagmoorRunMinimal)) { return false; } else if (settings.tricks.suitlessMagmoorRunMinimal) { minimumEnergyTanks = items.has(PrimeItem.SPACE_JUMP_BOOTS) ? 3 : 4; } else { // suitlessMagmoorRun minimumEnergyTanks = items.has(PrimeItem.SPACE_JUMP_BOOTS) ? 5 : 6; } return items.canLayBombs() && !items.hasSuit(settings) && items.hasCount(PrimeItem.ENERGY_TANK, minimumEnergyTanks); }, 'Monitor Station': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const canBoost = settings.tricks.boostThroughBombTunnels && items.canBoost(); return items.hasSuit(settings) && (canBoost || items.canLayBombs()); } } }, { name: Elevator.MAGMOOR_EAST, exits: { [Elevator.TALLON_WEST]: () => true, // Suitless Magmoor check [Elevator.MAGMOOR_WEST]: (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { let minimumEnergyTanks: number; if (!(settings.tricks.suitlessMagmoorRun || settings.tricks.suitlessMagmoorRunMinimal)) { return false; } else if (settings.tricks.suitlessMagmoorRunMinimal) { minimumEnergyTanks = items.has(PrimeItem.SPACE_JUMP_BOOTS) ? 3 : 4; } else { // suitlessMagmoorRun minimumEnergyTanks = items.has(PrimeItem.SPACE_JUMP_BOOTS) ? 
5 : 6; } return items.canLayBombs() && !items.hasSuit(settings) && items.hasCount(PrimeItem.ENERGY_TANK, minimumEnergyTanks); }, 'Fiery Shores (Tallon Elevator Side)': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const grappleMorphReq = settings.tricks.fieryShoresAccessWithoutMorphGrapple || (items.has(PrimeItem.MORPH_BALL) && items.has(PrimeItem.GRAPPLE_BEAM)); return grappleMorphReq && items.hasSuit(settings); }, 'Twin Fires': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { if (settings.tricks.crossTwinFiresTunnelSuitless) { return items.has(PrimeItem.SPACE_JUMP_BOOTS) && items.hasCount(PrimeItem.ENERGY_TANK, 2); } const spiderReqs = settings.tricks.crossTwinFiresTunnelWithoutSpider || items.canSpider(); return spiderReqs && items.hasSuit(settings) && items.has(PrimeItem.SPACE_JUMP_BOOTS); } } }, { name: Elevator.MAGMOOR_SOUTH_MINES, exits: { [Elevator.MINES_WEST]: () => true, 'Magmoor Workstation': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const baseReqs = (items.hasSuit(settings) || settings.tricks.lateMagmoorNoHeatProtection) && items.canLayPowerBombs() && items.has(PrimeItem.ICE_BEAM); if (settings.pointOfNoReturnItems === PointOfNoReturnItems.ALLOW_ALL) { return baseReqs; } return baseReqs && items.has(PrimeItem.SPACE_JUMP_BOOTS); } } }, { name: Elevator.MAGMOOR_SOUTH_PHENDRANA, exits: { [Elevator.PHENDRANA_SOUTH]: () => true, 'Magmoor Workstation': (items: PrimeItemCollection, settings: PrimeRandomizerSettings) => { const baseReqs = (items.hasSuit(settings) || settings.tricks.lateMagmoorNoHeatProtection) && items.has(PrimeItem.WAVE_BEAM); if (settings.pointOfNoReturnItems === PointOfNoReturnItems.ALLOW_ALL) { return baseReqs; } return baseReqs && items.has(PrimeItem.SPACE_JUMP_BOOTS); } } }, ]; return regions; };
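// A minimal usage sketch (illustrative only; not part of the original file).
// `items` and `settings` are assumed to be a PrimeItemCollection and
// PrimeRandomizerSettings built elsewhere by the randomizer; each location and
// exit entry is simply a reachability predicate over them.
//
//   const regions = magmoorCaverns();
//   const lavaLake = regions.find(region => region.name === 'Lava Lake');
//   const itemReachable = lavaLake.locations[PrimeLocation.LAVA_LAKE](items, settings);
//   const canReachTriclopsPit = lavaLake.exits['Triclops Pit'](items, settings);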
// Imports added for completeness; Transform and GraphElementWithStatistics are
// project types assumed to be available in this package or imported elsewhere.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.io.Text;

/**
 * A list of {@link Transform}s.
 */
public class ListOfTransforms implements Transform {

    private static final long serialVersionUID = -1899549289571792276L;

    private List<Transform> list = new ArrayList<Transform>();

    public ListOfTransforms(List<Transform> transforms) {
        list.addAll(transforms);
    }

    public ListOfTransforms(Transform... transforms) {
        Collections.addAll(list, transforms);
    }

    public void clear() {
        list.clear();
    }

    @Override
    public GraphElementWithStatistics transform(GraphElementWithStatistics graphElementWithStatistics) {
        for (Transform transform : list) {
            graphElementWithStatistics = transform.transform(graphElementWithStatistics);
        }
        return graphElementWithStatistics;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(list.size());
        for (Transform transform : list) {
            Text.writeString(out, transform.getClass().getName());
            transform.write(out);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        list.clear();
        int listSize = in.readInt();
        for (int i = 0; i < listSize; i++) {
            String className = Text.readString(in);
            try {
                Transform transform = (Transform) Class.forName(className).newInstance();
                transform.readFields(in);
                list.add(transform);
            } catch (InstantiationException | IllegalAccessException | ClassNotFoundException | ClassCastException e) {
                // multi-catch replaces four identical catch blocks; behaviour is unchanged
                throw new IOException("Exception deserialising writable: " + e);
            }
        }
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        ListOfTransforms that = (ListOfTransforms) o;

        if (list != null ? !list.equals(that.list) : that.list != null) return false;

        return true;
    }

    @Override
    public int hashCode() {
        return list != null ? list.hashCode() : 0;
    }

    @Override
    public String toString() {
        return "ListOfTransforms{" + "list=" + list + '}';
    }
}
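// A minimal usage sketch (illustrative only; not part of the original class).
// SomeFilterTransform and SomeSummaryTransform are hypothetical Transform
// implementations; the list applies them in order to each element passed in.
//
//   ListOfTransforms transforms = new ListOfTransforms(new SomeFilterTransform(), new SomeSummaryTransform());
//   GraphElementWithStatistics transformed = transforms.transform(elementWithStatistics);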
// Copyright (c) 2013-2017 <NAME>, <NAME> // All rights reserved. // // Redistribution and use in source and binary forms, with or without modification, are permitted provided that // the following conditions are met: // // 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the // following disclaimer. // 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions // and the following disclaimer in the documentation and/or other materials provided with the distribution. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A // PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR // ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER // CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR // OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. /** \file gvec.hpp * * \brief Declaration and implementation of Gvec class. */ #ifndef __GVEC_HPP__ #define __GVEC_HPP__ #include <numeric> #include <map> #include <iostream> #include <type_traits> #include <assert.h> #include "memory.hpp" #include "fft3d_grid.hpp" #include "geometry3d.hpp" #include "serializer.hpp" #include "splindex.hpp" #include "utils/profiler.hpp" #include "utils/rte.hpp" using namespace geometry3d; namespace sddk { inline FFT3D_grid get_min_fft_grid(double cutoff__, matrix3d<double> M__) { return FFT3D_grid(find_translations(cutoff__, M__) + vector3d<int>({2, 2, 2})); } /// Descriptor of the z-column (x,y fixed, z varying) of the G-vectors. /** Sphere of G-vectors within a given plane-wave cutoff is represented as a set of z-columns with different lengths. */ struct z_column_descriptor { /// X-coordinate (can be negative and positive). int x; /// Y-coordinate (can be negative and positive). int y; /// List of the Z-coordinates of the column. std::vector<int> z; /// Constructor. z_column_descriptor(int x__, int y__, std::vector<int> z__) : x(x__) , y(y__) , z(z__) { } /// Default constructor. z_column_descriptor() { } }; /// Serialize a single z-column descriptor. inline void serialize(serializer& s__, z_column_descriptor const& zcol__) { serialize(s__, zcol__.x); serialize(s__, zcol__.y); serialize(s__, zcol__.z); } /// Deserialize a single z-column descriptor. inline void deserialize(serializer& s__, z_column_descriptor& zcol__) { deserialize(s__, zcol__.x); deserialize(s__, zcol__.y); deserialize(s__, zcol__.z); } /// Serialize a vector of z-column descriptors. inline void serialize(serializer& s__, std::vector<z_column_descriptor> const& zcol__) { serialize(s__, zcol__.size()); for (auto& e: zcol__) { serialize(s__, e); } } /// Deserialize a vector of z-column descriptors. 
inline void deserialize(serializer& s__, std::vector<z_column_descriptor>& zcol__) { size_t sz; deserialize(s__, sz); zcol__.resize(sz); for (size_t i = 0; i < sz; i++) { deserialize(s__, zcol__[i]); } } /* forward declarations */ class Gvec; void serialize(serializer& s__, Gvec const& gv__); void deserialize(serializer& s__, Gvec& gv__); Gvec send_recv(Communicator const& comm__, Gvec const& gv_src__, int source__, int dest__); /// A set of G-vectors for FFTs and G+k basis functions. /** Current implemntation supports up to 2^12 (4096) z-dimension of the FFT grid and 2^20 (1048576) number of * z-columns. The order of z-sticks and G-vectors is not fixed and depends on the number of MPI ranks used * for the parallelization. */ class Gvec { private: /// k-vector of G+k. vector3d<double> vk_{0, 0, 0}; /// Cutoff for |G+k| vectors. double Gmax_{0}; /// Reciprocal lattice vectors. matrix3d<double> lattice_vectors_; /// Total communicator which is used to distribute G or G+k vectors. Communicator const& comm_; /// Indicates that G-vectors are reduced by inversion symmetry. bool reduce_gvec_{false}; /// True if this a list of G-vectors without k-point shift. bool bare_gvec_{true}; /// Total number of G-vectors. int num_gvec_{0}; /// Mapping between G-vector index [0:num_gvec_) and a full index. /** Full index is used to store x,y,z coordinates in a packed form in a single integer number. * The index is equal to ((i << 12) + j) where i is the global index of z_column and j is the * index of G-vector z-coordinate in the column i. This is a global array: each MPI rank stores exactly the * same copy of the gvec_full_index_. * * Limitations: size of z-dimension of FFT grid: 4096, number of z-columns: 1048576 */ mdarray<uint32_t, 1> gvec_full_index_; /// Index of the shell to which the given G-vector belongs. mdarray<int, 1> gvec_shell_; /// Number of G-vector shells (groups of G-vectors with the same length). int num_gvec_shells_; /// Radii (or lengths) of G-vector shells in a.u.^-1. sddk::mdarray<double, 1> gvec_shell_len_; /// Local number of G-vector shells for the local number of G-vectors. /** G-vectors are distributed by sticks, not by G-shells. This means that each rank stores local fraction of G-vectors with a non-consecutive G-shell index and not all G-shells are present at a given rank. This variable stores the number of G-shells which this rank holds. */ int num_gvec_shells_local_; /// Radii of G-vector shells in the local index counting [0, num_gvec_shells_local) std::vector<double> gvec_shell_len_local_; /// Mapping between local index of G-vector and local G-shell index. std::vector<int> gvec_shell_idx_local_; sddk::mdarray<int, 3> gvec_index_by_xy_; /// Global list of non-zero z-columns. std::vector<z_column_descriptor> z_columns_; /// Fine-grained distribution of G-vectors. block_data_descriptor gvec_distr_; /// Fine-grained distribution of z-columns. block_data_descriptor zcol_distr_; /// Set of G-vectors on which the current G-vector distribution can be based. /** This can be used to establish a local mapping between coarse and fine G-vector sets * without MPI communication. */ Gvec const* gvec_base_{nullptr}; /// Mapping between current and base G-vector sets. /** This mapping allows for a local-to-local copy of PW coefficients without any MPI communication. Example: \code{.cpp} // Copy from a coarse G-vector set. 
for (int igloc = 0; igloc < ctx_.gvec_coarse().count(); igloc++) { rho_vec_[j]->f_pw_local(ctx_.gvec().gvec_base_mapping(igloc)) = rho_mag_coarse_[j]->f_pw_local(igloc); } \endcode */ mdarray<int, 1> gvec_base_mapping_; /// Cartiesian coordinaes for a local set of G-vectors. mdarray<double, 2> gvec_cart_; /// Cartesian coordinaes for a local set of G+k-vectors. mdarray<double, 2> gkvec_cart_; /// Return corresponding G-vector for an index in the range [0, num_gvec). vector3d<int> gvec_by_full_index(uint32_t idx__) const; /// Offset in the global index for the local part of G-vectors. int offset_{-1}; /// Local number of G-vectors. int count_{-1}; /// Find z-columns of G-vectors inside a sphere with Gmax radius. /** This function also computes the total number of G-vectors. */ void find_z_columns(double Gmax__, FFT3D_grid const& fft_box__); /// Distribute z-columns between MPI ranks. void distribute_z_columns(); /// Find a list of G-vector shells. /** G-vectors belonging to the same shell have the same length and transform to each other under a lattice symmetry operation. */ void find_gvec_shells(); /// Compute the Cartesian coordinates. void init_gvec_cart(); /// Initialize everything. void init(FFT3D_grid const& fft_grid); friend void sddk::serialize(serializer& s__, Gvec const& gv__); friend void sddk::deserialize(serializer& s__, Gvec& gv__); /* copy constructor is forbidden */ Gvec(Gvec const& src__) = delete; /* copy assignment operator is forbidden */ Gvec& operator=(Gvec const& src__) = delete; public: /// Constructor for G+k vectors. /** \param [in] vk K-point vector of G+k * \param [in] M Reciprocal lattice vecotors in comumn order * \param [in] Gmax Cutoff for G+k vectors * \param [in] comm Total communicator which is used to distribute G-vectors * \param [in] reduce_gvec True if G-vectors need to be reduced by inversion symmetry. */ Gvec(vector3d<double> vk__, matrix3d<double> M__, double Gmax__, Communicator const& comm__, bool reduce_gvec__) : vk_(vk__) , Gmax_(Gmax__) , lattice_vectors_(M__) , comm_(comm__) , reduce_gvec_(reduce_gvec__) , bare_gvec_(false) { init(get_min_fft_grid(Gmax__, M__)); } /// Constructor for G-vectors. /** \param [in] M Reciprocal lattice vecotors in comumn order * \param [in] Gmax Cutoff for G+k vectors * \param [in] comm Total communicator which is used to distribute G-vectors * \param [in] reduce_gvec True if G-vectors need to be reduced by inversion symmetry. */ Gvec(matrix3d<double> M__, double Gmax__, Communicator const& comm__, bool reduce_gvec__) : Gmax_(Gmax__) , lattice_vectors_(M__) , comm_(comm__) , reduce_gvec_(reduce_gvec__) { init(get_min_fft_grid(Gmax__, M__)); } /// Constructor for G-vectors. /** \param [in] M Reciprocal lattice vecotors in comumn order * \param [in] Gmax Cutoff for G+k vectors * \param [in] fft_grid Provide explicit boundaries for the G-vector min and max frequencies. * \param [in] comm Total communicator which is used to distribute G-vectors * \param [in] reduce_gvec True if G-vectors need to be reduced by inversion symmetry. */ Gvec(matrix3d<double> M__, double Gmax__, FFT3D_grid const& fft_grid__, Communicator const& comm__, bool reduce_gvec__) : Gmax_(Gmax__) , lattice_vectors_(M__) , comm_(comm__) , reduce_gvec_(reduce_gvec__) { init(fft_grid__); } /// Constructor for G-vector distribution based on a previous set. /** Previous set of G-vectors must be a subset of the current set. 
*/ Gvec(double Gmax__, Gvec const& gvec_base__) : Gmax_(Gmax__) , lattice_vectors_(gvec_base__.lattice_vectors()) , comm_(gvec_base__.comm()) , reduce_gvec_(gvec_base__.reduced()) , gvec_base_(&gvec_base__) { init(get_min_fft_grid(Gmax__, lattice_vectors_)); } /// Constructor for G-vectors with mpi_comm_self() Gvec(matrix3d<double> M__, double Gmax__, bool reduce_gvec__) : Gmax_(Gmax__) , lattice_vectors_(M__) , comm_(Communicator::self()) , reduce_gvec_(reduce_gvec__) { init(get_min_fft_grid(Gmax__, M__)); } /// Constructor for empty set of G-vectors. Gvec(Communicator const& comm__) : comm_(comm__) { } /// Move assignment operator. Gvec& operator=(Gvec&& src__); /// Move constructor. Gvec(Gvec&& src__) : comm_(src__.comm_) { *this = std::move(src__); } inline auto const& vk() const { return vk_; } inline Communicator const& comm() const { return comm_; } /// Set the new reciprocal lattice vectors. /** For the varibale-cell relaxation runs we need an option to preserve the number of G- and G+k vectors. * Here we can set the new lattice vectors and update the relevant members of the Gvec class. */ inline auto const& lattice_vectors(matrix3d<double> lattice_vectors__) { lattice_vectors_ = lattice_vectors__; init_gvec_cart(); find_gvec_shells(); return lattice_vectors_; } /// Retrn a const reference to the reciprocal lattice vectors. inline matrix3d<double> const& lattice_vectors() const { return lattice_vectors_; } /// Return the volume of the real space unit cell that corresponds to the reciprocal lattice of G-vectors. inline double omega() const { double const twopi_pow3 = 248.050213442398561403810520537; return twopi_pow3 / std::abs(lattice_vectors().det()); } /// Return the total number of G-vectors within the cutoff. inline int num_gvec() const { return num_gvec_; } /// Number of z-columns for a fine-grained distribution. inline int zcol_count(int rank__) const { assert(rank__ < comm().size()); return zcol_distr_.counts[rank__]; } /// Offset in the global index of z-columns for a given rank. inline int zcol_offset(int rank__) const { assert(rank__ < comm().size()); return zcol_distr_.offsets[rank__]; } /// Number of G-vectors for a fine-grained distribution. inline int gvec_count(int rank__) const { assert(rank__ < comm().size()); return gvec_distr_.counts[rank__]; } /// Number of G-vectors for a fine-grained distribution for the current MPI rank. /** The \em count and \em offset are borrowed from the MPI terminology for data distribution. */ inline int count() const { return count_; } /// Offset (in the global index) of G-vectors for a fine-grained distribution. inline int gvec_offset(int rank__) const { assert(rank__ < comm().size()); return gvec_distr_.offsets[rank__]; } /// Offset (in the global index) of G-vectors for a fine-grained distribution for a current MPI rank. /** The \em count and \em offset are borrowed from the MPI terminology for data distribution. */ inline int offset() const { return offset_; } /// Local starting index of G-vectors if G=0 is not counted. inline int skip_g0() const { return (comm().rank() == 0) ? 1 : 0; } /// Return number of G-vector shells. inline int num_shells() const { return num_gvec_shells_; } /// Return G vector in fractional coordinates. inline auto gvec(int ig__) const { return gvec_by_full_index(gvec_full_index_(ig__)); } /// Return G+k vector in fractional coordinates. inline auto gkvec(int ig__) const { return gvec(ig__) + vk_; } /// Return G vector in Cartesian coordinates. 
template <index_domain_t idx_t> inline std::enable_if_t<idx_t == index_domain_t::local, vector3d<double>> gvec_cart(int ig__) const { return vector3d<double>(gvec_cart_(0, ig__), gvec_cart_(1, ig__), gvec_cart_(2, ig__)); } /// Return G vector in Cartesian coordinates. template <index_domain_t idx_t, bool print_info = false> inline std::enable_if_t<idx_t == index_domain_t::global, vector3d<double>> gvec_cart(int ig__) const { auto G = gvec(ig__); if (print_info) { auto gc = dot(lattice_vectors_, G); RTE_OUT(std::cout) << "ig="<<ig__<<", G="<<G<<", gc="<<gc<<", len="<<gc.length() << std::endl; } return dot(lattice_vectors_, G); } /// Return G+k vector in Cartesian coordinates. template <index_domain_t idx_t> inline std::enable_if_t<idx_t == index_domain_t::local, vector3d<double>> gkvec_cart(int ig__) const { return vector3d<double>(gkvec_cart_(0, ig__), gkvec_cart_(1, ig__), gkvec_cart_(2, ig__)); } /// Return G+k vector in Cartesian coordinates. template <index_domain_t idx_t> inline std::enable_if_t<idx_t == index_domain_t::global, vector3d<double>> gkvec_cart(int ig__) const { auto G = gvec_by_full_index(gvec_full_index_(ig__)); return dot(lattice_vectors_, vector3d<double>(G[0], G[1], G[2]) + vk_); } /// Return index of the G-vector shell by the G-vector index. inline int shell(int ig__) const { return gvec_shell_(ig__); } inline int shell(vector3d<int> const& G__) const { return this->shell(index_by_gvec(G__)); } /// Return length of the G-vector shell. inline double shell_len(int igs__) const { return gvec_shell_len_(igs__); } /// Get lengths of all G-vector shells. std::vector<double> shells_len() const { std::vector<double> q(this->num_shells()); for (int i = 0; i < this->num_shells(); i++) { q[i] = this->shell_len(i); } return q; } /// Return length of the G-vector. inline double gvec_len(int ig__) const { return gvec_shell_len_(gvec_shell_(ig__)); } inline int index_g12(vector3d<int> const& g1__, vector3d<int> const& g2__) const { auto v = g1__ - g2__; int idx = index_by_gvec(v); assert(idx >= 0); assert(idx < num_gvec()); return idx; } std::pair<int, bool> index_g12_safe(vector3d<int> const& g1__, vector3d<int> const& g2__) const; //inline int index_g12_safe(int ig1__, int ig2__) const //{ // STOP(); // return 0; //} /// Return a global G-vector index in the range [0, num_gvec) by the G-vector. /** The information about a G-vector index is encoded by two numbers: a starting index for the * column of G-vectors and column's size. Depending on the geometry of the reciprocal lattice, * z-columns may have only negative, only positive or both negative and positive frequencies for * a given x and y. This information is used to compute the offset which is added to the starting index * in order to get a full G-vector index. Check find_z_columns() to see how the z-columns are found and * added to the list of columns. 
*/ int index_by_gvec(vector3d<int> const& G__) const; inline bool reduced() const { return reduce_gvec_; } inline bool bare() const { return bare_gvec_; } inline int num_zcol() const { return static_cast<int>(z_columns_.size()); } inline z_column_descriptor const& zcol(size_t idx__) const { return z_columns_[idx__]; } inline int gvec_base_mapping(int igloc_base__) const { assert(gvec_base_ != nullptr); return gvec_base_mapping_(igloc_base__); } inline int num_gvec_shells_local() const { return num_gvec_shells_local_; } inline double gvec_shell_len_local(int idx__) const { return gvec_shell_len_local_[idx__]; } inline int gvec_shell_idx_local(int igloc__) const { return gvec_shell_idx_local_[igloc__]; } }; /// Stores information about G-vector partitioning between MPI ranks for the FFT transformation. /** FFT driver works with a small communicator. G-vectors are distributed over the entire communicator which is larger than the FFT communicator. In order to transform the functions, G-vectors must be redistributed to the FFT-friendly "fat" slabs based on the FFT communicator size. */ class Gvec_partition { private: /// Pointer to the G-vector instance. Gvec const& gvec_; /// Communicator for the FFT. Communicator const& fft_comm_; /// Communicator which is orthogonal to FFT communicator. Communicator const& comm_ortho_fft_; /// Distribution of G-vectors for FFT. block_data_descriptor gvec_distr_fft_; /// Distribution of z-columns for FFT. block_data_descriptor zcol_distr_fft_; /// Distribution of G-vectors inside FFT-friendly "fat" slab. block_data_descriptor gvec_fft_slab_; /// Offset of the z-column in the local data buffer. /** Global index of z-column is expected */ mdarray<int, 1> zcol_offs_; /// Mapping of MPI ranks used to split G-vectors to a 2D grid. mdarray<int, 2> rank_map_; /// Global index of z-column in new (fat-slab) distribution. /** This is a mapping between new and original ordering of z-columns. */ mdarray<int, 1> idx_zcol_; /// Global index of G-vector by local index inside fat-salb. mdarray<int, 1> idx_gvec_; void build_fft_distr(); /// Calculate offsets of z-columns inside each local buffer of PW coefficients. void calc_offsets(); /// Stack together the G-vector slabs to make a larger ("fat") slab for a FFT driver. void pile_gvec(); public: Gvec_partition(Gvec const& gvec__, Communicator const& fft_comm__, Communicator const& comm_ortho_fft__); /// Return FFT communicator inline Communicator const& fft_comm() const { return fft_comm_; } inline Communicator const& comm_ortho_fft() const { return comm_ortho_fft_; } inline int gvec_count_fft(int rank__) const { return gvec_distr_fft_.counts[rank__]; } /// Local number of G-vectors for FFT-friendly distribution. inline int gvec_count_fft() const { return gvec_count_fft(fft_comm().rank()); } /// Return local number of z-columns. 
inline int zcol_count_fft(int rank__) const { return zcol_distr_fft_.counts[rank__]; } inline int zcol_count_fft() const { return zcol_count_fft(fft_comm().rank()); } template <index_domain_t index_domain> inline int idx_zcol(int idx__) const { switch (index_domain) { case index_domain_t::local: { return idx_zcol_(zcol_distr_fft_.offsets[fft_comm().rank()] + idx__); break; } case index_domain_t::global: { return idx_zcol_(idx__); break; } } } inline int idx_gvec(int idx_local__) const { return idx_gvec_(idx_local__); } inline block_data_descriptor const& gvec_fft_slab() const { return gvec_fft_slab_; } inline int zcol_offs(int icol__) const { return zcol_offs_(icol__); } inline Gvec const& gvec() const { return gvec_; } mdarray<int, 2> get_gvec() const; template <typename T> void gather_pw_fft(std::complex<T>* f_pw_local__, std::complex<T>* f_pw_fft__) const { int rank = gvec().comm().rank(); /* collect scattered PW coefficients */ comm_ortho_fft().allgather(f_pw_local__, gvec().gvec_count(rank), f_pw_fft__, gvec_fft_slab().counts.data(), gvec_fft_slab().offsets.data()); } template <typename T> void gather_pw_global(std::complex<T>* f_pw_fft__, std::complex<T>* f_pw_global__) const { for (int ig = 0; ig < gvec().count(); ig++) { /* position inside fft buffer */ int ig1 = gvec_fft_slab().offsets[comm_ortho_fft().rank()] + ig; f_pw_global__[gvec().offset() + ig] = f_pw_fft__[ig1]; } gvec().comm().allgather(&f_pw_global__[0], gvec().count(), gvec().offset()); } }; /// Helper class to manage G-vector shells and redistribute G-vectors for symmetrization. /** G-vectors are remapped from default distribution which balances both the local number of z-columns and G-vectors to the distribution of G-vector shells in which each MPI rank stores local set of complete G-vector shells such that the "rotated" G-vector remains on the same MPI rank. */ class Gvec_shells { private: /// Sending counts and offsets. block_data_descriptor a2a_send_; /// Receiving counts and offsets. block_data_descriptor a2a_recv_; /// Split global index of G-shells between MPI ranks. splindex<splindex_t::block_cyclic> spl_num_gsh_; /// List of G-vectors in the remapped storage. sddk::mdarray<int, 2> gvec_remapped_; /// Mapping between index of local G-vector and global index of G-vector shell. sddk::mdarray<int, 1> gvec_shell_remapped_; /// Alias for the G-vector communicator. Communicator const& comm_; Gvec const& gvec_; /// A mapping between G-vector and it's local index in the new distribution. std::map<vector3d<int>, int> idx_gvec_; public: Gvec_shells(Gvec const& gvec__); inline void print_gvec() const { pstdout pout(gvec_.comm()); pout.printf("rank: %i\n", gvec_.comm().rank()); pout.printf("-- list of G-vectors in the remapped distribution --\n"); for (int igloc = 0; igloc < gvec_count_remapped(); igloc++) { auto G = gvec_remapped(igloc); int igsh = gvec_shell_remapped(igloc); pout.printf("igloc=%i igsh=%i G=%i %i %i\n", igloc, igsh, G[0], G[1], G[2]); } pout.printf("-- reverse list --\n"); for (auto const& e: idx_gvec_) { pout.printf("G=%i %i %i, igloc=%i\n", e.first[0], e.first[1], e.first[2], e.second); } } /// Local number of G-vectors in the remapped distribution with complete shells on each rank. int gvec_count_remapped() const { return a2a_recv_.size(); } /// G-vector by local index (in the remapped set). 
vector3d<int> gvec_remapped(int igloc__) const { return vector3d<int>(gvec_remapped_(0, igloc__), gvec_remapped_(1, igloc__), gvec_remapped_(2, igloc__)); } /// Return local index of the G-vector in the remapped set. int index_by_gvec(vector3d<int> G__) const { if (idx_gvec_.count(G__)) { return idx_gvec_.at(G__); } else { return -1; } } /// Index of the G-vector shell by the local G-vector index (in the remapped set). int gvec_shell_remapped(int igloc__) const { return gvec_shell_remapped_(igloc__); } template <typename T> std::vector<T> remap_forward(T* data__) const { PROFILE("sddk::Gvec_shells::remap_forward"); std::vector<T> send_buf(gvec_.count()); std::vector<int> counts(comm_.size(), 0); for (int igloc = 0; igloc < gvec_.count(); igloc++) { int ig = gvec_.offset() + igloc; int igsh = gvec_.shell(ig); int r = spl_num_gsh_.local_rank(igsh); send_buf[a2a_send_.offsets[r] + counts[r]] = data__[igloc]; counts[r]++; } std::vector<T> recv_buf(gvec_count_remapped()); comm_.alltoall(send_buf.data(), a2a_send_.counts.data(), a2a_send_.offsets.data(), recv_buf.data(), a2a_recv_.counts.data(), a2a_recv_.offsets.data()); return recv_buf; } template <typename T> void remap_backward(std::vector<T> buf__, T* data__) const { PROFILE("sddk::Gvec_shells::remap_backward"); std::vector<T> recv_buf(gvec_.count()); comm_.alltoall(buf__.data(), a2a_recv_.counts.data(), a2a_recv_.offsets.data(), recv_buf.data(), a2a_send_.counts.data(), a2a_send_.offsets.data()); std::vector<int> counts(comm_.size(), 0); for (int igloc = 0; igloc < gvec_.count(); igloc++) { int ig = gvec_.offset() + igloc; int igsh = gvec_.shell(ig); int r = spl_num_gsh_.local_rank(igsh); data__[igloc] = recv_buf[a2a_send_.offsets[r] + counts[r]]; counts[r]++; } } inline Gvec const& gvec() const { return gvec_; } }; } // namespace sddk #endif //__GVEC_HPP__
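To make the packed G-vector index described in the Gvec comments above concrete, here is a minimal sketch (written in Python purely for illustration; the helper names pack_full_index and unpack_full_index do not exist in the library) of how an index of the form ((i << 12) + j) can be encoded and decoded, with j the z-coordinate index inside a column (limited to 4096 values) and i the global z-column index (limited to 1048576 values).

# Illustrative sketch of the ((i << 12) + j) packing used by gvec_full_index_.
Z_BITS = 12
Z_MASK = (1 << Z_BITS) - 1  # 0xFFF, selects the z-coordinate index j

def pack_full_index(i_col, j_z):
    # Assumed limits taken from the Gvec documentation: 4096 z-points per column, 1048576 columns.
    assert 0 <= j_z < (1 << Z_BITS)
    assert 0 <= i_col < (1 << 20)
    return (i_col << Z_BITS) + j_z

def unpack_full_index(idx):
    # Returns (global z-column index, z-coordinate index within that column).
    return idx >> Z_BITS, idx & Z_MASK

# Round trip for a sample column/coordinate pair.
assert unpack_full_index(pack_full_index(37, 129)) == (37, 129)

Because every MPI rank stores an identical copy of this packed array, a rank can recover the (column, z-index) pair for any global G-vector index without communication.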
class SamInfo: ''' Information of a SAM line ''' pos = 0 chrom = '' flag = 0 mapq = 0 score = 0 offset = 0 seq = '' cigar = '' tag_md = '' def __init__(self, line, erg=False, md=False, cigar=False, score=True): self.flag = int(line[1]) self.pos = int(line[3]) self.mapq = int(line[4]) self.chrom = line[2] self.seq = line[9] if cigar: self.cigar = line[5] if erg and self.chrom.find('erg') > 0: tmp = self.chrom.split('-') self.offset = int(tmp[2]) self.chrom = tmp[0] self.pos += self.offset - 1 if md and (self.is_unaligned() == False): check_md = False for tag in line[11:]: if tag[:5] == 'MD:Z:': self.tag_md = tag[5:] check_md = True break assert check_md if score: self.update_score(line[11]) def print(self, pos=True, chrom=True, flag=True, mapq=True, score=True, offset=True): if pos: print (' pos =', self.pos) if chrom: print (' chrom =', self.chrom) if flag: print (' flag =', self.flag) if mapq: print (' mapq =', self.mapq) if score: print (' score =', self.score) if offset: print (' offset =', self.offset) return def is_unaligned(self): if self.flag & 4: return True return False def is_rc(self): if self.flag & 16: return True return False def is_first_seg(self): if self.flag & 64: return True return False def is_secondary(self): if self.flag & 256: return True return False def update_score(self, raw_score): if self.is_unaligned(): self.score = 1 return elif raw_score.startswith('AS:') is False: self.score = 1 return self.score = int(raw_score.split(':')[-1])
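Since SamInfo expects an already tab-split SAM record (it indexes line[1], line[3], and so on, and with the default score=True it reads the alignment-score tag from line[11]), a short usage sketch may help; the read name, reference, and AS tag below are invented for illustration and are not taken from any real data set.

# Hypothetical 12-field SAM record so that line[11] carries the AS tag.
fields = 'read1\t16\tchr7\t1500\t60\t4M\t*\t0\t0\tACGT\tIIII\tAS:i:-4'.split('\t')

aln = SamInfo(fields, cigar=True)  # score=True by default, so AS:i:-4 is parsed
print(aln.is_unaligned())          # False: flag 16 does not set bit 4
print(aln.is_rc())                 # True: bit 16 marks a reverse-complement alignment
print(aln.score)                   # -4, taken from the AS:i tag
aln.print(offset=False)            # dumps pos, chrom, flag, mapq and score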
package org.springframework.data.orient.commons.repository.config; import org.springframework.data.orient.commons.repository.support.OrientRepositoryFactoryBean; import org.springframework.data.repository.config.RepositoryConfigurationExtensionSupport; /** * {@link org.springframework.data.repository.config.RepositoryConfigurationExtension} for OrientDB. * * @author Dzmitry_Naskou */ public class OrientRepositoryConfigExtension extends RepositoryConfigurationExtensionSupport { /* (non-Javadoc) * @see org.springframework.data.repository.config.RepositoryConfigurationExtension#getRepositoryFactoryClassName() */ public String getRepositoryFactoryClassName() { return OrientRepositoryFactoryBean.class.getName(); } /* (non-Javadoc) * @see org.springframework.data.repository.config.RepositoryConfigurationExtensionSupport#getModulePrefix() */ @Override protected String getModulePrefix() { return "orient"; } }
#pragma once #include <SdFat.h> // Enumerates files in the root of the SD card, returning those suitable to show on the EPaper display. // In order to return files in order without sorting, scans all files in the root directory on every pass. class SdEnumerator final { public: SdEnumerator(const char* previousFileName, uint32_t spiSpeed, uint8_t csPin, uint64_t expectedFileSize); // One-time setup. void setup() const; // Initialize the card for use. Call each time after resuming from sleep to ensure information is up to date. bool begin(); // Retrieve the next image on the card, looping around at the end. // If no suitable files are present, the returned image will not be open. FsFile getNextImage(); // Retrieve the current image in the enumeration, falling back to the next // image or the first image if it is no longer available. If no suitable // files are present, the returned image will not be open. FsFile getCurrentImage(); // Retrieves a specific system image, separate from the normal rotation, e.g. for low battery. FsFile getSystemImage(const char* fileName); private: bool isSuitable(FsFile& file) const; private: static constexpr uint32_t MaxFileNameLength = 256; const uint32_t spiSpeed; const uint8_t csPin; const uint64_t expectedFileSize; SdFs sd; char currentFileName[MaxFileNameLength]; };
/// Generate a reference graph of methods. fn generate_reference_graph(&mut self) { let mut refgraph = ReferenceGraph::new(); // FIXME: Reflect `replace` rule in yaml file for each interface to // the reference (bug NNNNNN). // 1. Typesums let sums_of_interfaces = self.syntax.resolved_sums_of_interfaces_by_name(); for (name, nodes) in sums_of_interfaces { let mut edges: HashSet<Rc<String>> = HashSet::new(); edges.insert(Rc::new(format!("Sum{}", name))); refgraph.insert(string_from_nodename(name), edges); let mut sum_edges: HashSet<Rc<String>> = HashSet::new(); for node in nodes { sum_edges.insert(Rc::new(format!("Interface{}", node.to_string()))); } refgraph.insert(Rc::new(format!("Sum{}", name.to_string())), sum_edges); } // 2. Single interfaces let interfaces_by_name = self.syntax.interfaces_by_name(); for (name, interface) in interfaces_by_name { let mut edges: HashSet<Rc<String>> = HashSet::new(); edges.insert(Rc::new(format!("Interface{}", name))); refgraph.insert(string_from_nodename(name), edges); let mut interface_edges: HashSet<Rc<String>> = HashSet::new(); for field in interface.contents().fields() { match field.type_().get_primitive(&self.syntax) { Some(IsNullable { is_nullable: _, content: Primitive::Interface(_) }) | None => { let typename = TypeName::type_(field.type_()); interface_edges.insert(Rc::new(typename.to_string())); }, // Don't have to handle other type of fields (string, // number, bool, etc). _ => {} } } refgraph.insert(Rc::new(format!("Interface{}", name)), interface_edges); } // 3. String Enums for (kind, _) in self.syntax.string_enums_by_name() { refgraph.insert(string_from_nodename(kind), HashSet::new()); } // 4. Lists for parser in &self.list_parsers_to_generate { let mut edges: HashSet<Rc<String>> = HashSet::new(); edges.insert(string_from_nodename(&parser.elements)); refgraph.insert(string_from_nodename(&parser.name), edges); } // 5. Optional values for parser in &self.option_parsers_to_generate { let mut edges: HashSet<Rc<String>> = HashSet::new(); let named_implementation = if let Some(NamedType::Typedef(ref typedef)) = self.syntax.get_type_by_name(&parser.name) { assert!(typedef.is_optional()); if let TypeSpec::NamedType(ref named) = *typedef.spec() { self.syntax.get_type_by_name(named) .unwrap_or_else(|| panic!("Internal error: Could not find type {}, which should have been generated.", named.to_str())) } else { panic!("Internal error: In {}, type {:?} should have been a named type", parser.name.to_str(), typedef); } } else { panic!("Internal error: In {}, there should be a type with that name", parser.name.to_str()); }; match named_implementation { NamedType::Interface(_) => { edges.insert(Rc::new(format!("Interface{}", parser.elements.to_string()))); }, NamedType::Typedef(ref type_) => { match type_.spec() { &TypeSpec::TypeSum(_) => { edges.insert(Rc::new(format!("Sum{}", parser.elements.to_string()))); }, _ => {} } }, _ => {} } refgraph.insert(string_from_nodename(&parser.name), edges); } // 6. Primitive values. refgraph.insert(Rc::new("IdentifierName".to_string()), HashSet::new()); refgraph.insert(Rc::new("PropertyKey".to_string()), HashSet::new()); self.refgraph = refgraph; }
Is this the end of side-boob? Wonderbra launches Holly Willoughby-inspired lingerie that promises to make embarrassing nipple slips a thing of the past New bra is Wonderbra's lowest ever and dips to solar plexus Thick-strapped style comes in a choice of nude or black Inspired by celebrities such as Holly Willoughby and Miranda Kerr Plunging styles might be all the rage on the red carpet but navel-scraping necklines aren't always the easiest to wear. Now Wonderbra has come to the rescue with its lowest bra to date which claims to offer support while minimising pitfalls such as accidental nipple slips and side-boob. The bra, which comes in a choice of black or nude, features a front that dips down as far as the solar plexus so it can be worn under a cleavage-baring frock. Taking the plunge: The new offering from Wonderbra comes in a choice of black or nude The new bra comes hot on the heels of research that found one in 10 women plan to wear a daringly low cut dress to their office Christmas party. A celebrity-inspired trend, plunging necklines have become a common sight on the red carpet with Holly Willoughby, Blake Lively and Miranda Kerr among those daring enough to bare their cleavage. But the plunging trend isn't just a hit with celebrities: according to Wonderbra, baring the chest is also popular with women who want to snare themselves a new beau at their office Christmas party. According to research, one in five British people meet their partners at work - another reason why demand for cleavage baring styles has shot up. Inspiration: Cleavage-baring celebrities such as Holly Willoughby helped popularise navel-scraping styles Blushes averted: Wonderbra claim their new invention will eliminate embarrassing nipple slips and side boob Celebrity fans of low cut frocks who have pulled off the same trick include Sophia Vergara, who met her husband Joe Gonzalez on set, and Miranda Kerr who first laid eyes on former husband Orlando Bloom while at work. 'It is great that so many women want to embrace the plunge trend,' said Martina Alexander, Wonderbra's UK marketing manager.
#include<bits/stdc++.h> using namespace std; vector <string> vec; int main() { int n,m; cin>>n>>m; string s; cin>>s; for(int i=0;i<n;i++) { string s2=s.substr(0,i+1); string s3=""; while(s3.size()<m) s3+=s2; vec.push_back(s3.substr(0,m)); } string ans=vec[0]; for(int i=0;i<vec.size();i++) if(vec[i]<ans) ans=vec[i]; cout<<ans; return 0; }
#include <iostream> #include <fstream> #include <vector> using namespace std; ofstream g("problemabelefant.out"); vector <int> divizori; int main() { bool are[15], bun; int x, i, copie, j, sol = 0, incr; cin>>x; copie = x; sol = 0; for(i=0; i<=9; i++) are[i] = false; while(copie > 0) { are[copie%10] = true; copie /= 10; } incr = 1; if(x%2==1) incr = 2; for(i=1; i*i<=x; i+=incr) { if(x%i==0) { divizori.push_back(i); if(i*i!=x) divizori.push_back(x/i); //so the square-root divisor is not added twice } } //1 if(x==1) divizori.push_back(1); for(i=0; i<divizori.size(); i++) { copie = divizori[i]; bun = false; while(!bun && copie > 0) { if(are[copie%10]) bun = true; copie /= 10; } if(bun) sol++; } if(x==1) sol = 1; cout<<sol<<"\n"; return 0; }
package color import ( "github.com/DrSmithFr/go-console/pkg/color" "github.com/stretchr/testify/assert" "testing" ) func TestForeground(t *testing.T) { assert.Equal(t, color.NewColor(30, 39), color.GetForegroundColor(color.BLACK)) assert.Equal(t, color.NewColor(31, 39), color.GetForegroundColor(color.RED)) assert.Equal(t, color.NewColor(32, 39), color.GetForegroundColor(color.GREEN)) assert.Equal(t, color.NewColor(33, 39), color.GetForegroundColor(color.YELLOW)) assert.Equal(t, color.NewColor(34, 39), color.GetForegroundColor(color.BLUE)) assert.Equal(t, color.NewColor(35, 39), color.GetForegroundColor(color.MAGENTA)) assert.Equal(t, color.NewColor(36, 39), color.GetForegroundColor(color.CYAN)) assert.Equal(t, color.NewColor(37, 39), color.GetForegroundColor(color.WHITE)) assert.Equal(t, color.NewColor(39, 39), color.GetForegroundColor(color.DEFAULT)) assert.Panics(t, func() { color.GetForegroundColor("undefined-color") }) }
With Marvel’s Captain America: The Winter Soldier set to invade theaters on April 4th, earlier today Disney started doing press on the film here in Los Angeles. While reviews are embargoed for another week or so, Disney allowed us to tweet after last night’s screening and as I said on twitter, the movie is phenomenal. Loaded with incredible action, a great script, and fantastic character moments, The Winter Soldier is easily one of the best Marvel movies and it’s going to make fans and casual moviegoers very happy. While everyone is excited for The Winter Soldier, all Marvel fans have another date circled on their calendar: May 1, 2015. At the beginning of next summer, The Avengers return under the direction of Joss Whedon with Avengers: Age of Ultron. While the story is under wraps, when I talked to Scarlett Johansson today for Captain America: The Winter Soldier, I asked her about her thoughts on the script. She said the sequel makes the Marvel Universe feel “very close-knit, cerebral, and progressive” and added that it “feels like the continuation of The Avengers.” Hit the jump to watch what she had to say. Here’s the full quote of what Johansson said about the Age of Ultron script. The video is below the quote. “I was just amazed—Joss is something else, he really is. He locks himself away for God knows how long; I think he has the dark circles to prove it. [He] just came out with something totally solid. I was really impressed by his ability to make this ever-expanding Marvel universe feel very close-knit and cerebral and progressive. I mean this movie feels like the continuation of The Avengers, it doesn’t feel like a tag-on or the rehashed version or just ‘let’s throw a bunch of new characters in there and keep the thing alive.’ It really feels like the next step, and all of our characters, our relationships with one another continue to progress, become more intertwined, more complicated, more meshed. And the film has a lot of great comic book moments that the fans are gonna love, but it’s also got a lot of really great dramatic moments that audience members, I think, will really relate to.”
import { Direction } from '../models/direction'; import { Side } from '../models/side'; export class SwipeDragEndCounter { public leftCount: number; public rightCount: number; constructor() { this.reset(); } public reset(): void { this.leftCount = 0; this.rightCount = 0; } /** * @param direction of swipe / pan * @param side hit by swipe */ public addHit(side: Side, dir: Direction): void { this.incrementSide(side); this.clearOppositeSideOfDragDirection(dir); } public hitCountReached(): boolean { return this.leftCount >= 2 || this.rightCount >= 2; } private incrementSide(side: Side): void { if (side === Side.LEFT) { this.leftCount++; this.rightCount = 0; } else if (side === Side.RIGHT) { this.rightCount++; this.leftCount = 0; } } /** * Clear opposite side if swiping in the other direction * @param Direction of swipe / pan */ private clearOppositeSideOfDragDirection(dir: Direction): void { if (dir === Direction.LEFT) { this.leftCount = 0; } else if (dir === Direction.RIGHT) { this.rightCount = 0; } } }
def _get_maybe_abstract_instance(self, data): if isinstance(data, abstract.AbstractOrConcreteValue): data_type = type(data.pyval) if data_type in self.primitive_class_instances: return self.primitive_class_instances[data_type] return data
Renal tubular site of action of felodipine. The renal tubular site of action of felodipine was localized using renal clearance and recollection micropuncture techniques in the anesthetized rat. In initial renal clearance experiments, felodipine (2.75 nM/kg/min i.v. X 60 min) had no effect on mean arterial pressure or glomerular filtration but significantly increased urinary flow rate, sodium and potassium excretion. In subsequent recollection micropuncture experiments, felodipine decreased mean arterial pressure but did not affect renal blood flow or renal vascular resistance or glomerular filtration rate; absolute and fractional urinary excretion of sodium and water, but not potassium, were increased. Proximal tubular and loop of Henle sodium, potassium and water reabsorption were not affected but distal tubular and collecting duct sodium and water (not potassium) reabsorption were decreased by felodipine. Felodipine is a vasodilator antihypertensive agent which, in doses which decrease mean arterial pressure in normotensive rats, increases urinary flow rate and sodium excretion by inhibiting distal tubular and collecting duct sodium and water reabsorption; potassium reabsorption or excretion is not affected. As a vasodilator antihypertensive agent, felodipine possesses beneficial natriuretic rather than detrimental sodium retaining properties.
/// Return the colliding bodies involved for this arbiter. /// The order of the cpSpace.collision_type the bodies are associated with values will match /// the order set when the collision handler was registered. static inline void cpArbiterGetBodies(const cpArbiter *arb, cpBody **a, cpBody **b) { CP_ARBITER_GET_SHAPES(arb, shape_a, shape_b); (*a) = shape_a->body; (*b) = shape_b->body; }
package run; public class Start { private static volatile boolean running = true; public static void main(String[] args) { Runtime.getRuntime().addShutdownHook(new Thread(){ @Override public void run() { System.out.println("app stopped"); } }); synchronized (Start.class) { while(running){ try { Start.class.wait(); } catch (InterruptedException e) { e.printStackTrace(); } } } } }
/** * Creates the meta-model objects for the package. This method is * guarded to have no affect on any invocation but its first. * <!-- begin-user-doc --> * <!-- end-user-doc --> * @generated */ public void createPackageContents() { if (isCreated) return; isCreated = true; transformationEClass = createEClass(TRANSFORMATION); createEReference(transformationEClass, TRANSFORMATION__FUNCTION); createEAttribute(transformationEClass, TRANSFORMATION__FUNCTION_DESCRIPTION); createEAttribute(transformationEClass, TRANSFORMATION__IS_PRIMARY); createEReference(transformationEClass, TRANSFORMATION__SOURCE); createEReference(transformationEClass, TRANSFORMATION__TARGET); createEReference(transformationEClass, TRANSFORMATION__TASK); dataObjectSetEClass = createEClass(DATA_OBJECT_SET); createEReference(dataObjectSetEClass, DATA_OBJECT_SET__SOURCE_TRANSFORMATION); createEReference(dataObjectSetEClass, DATA_OBJECT_SET__TARGET_TRANSFORMATION); createEReference(dataObjectSetEClass, DATA_OBJECT_SET__ELEMENT); transformationTaskEClass = createEClass(TRANSFORMATION_TASK); createEReference(transformationTaskEClass, TRANSFORMATION_TASK__STEP); createEReference(transformationTaskEClass, TRANSFORMATION_TASK__ORIGINAL_TASK); createEReference(transformationTaskEClass, TRANSFORMATION_TASK__INVERSE_TASK); createEReference(transformationTaskEClass, TRANSFORMATION_TASK__TRANSFORMATION); transformationStepEClass = createEClass(TRANSFORMATION_STEP); createEReference(transformationStepEClass, TRANSFORMATION_STEP__TASK); createEReference(transformationStepEClass, TRANSFORMATION_STEP__WAREHOUSE_STEP); createEReference(transformationStepEClass, TRANSFORMATION_STEP__EXECUTION); transformationActivityEClass = createEClass(TRANSFORMATION_ACTIVITY); createEAttribute(transformationActivityEClass, TRANSFORMATION_ACTIVITY__CREATION_DATE); createEReference(transformationActivityEClass, TRANSFORMATION_ACTIVITY__WAREHOUSE_ACTIVITY); createEReference(transformationActivityEClass, TRANSFORMATION_ACTIVITY__EXECUTION); precedenceConstraintEClass = createEClass(PRECEDENCE_CONSTRAINT); transformationUseEClass = createEClass(TRANSFORMATION_USE); createEAttribute(transformationUseEClass, TRANSFORMATION_USE__TYPE); transformationMapEClass = createEClass(TRANSFORMATION_MAP); transformationTreeEClass = createEClass(TRANSFORMATION_TREE); createEAttribute(transformationTreeEClass, TRANSFORMATION_TREE__TYPE); createEReference(transformationTreeEClass, TRANSFORMATION_TREE__BODY); classifierMapEClass = createEClass(CLASSIFIER_MAP); createEReference(classifierMapEClass, CLASSIFIER_MAP__FUNCTION); createEAttribute(classifierMapEClass, CLASSIFIER_MAP__FUNCTION_DESCRIPTION); createEReference(classifierMapEClass, CLASSIFIER_MAP__FEATURE_MAP); createEReference(classifierMapEClass, CLASSIFIER_MAP__CF_MAP); createEReference(classifierMapEClass, CLASSIFIER_MAP__SOURCE); featureMapEClass = createEClass(FEATURE_MAP); createEReference(featureMapEClass, FEATURE_MAP__FUNCTION); createEAttribute(featureMapEClass, FEATURE_MAP__FUNCTION_DESCRIPTION); createEReference(featureMapEClass, FEATURE_MAP__CLASSIFIER_MAP); createEReference(featureMapEClass, FEATURE_MAP__TARGET); stepPrecedenceEClass = createEClass(STEP_PRECEDENCE); classifierFeatureMapEClass = createEClass(CLASSIFIER_FEATURE_MAP); createEReference(classifierFeatureMapEClass, CLASSIFIER_FEATURE_MAP__FUNCTION); createEAttribute(classifierFeatureMapEClass, CLASSIFIER_FEATURE_MAP__FUNCTION_DESCRIPTION); createEAttribute(classifierFeatureMapEClass, CLASSIFIER_FEATURE_MAP__CLASSIFIER_TO_FEATURE); 
createEReference(classifierFeatureMapEClass, CLASSIFIER_FEATURE_MAP__CLASSIFIER_MAP); createEReference(classifierFeatureMapEClass, CLASSIFIER_FEATURE_MAP__CLASSIFIER); createEReference(classifierFeatureMapEClass, CLASSIFIER_FEATURE_MAP__FEATURE); treeTypeEEnum = createEEnum(TREE_TYPE); }
// Treat this as a destructor function. Delete any dynamically allocated memory here void deleteBuffers() { glDeleteVertexArrays(numobjects + ncolors, VAOs); glDeleteVertexArrays(1, &teapotVAO); glDeleteBuffers(numperobj*numobjects + ncolors, buffers); glDeleteBuffers(3, teapotbuffers); }
import { Controller, Get, Header, Res } from '@nestjs/common'; import * as puppeteer from 'puppeteer'; import * as fs from 'fs'; import { Response } from 'express'; @Controller('chrome') export class ChromeController { @Get() async getHello(@Res() res: Response) { const browser = await puppeteer.launch({ ignoreHTTPSErrors: true, dumpio: false, headless: true, args: ['--no-sandbox', '--disable-setuid-sandbox'] }) //.catch(e => console.log('Error launching chrome', e)); const page = await browser.newPage(); const url = 'https://google.com'; console.log(url) await page.goto(url, {waitUntil: ['load', 'domcontentloaded']}); // await page.waitForNavigation({waitUntil: 'networkidle0'}); const pdf = await page.pdf(); fs.writeFileSync('tmp/file.pdf', pdf/*Buffer.from(, 'binary')*/) res.type('pdf'); res.set('Content-Disposition', `attachment; filename="archivo.pdf"`); res.send(pdf) // return // return } }
def detach(ctx, iface, resource_config, **_): params = dict() if not resource_config else resource_config.copy() try: vpc_id = utils.find_ids_of_rels_by_node_type( ctx.instance, VPC_TYPE)[0] except IndexError: vpc_id = None acl_associations = iface.get_network_acls() for acl in acl_associations['NetworkAcls']: if acl[VPC_ID] != vpc_id: continue if acl['IsDefault']: break for _, param in ctx.instance.runtime_properties.get( 'network_acl_associations', {}).items(): params[NETWORKACL_ID] = acl['NetworkAclId'] params[ASSOCIATION_ID] = param['new_assoc_id'] iface.replace(params)
#include <bits/stdc++.h> using namespace std; struct n{ long long a; long long b; }k[500000]; long long vis[500000]; long long cmp(n n1,n n2){ return n1.a<n2.a; } long long ans; void dfs(long long x,long long need){ if(vis[x]==1){ return ; } else{ vis[x]=1; dfs(k[x].b,need); } } int main() { // freopen("in.txt","r",stdin); // freopen("out.txt","w",stdout); long long t; cin >> t; while(t--){ memset(vis,0,sizeof(vis)); long long n; cin>>n; for(long long i=1;i<=n;i++){ cin>>k[i].a; } for(long long i=1;i<=n;i++){ cin>>k[i].b; } sort(k+1,k+n+1,cmp); ans=1; for(long long i=1;i<=n;i++){ if(!vis[i]){ dfs(i,i); ans = ans*2 % 1000000007; } } cout<<ans<<endl; } return 0; }
/* * vpif_release: This function deletes buffer queue, frees the buffers and * the vpif file handle */ static int vpif_release(struct file *filep) { struct vpif_fh *fh = filep->private_data; struct channel_obj *ch = fh->channel; struct common_obj *common = &ch->common[VPIF_VIDEO_INDEX]; if (mutex_lock_interruptible(&common->lock)) return -ERESTARTSYS; if (fh->io_allowed[VPIF_VIDEO_INDEX]) { common->io_usrs = 0; if (VPIF_CHANNEL2_VIDEO == ch->channel_id) { enable_channel2(0); channel2_intr_enable(0); } if ((VPIF_CHANNEL3_VIDEO == ch->channel_id) || (2 == common->started)) { enable_channel3(0); channel3_intr_enable(0); } common->started = 0; videobuf_queue_cancel(&common->buffer_queue); videobuf_mmap_free(&common->buffer_queue); common->numbuffers = config_params.numbuffers[ch->channel_id]; } mutex_unlock(&common->lock); atomic_dec(&ch->usrs); if (fh->initialized) ch->initialized = 0; v4l2_prio_close(&ch->prio, &fh->prio); filep->private_data = NULL; fh->initialized = 0; kfree(fh); return 0; }
Golf can make you deaf, according to an article in the British Medical Journal. My first thought when I heard this, courtesy of a hilarious article in Scientific American by Steve Mirsky, was that it must be the screams of frustration. Actually no, it’s the 112 decibel BANG! produced by a certain titanium driver 1.7 meters from the golfer’s ear. The patient in the BMJ case study suffered 20 dB of hearing loss at frequencies between 3-6 kHz. He described the sound of his King Cobra LD driver hitting the ball as being like “a gun going off.” An Internet search found a number of other comments about the sound of this driver, such as, “This is not so much a ting but a sonic boom which resonates across the course!” You’ve got to hand it to the manufacturers of these new drivers. They’ve found a completely new way (after water, bunker, sand, frustration, and boredom) to spoil a good walk. —Bob Finn
def apply(self, input_data): return self.signal_function(input_data, *self.function_parameters_tuple, **self.sig_kwargs_dict)
Technical Feasibility of Glucose Oxidase as a Prefermentation Treatment for Lowering the Alcoholic Degree of Red Wine In the present work, the use of the glucose oxidase/catalase enzymatic system was evaluated as an alternative to decrease glucose concentration and eventually produce a reduced-alcohol wine. The effects of glucose oxidase, catalase, and aeration on glucose concentration were evaluated after 24 and 48 hr of treatment of 27°Brix Carmenere must. The results showed that the effect of aeration and glucose oxidase was not significant compared with the effect produced by glucose oxidase itself. In addition, the use of catalase combined with glucose oxidase provided the best result, decreasing the glucose concentration by 51 and 78% after 24 and 48 hr, respectively, when 200 U/mL of both enzymes was used. The alcoholic degree obtained after three and five days under this treatment and subsequent fermentations were 15% (v/v) ± 0.8 and 14% (v/v) ± 0.8, respectively. A major drawback of this treatment was the color change of Carmenere must because H2O2 was produced during the glucose oxidase treatment, despite the presence of catalase. The technical feasibility of using this prefermentative process led to a divided conclusion; obtaining a lower alcoholic degree using the glucose oxidase/catalase system was possible, but if the goal is the industrial application of this technique, the color change should be investigated further. An evaluation of the glucose oxidase/catalase ratio was projected to show an improvement of the H2O2 elimination and, subsequently, decrease the effect on color change.
def solve_part1(puzzle_input): rules, _, nearby_tickets = parse_puzzle_input(puzzle_input) error_rate = 0 for ticket in nearby_tickets: invalid_values = (value for value in ticket if not any(rule.is_valid(value) for rule in rules)) error_rate += sum(invalid_values) return error_rate
package cn.jmicro.test; import org.junit.runners.BlockJUnit4ClassRunner; import org.junit.runners.model.InitializationError; import org.junit.runners.model.TestClass; import cn.jmicro.api.classloader.RpcClassLoader; public class JMicroJUnitTestRunner extends BlockJUnit4ClassRunner{ public JMicroJUnitTestRunner(Class<?> klass) throws InitializationError{ super(klass); } @Override protected TestClass createTestClass(Class<?> testClass) { RpcClassLoader cl = new RpcClassLoader(testClass.getClassLoader()); try { String pn = testClass.getPackage().getName(); String[] pns = pn.split("\\."); if(pns.length >= 2) { cl.addBasePackage(pns[0]+"." + pns[1]); }else { cl.addBasePackage(pn); } Thread.currentThread().setContextClassLoader(cl); Class<?> clazz = cl.loadClass(testClass.getName()); return new TestClass(clazz); } catch (ClassNotFoundException e) { throw new RuntimeException(e); } } }
President Trump spoke about religious freedom at the Celebrate Freedom Concert in Washington D.C. on July 1. (The Washington Post) The war on Christmas came early this year. That is, according to President Trump, who devoted a large portion of his speech at a Celebrate Freedom event at the Kennedy Center on Saturday to railing against those who might try to shy away from overt references to Christianity in American discourse. “Our religious liberty is enshrined in the very first amendment in the Bill of Rights. The American founders invoked our Creator four times in the Declaration of Independence,” Trump said. “Benjamin Franklin reminded his colleague at the Constitutional Convention to begin by bowing their heads in prayer. I remind you that we’re going to start saying ‘Merry Christmas’ again.” Though the rally was meant to honor military veterans, Trump opened his speech by attacking the media and boasting of his election win. “The fake media tried to stop us from going to the White House,” he said, “but I’m president and they’re not.” He then declared that he would fight any “bureaucrats” who “think they can run over your lives, overrule your values, meddle in your faith and tell you how to live, what to say and where to pray.” The mostly evangelical Christian crowd at the event — which was sponsored by the First Baptist Dallas megachurch and Salem Media Group — responded to Trump’s remarks with resounding applause. President Trump at a Celebrate Freedom event at the Kennedy Center on Saturday. (Olivier Douliery/Bloomberg News) “Politicians have tried — oh, have they tried — to centralize authority among the hands of a small few in our nation’s capital,” Trump said. “I see them all the time … But we know that parents, not bureaucrats, know best how to raise their children and create a thriving society. And we know that families and churches, not government officials, know best how to create a strong and loving community.” After a pause, he added: “And, above all else, we know this: In America, we don’t worship government. We worship God.” As cheers broke out after that line, Trump nodded his head and mouthed: “Thank you.” He then pumped his right fist as the crowd began chanting: “U-S-A! U-S-A!” As The Washington Post’s Sarah Pulliam Bailey reported last year, one of Trump’s campaign promises was the assurance that Americans would see “Merry Christmas” being used more. It was a strategy that paid off come November: Many of Trump’s promises, including his emphasis on “Merry Christmas,” included direct appeals to religious voters, especially to evangelical voters who came out and voted overwhelmingly in favor of him. His spiritual cabinet during the campaign was made up of conservative Christian leaders, many of whom identify with the prosperity gospel movement that links faith with wealth. [We’re all going to be saying ‘Merry Christmas’: Here are Donald Trump’s campaign promises on religion] “When was the last time you saw ‘Merry Christmas’? You don’t see it anymore,” then-candidate Trump said in a campaign speech at Liberty University in January 2016. “They want to be politically correct. If I’m president, you will see ‘Merry Christmas’ in department stores, believe me, believe me.” Though there has not been a Christmas yet since Trump took office, references to religious freedom during his presidency have centered on Christianity. 
For the first time in nearly two decades, the White House did not recognize Ramadan with an iftar dinner or an Eid al-Fitr celebration this year, which some viewed as a slight against Muslim American communities. Trump’s campaign rhetoric — and speeches since becoming president — have extended a long argument between ideologues over holiday greetings. During his tenure, President Barack Obama was frequently accused by the right of being too politically correct in his annual holiday cards, even though he and Michelle Obama wished the nation “Merry Christmas” every year while in office in spoken and other addresses. [Poll: Conservatives most likely to be offended by holiday greetings] For years, retailers and politicians have found themselves at the center of the politicized debate over the use of the neutral greeting “Happy Holidays” — which has the potential to be even more offensive to some groups than “Merry Christmas,” according to a recent survey from Public Policy Polling. The Post’s Christopher Ingraham parsed the survey results last year and surmised that those who were most offended by references to “Happy Holidays” included strong conservatives, Gary Johnson voters, Trump supporters and men. “These are the same groups of people that tend to say there is too much political correctness in society, yielding a paradox,” Ingraham reported. “The folks who complain the most about political correctness are the ones who are the most offended by what they see as ‘incorrect’ speech.” When it comes to holiday greetings, Donald Trump has one clear preference. (Adriana Usero/The Washington Post) Read more: America never stopped saying ‘Merry Christmas’ Should stores say ‘merry Christmas’? Trump says yes. But just ask the shop owners. President Trump just ended a long tradition of celebrating Ramadan at the White House
package com.netflix.discovery.shared.transport; import com.netflix.config.DynamicPropertyFactory; /** * @author <NAME> */ public class DefaultEurekaTransportConfig implements EurekaTransportConfig { private static final String SUB_NAMESPACE = "transport."; private final String namespace; private final DynamicPropertyFactory configInstance; public DefaultEurekaTransportConfig(String parentNamespace, DynamicPropertyFactory configInstance) { this.namespace = parentNamespace == null ? SUB_NAMESPACE : parentNamespace + SUB_NAMESPACE; this.configInstance = configInstance; } @Override public int getSessionedClientReconnectIntervalSeconds() { return configInstance.getIntProperty(namespace + "sessionedClientReconnectIntervalSeconds", 20*60).get(); } @Override public double getRetryableClientQuarantineRefreshPercentage() { return configInstance.getDoubleProperty(namespace + "retryableClientQuarantineRefreshPercentage", 0.66).get(); } @Override public int getBootstrapResolverRefreshIntervalSeconds() { return configInstance.getIntProperty(namespace + "bootstrapResolverRefreshIntervalSeconds", 5*60).get(); } @Override public int getApplicationsResolverDataStalenessThresholdSeconds() { return configInstance.getIntProperty(namespace + "applicationsResolverDataStalenessThresholdSeconds", 5*60).get(); } @Override public int getAsyncResolverRefreshIntervalMs() { return configInstance.getIntProperty(namespace + "asyncResolverRefreshIntervalMs", 5*60*1000).get(); } @Override public int getAsyncResolverWarmUpTimeoutMs() { return configInstance.getIntProperty(namespace + "asyncResolverWarmupTimeoutMs", 5000).get(); } @Override public int getAsyncExecutorThreadPoolSize() { return configInstance.getIntProperty(namespace + "asyncExecutorThreadPoolSize", 5).get(); } @Override public String getReadClusterVip() { return configInstance.getStringProperty(namespace + "readClusterVip", null).get(); } @Override public boolean useBootstrapResolverForQuery() { return configInstance.getBooleanProperty(namespace + "useBootstrapResolverForQuery", true).get(); } }
Ultrasound-guided vs. palpation-guided techniques for radial arterial catheterisation in infants: A randomised controlled trial BACKGROUND The usefulness of ultrasound-guided techniques for radial arterial catheterisation has been well identified; however, its usefulness has not been completely evaluated in infants under 12 months of age, who are generally considered the most difficult group for arterial catheterisation. OBJECTIVE We evaluated whether ultrasound guidance would improve success rates and reduce the number of attempts at radial arterial catheterisation in infants. DESIGN A randomised, controlled and patient-blinded study. SETTING Single-centre trial, study period from June 2016 to February 2017. PATIENTS Seventy-four infants undergoing elective cardiac surgery. INTERVENTION Patients were allocated randomly into either ultrasound-guided group (group US) or palpation-guided group (group P) (each n=37) according to the technique applied for radial arterial catheterisation. All arterial catheterisations were performed by one of two experienced anaesthesiologists based on group assignment and were recorded on video. MAIN OUTCOME MEASURES The primary endpoint was the first-pass success. The number of attempts and total duration of the procedure until successful catheterisation were also analysed. RESULTS The first-pass success rate was significantly higher in the group US than in the group P (68 vs. 38%, P = 0.019). In addition, fewer attempts were needed for successful catheterisation in the group US than in the group P (median 1 vs. 2 , P = 0.023). However, the median procedural time (s) until successful catheterisation in the two groups was not significantly different (102 vs. 218 , P = 0.054). CONCLUSION The current study demonstrated that the ultrasound-guided technique for radial arterial catheterisation in infants effectively improved first-pass success rate and also reduced the number of attempts required. TRIAL REGISTRATION ClinicalTrials.gov NCT02795468.
import java.util.Scanner; public class Solution { public static void main(String[] args) { // TODO Auto-generated method stub Scanner in = new Scanner(System.in); long s1 = in.nextLong(); long s2 = in.nextLong(); long k = in.nextLong(); long m = in.nextLong(); long[] ar1 = new long[(int)s1]; long[] ar2 = new long[(int)s2]; for(long i = 0; i< s1; i ++) ar1[(int)i] = in.nextLong(); for(long i = 0; i< s2; i ++) ar2[(int)i] = in.nextLong(); long temp = ar1[(int)k - 1]; long temp1 = ar2[(int)(s2 -m) ]; if( temp >= temp1) System.out.println("NO"); else System.out.println("YES"); // for(long i = s2-1; i>= 0; i--) { // long run2 = ar2[(int)i]; // boolean flag = false; // boolean isEnough = false; // for(long j = s1-1; j >= 0 ; j --) { // if(ar1[(int)j] >= ar2[(int)i]) continue; // else { // if(!flag) { // flag = true; // count1++; // count++; // } else { // count++; // } // } // if(count >= k && count1 == m) { // isEnough = true; // //System.out.println("count:= " + k + " count1:= " + m); // break; // } // } // if(isEnough) break; // } // if(count >= k && count1 == m) System.out.println("YES"); // else System.out.println("NO"); in.close(); } }
def delegate(self, policy: ABCPolicy):
#include "GlowActionLabel.hh" #include <algorithm> #include <iostream> #include <cassert> #include <glow/common/thread_local.hh> #include <glow/objects/Timestamp.hh> #include <mutex> #include <queue> #include <stack> #include <vector> using namespace glow; namespace { struct QueryEntry { int index; SharedTimestamp queryStart; SharedTimestamp queryEnd; }; GLOW_THREADLOCAL std::vector<GlowActionLabel::Entry> *sEntries = nullptr; GLOW_THREADLOCAL std::queue<QueryEntry> *sQueries = nullptr; GLOW_THREADLOCAL std::vector<SharedTimestamp> *sTimers = nullptr; GLOW_THREADLOCAL std::stack<int> *sEntryStack = nullptr; std::mutex sLabelLock; std::vector<GlowActionLabel *> sLabels; std::vector<std::vector<GlowActionLabel::Entry> *> sEntriesPerThread; #if _MSC_VER LARGE_INTEGER sFrequency; // null init #endif int64_t getTime() { #if _MSC_VER LARGE_INTEGER time; QueryPerformanceCounter(&time); uint32_t secs = int32_t(time.QuadPart / sFrequency.QuadPart); uint32_t nsecs = int32_t((time.QuadPart % sFrequency.QuadPart) * 1000000000LL / sFrequency.QuadPart); #else struct timespec t; clock_gettime(CLOCK_MONOTONIC, &t); uint32_t secs = t.tv_sec; uint32_t nsecs = t.tv_nsec; #endif return secs * 1000000000ULL + nsecs; } } std::string GlowActionLabel::shortDesc() const { auto filename = mFile; if (filename.find('/') != std::string::npos) filename = filename.substr(filename.rfind('/') + 1); if (filename.find('\\') != std::string::npos) filename = filename.substr(filename.rfind('\\') + 1); auto name = mName; if (name.empty()) name = nameOrFunc(); else name = "\"" + name + "\""; return name + ", " + filename + ":" + std::to_string(mLine); } std::string GlowActionLabel::nameOrFunc() const { auto name = mName; if (name.empty()) { name = mFunction; name = name.substr(0, name.find('(')); name = name.substr(name.rfind(' ') + 1); // TODO: more special cases name += "()"; } return name; } GlowActionLabel::GlowActionLabel(const char *file, int line, const char *function, const char *name) : mName(name), mFile(file), mLine(line), mFunction(function) { sLabelLock.lock(); #if _MSC_VER if (sFrequency.QuadPart == 0) QueryPerformanceFrequency(&sFrequency); #endif mIndex = sLabels.size(); sLabels.push_back(this); if (!sEntries) { sEntries = new std::vector<Entry>(); sQueries = new std::queue<QueryEntry>(); sTimers = new std::vector<SharedTimestamp>(); sEntryStack = new std::stack<int>(); sEntriesPerThread.push_back(sEntries); } sLabelLock.unlock(); } void GlowActionLabel::startEntry() { int entryIdx = sEntries->size(); sEntryStack->push(entryIdx); Entry e; e.label = this; e.timeStartCPU = getTime(); e._queryEnd = getQuery(); sEntries->push_back(e); QueryEntry qe; qe.index = entryIdx; qe.queryStart = getQuery(); qe.queryEnd = e._queryEnd; qe.queryStart->save(); sQueries->push(qe); } void GlowActionLabel::endEntry() { assert(!sEntryStack->empty() && "stack empty"); auto idx = sEntryStack->top(); sEntryStack->pop(); Entry &e = sEntries->at(idx); e.timeEndCPU = getTime(); e._queryEnd->save(); e._queryEnd = nullptr; } std::vector<GlowActionLabel *> GlowActionLabel::getAllLabels() { sLabelLock.lock(); auto labels = sLabels; sLabelLock.unlock(); return labels; } void GlowActionLabel::update(bool force) { if (!sQueries) return; while (!sQueries->empty()) { auto const &qe = sQueries->front(); auto const &e = sEntries->at(qe.index); if (e.timeEndCPU == 0) break; // not finished yet if (force || (qe.queryStart->isAvailable() && qe.queryEnd->isAvailable())) { auto &e = sEntries->at(qe.index); e.timeStartGPU = qe.queryStart->getNanoseconds(); 
e.timeEndGPU = qe.queryEnd->getNanoseconds(); releaseQuery(qe.queryStart); releaseQuery(qe.queryEnd); auto l = e.label; l->mEntriesMutex.lock(); l->mEntries.push_back(e); l->mEntriesMutex.unlock(); sQueries->pop(); } else break; } } void GlowActionLabel::print(int maxLines) { struct Result { double sumCPU; double sumGPU; double avgCPU; double avgGPU; int count; std::string name; }; std::vector<Result> labels; sLabelLock.lock(); for (auto const &l : sLabels) { double sumCPU = 0; double sumGPU = 0; int count = 0; l->mEntriesMutex.lock(); for (auto e : l->mEntries) { if (!e.isValid()) continue; sumCPU += e.durationCPU(); sumGPU += e.durationGPU(); ++count; } l->mEntriesMutex.unlock(); if (count == 0) ++count; labels.push_back({sumCPU, sumGPU, sumCPU / count, sumGPU / count, count, l->shortDesc()}); } sLabelLock.unlock(); std::sort(begin(labels), end(labels), [](Result const &r, Result const &l) { return std::max(r.sumGPU, r.sumCPU) > std::max(l.sumGPU, l.sumCPU); }); if (labels.size() == 0) return; // int wTime = 8; // int wCount = 6; std::cout << "GPU Sum, CPU Sum, GPU, CPU, Cnt, Name" << std::endl; for (auto i = 0u; i < labels.size(); ++i) { if (maxLines-- <= 0) break; auto const &r = labels[i]; std::cout << r.sumGPU / 10e6 << " ms, " << r.sumCPU / 10e6 << " ms, " << r.avgGPU / 10e6 << " ms, " << r.avgCPU / 10e6 << " ms, " << r.count << "x, " << r.name << std::endl; } } SharedTimestamp GlowActionLabel::getQuery() { if (sTimers->empty()) return Timestamp::create(); else { auto timer = sTimers->back(); sTimers->pop_back(); return timer; } } void GlowActionLabel::releaseQuery(const SharedTimestamp &query) { sTimers->push_back(query); }
/** * View shelveset action for a queued build. */ public class ViewShelvesetAction extends QueuedBuildAction { @Override public void doRun(final IAction action) { final IQueuedBuild queuedBuild = getSelectedQueuedBuild(); if (queuedBuild != null) { final TFSRepository repository = TFSCommonUIClientPlugin.getDefault().getProductPlugin().getRepositoryManager().getDefaultRepository(); PendingChangesHelpers.showShelvesetDetails(getShell(), repository, queuedBuild.getShelvesetName()); } } @Override protected void onSelectionChanged(final IAction action, final ISelection selection) { super.onSelectionChanged(action, selection); if (action.isEnabled()) { final IQueuedBuild queuedBuild = getSelectedQueuedBuild(); final BuildReason reason = queuedBuild.getReason(); action.setEnabled( (reason.contains(BuildReason.CHECK_IN_SHELVESET) || reason.contains(BuildReason.VALIDATE_SHELVESET)) && queuedBuild.getShelvesetName() != null); } } }
import { Auth, expiredAtKey } from './auth-service';
import { MockRouter } from '../mocks/router.mock';

describe('Auth Service', () => {
    let authService: Auth;
    let mockRouter: any;

    beforeEach(() => {
        mockRouter = new MockRouter();
        authService = new Auth(mockRouter, null);
    });

    describe('authenticated', () => {
        it('should be expired', () => {
            localStorage.setItem(expiredAtKey, '');
            expect(authService.authenticated()).toBeFalsy();
        });

        it('should not be expired', () => {
            // Use the exported key constant (not the literal string 'expiredAtKey')
            // so the service actually sees the future expiry time set here.
            localStorage.setItem(expiredAtKey, (new Date().getTime() + 10000).toString());
            expect(authService.authenticated()).toBeTruthy();
            localStorage.setItem(expiredAtKey, '');
        });
    });
});
BioWare I wasn’t going to write anything about Mass Effect 3 and the Not-So-Great Ending backlash. I really wasn’t. You’re probably as sick of reading about it as I am. What’s more, I’ve been less-than-enthralled with the Mass Effect games. Saying crass things about BioWare’s beloved sci-fi opera and defending the way they ended it? I must be suicidal. But then I read that the Better Business Bureau was looking to get involved. Or already is, weighing in on whether BioWare “misled” consumers about the ending. And so I’m compelled to say something, because I think fans — and now a consumer watchdog group — are taking things a space-time bridge too far. If you’re just joining us, a bunch of Mass Effect fans — precise numbers unknown, so maybe lots, or just a vocal minority — have been up in arms about the way BioWare ended Mass Effect 3, the big trilogy finale that launched on March 6, 2012. The specific reasons why vary, depending who you talk to, but range from complaints about the number of alternative wrap-ups (“too few”) and the narrative depth of the existing ones (“too superficial”) to the tenor of the ending (minor spoiler ahead — “too bleak”). Poke around and you’ll find online petitions and Facebook pages demanding “better” or “happier” endings. (MORE: Bioware to Extend ‘Mass Effect 3’ Ending, Pacify the Mob) That culminated in BioWare sort of capitulating and promising a freely downlodable “extended cut” for the game sometime this summer that it says will offer “a more fleshed out experience for our fans.” But while BioWare promises the DLC “will offer extended scenes that provide additional context and deeper insight to the conclusion of Commander Shepard’s journey,” they’re clear that “no further ending DLC is planned.” You’ll get more context from the DLC, in other words, but Mass Effect 3‘s endings will stand as-is. The company adds that it “strongly believes in the team’s artistic vision for the end of this arc of the Mass Effect franchise.” Obviously that’s not going to placate the most zealous of Mass Effect 3‘s detractors, but that’s to be expected. What I wasn’t expecting: to find the Better Business Bureau weighing in on Tuesday, arguing that the game’s advertising doesn’t add up. “Consider this,” writes Marjorie Stephens, communications director for the BBB of Northern Indiana. “If you had purchased a game for $59.99 or $79.99 for the digital download version and were told that you had complete control over the game’s outcome by the choices your character made and then actually had no control over the game’s outcome, wouldn’t you be disappointed?!” Well yes, I suppose most people would. But as Stephens herself admits, that’s not exactly what BioWare promised. On the official Mass Effect website, under “About,” the company entreats players to “experience the beginning, middle, and end of an emotional story unlike any other, where the decisions you make completely shape your experience and outcome.” The key words here are “completely,” “experience” and “outcome.” If you read that strictly, as Stephens does, you might walk away thinking Mass Effect 3 offers a limitless number of endings. But I think most gamers are savvy enough, both about the Mass Effect series and the way games like this work in general, to know BioWare was talking about the overall experience, and “outcome” not in terms of the game’s final 15 or 30 minutes — is that even describable as the “outcome,” as if the game were just a mammoth math equation? 
The problem word is “completely,” which is just marketing hyperbole and rightly scorned, but I wouldn’t go so far as to call it false advertising. (MORE: Why Mass Effect 3′s Multiplayer Doesn’t Bother Me) Mass Effect always felt more like a play-along novel than a roleplaying game to me, an adventure game punctuated by combat and ability-tweaking, where you mostly listen and watch between action or exploration sequences. It has far more in common with a “Choose Your Own Adventure” book than, say, games like Skyrim or Grand Theft Auto. And like a “Choose Your Own Adventure” book, it has limited outcomes. You can’t tell as on-rails a story as BioWare does in the Mass Effect games and still deliver a please-all wrap-up. There’s just no way. People were bound to be disappointed. This is mostly Bioware’s story, after all. Choosing to save this person or that one, picking friendlier conversational phrases over dictatorial ones — that stuff’s there to help color between the lines, but in the end, those lines are drawn. We’re all (spoilers ahead) joining the Spectres, fighting Saren, getting our prize ship pulverized, dying, coming back as kind-of-cyborgs, joining a nefarious shadow organization, rounding up a posse and taking the fight to an intergalactic menace. Mass Effect is in that sense more a story-telling than a role-playing game. Or someone else’s dinner party: You can pick which utensils to use and maybe choose whether to have your salad before or with the main entree and coffee before or with dessert, but in the end, everyone’s having the same thing. Sometimes endings go wrong. Sometimes people love bad ones. And sometimes they hate great ones. While I love the way much of the novel Under the Dome’s written, I can’t get past the Twilight Zone-style zinger Stephen King drops in the wrap-up. Like many, I was seriously bummed about Damon Lindelof and Carlton Cuse’s sentimental Lost series finale. And I mostly enjoyed Vincent Ward’s What Dreams May Come until…yep, the ultra-schmaltzy ending. But never once did I feel my creative distaste, my “I would have done it this way, or maybe this other way,” could be righted by demanding and receiving a ret-con (or worse, a crowd-sourced one). Who’s to say my ideas for those endings would have been better ones? Storytelling run through a crowd-sourced blender too often looks like Easy-Cheese. It’s simple math, the law of averages. The best endings are more often the controversial outlier ones. I’m not saying that’s true of the way BioWare ended Mass Effect 3, but I’m also not saying it isn’t. All I am saying, is that demanding storytellers change endings is wrongheaded. We don’t have to like them or say nice things about them. We may even write elaborate critiques. But handing creators a list of demands, requiring that they deliver a certain number of endings, some of them happy, etc. only diminishes what storytellers do. And the question we’re left to answer, really, is: How much satisfaction did we derive from playing the game before the curtain finally fell? I’d like to leave you with this, from the coda to Stephen King’s The Dark Tower, a series that itself elicited both praise and scorn for the way King chose to wrap things up after seven books and a readership that followed the tale for decades. I think it gets to the heart of what’s at stake here. I’ve told my tale all the way to to the end … I can stop now … Yet some of you … are likely not so willing. 
You are the grim, goal-oriented ones who will not believe that the joy is in the journey rather than the destination no matter how many times it has been proven to you … I hope most of you know better. Want better. I hope you came to hear the tale, and not just munch your way through… An ending is a closed door no man … can open. I’ve written many, but most only for the same reason that I pull on my pants in the morning before leaving the bedroom — because it is the custom of the country … There is no such thing as a happy ending. I never met a single one to equal “Once upon a time.” Endings are heartless. Ending is just another word for goodbye. MORE: Mass Effect 3 Review Roundup: Pleasing Even the Usual Critics
Autumn Pasquale's estate is claiming local police failed to follow the correct procedures in the search for Autumn, therefore "failing" the 12-year-old murder victim and her family. The estate, which is comprised of Anthony Pasquale, Autumn's father, and her two siblings, filed suit against six municipalities, three Clayton Police officers, the state police and the Gloucester County Prosecutor's Office stating they failed to utilize "adequate law enforcement techniques." Autumn Pasquale was reported missing on Saturday, Oct. 20, 2012. Her body was found two days later, stuffed into a recycling bin on an abandoned property adjacent to the home of 15-year-old Justin Robinson, who later pleaded guilty to her murder. She was strangled the day she disappeared, according to autopsy reports. The lawsuit, filed in Gloucester County's Superior Court claims law enforcement should have implemented the Child Abduction Response Team (CART) — a "multi-disciplinary approach to responding to missing or abducted children" — as soon as Autumn was reported missing, rather than the next day. It also claims local and state police refused assistance from Joseph Nicholas, a retired law enforcement investigator. "Law enforcement did not follow appropriate procedures as they should have been trained to do, including canvassing the immediate area. Law enforcement failed Autumn," the suit reads. If police were trained in proper search techniques and those techniques were used "Autumn may have been discovered sooner, and a reasonable chance exists that she would have survived," the suit claims. "Law enforcement could not have prevented the death of Autumn, because she was killed several hours before she was ever reported missing, said Bernie Weisenfeld, a spokesman for the county prosecutor's office. "Any civil action seeking monetary damages by pointing blame at police is misplaced." The Pasquale estate is asking for monetary damages on seven counts against the government entities named in the suit. In addition to losing the "support and care and attention" of Autumn, they also stipulate that her parents have lost "any direct financial contributions which would have been made" by Autumn as she became a wage earner and that they "lost the value of the child's anticipated services to survivors such as household chores and care of siblings." Anthony Pasquale also filed suit last year against Justin Robinson's parents, Anita Saunders and Alonzo Robinson. In that suit he claims the parents should have known that their son was regularly stealing bicycles and had pre-existing emotional psychological and neurodevelopmental problems. Anthony Pasquale refused to comment on the lawsuit. The estate's attorney Gregg Zeff was not immediately available for comment Monday morning. Rebecca Forand may be reached at [email protected] . Follow her on Twitter @RebeccaForand . Find the South Jersey Times on Facebook
`Standard' Cosmological model&beyond with CMB Observational Cosmology has indeed made very rapid progress in the past decade. The ability to quantify the universe has largely improved due to observational constraints coming from structure formation Measurements of CMB anisotropy and, more recently, polarization have played a very important role. Besides precise determination of various parameters of the `standard' cosmological model, observations have also established some important basic tenets that underlie models of cosmology and structure formation in the universe -- `acausally' correlated initial perturbations in a flat, statistically isotropic universe, adiabatic nature of primordial density perturbations. These are consistent with the expectation of the paradigm of inflation and the generic prediction of the simplest realization of inflationary scenario in the early universe. Further, gravitational instability is the established mechanism for structure formation from these initial perturbations. The signature of primordial perturbations observed as the CMB anisotropy and polarization is the most compelling evidence for new, possibly fundamental, physics in the early universe. The community is now looking beyond the estimation of parameters of a working `standard' model of cosmology for subtle, characteristic signatures from early universe physics. Introduction The 'standard' model of cosmology must not only explain the dynamics of the homogeneous background universe, but also satisfactorily describe the perturbed universe -the generation, evolution and finally, the formation of large scale structures in the universe. It is fair to say much of the recent progress in cosmology has come from the interplay between refinement of the theories of structure formation and the improvement of the observations. The transition to precision cosmology has been spearheaded by measurements of CMB anisotropy and, more recently, polarization. Despite its remarkable success, the 'standard' model of cosmology remains largely tied to a number of fundamental assumptions that have yet to find complete and precise observational verification : the Cosmological Principle, the paradigm of inflation in the early universe and its observable consequences (flat spatial geometry, scale invariant spectrum of primordial seed perturbations, cosmic gravitational radiation background etc.). Our understanding of cosmology and structure formation necessarily depends on the rather inaccessible physics of the early universe that provides the stage for scenarios of inflation (or related alternatives). The CMB anisotropy and polarization contains information about the hypothesized nature of random primordial/initial metric perturbations -(Gaussian) statistics, (nearly scale invariant) power spectrum, (largely) adiabatic vs. iso-curvature and (largely) scalar vs. tensor component. The 'default' settings in brackets are motivated by inflation. The signature of primordial perturbations on superhorizon scales at decoupling in the CMB anisotropy and polarization are the most definite evidence for new physics (eg., inflation ) in the early universe that needs to be uncovered. However, the precision estimation of cosmological parameters implicitly depend on the assumed form of the initial conditions such as the primordial power spectrum, or, explicitly on the scenario of generation of initial perturbations . 
Besides precise determination of various parameters of the 'standard' cosmological model, observations have also begum to establish (or observationally query) some of the important basic tenets of cosmology and structure formation in the universe -'acausally' correlated initial perturbations, adiabatic nature of primordial density perturbations, gravitational instability as the mechanism for structure formation. We have inferred a spatially flat universe where structures form by the gravitational evolution of nearly scale invariant, adiabatic perturbations in the non-baryonic cold dark matter. There is a dominant component of dark energy that does not cluster (on astrophysical scales). We briefly review the observables from the CMB sky and importance to understanding cosmology in section 2 Most recent estimates of the cosmological parameters are available and best obtained from recent literature, eg. Ref. and, hence, is not given in the article. The main theme of the article is to highlight 1 the success of recent cosmological observations in establishing some of the fundamental tenets of cosmology and structure : • Statistical Isotropy of the universe (Sec. 3); • Gravitational instability mechanism for structure formation(Sec. 4); • Primordial perturbations from Inflation.(Sec. 5). Up to this time, the attention of the community has been largely focused on estimating the cosmological parameters. The next decade would see increasing efforts to observationally test fundamental tenets of the cosmological model and search for subtle deviations from the same using the CMB anisotropy and polarization measurements and related LSS observations, galaxy survey, gravitational lensing, etc. CMB observations and cosmological parameters The angular power spectra of the Cosmic Microwave Background temperature fluctuations (C ℓ )have become invaluable observables for constraining cosmological models. The position and amplitude of the peaks and dips of the C ℓ are sensitive to important cosmological parameters, such as, the relative density of matter, Ω 0 ; cosmological constant, Ω Λ ; baryon content, Ω B ; Hubble constant, H 0 and deviation from flatness (curvature), Ω K . The angular spectrum of CMB temperature fluctuations has been measured with high precision on up to angular scales (ℓ ∼ 1000) by the WMAP experiment , while smaller angular scales have been probed by ground and balloon-based CMB experiments such as ACBAR, QuaD and ACT . These data are largely consistent with a ΛCDM model in which the Universe is spatially flat and is composed of radiation, baryons, neutrinos and, the exotic, cold dark matter and dark energy. The exquisite measurements by the Wilkinson Microwave Anisotropy Probe (WMAP) mark a successful decade of exciting CMB anisotropy measurements and are considered a milestone because they combine high angular resolution with full sky coverage and extremely stable ambient condition (that control systematics) allowed by a space mission . Figure 1 shows the angular power spectrum of CMB temperature fluctuations obtained from the 5 & 7-year WMAP data . The measurements of the anisotropy in the cosmic microwave background (CMB) over the past decade has led to 'precision cosmology'. Observations of the large scale structure in the distribution of galaxies, high redshift supernova, and more recently, CMB polarization, have provided the required complemen-tary information. 
The current up to date status of cosmological parameter estimates from joint analysis of CMB anisotropy and Large scale structure (LSS) data is usually best to look up in the parameter estimation paper accompanying the most recent results announcement of a major experiment, such as recent WMAP release . One of the firm predictions of this working 'standard' cosmological model is linear polarization pattern (Q and U Stokes parameters) imprinted on the CMB at last scattering surface. Thomson scattering generates CMB polarization anisotropy at decoupling . This arises from the polarization dependence of the differential cross section: dσ/dΩ ∝ |ǫ ′ ·ǫ| 2 , where ǫ and ǫ ′ are the incoming and outgoing polarization states involving linear polarization only. A local quadrupole temperature anisotropy produces a net polarization, because of the cos 2 θ dependence of the cross section. A net pattern of linear polarization is retained due to local quadrupole intensity anisotropy of the CMB radiation impinging on the electrons at z rec . The polarization pattern on the sky can be decomposed in the two kinds with different parities. The even parity pattern arises as the gradient of a scalar field called the E-mode. The odd parity pattern arises from the 'curl' of a pseudo-scalar field called the B-mode of polarization. Hence the CMB sky maps are characterized by a triplet of random scalar fields: X(n) ≡ {T (n), E(n), B(n)}. For Gaussian CMB sky, there are a total of 4 power spectra that characterize the CMB signal : Parity conservation eliminates the two other possible power spectra, C TB ℓ & C EB ℓ . While CMB temperature anisotropy can also be generated during the propagation of the radiation from the last scattering surface, the CMB polarization signal can be generated primarily at the last scattering surface, where the optical depth transits from large to small values. The polarization information complements the CMB temperature anisotropy by isolating the effect at the last scattering surface from effects along the line of sight. The CMB polarization is an even cleaner probe of early universe scenarios that promises to complement the remarkable successes of CMB anisotropy measurements. The CMB polarization signal is much smaller than the anisotropy signal. Measurements of polarization at sensitivities of µK (E-mode) to tens of nK level (B-mode) pose stiff challenges for ongoing and future experiments. After the first detection of CMB polarization spectrum by the Degree Angular Scale Interferometer (DASI) on the intermediate band of angular scales (l ∼ 200 − 440) in late 2002 , the field has rapidly grown, with measurements coming in from a host of ground-based and balloon-borne dedicated CMB polarization experiments. The full sky E-mode polarization maps and polarization spectra from WMAP were a new milestone in CMB research . The most current CMB polarization measurement of C TT ℓ , C TE ℓ and C EE ℓ and a non-detection of B-modes come from QUaD and BICEP. They also report interesting upper limits C TB ℓ or C EB ℓ , over and above observational artifacts . A non-zero detection of C TB ℓ or C EB ℓ , over and above observational artifacts, could be tell-tale signatures of exotic parity violating physics and the CMB measurements put interesting limits on these possibilities. While there has been no detection of cosmological signal in B-mode of polarization, the lack of B-mode power suggests that foreground contamination is at a manageable level which is good news for future measurements. 
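In standard notation, the four non-vanishing spectra referred to in this section are built from the harmonic coefficients of the triplet $X(\hat n) = \{T(\hat n), E(\hat n), B(\hat n)\}$ (this compact form is a textbook definition added here for definiteness, not a quotation of the original displayed equation):
\[
C_\ell^{XY} \;=\; \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \bigl\langle a^{X}_{\ell m}\, a^{Y*}_{\ell m} \bigr\rangle,
\qquad XY \in \{TT,\; EE,\; BB,\; TE\},
\]
while parity conservation eliminates $C_\ell^{TB}$ and $C_\ell^{EB}$ for the standard cosmological signal.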
The Planck satellite launched in May 2009 will greatly advance our knowledge of CMB polarization by providing foreground/cosmic variance-limited measure-ments of C TE ℓ and C EE ℓ out beyond l ∼ 1000. We also expect to detect the weak lensing signal, although with relatively low precision. Perhaps, Planck could detect inflationary gravitational waves if they exist at a level of r ∼ 0.1. In the future, a dedicated CMB polarization mission is under study at both NASA and ESA in the time frame 2020+. These primarily target the B-mode polarization signature of gravity waves, and consequently, identify the viable sectors in the space of inflationary parameters. Statistical Isotropy of the universe The Cosmological Principle that led to the idealized FRW universe found its strongest support in the discovery of the (nearly) isotropic, Planckian, Cosmic Microwave Background. The isotropy around every observer leads to spatially homogeneous cosmological models. The large scale structure in the distribution of matter in the universe (LSS) implies that the symmetries incorporated in FRW cosmological models ought to be interpreted statistically. The CMB anisotropy and its polarization is currently the most promising observational probe of the global spatial structure of the universe on length scales close to, and even somewhat beyond, the 'horizon' scale (∼ cH −1 0 ). The exquisite measurement of the temperature fluctuations in the CMB provide an excellent test bed for establishing the statistical isotropy (SI) and homogeneity of the universe. In 'standard' cosmology, CMB anisotropy signal is expected to be statistically isotropic, i.e., statistical expectation values of the temperature fluctuations ∆T (q) are preserved under rotations of the sky. In particular, the angular correlation function C(q,q ′ ) ≡ ∆T (q)∆T (q ′ ) is rotationally invariant for Gaussian fields. In spherical harmonic space, where ∆T (q) = lm a lm Y lm (q), the condition of statistical isotropy (SI) translates to a diagonal a lm a * l ′ m ′ = C l δ ll ′ δ mm ′ where C l , is the widely used angular power spectrum of CMB anisotropy. The C l is a complete description only of (Gaussian) SI CMB sky CMB anisotropy and would be (in principle) an inadequate measure for comparing models when SI is violated . Interestingly enough, the statistical isotropy of CMB has come under a lot of scrutiny after the WMAP results. Tantalizing evidence of SI breakdown (albeit, in very different guises) has mounted in the WMAP first year sky maps, using a variety of different statistics. It was pointed out that the suppression of power in the quadrupole and octopole are aligned . Further "multipole-vector" directions associated with these multipoles (and some other low multipoles as well) appear to be anomalously correlated . There are indications of asymmetry in the power spectrum at low multipoles in opposite hemispheres . Analysis of the distribution of extrema in WMAP sky maps has indicated non-Gaussianity, and to some extent, violation of SI . The more recent WMAP maps are consistent with the first-year maps up to a small quadrupole difference. The additional years of data and the improvements in analysis has not significantly altered the low multipole structures in the maps . Hence, 'anomalies' persisted at the same modest level of significance and are unlikely to be artifacts of noise, systematics, or the analysis in the first year data. 
The cosmic significance of these 'anomalies' remains debatable also because of the aposteriori statistics employed to ferret them out of the data. The WMAP team has devoted an entire publication to discuss and present a detailed analysis of the various anomalies . The observed CMB sky is a single realization of the underlying correlation, hence detection of SI violation, or correlation patterns, pose a great observational challenge. It is essential to develop a well defined, mathematical language to quantify SI and the ability to ascribe statistical significance to the anomalies unambiguously. The Bipolar spherical harmonic (BipoSH) representation of CMB correlations has proved to be a promising avenue to characterize and quantify violation of statistical isotropy. Two point correlations of CMB anisotropy, C(n 1 ,n 2 ), are functions on S 2 × S 2 , and hence can be generally expanded as Here A ℓM l1l2 are the Bipolar Spherical harmonic (BipoSH) coefficients of the expansion and Y l1l2 ℓM (n 1 ,n 2 ) are bipolar spherical harmonics. Bipolar spherical harmonics form an orthonormal basis on S 2 × S 2 and transform in the same manner as the spherical harmonic function with ℓ, M with respect to rotations. Consequently, inverse-transform of C(n 1 ,n 2 ) in eq. (1) to obtain the BipoSH coefficients of expansion is unambiguous. Most importantly, the Bipolar Spherical Harmonic (BipoSH) coefficients, A ℓM l1l2 , are linear combinations of off-diagonal elements of the harmonic space covariance matrix, where C ℓM l1m1l2m2 are Clebsch-Gordan coefficients and completely represent the information of the covariance matrix. Statistical isotropy implies that the covariance matrix is diagonal, a lm a * l ′ m ′ = C l δ ll ′ δ mm ′ and hence the angular power spectra carry all information of the field. When statistical isotropy holds BipoSH coefficients, A ℓM ll ′ , are zero except those with ℓ = 0, M = 0 which are equal to the angular power spectra up to a (−1) l (2l + 1) 1/2 factor. Therefore to test a CMB map for statistical isotropy, one should compute the BipoSH coefficients for the maps and look for nonzero BipoSH coefficients. Statistically significant deviations of BipoSH coefficient of map from zero would establish violation of statistical isotropy. Since A ℓM l1l2 form an equivalent representation of a general two point correlation function, cosmic variance precludes measurement of every individual A ℓM l1l2 . There are several ways of combining BipoSH coefficients into different observable quantities that serve to highlight different aspects of SI violations. Among the several possible combinations of BipoSH coefficients, the Bipolar Power Spectrum (BiPS) has proved to be a useful tool with interesting features . BiPS of CMB anisotropy is defined as a convenient contraction of the BipoSH coefficients where W l is the window function that corresponds to smoothing the map in real space by symmetric kernel to target specific regions of the multipole space and isolate the SI violation on corresponding angular scales. The BipoSH coefficients can be summed over l and l ′ to reduce the cosmic variance, to as obtain reduced BipoSH (rBipoSH) coefficients Reduced bipolar coefficients orientation information of the correlation patterns. An interesting way of visualizing these coefficients is to make a Bipolar map from A ℓM The symmetry A ℓM = (−1) M A * ℓ−M of reduced bipolar coefficients guarantees reality of Θ(n). 
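Written out schematically (the phase and normalization conventions below follow the commonly used BipoSH definitions and should be checked against the papers cited above), the relations underlying this section are
\[
C(\hat n_1, \hat n_2) \;=\; \sum_{\ell M} \sum_{l_1 l_2} A^{\ell M}_{l_1 l_2}\,
\bigl\{ Y_{l_1}(\hat n_1) \otimes Y_{l_2}(\hat n_2) \bigr\}_{\ell M},
\qquad
A^{\ell M}_{l_1 l_2} \;=\; \sum_{m_1 m_2} \bigl\langle a_{l_1 m_1} a^{*}_{l_2 m_2} \bigr\rangle\, (-1)^{m_2}\, \mathcal{C}^{\ell M}_{l_1 m_1\, l_2\, -m_2},
\]
so that statistical isotropy, $\langle a_{lm} a^{*}_{l'm'} \rangle = C_l\, \delta_{ll'} \delta_{mm'}$, forces
\[
A^{\ell M}_{l l'} \;=\; (-1)^{l} \sqrt{2l+1}\; C_l\; \delta_{\ell 0}\, \delta_{M 0}\, \delta_{l l'},
\]
and the bipolar power spectrum used as the SI-violation statistic is
\[
\kappa_\ell \;=\; \sum_{l, l', M} W_l\, W_{l'}\, \bigl| A^{\ell M}_{l l'} \bigr|^{2}.
\]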
It is also possible to obtain a measurable band power measure of A ℓM l1l2 coefficient by averaging l 1 in bands in multipole space. Recently, the WMAP team has chosen to quantify SI violation in the CMB anisotropy maps by the estimation A ℓM ll−i for small value of bipolar multipole, L, band averaged in multipole l. Fig. 3 taken from the WMAP-7 release paper shows SI violation measured in WMAP CMB maps shows the measured quadrupolar (bipolar index L = 2) bipolar power spectra for V-band and W-band WMAP data, using the KQ75y7 mask. The spherical multipole have been binned within uniform bands δl = 50. Only the components of the bipolar power spectra with M = 0 in ecliptic coordinates are shown. A statistically significant quadrupolar effect is seen, even for a single frequency band in a single angular bin. CMB polarization maps over large areas of the sky have been recently delivered by experiments in the near future. The statistical isotropy of the CMB polarization maps will be an independent probe of the cosmological principle. Since CMB polarization is generated at the surface of last scattering, violations of statistical isotropy are pristine cosmic signatures and more difficult to attribute to the local universe. The Bipolar Power spectrum has been defined and implemented for CMB polarization and show great promise . Gravitational instability mechanism for structure formation It is a well accepted notion that the large scale structure in the distribution of matter in the present universe arose due to gravitational instability from the same primordial perturbation seen in the CMB anisotropy at the epoch of recombination. This fundamental assumption in our understanding of structure formation has recently found a strong direct observational evidence . The acoustic peaks occur because the cosmological perturbations excite acoustic waves in the relativistic plasma of the early universe . The recombination of baryons at redshift z ≈ 1100 effectively decouples the baryon and photons in the plasma abruptly switching off the wave propagation. In the time between the excitation of the perturbations and the epoch of recombination, modes of different wavelength can complete different numbers of oscillation periods. This translates the characteristic time into a characteristic length scale and produces a harmonic series of maxima and minima in the CMB anisotropy power spectrum. The acoustic oscillations have a characteristic scale known as the sound horizon, which is the comoving distance that a sound wave could have traveled up to the epoch of recombination. This physical scale is determined by the expansion history of the early universe and the baryon density that determines the speed of acoustic waves in the baryon-photon plasma. For baryonic density comparable to that expected from Big Bang nucleosynthesis, acoustic oscillations in the baryon-photon plasma will also be observably imprinted onto the late-time power spectrum of the non-relativistic matter. This is easier understood in a real space description of the response of the CDM and baryon-photon fluid to metric perturbations . An initial small delta-function (sharp spike) adiabatic perturbation (δ ln a| H ) at a point leads to corresponding spikes in the distribution of cold dark matter (CDM), neutrinos, baryons and radiation (in the 'adiabatic' proportion, 1 + w i , of the species). The CDM perturbation grows in place while the baryonic perturbation being strongly coupled to radiation is carried outward in an expanding spherical wave. 
At recombination, this shell is roughly 105h −1 Mpc in (comoving) radius when the propagation of baryons ceases. Afterward, the combined dark matter and baryon perturbation seeds the formation of large-scale structure. The remnants of the acoustic feature in the matter correlations are weak (10% contrast in the power spectrum) and on large scales. The acoustic oscillations of characteristic wave-number translates to a bump (a spike softened by gravitational clustering of baryon into the well developed dark matter over-densities) in the correlation function at 105h −1 Mpc separation. The large-scale correlation function of a large spectroscopic sample of luminous, red galaxies (LRGs) from the Sloan Digital Sky Survey that covers ∼ 4000 square degrees out to a redshift of z ∼ 0.5 with ∼ 50, 000 galaxies has allowed a clean detection of the acoustic bump in distribution of matter in the present universe. Figure 4 shows the correlation function derived from SDSS data that clearly shows the acoustic 'bump' feature at a fairly good statistical significance . The acoustic signatures in the largescale clustering of galaxies provide direct, irrefutable evidence for the theory of gravitational clustering, notably the idea that large-scale fluctuations grow by linear perturbation theory from z ∼ 1000 to the present due to gravitational instability. Figure 3: The large-scale redshift-space correlation function of the SDSS LRG sample taken from Ref. . The inset shows an expanded view with a linear vertical axis. The lower-most curve (magenta), which lacks the acoustic peak, shows a pure CDM model (Ω m h 2 = 0.105). The models are Ω m h 2 = 0.12 (top-most, green), 0.13 (red), and 0.14 (bottom-most with peak, blue), all with Ω b h 2 = 0.024 and n = 0.98 and with a mild non-linear prescription folded in. The clearly visible bump at ∼ 100h −1 Mpc scale is statistically significant. Primordial perturbations from Inflation Any observational comparison based on structure formation in the universe necessarily depends on the assumed initial conditions describing the primordial seed perturbations. It is well appreciated that in 'classical' big bang model the initial perturbations would have had to be generated 'acausally'. Besides resolving a number of other problems of classical Big Bang, inflation provides a mechanism for generating these apparently 'acausally' correlated primordial perturbations . The power in the CMB temperature anisotropy at low multipoles (l ∼ < 60) first measured by the COBE-DMR did indicate the existence of correlated cosmological perturbations on super Hubble-radius scales at the epoch of last scattering, except for the (rather unlikely) possibility of all the power arising from the integrated Sachs-Wolfe effect along the line of sight. Since the polarization anisotropy is generated only at the last scattering surface, the negative trough in the C T E l spectrum at l ∼ 130 (that corresponds to a scale larger than the horizon at the epoch of last scattering) measured by WMAP first sealed this loophole, and provides an unambiguous proof of apparently 'acausal' correlations in the cosmological perturbations . Besides, the entirely theoretical motivation of the paradigm of inflation, the assumption of Gaussian, random adiabatic scalar perturbations with a nearly scale invariant power spectrum is arguably also the simplest possible choice for the initial perturbations. 
What has been truly remarkable is the extent to which recent cosmological observations have been consistent with and, in certain cases, even vindicated the simplest set of assumptions for the initial conditions for the (perturbed) universe discussed below. Nearly zero curvature of space The most interesting and robust constraint obtained in our quests in the CMB sky is that on the spatial curvature of the universe. The combination of CMB anisotropy, LSS and other observations can pin down the universe to be flat, Ω K ≈ −0.02±0.02. This is based on the basic geometrical fact that angular scale subtended in the sky by the acoustic horizon would be different in a universe with uniform positive (spherical), negative (hyperbolic), or, zero (Euclidean) spatial curvature. Inflation dilutes the curvature of the universe to negligible values and generically predicts a (nearly) Euclidean spatial section. Adiabatic primordial perturbation The polarization measurements provides an important test on the adiabatic nature primordial scalar fluctuations 2 . CMB polarization is sourced by the anisotropy of the CMB at recombination, z rec , the angular power spectra of temperature and polarization are closely linked. Peaks in the polarization spectra are sourced by the velocity term in the same acoustic oscillations of the baryon-photon fluid at last scattering. Hence, a clear indication of the adiabatic initial conditions is the compression and rarefaction peaks in the temperature anisotropy spectrum be 'out of phase' with the gradient (velocity) driven peaks in the polarization spectra. The figure 4 taken from Ref. reflects the current observational status of CMB E-mode polarization measurements. The recent measurements of the angular power spectrum the E-mode of CMB polarization at large l have confirmed that the peaks in the spectra are out of phase with that of the temperature anisotropy spectrum. Nearly scale-invariant power spectrum ? In a simple power law parametrization of the primordial spectrum of density perturbation (|δ k | 2 = Ak ns ), the scale invariant spectrum corresponds to n s = 1. Estimation of (smooth) deviations from scale invariance favor a nearly scale invariant spectrum . Current observations favor a value very close to unity are consistent with a nearly scale invariant power spectrum. While the simplest inflationary models predict that the spectral index varies slowly with scale, inflationary models can produce strong scale dependent fluctuations. Many model-independent searches have also been made to look for features in the CMB power spectrum . Accurate measurements of the angular power spectrum over a wide range of multipoles from the WMAP has opened up the possibility to deconvolve the primordial power spectrum for a show a compilation of recent measurements of the angular power spectra CMB anisotropy and polarization from a number of CMB experiments. The data is good enough to indicate that the peaks in EE and TE are out of phase with that of TT as expected for adiabatic initial conditions. The null BB detection of primary CMB signal from gravity waves is not unexpected (given the ratio of tensor to scalar perturbations). given set of cosmological parameters . The primordial power spectrum has been deconvolved from the angular power spectrum of CMB anisotropy measured by WMAP using an improved implementation of the Richardson-Lucy algorithm . 
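For definiteness, the power-law form referred to at the start of this subsection, together with its commonly used extension allowing a slow running of the index, can be written as (a standard parametrization added here for illustration; $k_*$ denotes an arbitrary pivot scale)
\[
P(k) \;\equiv\; |\delta_k|^2 \;=\; A \left( \frac{k}{k_*} \right)^{\,n_s \,+\, \frac{1}{2} \frac{d n_s}{d \ln k}\, \ln (k/k_*)},
\]
with $n_s = 1$ and $d n_s / d\ln k = 0$ recovering the scale-invariant spectrum, and any detected feature or running signalling a departure from the simplest inflationary prediction.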
The most prominent feature of the recovered primordial power spectrum shown in Figure 5 is a sharp, infra-red cut off on the horizon scale. It also has a localized excess just above the cut-off which leads to great improvement of likelihood over the simple monotonic forms of model infra-red cut-off spectra considered in the post WMAP literature. The form of infra-red cut-off is robust to small changes in cosmological parameters. Remarkably similar form of infra-red cutoff is known to arise in very reasonable extensions and refinement of the predictions from simple inflationary scenarios, such as the modification to the power spectrum from a pre-inflationary radiation dominated epoch or from a sharp change in slope of the inflaton potential . 'Punctuated Inflation' models where a brief interruption of inflation produces features similar to that suggested by direct deconvolution . Wavelet decomposition allows for clean separation of the 'features' in the recovered power spectrum on different scales . Recently, a frequentist analysis of the significance shows, however, that a scale free power law spectrum is not ruled out either . It is known that the assumed functional form of the primordial power spectrum can affect the best fit parameters and their relative confidence limits in cosmological parameter estimation. Specific assumed form actually drives the best fit parameters into distinct basins of likelihood in the space of cosmological parameters where the likelihood resists improvement via modifications to the primordial power spectrum . The regions where considerably better likelihoods are obtained allowing free form primordial power spectrum lie outside these basins. Hence, the apparently 'robust' determination of cosmological parameters under an assumed form of P (k) may be misleading and could well largely reflect the inherent correlations in the power at different k implied by the assumed form of the primordial power spectrum. The results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the primordial power spectrum. Gaussian primordial perturbations The detection of primordial non-Gaussian fluctuations in the CMB would have a profound impact on our understanding of the physics of the early universe. The Gaussianity of the CMB anisotropy on large angular scales directly implies Gaussian primordial perturbations that is theoretically motivated by inflation . The simplest inflationary models predict only very mild non-Gaussianity that should be undetectable in the WMAP data. The CMB anisotropy maps (including the non Gaussianity analysis carried out by the WMAP team data ) have been found to be consistent with a Gaussian random field. Consistent with the predictions of simple inflationary theories, no significant deviations from Gaussianity in the CMB maps using general tests such as Minkowski functionals, the bispectrum, trispectrum in the three year WMAP data . There have however been numerous claims of anomalies in specific forms of non-Gaussian signals in the CMB data from WMAP at large scales (see discussion in sec. 3). Primordial tensor perturbations Inflationary models can produce tensor perturbations (gravitational waves) that are predicted to evolve independently of the scalar perturbations, with an uncorrelated power spectrum. 
The amplitude of a tensor mode falls off rapidly on sub-Hubble radius scales. The tensor modes on the scales of Hubble-radius the line of sight to the last scattering distort the photon propagation and generate an additional anisotropy pattern predominantly on the largest scales. It is common to parametrize the tensor component by the ratio r k * = A t /A s , ratio of A t , the primordial power in the transverse traceless part of the metric tensor perturbations, and A s , the amplitude scalar perturbation at a comoving wavenumber, k * (in Mpc −1 ). For power-law models, recent WMAP data alone puts an improved upper limit on the tensor to scalar ratio, r 0.002 < 0.55 (95% CL) and the combination of WMAP and the lensing-normalized SDSS galaxy survey implies r 0.002 < 0.28 (95% CL) . On large angular scales, the curl component of CMB polarization is a unique signature of tensor perturbations. Hence, the CMB B-polarization is a direct probe of the energy scale of early universe physics that generate the primordial metric perturbations (scalar & tensor). The relative amplitude of tensor to scalar perturbations, r, sets the energy scale for inflation E Inf = 3.4 × 10 16 GeV r 1/4 . A measurement of B-mode polarization on large scales would give us this amplitude, and hence a direct determination of the energy scale of inflation. Besides being a generic prediction of inflation, the cosmological gravity wave background from inflation would be a fundamental test of GR on cosmic scales and the semi-classical behavior of gravity. Figure 6 summarizes the current theoretical understanding, observational constraints and future possibilities for the stochastic gravity wave background from Inflation. Conclusions The past few years has seen the emergence of a 'concordant' cosmological model that is consistent both with observational constraints from the background evolution of the universe as well that from the formation of large sale structures. It is certainly fair to say that the present edifice of the 'standard' cosmological models is robust. A set of foundation and pillars of cosmology have emerged and are each supported by a number of distinct observations . The community is now looking beyond the estimation of parameters of a working 'standard' model of cosmology. There is increasing effort towards establishing the basic principles and assumptions. The feasibility and promise of this ambitious goal is based on the grand success in the recent years in pinpointing a 'standard' model. The up coming results from the Planck space mission will radically improve the CMB polarization measurements. There are already Figure 6: The figure taken from Ref. shows the theoretical predictions and observational constraints on primordial gravitational waves from inflation. The gravitational wave energy density per logarithmic frequency interval, (in units of the critical density) is plotted versus frequency. The shaded (blue) band labeled 'minimally tuned' represents the range predicted for simple inflation models with the minimal number of parameters and tunings. The dashed curves have lower values of tensor contribution, r, that is possible with more fine tuned inflationary scenarios. The currently existing experimental constraints shown are due to: big bang nucleosynthesis (BBN), binary pulsars, and WMAP-1 (first year) with SDSS. 
Also shown are the projections for LIGO (both LIGO-I, after one year running, and LIGO-II); LISA; and BBO (both initial sensitivity, BBO-I, and after cross-correlating receivers, BBO-Corr). Also seen the projected sensitivity of a future space mission for CMB polarization (CMBPol). proposals for the next generation dedicated satellite mission in 2020 for CMB polarization measurements at best achievable sensitivity.
// run-pass
// This is what the signature to spawn should look like with bare functions

fn spawn<T: Send>(val: T, f: fn(T)) {
    f(val);
}

fn f(i: isize) {
    assert_eq!(i, 100);
}

pub fn main() {
    spawn(100, f);
}
// Copyright 2013 by BBN Technologies Corp. // All Rights Reserved. #include "Generic/common/leak_detection.h" #include "Generic/actors/AWAKEDB.h" #include "Generic/actors/ActorTokenSubsetTrees.h" #include "Generic/xdoc/TokenSubsetTrees.h" #include "Generic/theories/Mention.h" #include "Generic/theories/SynNode.h" #include "Generic/common/ParamReader.h" #include <boost/foreach.hpp> #include <boost/algorithm/string.hpp> #include <boost/make_shared.hpp> #include <iostream> ActorTokenSubsetTrees::ActorTokenSubsetTrees(ActorInfo_ptr actorInfo, ActorEntityScorer_ptr aes) { _actorEntityScorer = aes; if (ParamReader::isParamTrue("limited_actor_match")) return; BOOST_FOREACH(ActorPattern *ap, actorInfo->getPatterns()) { if (ap->acronym || ap->requires_context) continue; BOOST_FOREACH(Symbol token, ap->lcPattern) { if (_actorNameCache.find(token) == _actorNameCache.end()) _actorNameCache[token] = std::vector<ActorPattern *>(); _actorNameCache[token].push_back(ap); } } // Remove caches for tokens that are too common; we don't want to match against these unless they share some other // common token anyway. These range from 'the' and 'of' to slightly more common things like 'democratic'. _too_frequent_token_threshold = ParamReader::getOptionalIntParamWithDefaultValue("tst_too_frequent_token_threshold", 1000); } ActorTokenSubsetTrees::~ActorTokenSubsetTrees() { } ActorTokenSubsetTrees::ActorScoreMap ActorTokenSubsetTrees::getTSTEqNames(const Mention *mention) { std::vector<Symbol> syms = mention->getHead()->getTerminalSymbols(); std::vector<std::wstring> allNames; boost::unordered_map<std::wstring, ActorId> actorName2ActorIdMap; ActorScoreMap results; std::wstring mentionName = ActorPattern::getNameFromSymbolList(syms); std::transform(mentionName.begin(), mentionName.end(), mentionName.begin(), towlower); std::wstringstream wss; wss << mentionName << L"_" << mention->getEntityType().getName().to_string(); std::wstring cacheKey = wss.str(); if (_mentionNameCache.find(cacheKey) != _mentionNameCache.end()) return _mentionNameCache[cacheKey]; std::vector<std::wstring> lcNameWords; boost::split(lcNameWords, mentionName, boost::is_any_of(L" ")); // Gather names for organizing into TokenSubsetTrees for (size_t i = 0; i < lcNameWords.size(); i++) { std::wstring word = lcNameWords[i]; if (_actorNameCache.find(Symbol(word)) != _actorNameCache.end()) { if (_actorNameCache[word].size() > _too_frequent_token_threshold) continue; BOOST_FOREACH(ActorPattern *ap, _actorNameCache[word]) { if (ap->entityTypeSymbol == mention->getEntityType().getName()) { allNames.push_back(ap->lcString); actorName2ActorIdMap[ap->lcString] = ap->actor_id; } } } } allNames.push_back(mentionName); TokenSubsetTrees tst; tst.initializeTrees(allNames); std::vector<std::wstring> eqNames1 = tst.getTSTAliases(mentionName); BOOST_FOREACH(std::wstring eqName, eqNames1) { ActorId aid = actorName2ActorIdMap[eqName]; double score = _actorEntityScorer->getTSTEditDistanceEquivalent(mention->getEntityType()); if (mention->getEntityType().matchesPER() && closePersonTSTMatch(mentionName, eqName)) score = 0.97; if (results.find(aid) == results.end() || score > results[aid]) results[aid] = score; } /* std::vector<std::wstring> eqNames2 = tst.getTSTOneCharChildren(mentionName); BOOST_FOREACH(std::wstring eqName, eqNames2) { ActorId aid = actorName2ActorIdMap[eqName]; std::cout << "For mention: " << UnicodeUtil::toUTF8StdString(mention->getHead()->toFlatString()) << " " << mention->getEntityType().getName().to_debug_string() << "\n"; std::cout << "TST One 
char child/parent match: " << aid.getId() << "\n"; if (results.find(aid) == results.end()) results[aid] = 0.7; } std::vector<std::wstring> eqNames3 = tst.getEditDistTSTChildren(mentionName); BOOST_FOREACH(std::wstring eqName, eqNames3) { ActorId aid = actorName2ActorIdMap[eqName]; std::cout << "For mention: " << UnicodeUtil::toUTF8StdString(mention->getHead()->toFlatString()) << " " << mention->getEntityType().getName().to_debug_string() << "\n"; std::cout << "TST edit distance match match: " << aid.getId() << "\n"; if (results.find(aid) == results.end()) results[aid] = 0.6; } */ if (_mentionNameCache.size() > ATST_MAX_ENTRIES) _mentionNameCache.clear(); _mentionNameCache[cacheKey] = results; return results; } bool ActorTokenSubsetTrees::closePersonTSTMatch(std::wstring name1, std::wstring name2) { std::vector<std::wstring> name1Pieces; std::vector<std::wstring> name2Pieces; boost::split(name1Pieces, name1, boost::is_any_of(L" ")); boost::split(name2Pieces, name2, boost::is_any_of(L" ")); size_t length1 = name1Pieces.size(); size_t length2 = name2Pieces.size(); return length1 > 1 && length2 > 1 && name1Pieces[0] == name2Pieces[0] && name1Pieces[length1 - 1] == name2Pieces[length2 - 1]; }
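The person-name heuristic implemented by closePersonTSTMatch above is simple enough to illustrate in isolation. The following is a minimal standalone sketch using plain std::string (a simplification of the project's wide-string and Symbol types, not part of the original source):

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Split a name on whitespace.
static std::vector<std::string> tokens(const std::string &name) {
    std::istringstream iss(name);
    std::vector<std::string> out;
    std::string tok;
    while (iss >> tok)
        out.push_back(tok);
    return out;
}

// Mirror of closePersonTSTMatch: two multi-token person names count as a close
// match when their first and last tokens agree (middle names or initials may differ).
static bool closePersonMatch(const std::string &a, const std::string &b) {
    const std::vector<std::string> ta = tokens(a);
    const std::vector<std::string> tb = tokens(b);
    return ta.size() > 1 && tb.size() > 1 &&
           ta.front() == tb.front() && ta.back() == tb.back();
}

int main() {
    std::cout << std::boolalpha
              << closePersonMatch("barack hussein obama", "barack obama") << '\n'  // true
              << closePersonMatch("john smith", "jane smith") << '\n';             // false
    return 0;
}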
An inductive analytic criterion for flatness We present a constructive criterion for flatness of a morphism of analytic spaces X ->Y or, more generally, for flatness over Y of a coherent sheaf of modules on X. The criterion is a combination of a simple linear-algebra condition"in codimension zero"and a condition"in codimension one"which can be used together with the Weierstrass preparation theorem to inductively reduce the fibre dimension of the morphism. Introduction The main result of this article is a constructive criterion for flatness of a morphism of analytic spaces ϕ : X → Y (over K = R or C) or, more generally, for flatness over O Y of a coherent sheaf of O X -modules F . In the special case that X = Y and ϕ = id X (the identity morphism of X), our criterion reduces to the following "linear algebra criterion". In a neighbourhood of a point a ∈ X, an O X -module F can be presented as , where Φ is given by multiplication by a q × p-matrix of analytic functions. Let r = rank Φ(a). Then F a is O X,a -flat if and only if all minors of order r + 1 of Φ vanish near a. Our flatness criterion, in general, is a combination of a condition "in codimension zero" similar to the preceding and a condition "in codimension one" which can be used together with the Weierstrass preparation theorem to inductively reduce the fibre-dimension of the morphism ϕ. To justify the criterion, we use it to give natural constructive proofs of several classical results -Hironaka's existence of the local flattener , Douady's openness of flatness , and Frisch's generic flatness theorem . The proofs are essentially a mix of linear algebra and appropriate applications of the Weierstrass preparation theorem. For example, in the case X = Y , the linear algebra criterion above provides an immediate construction of the local flattener of F at a (i.e., the largest germ of an analytic subspace T of X at a such that F a is O T -flat). We can simply take O T = O X /I, where the ideal I is generated by the minors of order r + 1 of Φ. Hironaka's local flattener, in general, can be described using a similar linear algebra construction and the Weierstrass preparation theorem. Algebraic formulation of the flatness criterion. Let ϕ : Z → W and λ : T → W denote morphisms of analytic space-germs, and let F denote a finite O Z -module. We are concerned with O T -flatness of the module F⊗ OW O T , where⊗ OW denotes the analytic tensor product (i.e., the tensor product in the category of local analytic O W -algebras; see, for example, ). Via the embedding (φ, id Z ) : Z → W × Z and the natural projection π : W × Z → W , we can view F as an O W ×Z -module and therefore as an O W -module. Via an embedding Z ֒→ K m 0 we can also replace Z by K m 0 without changing the O W -module structure of F . In particular, then Let m denote the maximal ideal (y 1 , . . . , y n ) of R, and let n = m+(x 1 , . . . , x m ) ⊂ A. Then n is the maximal ideal of A. Given a power series f = f (y, x) ∈ A, we denote by f (0) or by f (0, x) its evaluation at y = 0; i.e., the image of f under the homomorphism A → A(0) := A⊗ R R/m of R-modules. Similarly, given an A-submodule M of A q , we denote by M (0) the evaluation of M at y = 0; i.e., We are thus interested in flatness of F⊗ R R/J over R/J, where F is a finitely generated A-module and J is an ideal in R. Theorem 1.1. Let R, A, F and J be as above. 
(A) There exist g ∈ A, l ∈ N and a homomorphism ψ : (B) F⊗ R R/J is a flat R/J-module if and only if, for any g, l and ψ as in (A), the following two conditions hold: Remark 1.2. The above theorem allows one to study flatness of a module F by repeated reduction of the fibre-dimension over R. Indeed, consider g and ψ as in (A). First suppose that g(0, 0) = 0. Since g(0, x) = 0, we can apply the Weierstrass division theorem (after a generic linear change in x) to conclude that A/g · A is a finite R{x}-module, wherex = (x 1 , . . . , x m−1 ). Then F/im ψ is a finite R{x}module too, since g · F ⊂ im ψ. On the other hand, if g(0, 0) = 0 (which is the case when the number of x-variables is 0), then condition (2) of (B) in the theorem is vacuous and no fibre dimension reduction is needed. Proof of Theorem 1.1 (A). Consider a presentation of F as an A-module By applying⊗ R R/J and⊗ R R/m to (1.1), we get presentations of F⊗ R R/J and F⊗ R R/m respectively. Notice that, identifying Φ with a matrix (with entries in A), Φ m becomes the matrix with entries obtained by evaluating the corresponding entries of Φ at y = 0. Theorem 2.1 (Hironaka's local flattener ). Let ϕ : Z → W be a morphism of analytic space-germs, where W is regular. Let F be a finite O Z -module. Then there exists a unique analytic subgerm P of W (i.e., a unique local analytic K-algebra O P , which is a quotient of O W ) such that: (2) Let λ P : P → W denote the embedding. Then, for every morphism λ : T → W of germs of analytic spaces such that Since flatness is preserved by base change (see ), it follows that Therefore, in Theorem 2.1 it suffices to consider an embedding λ : T → W , and to show that there is an ideal The germ P is called the local flattener of F (with respect to ϕ), and I(F ) is the ideal of the local flattener. Proof of Theorem 2.1. The uniqueness of P is automatic, since λ * P : O W → O P is surjective. By regularity of W , we can identify O W with the ring R = K{y} of convergent power series in y = (y 1 , . . . , y n ). Assume that Z is a subgerm of K m 0 . Using the graph of ϕ to embed Z in W × K m , we can think of O Z as a quotient ring of A = R{x}, where x = (x 1 , . . . , x m ). Then F is a finitely generated A-module. We will proceed by induction on m, the number of the x-variables. Choose g ∈ A and ψ : A l → F satisfying Theorem 1.1(A). Let J(F ) be the ideal in R generated by the coefficients of (the expansions in x of) the elements in ker ψ; i.e., the unique minimal ideal J in R satisfying ker ψ ⊂ J ·A l . If F = im ψ (which is the case if m = 0, since then g is invertible in A), then Theorem 1.1(B) implies that J(F ) is the ideal of the local flattener of F . If F = im ψ, then m > 0 and we may assume by the inductive hypothesis (see Remark 1.2) that there is a local Let X and Y be analytic spaces over K, and let ϕ : Given any ideal J in O Y,η , we let J η ′ denote the ideal generated by (a system of generators of) J at nearby points η ′ ∈ Y . Then Theorem 1.1 implies the following. is a closed subset of |X|. In particular, for Z = Y , the latter implies openness of the set of points ξ ∈ X with the property that F ξ is a flat O Y,ϕ(η) -module. This result is due to Douady and is the classical form of "openness of flatness". Proof of Theorem 2.3. As in the proof of Theorem 2.1, we proceed by induction on the fibre-dimension m of ϕ : . Since our problem is local, we can assume that U (resp. V ) is an open polydisc in C m (resp. C n ) centred at ξ 0 (resp. η 0 ). 
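As a sanity check of the linear-algebra criterion and of the flattener construction above, consider the simplest one-variable case (an illustrative example added here, not taken from the original text). Let $R = K\{y\}$, take $X = Y$ with $\varphi = \mathrm{id}$, and let
\[
F \;=\; \operatorname{coker}\bigl( R \xrightarrow{\;\cdot\, y\;} R \bigr) \;\cong\; R/(y),
\]
so that $\Phi$ is the $1\times 1$ matrix $(y)$. Then $r = \operatorname{rank} \Phi(0) = 0$, and the unique minor of order $r+1 = 1$ is $y$ itself, which does not vanish identically near the origin; by the criterion, $F_0 \cong K\{y\}/(y)$ is not flat over $R$ (indeed it is $y$-torsion). The ideal generated by the order-$(r+1)$ minors is $I = (y)$, so the local flattener is the point $T = \{y = 0\}$, and $F \,\hat\otimes_R\, \mathcal{O}_T \cong K$ is trivially $\mathcal{O}_T$-flat, in accordance with Theorem 2.1.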
(After shrinking V if necessary), let J be a coherent O V -ideal such that J η0 = I η0,ξ0 (F ); we can assume that J η = (I η0,ξ0 (F )) η for all η ∈ V . Let Z denote the closed analytic subspace of V defined by J; i.e., |Z| is a representative in V of the zero-set germ V(I η0,ξ0 (F )). Then Theorem 1.1(B) implies that It follows (after shrinking U and V if needed) that g(η, x) = 0 for all η ∈ V and g ·F ⊂ im ψ. Then (2.1) implies If g(η, ξ) = 0 (which is the case if m = 0), then, by Theorem 1 .1(B), the first inclusion of (2.3) implies that I η,ξ (F ) ⊂ J η = (I η0,ξ0 (F )) η , as required. Proof of the main theorem We use the notation preceding Theorem 1.1. Consider a presentation (1.1) of F as an A-module. Applying⊗ R R/m, we get a homomorphism Φ m : A(0) p → A(0) q of A(0)-modules such that F⊗ R R/m ∼ = coker(Φ m ). Set r m := rank (Φ m ). We can assume that Φ is given by a block matrix (1.4) and g := det α satisfies g(0, x) = 0. For an ideal J in R, define ker J Φ := {ζ ∈ A p : Φ(ζ) ∈ J ·A q } , and rank J Φ := min{r ≥ 1 : all (r + 1) × (r + 1) minors of Φ belong to J ·A} . Our proof of Theorem 1.1(B) is based on showing that property (1) of the theorem is equivalent to equalities q − l = rank J Φ = rank Φ m , and that property (2) of the theorem is equivalent to R/J-flatness of G⊗ R R/J, where The latter equivalence is obvious if g is a unit in A, since both F/im ψ and G are zero in this case. Suppose then that g is not invertible in A, that is, g(0, 0) = 0. Since g(0, x) = 0, then after a (generic and linear) change of the x-coordinates to (x, x m ), wherex = (x 1 , . . . , x m−1 ), we have g(0, 0, x m ) = 0. By the Weierstrass Preparation Theorem, g = u · P , where u(0, 0) = 0 and P (y, The ring A/g · A is a finite free R{x}-module. We shall describe the action of α # · β : A p−rm → A rm modulo g as linear mapping of finite R{x}-modules. Given η ∈ A p−rm , Weierstrass division by g gives η ≡ d j=1 η j x d−j m (mod g), with η j ∈ R{x} p−rm . Applying Weierstrass division by g to the entries of α # · β, we form matrices Applying Euclid division by P (y, x) (as a monic polynomial in x m ) to the latter product, we obtain matrix G = (G ij ) 1≤i,j≤d , with block-matrices G ij of size r m × (p − r m ) and entries in R{x}, such that all entries of the matrix Proof. By definition of ker J Φ, ζ ∈ ker J Φ implies Φ(ζ) ∈ m · A q , and hence Φ m (ζ(0)) = 0. Therefore, we always have (ker J Φ)(0) ⊂ ker Φ m . On the other hand, by a well-known criterion for flatness (see, e.g., has rank p − r m . Then, by assumption, there is a matrix Ξ = Ξ(y, x) of size p × (p − r m ) such that the entries of Φ · Ξ are in J · A and Ξ(0, x) = ξ(x). It follows that rank Ξ = p − r m . By Cramer's Rule (and after an appropriate reordering of the columns of Φ and rows of Ξ), there exists a matrix Σ of size (p − r m ) × (p − r m ) with entries in A such that where g ∈ A satisfies g(0, x) = 0, and Γ is a matrix with entries in A of size r m × (p − r m ). Write Φ = , where Φ 1 consists of the first p − r m columns of Φ. It follows that g · Φ 1 + Φ 2 · Γ is a matrix with entries in J · A, and hence the It thus suffices to show that rank J Φ = rank J (g · Φ), but that is a consequence of Lemma 3.3. Lemma 3.7. Let Φ and π 2 : A p = A rm ⊕ A p−rm → A p−rm be as above, and let J be an ideal in R. Then η ∈ π 2 (ker J Φ) if and only if the following two conditions hold where g denotes det α. By (3.1) and (3.2), the latter is the case iff {η j (y,x)} d j=1 ∈ ker J G, which completes the proof of Proposition 3.1 and Theorem 1.1.
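Example (added for illustration; not part of the original argument). To see the linear algebra criterion at work in the simplest case X = Y, let O_X,0 = K{x} in one variable and let F = coker(Φ), where Φ : O_X^1 → O_X^1 is multiplication by the single entry x. Then Φ(0) = 0, so r = rank Φ(0) = 0, and the only minor of order r + 1 = 1 is the entry x itself, which does not vanish identically near 0. The criterion therefore says that F_0 = K{x}/(x) ≅ K is not O_X,0-flat, in agreement with the fact that K has x-torsion as a K{x}-module. If instead Φ is the zero map, then every minor of order 1 vanishes near 0 and F = O_X^1, which is flat.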
/* Client side implementation of UDP client-server model https://www.geeksforgeeks.org/udp-server-client-implementation-c/ */ #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <string.h> #include <sys/types.h> #include <sys/socket.h> #include <arpa/inet.h> #include <netinet/in.h> #include <sys/ioctl.h> #include <fcntl.h> #include <stdbool.h> #include <chrono> #include <time.h> #include <math.h> #include <signal.h> #include <stdlib.h> //RAND_MAX #include <iostream> int make_nonblocking (int fd){ int flags, ret; flags = fcntl(fd, F_GETFL, 0); if (flags == -1) { return -1; } // Set the nonblocking flag. flags |= O_NONBLOCK; ret = fcntl(fd, F_SETFL, flags); if (ret == -1) { return -1; } return 0; } int64_t WallTimeNowInUsec(){ std::chrono::system_clock::duration d = std::chrono::system_clock::now().time_since_epoch(); std::chrono::microseconds mic = std::chrono::duration_cast<std::chrono::microseconds>(d); return mic.count(); } int64_t TimeMillis(){ return WallTimeNowInUsec()/1000; } double e_random(double lambda){ double ret=0.0; double u=0.0; do{ u=(double)rand()/(double)RAND_MAX;; }while(u<=0||u>1); ret=(-1.0/lambda)*log(u); return ret; } static volatile bool running=true; void signal_exit_handler(int sig) { running=false; } const int kBufferSize=1500; int rate_table[]={500000,1000000,1500000,2000000,2500000}; const int64_t rate_duration=10000000;// 5s int main(int argc, char **argv) { signal(SIGTERM, signal_exit_handler); signal(SIGINT, signal_exit_handler); signal(SIGTSTP, signal_exit_handler); if (argc != 3) { fprintf(stderr, "Usage: %s hostname port\n", argv[0]); exit(1); } srand((unsigned)time(NULL)); uint16_t port= 1234; char buffer[kBufferSize]; char *server_ip=argv[1]; port = (uint16_t)atoi(argv[2]); int sockfd; struct sockaddr_in servaddr; if ( (sockfd = socket(AF_INET, SOCK_DGRAM, 0)) < 0 ) { perror("socket creation failed"); exit(EXIT_FAILURE); } memset(&servaddr, 0, sizeof(servaddr)); // Filling server information servaddr.sin_family = AF_INET; servaddr.sin_port = htons(port); servaddr.sin_addr.s_addr = inet_addr(server_ip); int64_t next_send_time=0; int offset=0; int bps=rate_table[offset]; int packet_size=1450; double interval=0.0; double lambda=0.0; int all=sizeof(rate_table)/sizeof(rate_table[0]); int64_t next_rate_time=0; while(running){ int64_t now=WallTimeNowInUsec(); if(next_rate_time==0||now>=next_rate_time){ bps=rate_table[offset]; interval=((double)packet_size*8*1000)/(bps); lambda=1.0/interval; offset=(offset+1)%all; next_rate_time=now+rate_duration; } if(next_send_time==0||now>=next_send_time){ sendto(sockfd, (const char *)buffer, packet_size, 0,(const struct sockaddr *)&servaddr,sizeof(servaddr)); int64_t micro_ts=e_random(lambda)*1000; next_send_time=now+micro_ts; } } close(sockfd); return 0; }
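A sanity check on the pacing logic above (an illustrative calculation, not part of the original source): with packet_size = 1450 bytes and bps = 500000, interval = 1450 * 8 * 1000 / 500000 = 23.2, i.e. a mean inter-send gap of 23.2 ms. Since e_random(lambda) draws from an exponential distribution with mean 1/lambda = interval, the expected throughput is about (1450 * 8) bits / 0.0232 s ≈ 500 kbit/s, matching the first entry of rate_table; the * 1000 factor converts the millisecond draw to microseconds before it is added to next_send_time, which is kept in microseconds via WallTimeNowInUsec(). Note also that rate_duration = 10000000 µs corresponds to 10 s, not the 5 s stated in its trailing comment.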
package com.brcolow.candlefx;

/**
 * Indicates whether a currency symbol is written before or after the amount.
 *
 * @author <NAME>
 */
public enum CurrencyPosition {
    BEFORE_AMOUNT,
    AFTER_AMOUNT
}
Crystal Palace manager Alan Pardew felt Liverpool would have finished in the top four under Brendan Rodgers this season Alan Pardew feels Liverpool would have finished in the Premier League top four under Brendan Rodgers this season. But Pardew, who is fully aware of the number of 'match winners' in the current Liverpool side - highlighting Philippe Coutinho as a specific threat, is confident he can exploit the Reds' defensive weaknesses. The Palace manager felt the summer recruitment at Anfield would have paved the way for a top-four finish under Rodgers and sees no reason why that expectation should change following the appointment of Jurgen Klopp to the manager's job. However, he does feel he has identified a weakness at Liverpool and is confident his team can exploit it - their defence. "I thought they've got a top-four squad in the summer, and that hasn't changed despite the change in manager," said Pardew, who led Palace to a superb 3-1 win at Anfield on the penultimate Premier League weekend of last season. "They're Liverpool, they should be top four. "I think [Christian] Benteke is a terrific player, and [Jurgen] Klopp has arrived with great players there. Pardew says Christian Benteke gives Jurgen Klopp's Liverpool a different kind of attacking threat "Benteke is a different kind of threat to what they've had before and a great signing. There is a higher press under Klopp's management and we know that's coming, we can cope with that. "But they carry great players and great players can change games. [Roberto] Firmino did that at Stoke. We have to focus on what they're not good at and defending is an issue at the club so we'll look to exploit that. "We've come up against everyone in form, we're having one of those runs, but our performance against Man United, we must take great heart form that. Because only one team deserved to win it and that was us." Coutinho has been in impressive recent form for Liverpool, inspiring a comeback with two goals at Chelsea last weekend when Liverpool came from behind to win 3-1, and after being rested during Thursday's 1-0 Europa League win at Rubin Kazan he is expected to return to their starting line-up against Palace. "Coutinho is probably, at the moment, their best player, in terms of the level he's at," Pardew said. "And I don't think anyone in the Premier League would disagree that he's been in the top five players offensively this year, so he's somebody we need to take care of. "Whenever you go to Liverpool they've got world-class players, and you're going to have to look after them and make sure that you defend very, very well. "Kevin Keegan, John Toshack, Kenny Dalglish, they've always had these players."
package config

import (
	"log"

	"github.com/caarlos0/env"
)

// Config of environment
type Config struct {
	Port string `env:"PORT" envDefault:"3000"`
}

// Get returns the environment configs
func Get() (cfg Config) {
	err := env.Parse(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	return
}
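A minimal usage sketch for the Get helper above; the module path example.com/myapp/config and the HTTP wiring are hypothetical, added only for illustration:

package main

import (
	"log"
	"net/http"

	"example.com/myapp/config" // hypothetical import path for the package above
)

func main() {
	cfg := config.Get() // reads PORT from the environment, defaulting to "3000"
	log.Printf("listening on :%s", cfg.Port)
	if err := http.ListenAndServe(":"+cfg.Port, nil); err != nil {
		log.Fatal(err)
	}
}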
Recently at Arkency I was working on a task, on which it was very important to ensure that the right cookies are saved with the specific expiration time. Obiovusly I wanted to test this code to prevent regressions in the future. Controller tests? Firstly I thought about controller tests, but you can use only one controller in one test (at least without strong hacks) and in this case it was important to check values of cookies after requests sent into few different controllers. You can now think, that controller tests are “good enough” for you, if you don’t need to reach to different controllers. Not quite, unfortunately. Let’s consider following code: class ApplicationController before_filter :do_something_with_cookies def do_something_with_cookies puts "My cookie is: #{ cookies [ :foo ] } " cookies [ :foo ] = { value: "some value!" , expires: 30 . minutes . from_now , } end end And controller test: describe SomeController do specify do get :index Timecop . travel ( 35 . minutes . from_now ) do get :index end end end Note that the cookie time has expiration time of 30 minutes and we are doing second call “after” 35 minutes, so we would expect output to be: My cookie is: My cookie is: So, we would expect cookie to be empty, twice. Unfortunately, the output is: My cookie is: My cookie is: some value! Therefore, it is not a good tool to test cookies when you want to test cookies expiring. Feature specs? My second thought was feature specs, but that’s capybara and we prefer to avoid capybara if we can and use it only in very critical parts of our applications, so I wanted to use something lighter than that. It would probably work, but as you can already guess, there’s better solution. Request specs There’s another kind of specs, request specs, which is less popular than previous two, but in this case it is very interesting for us. Let’s take a look at this test: describe do specify do get "/" Timecop . travel ( 35 . minutes . from_now ) do get "/" end end end With this test, we get the desired output: My cookie is: My cookie is: Now we would like to add some assertions about the cookies. Let’s check what cookies class is by calling cookies.inspect : #<Rack::Test::CookieJar:0x0056321c1d8950 @default_host="www.example.com", @cookies=[#<Rack::Test::Cookie:0x0056321976f010 @default_host="www.example.com", @name_value_raw="foo=some+value%21", @name="foo", @value="some value!", @options={"path"=>"/", "expires"=>"Fri, 02 Jun 2017 22:29:34 -0000", "domain"=>"www.example.com" }>]> Great, we see that it has all information we want to check: value of the cookie, expiration time, and more. You can easily retrieve the value of the cookie by calling cookies[:foo] . Getting expire time is more tricky, but nothing you couldn’t do in ruby. On HEAD of rack-test there’s already a method get_cookie you can use to get all cookie’s options. If you are on 0.6.3 though, you can add following method somewhere in your specs: def get_cookie ( cookies , name ) cookies . send ( :hash_for , nil ). fetch ( name , nil ) end It is not perfect, but it is simple enough until you migrate to newer version of rack-test . In the end, my specs looks like this: describe do specify do get "/" Timecop . travel ( 35 . minutes . from_now ) do get "/" cookie = get_cookie ( cookies , "foo" ) expect ( cookie . value ). to eq ( "some value!" ) expect ( cookie . expires ). to be_present end end # That will be built-in in rack-test > 0.6.3 def get_cookie ( cookies , name ) cookies . send ( :hash_for , nil ). 
fetch ( name , nil ) end end With these I can test more complex logic of my cookies. Having reliable tests allows me and my colleagues to easily refactor code in the future and prevent regressions in our legacy applications (if topic of refactoring legacy applications is interesting to you, you may want to check out our Fearless Refactoring book). What are your experiences of testing cookies in rails?
import requests
import unittest

from django.core.exceptions import ImproperlyConfigured
from django.test import override_settings

from asana.error import NoAuthorizationError

from djasana.connect import client_connect


class ClientConnectTestCase(unittest.TestCase):

    @override_settings(ASANA_ACCESS_TOKEN=None, ASANA_CLIENT_ID=None)
    def test_settings_required(self):
        with self.assertRaises(ImproperlyConfigured):
            client_connect()

    @override_settings(ASANA_ACCESS_TOKEN='foo', ASANA_WORKSPACE='foo')
    def test_connect_access_token(self):
        with self.assertRaises(NoAuthorizationError):
            try:
                client_connect()
            except requests.exceptions.ConnectionError:
                self.skipTest('No Internet connection')
export class RealmSettings {
    maximumPlayerPlots: number = 1;
    defaultPlotShape: string = "square";
    defaultPlotSizeMeters: number = 64;

    // These entries should only ever exist once per discord server,
    // more than that will break other logic (enforced in repo);
    defaultRealmName: string = "MyRealm";
    serverIsConfigured: boolean = false;
    serverPlayerRoleName: string = "Player";
    serverModeratorRoleName: string = "Moderator";
    serverAdminRoleName: string = "Admin";

    static serverSettings: string[] = ['defaultRealmName', 'serverIsConfigured', 'serverModeratorRoleName', 'serverAdminRoleName'];

    static keyIsServerLevel(key: string): boolean {
        return RealmSettings.serverSettings.some(set => {
            return set === key;
        });
    }
}
def make_random_text():
    words_list = [
        MakeWord(file_path.value).random_word() for file_path in Thesaurus
    ]
    return str(' '.join(words_list))
#!/usr/bin/env python3 # Copyright (c) Meta Platforms, Inc. and affiliates. # # This source code is licensed under the MIT license found in the # LICENSE file in the root directory of this source tree. from unittest import mock import torch from botorch.posteriors.torch import TorchPosterior from botorch.sampling.stochastic_samplers import ForkedRNGSampler, StochasticSampler from botorch.utils.testing import BotorchTestCase, MockPosterior from torch.distributions.exponential import Exponential class TestForkedRNGSampler(BotorchTestCase): def test_forked_rng_sampler(self): posterior = TorchPosterior(Exponential(rate=torch.rand(1, 2))) sampler = ForkedRNGSampler(sample_shape=torch.Size([2]), seed=0) with mock.patch.object( posterior.distribution, "rsample", wraps=posterior.distribution.rsample ) as mock_rsample: samples = sampler(posterior) mock_rsample.assert_called_once_with(sample_shape=torch.Size([2])) with torch.random.fork_rng(): torch.manual_seed(0) expected = posterior.rsample(sample_shape=torch.Size([2])) self.assertAllClose(samples, expected) class TestStochasticSampler(BotorchTestCase): def test_stochastic_sampler(self): # Basic usage. samples = torch.rand(1, 2) posterior = MockPosterior(samples=samples) sampler = StochasticSampler(sample_shape=torch.Size([2])) self.assertTrue(torch.equal(samples.repeat(2, 1, 1), sampler(posterior))) # Test _update_base_samples. with self.assertRaisesRegex(NotImplementedError, "_update_base_samples"): sampler._update_base_samples(posterior=posterior, base_sampler=sampler)
BEIJING (Reuters) - Three years ago, the mayor of China’s sprawling southwestern city of Chongqing was asked to describe how well he got along with his then boss, the ambitious Communist Party leader Bo Xilai. Chongqing mayor Huang Qifan speaks during a news conference in Beijing in this March 4, 2011 file photograph. REUTERS/Stringer/Files “Like fish and water,” the portly Huang Qifan told reporters on the sidelines of the annual full session of parliament, using a Chinese expression meaning an almost symbiotic relationship. “Everything is great, magnificent. The whole Communist Party secretariat works smoothly together with one mind.” Bo was dramatically ousted from his post last year following lurid accusations of corruption and murder. Huang, still Chongqing mayor, said this year that Bo’s legacy had been “banished”, and the “vanity projects” he championed which “tired the people and drained money” would never be allowed to happen again. As Bo goes on trial this week, Huang’s survival in office is a signal of China’s hesitation in fully taking its customary hard line against senior party members who fall from grace. Several junior officials lost their jobs or were detained because of their proximity to Bo, but Huang is one of two senior allies not purged, underlining the leadership’s caution as new President Xi Jinping seeks to maintain stability and unity. Bo was an advocate of populist, leftist welfare policies and his downfall exposed ideological schisms in the party and society at large which still exist. Xi needs unstinting support at a party plenum later this year to endorse an ambitious program to rebalance the world’s second-largest economy and will be keen to put the Bo scandal behind him with a minimum of fuss. Huang’s survival, which has come against all the odds, was ensured once he had disavowed his former mentor. The mayor was once seen as the brains behind Bo’s grandiose economic plans for foggy Chongqing, which included an ambitious urban renewal scheme and an egalitarian narrowing of the stark rural-urban income gap. He was Bo’s right-hand man, even as the party chieftain’s rule began to crumble when his police chief Wang Lijun sought political asylum at the U.S. consulate in nearby Chengdu city in February last year. Huang led security personnel to besiege the U.S. mission, according to sources with ties to the leadership or with direct knowledge of the case. Wang had fled there after confronting Bo with information that his glamorous lawyer wife, Gu Kailai, had murdered the couple’s friend, British businessman Neil Heywood, setting off the scandal which eventually bought Bo down. Wang hid in the consulate for more than 24 hours until officials from Beijing coaxed him out and put him on a flight to the capital, where he divulged details of the murder safely away from Bo. Both Wang and Gu were jailed last year. REDEMPTION After Bo’s downfall, Huang redeemed himself by writing a self-criticism - a throwback to the Mao-era practice of self-denunciations for political or ideological mistakes - and selling out Bo, the sources said. “He exposed Bo Xilai’s ambitions,” a source with leadership ties said, referring to Bo’s barely concealed public campaign for a seat on the Communist Party’s powerful inner circle, the Politburo Standing Committee. “Huang Qifan also exposed wiretapping by Wang Lijun,” the source said, referring to the bugging of telephone calls between then-President Hu Jintao and a central government anti-corruption investigator who was in Chongqing. 
“Politically, he (Huang) is someone who can’t be knocked down,” the source said. “But he stayed on also to stabilize Chongqing.” The sources declined to elaborate. “Once Bo Xilai went down, people defected, and I think Huang Qifan is a very good example of that,” said Bo Zhiyue, an expert on elite Chinese politics at the National University of Singapore, who is not related to Bo Xilai. “As long as you distance yourself from this person, you criticize them, you offer evidence or whatever, you cooperate with the authorities, you will be ok,” he added. A number of Bo’s most prominent cronies and supporters have however been either sacked or detained, including Xia Zeliang, party secretary of Chongqing’s Nanan district, and Xu Ming, a plastics-to-property billionaire entrepreneur whose long association with Bo extended for over two decades. However, another senior Bo ally who has so far survived is Zhou Yongkang, once China’s powerful domestic security tsar. Zhou was implicated in rumors last year that he hesitated in supporting the party’s move against Bo. However, his reputation was also dented when security forces failed to prevent blind dissident Chen Guangcheng from fleeing to the U.S. embassy in Beijing from house arrest in a nearby province last year. The hulking, grim-faced 70-year-old Zhou stepped down along with most members of the Standing Committee at the 18th party congress last November. His replacement failed to get a position on the new Standing Committee and was only given membership of the larger Politburo, which showed party concerns the domestic security position had become too powerful and also that Zhou was out of favor. Speculation continues to surround Zhou, fueled earlier this month by a story from U.S.-based Chinese news site Duowei, which said he was being investigated for graft. However, the report was later withdrawn and the government has not commented. Slideshow (3 Images) Three of Zhou’s allies are currently under investigation, including the deputy party boss of Sichuan province, Li Chuncheng, who had for many years overseen development of the province’s prosperous capital, Chengdu. However, sources with ties to the leadership, as well as analysts, are skeptical Zhou will be taken into custody because of an unwritten rule that incumbent and retired members of the Standing Committee are immune from prosecution. “The norm has been to avoid attacking (incumbent) and former Standing Committee members, for if they do so it opens a Pandora’s box,” said the University of Singapore’s Bo.
<gh_stars>0 # -*- coding: utf-8 -*- from mamonsu.plugins.pgsql.plugin import PgsqlPlugin as Plugin from .pool import Pooler class PgWaitSampling(Plugin): AgentPluginType = 'pg' # queries for zabbix agent query_agent_discovery_all_lock = "SELECT json_build_object ('data',json_agg(json_build_object('{#ALL_LOCK}', " \ "t.event_type))) from (SELECT DISTINCT (CASE WHEN event_type = 'LWLockNamed' " \ "THEN 'lwlock' WHEN event_type = 'LWLockTranche' THEN 'lwlock' " \ "WHEN event_type = 'Lock' THEN 'hwlock' ELSE 'buffer' END) AS event_type " \ "FROM pg_wait_sampling_profile where event_type is not null) AS t;" query_agent_discovery_hw_lock = "SELECT json_build_object ('data',json_agg(json_build_object('{#HW_LOCK}'," \ " event))) from pg_wait_sampling_profile where event_type = 'Lock';" query_agent_discovery_lw_lock = "SELECT json_build_object ('data',json_agg(json_build_object('{#LW_LOCK}',t.event)))" \ " from (SELECT DISTINCT ( CASE WHEN event = 'ProcArrayLock' THEN 'xid' " \ "WHEN event = 'WALBufMappingLock' THEN 'wal' WHEN event = 'WALWriteLock' " \ "THEN 'wal' WHEN event = 'ControlFileLock' THEN 'wal' WHEN event = 'wal_insert' " \ "THEN 'wal' WHEN event = 'CLogControlLock' THEN 'clog' WHEN event = 'SyncRepLock' " \ "THEN 'replication' WHEN event = 'ReplicationSlotAllocationLock' " \ "THEN 'replication' WHEN event = 'ReplicationSlotControlLock' THEN 'replication' " \ "WHEN event = 'ReplicationOriginLock' THEN 'replication' " \ "WHEN event = 'replication_origin' THEN 'replication' " \ "WHEN event = 'replication_slot_io' THEN 'replication' " \ "WHEN event = 'buffer_content' THEN 'buffer' " \ "WHEN event = 'buffer_io' THEN 'buffer' " \ "WHEN event = 'buffer_mapping' THEN 'buffer' ELSE 'other' END) " \ "AS event from pg_wait_sampling_profile " \ " where event_type = 'LWLockTranche' or event_type = 'LWLockNamed') as t;" query_agent_all_lock = "select sum(count) as count from pg_wait_sampling_profile where " \ "CASE " \ "WHEN 'lwlock' = :'p1' THEN event_type = 'LWLockNamed' or event_type = 'LWLockTranche' " \ "WHEN 'hwlock' = :'p1' THEN event_type = 'Lock' " \ "WHEN 'buffer' = :'p1' THEN event_type != 'LWLockNamed' and event_type != 'LWLockTranche' " \ "and event_type != 'Lock' " \ "END;" query_agent_hw_lock = "select sum(count) as count from pg_wait_sampling_profile " \ "where event = :'p1' AND event_type = 'Lock';" query_agent_lw_lock = "select sum(count) as count from pg_wait_sampling_profile where " \ "CASE " \ "WHEN 'xid' = :'p1' THEN event = 'ProcArrayLock' " \ "WHEN 'wal' = :'p1' THEN event = 'WALBufMappingLock' OR event = 'WALWriteLock' " \ "OR event = 'ControlFileLock' OR event = 'wal_insert' " \ "WHEN 'clog' = :'p1' THEN event = 'CLogControlLock' OR event = 'clog' " \ "WHEN 'replication' = :'p1' THEN event = 'CLogControlLock' OR event = 'CLogControlLock'" \ " OR event = 'CLogControlLock' OR event = 'CLogControlLock' OR event = 'CLogControlLock' " \ " WHEN 'buffer' = :'p1' THEN event = 'buffer_content' OR event = 'buffer_io' " \ "OR event = 'buffer_mapping' " \ "WHEN 'other' = ':p1' THEN event IS NOT NULL " \ "END; " key_all_lock_discovery = "pgsql.all_lock.discovery{0}" key_all_lock = "pgsql.all_lock{0}" key_hw_lock_discovery = "pgsql.hw_lock.discovery{0}" key_hw_lock = "pgsql.hw_lock{0}" key_lw_lock_discovery = "pgsql.lw_lock.discovery{0}" key_lw_lock = "pgsql.lw_lock{0}" AllLockItems = [ # (sql_key, zbx_key, name, color) ('lwlock', 'all_lock[lwlock]', 'Lightweight locks', '0000CC'), ('hwlock', 'all_lock[hwlock]', 'Heavyweight locks', '00CC00'), ('buffer', 'all_lock[buffer]', 'Buffer 
locks', 'CC0000') ] AllLockQuery = """ select CASE WHEN event_type = 'LWLockNamed' THEN 'lwlock' WHEN event_type = 'LWLockTranche' THEN 'lwlock' WHEN event_type = 'Lock' THEN 'hwlock' ELSE 'buffer' END, sum(count) as count from pg_wait_sampling_profile where event_type is not null group by 1 order by count desc;""" HWLockItems = [ # (sql_key, zbx_key, name, color) ('relation', 'hwlock[relation]', 'lock on a relation', 'CC0000'), ('extend', 'hwlock[extend]', 'extend a relation', '00CC00'), ('page', 'hwlock[page]', 'lock on page', '0000CC'), ('tuple', 'hwlock[tuple]', 'lock on a tuple', 'CC00CC'), ('transactionid', 'hwlock[transactionid]', 'transaction to finish', '000000'), ('virtualxid', 'hwlock[virtualxid]', 'virtual xid lock', 'CCCC00'), ('speculative token', 'hwlock[speculative_token]', 'speculative insertion lock', '777777'), ('object', 'hwlock[object]', 'lock on database object', '770000'), ('userlock', 'hwlock[userlock]', 'userlock', '000077'), ('advisory', 'hwlock[advisory]', 'advisory user lock', '007700') ] HWLockQuery = """ select event, sum(count) as count from pg_wait_sampling_profile where event_type = 'Lock' group by 1 order by count desc;""" LWLockItems = [ # (sql_key, zbx_key, name, color) ('xid', 'lwlock[xid]', 'XID access', 'BBBB00'), ('wal', 'lwlock[wal]', 'WAL access', 'CC0000'), ('clog', 'lwlock[clog]', 'CLOG access', '00CC00'), ('replication', 'lwlock[replication]', 'Replication Locks', 'FFFFCC'), ('buffer', 'lwlock[buffer]', 'Buffer operations', '0000CC'), ('other', 'lwlock[other]', 'Other operations', '007700')] LWLockQuery = """ select CASE WHEN event = 'ProcArrayLock' THEN 'xid' WHEN event = 'WALBufMappingLock' THEN 'wal' WHEN event = 'WALWriteLock' THEN 'wal' WHEN event = 'ControlFileLock' THEN 'wal' WHEN event = 'wal_insert' THEN 'wal' WHEN event = 'CLogControlLock' THEN 'clog' WHEN event = 'clog' THEN 'clog' WHEN event = 'SyncRepLock' THEN 'replication' WHEN event = 'ReplicationSlotAllocationLock' THEN 'replication' WHEN event = 'ReplicationSlotControlLock' THEN 'replication' WHEN event = 'ReplicationOriginLock' THEN 'replication' WHEN event = 'replication_origin' THEN 'replication' WHEN event = 'replication_slot_io' THEN 'replication' WHEN event = 'buffer_content' THEN 'buffer' WHEN event = 'buffer_io' THEN 'buffer' WHEN event = 'buffer_mapping' THEN 'buffer' ELSE 'other' END, sum(count) as count from pg_wait_sampling_profile where event_type = 'LWLockTranche' or event_type = 'LWLockNamed' group by 1 order by count desc;""" def run(self, zbx): def find_and_send(where, what, zbx): for item in what: item_found = False for result in where: if item[0] == result[0]: zbx.send( 'pgsql.{0}'.format(item[1]), float(result[1]), Plugin.DELTA.speed_per_second) item_found = True break if not item_found: zbx.send( 'pgsql.{0}'.format(item[1]), float(0), Plugin.DELTA.speed_per_second) self.disable_and_exit_if_extension_is_not_installed('pg_wait_sampling') find_and_send(Pooler.query(self.AllLockQuery), self.AllLockItems, zbx) find_and_send(Pooler.query(self.HWLockQuery), self.HWLockItems, zbx) find_and_send(Pooler.query(self.LWLockQuery), self.LWLockItems, zbx) def items(self, template): result = '' for item in (self.AllLockItems + self.LWLockItems + self.HWLockItems): result += template.item({ 'key': 'pgsql.{0}'.format(item[1]), 'name': 'PostgreSQL waits: {0}'.format(item[2]), 'delay': self.plugin_config('interval'), 'value_type': self.VALUE_TYPE.numeric_float}) return result def graphs(self, template): result = '' for graph_name, graph_items in [ ('PostgreSQL waits: Locks by 
type', self.AllLockItems), ('PostgreSQL waits: Heavyweight locks', self.HWLockItems), ('PostgreSQL waits: Lightweight locks', self.LWLockItems)]: items = [] for item in graph_items: items.append({ 'key': 'pgsql.{0}'.format(item[1]), 'color': item[3]}) result += template.graph({ 'name': graph_name, 'type': 1, 'items': items}) return result # discovery rule for agent type def discovery_rules(self, template): rule = ({ 'name': 'AllLockItems', 'key': self.key_all_lock_discovery.format('[{0}]'.format(self.Macros[self.Type])), 'filter': '{#ALL_LOCK}:.*' }) if self.Type == "mamonsu": delta = Plugin.DELTA.as_is else: delta = Plugin.DELTA.speed_per_second items = [] for item in (self.AllLockItems): keys = item[1].split('[') items.append({ 'key': self.right_type(self.key_all_lock, keys[1][:-1], var_discovery="{#ALL_LOCK},"), 'name': 'PostgreSQL waits: {0}'.format(item[2]), 'value_type': self.VALUE_TYPE.numeric_float, 'delay': self.plugin_config('interval'), 'delta': delta}) graphs = [] for graph_name, graph_items in [ ('PostgreSQL waits: Locks by type', self.AllLockItems)]: items = [] # for item in graph_items: # keys = item[1].split('[') items.append({ 'name': 'PostgreSQL Locks : {#ALL_LOCK}', 'key': self.right_type(self.key_all_lock, var_discovery="{#ALL_LOCK},"), 'color': item[3]}) graphs.append({'name': graph_name, 'type': 1, 'items': items}) return template.discovery_rule(rule=rule, items=items, graphs=graphs) def keys_and_queries(self, template_zabbix): result = [] # queries for zabbix agent only for all lock result.append( '{0},$2 $1 -c "{1}"'.format(self.key_all_lock_discovery.format("[*]"), self.query_agent_discovery_all_lock)) result.append( '{0},echo "{1}" | $3 $2 -v p1="$1"'.format(self.key_all_lock.format("[*]"), self.query_agent_all_lock)) return template_zabbix.key_and_query(result)
package mysorts

// BucketSort implements bucket sort using an auxiliary array of buckets.
// Note that it expects the input to be uniformly distributed in order to achieve linear O(n) behaviour.
func BucketSort(in []int) {
	if len(in) <= 1 {
		return
	}
	bucketsCount := len(in)

	// find max value from in
	k := findMaxElement(in)
	piece := k / bucketsCount
	if piece == 0 {
		piece = 1 // to avoid divide by zero
	}

	// initialize buckets
	buckets := make([]bucket, bucketsCount, bucketsCount)

	// split elements into buckets
	for _, v := range in {
		index := v / piece
		if index >= bucketsCount {
			index = bucketsCount - 1 // set to last bucket if overflow
		}
		buckets[index] = append(buckets[index], v)
	}

	// sort each bucket
	for i := range buckets {
		InsertionSort(buckets[i])
	}

	// copy back to in
	in = in[:0]
	for _, v := range buckets {
		in = append(in, v...)
	}
}

type bucket []int

func (b bucket) Len() int           { return len(b) }
func (b bucket) Less(i, j int) bool { return b[i] < b[j] }
func (b bucket) Swap(i, j int)      { b[i], b[j] = b[j], b[i] }
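A small usage sketch for BucketSort; the module path example.com/mysorts and the sample values are hypothetical, added only for illustration:

package main

import (
	"fmt"

	"example.com/mysorts" // hypothetical module path for the package above
)

func main() {
	data := []int{29, 3, 47, 15, 8, 22, 36, 1}
	mysorts.BucketSort(data) // sorts data in place via the shared underlying array
	fmt.Println(data)        // [1 3 8 15 22 29 36 47]
}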
/** * Provides facilities to create and re-use {@link VirtualColumn} definitions for dimensions, filters, and filtered * aggregators while constructing a {@link DruidQuery}. */ public class VirtualColumnRegistry { private final ExprMacroTable macroTable; private final RowSignature baseRowSignature; private final Map<ExpressionAndTypeHint, String> virtualColumnsByExpression; private final Map<String, ExpressionAndTypeHint> virtualColumnsByName; private final String virtualColumnPrefix; private int virtualColumnCounter; private VirtualColumnRegistry( RowSignature baseRowSignature, ExprMacroTable macroTable, String virtualColumnPrefix, Map<ExpressionAndTypeHint, String> virtualColumnsByExpression, Map<String, ExpressionAndTypeHint> virtualColumnsByName ) { this.macroTable = macroTable; this.baseRowSignature = baseRowSignature; this.virtualColumnPrefix = virtualColumnPrefix; this.virtualColumnsByExpression = virtualColumnsByExpression; this.virtualColumnsByName = virtualColumnsByName; } public static VirtualColumnRegistry create(final RowSignature rowSignature, final ExprMacroTable macroTable) { return new VirtualColumnRegistry( rowSignature, macroTable, Calcites.findUnusedPrefixForDigits("v", rowSignature.getColumnNames()), new HashMap<>(), new HashMap<>() ); } /** * Check if a {@link VirtualColumn} is defined by column name */ public boolean isVirtualColumnDefined(String virtualColumnName) { return virtualColumnsByName.containsKey(virtualColumnName); } /** * Get existing or create new {@link VirtualColumn} for a given {@link DruidExpression} and hinted {@link ColumnType}. */ public String getOrCreateVirtualColumnForExpression( DruidExpression expression, ColumnType typeHint ) { final ExpressionAndTypeHint candidate = wrap(expression, typeHint); if (!virtualColumnsByExpression.containsKey(candidate)) { final String virtualColumnName = virtualColumnPrefix + virtualColumnCounter++; virtualColumnsByExpression.put( candidate, virtualColumnName ); virtualColumnsByName.put( virtualColumnName, candidate ); } return virtualColumnsByExpression.get(candidate); } /** * Get existing or create new {@link VirtualColumn} for a given {@link DruidExpression} and {@link RelDataType} */ public String getOrCreateVirtualColumnForExpression( DruidExpression expression, RelDataType typeHint ) { return getOrCreateVirtualColumnForExpression( expression, Calcites.getColumnTypeForRelDataType(typeHint) ); } /** * Get existing virtual column by column name */ @Nullable public VirtualColumn getVirtualColumn(String virtualColumnName) { return Optional.ofNullable(virtualColumnsByName.get(virtualColumnName)) .map(v -> v.getExpression().toVirtualColumn(virtualColumnName, v.getTypeHint(), macroTable)) .orElse(null); } @Nullable public String getVirtualColumnByExpression(DruidExpression expression, RelDataType typeHint) { return virtualColumnsByExpression.get(wrap(expression, Calcites.getColumnTypeForRelDataType(typeHint))); } /** * Get a signature representing the base signature plus all registered virtual columns. 
*/ public RowSignature getFullRowSignature() { final RowSignature.Builder builder = RowSignature.builder().addAll(baseRowSignature); final RowSignature baseSignature = builder.build(); for (Map.Entry<String, ExpressionAndTypeHint> virtualColumn : virtualColumnsByName.entrySet()) { final String columnName = virtualColumn.getKey(); // this is expensive, maybe someday it could use the typeHint, or the inferred type, but for now use native // expression type inference builder.add( columnName, virtualColumn.getValue().getExpression().toVirtualColumn( columnName, virtualColumn.getValue().getTypeHint(), macroTable ).capabilities(baseSignature, columnName).toColumnType() ); } return builder.build(); } /** * Given a list of column names, find any corresponding {@link VirtualColumn} with the same name */ public List<DruidExpression> findVirtualColumnExpressions(List<String> allColumns) { return allColumns.stream() .filter(this::isVirtualColumnDefined) .map(name -> virtualColumnsByName.get(name).getExpression()) .collect(Collectors.toList()); } public void visitAllSubExpressions(DruidExpression.DruidExpressionShuttle shuttle) { for (Map.Entry<String, ExpressionAndTypeHint> entry : virtualColumnsByName.entrySet()) { final String key = entry.getKey(); final ExpressionAndTypeHint wrapped = entry.getValue(); virtualColumnsByExpression.remove(wrapped); final List<DruidExpression> newArgs = shuttle.visitAll(wrapped.getExpression().getArguments()); final ExpressionAndTypeHint newWrapped = wrap(wrapped.getExpression().withArguments(newArgs), wrapped.getTypeHint()); virtualColumnsByName.put(key, newWrapped); virtualColumnsByExpression.put(newWrapped, key); } } public Collection<? extends VirtualColumn> getAllVirtualColumns(List<String> requiredColumns) { return requiredColumns.stream() .filter(this::isVirtualColumnDefined) .map(this::getVirtualColumn) .collect(Collectors.toList()); } /** * @deprecated use {@link #findVirtualColumnExpressions(List)} instead */ @Deprecated public List<VirtualColumn> findVirtualColumns(List<String> allColumns) { return allColumns.stream() .filter(this::isVirtualColumnDefined) .map(this::getVirtualColumn) .collect(Collectors.toList()); } /** * @deprecated use {@link #getOrCreateVirtualColumnForExpression(DruidExpression, ColumnType)} instead */ @Deprecated public VirtualColumn getOrCreateVirtualColumnForExpression( PlannerContext plannerContext, DruidExpression expression, ColumnType valueType ) { final String name = getOrCreateVirtualColumnForExpression(expression, valueType); return virtualColumnsByName.get(name).expression.toVirtualColumn(name, valueType, macroTable); } /** * @deprecated use {@link #getOrCreateVirtualColumnForExpression(DruidExpression, RelDataType)} instead */ @Deprecated public VirtualColumn getOrCreateVirtualColumnForExpression( PlannerContext plannerContext, DruidExpression expression, RelDataType dataType ) { return getOrCreateVirtualColumnForExpression( plannerContext, expression, Calcites.getColumnTypeForRelDataType(dataType) ); } /** * @deprecated use {@link #getVirtualColumnByExpression(DruidExpression, RelDataType)} instead */ @Deprecated @Nullable public VirtualColumn getVirtualColumnByExpression(String expression, RelDataType type) { final ColumnType columnType = Calcites.getColumnTypeForRelDataType(type); ExpressionAndTypeHint wrapped = wrap(DruidExpression.fromExpression(expression), columnType); return Optional.ofNullable(virtualColumnsByExpression.get(wrapped)) .map(this::getVirtualColumn) .orElse(null); } private static ExpressionAndTypeHint 
wrap(DruidExpression expression, ColumnType typeHint) { return new ExpressionAndTypeHint(expression, typeHint); } private static class ExpressionAndTypeHint { private final DruidExpression expression; private final ColumnType typeHint; public ExpressionAndTypeHint(DruidExpression expression, ColumnType valueType) { this.expression = expression; this.typeHint = valueType; } public DruidExpression getExpression() { return expression; } public ColumnType getTypeHint() { return typeHint; } @Override public boolean equals(Object o) { if (this == o) { return true; } if (o == null || getClass() != o.getClass()) { return false; } ExpressionAndTypeHint expressionAndTypeHint = (ExpressionAndTypeHint) o; return Objects.equals(typeHint, expressionAndTypeHint.typeHint) && Objects.equals(expression, expressionAndTypeHint.expression); } @Override public int hashCode() { return Objects.hash(expression, typeHint); } } }
n = int(input())
d = {}
# Any i with i + digitsum(i) == n must satisfy n - i == digitsum(i), which is at most
# 9 * (number of digits of i), so scanning the last 10**4 candidates below n is more than enough.
e = 10**4
for i in range(max(1, n - e), n + 1):
    cnt = i + sum([int(x) for x in str(i)])
    try:
        d[cnt].append(i)
    except KeyError:
        d[cnt] = [i]
ans = len(d.setdefault(n, []))
print(ans)
if ans:
    print(*sorted(d[n]), sep='\n')
def write_data(self, data):
    ofs = 0
    size = len(data)
    try:
        while ofs < size:
            wr_size = self.wrbuf_chunksize
            if wr_size > size - ofs:
                wr_size = size - ofs
            n = self._write(data[ofs : ofs + wr_size])
            if n <= 0:
                raise usbdev_error("USB bulk write error")
            ofs += n
        return ofs
    except usb.core.USBError as e:
        raise usbdev_error(str(e))
from datetime import date

import pandas


def transform_coinbase_data(historical_df: pandas.DataFrame):
    """
    Given a pandas.DataFrame containing timeseries data, convert the timestamps
    to dates of ISO format and sort by date

    :param historical_df: Timeseries data retrieved from Coinbase API
    """
    historical_df['date'] = historical_df['date'].map(lambda x: date.fromtimestamp(x).isoformat())
    historical_df.sort_values(by='date', inplace=True, ignore_index=True)
Quantitation of plasma testosterone by improved competitive protein-binding technique. A method is described for quantitation of testosterone in plasma from females and males. The principal operations of extraction, chromatographic fractionation, and competitive protein-binding assay can be completed for eight duplicate samples in a single day. Experimental data used in the development of the test and the rationale for the specific operations and conditions are presented. The specificity of the method was established by comparing plasma testosterone concentrations determined by the double isotope derivative technique. Concentrations are reported for normal female and male subjects, and for patients with hirsutism, adrenal hyperplasia, Klinefelter’s syndrome, and other endocrine conditions.
Analysis of Stage-Specific Protein Expression during Babesia Bovis Development within Female Rhipicephalus Microplus. Arthropod-borne protozoan pathogens have a complex life cycle that includes asexual reproduction of haploid stages in mammalian hosts and the development of diploid stages in invertebrate hosts. The ability of pathogens to invade, survive, and replicate within distinct cell types is required to maintain their life cycle. In this study, we describe a comparative proteomic analysis of a cattle pathogen, Babesia bovis, during its development within the mammalian and tick hosts with the goal of identifying cell-surface proteins expressed by B. bovis kinetes as potential targets for the development of a transmission blocking vaccine. To determine parasite tick-stage-specific cell-surface proteins, CyDye labeling was performed with B. bovis blood stages from the bovine host and kinetes from the tick vector. Cell-surface kinete-stage-specific proteins were identified using 2D difference in gel electrophoresis and analyzed by mass spectrometry. Ten proteins were identified as kinete-stage-specific, with orthologs found in closely related Apicomplexan pathogens. Transcriptional analysis revealed two genes were highly expressed by kinetes as compared with blood stages. Immunofluorescence using antibodies against the two proteins confirmed kinete-stage-specific expression. The identified cell-surface kinete proteins are potential candidates for the development of a B. bovis transmission blocking vaccine.
package policy

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// containString checks whether a string is contained in a list of strings.
func containString(list []string, element string) bool {
	for _, e := range list {
		if e == element {
			return true
		}
	}
	return false
}

// isRunningPod reports whether the given unstructured object is a pod in the Running phase.
func isRunningPod(obj unstructured.Unstructured) bool {
	objMap := obj.UnstructuredContent()
	phase, ok, err := unstructured.NestedString(objMap, "status", "phase")
	if !ok || err != nil {
		return false
	}
	return phase == "Running"
}
def fix_dev_edge_labels(self) -> "AMSentence":
    labels = self.get_edge_labels()
    return AMSentence([Entry(word.token, word.replacement, word.lemma, word.pos_tag,
                             word.ner_tag, word.fragment, word.lexlabel, word.typ,
                             word.head, labels[i], word.aligned, word.range)
                       for i, word in enumerate(self.words)],
                      self.attributes)
// NewSecretManager returns a new SecretManager.
func NewSecretManager(log logr.Logger, cacheClient client.Client, apiReader client.Reader) *SecretManager {
	return &SecretManager{
		log:       log.WithName("secret_manager"),
		client:    cacheClient,
		apiReader: apiReader,
	}
}
w = input()
G = []
q = 0
for x in w:
    if not ([x, True] in G or [x, False] in G):
        G.append([x, True])
    else:
        if [x, True] in G:
            G[G.index([x, True])][1] = False
        else:
            G[G.index([x, False])][1] = True
for h in range(len(G)):
    q += G[h][1]
# print(G)
if q == 0:
    print("Yes")
else:
    print("No")
// BBox returns the smallest box which encloses all the points of the PolyLine.
func (pl PolyLine) BBox() BBox {
	var rv BBox
	if Paranoid && len(pl) <= 0 {
		log.Fatalf("asked to create an empty bounds box\n")
	}
	rv.MinX = pl[0][0].X
	rv.MinY = pl[0][0].Y
	rv.MaxX = pl[0][0].X
	rv.MaxY = pl[0][0].Y
	for i := 0; i < len(pl); i++ {
		rv.ExpandByPts(pl[i])
	}
	return rv
}
/* * Copyright (c) 2015 Kaprica Security, Inc. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN * THE SOFTWARE. * */ #include "cgc_stdlib.h" #include "cgc_ctype.h" #include "cgc_string.h" #include "cgc_engine.h" #include "cgc_io.h" static char input_states[128 * 128]; static void move_to_str(move_t move, char *buf) { buf[0] = move.sc + 'm'; buf[1] = move.sr + '1'; buf[2] = move.dc + 'm'; buf[3] = move.dr + '1'; switch (move.p) { case KNIGHT: buf[4] = 'k'; break; case BISHOP: buf[4] = 'b'; break; case ROOK: buf[4] = 'r'; break; case QUEEN: buf[4] = 'q'; break; default: buf[4] = 0; break; } buf[5] = 0; } void cgc_init_states() { int i,j; cgc_memset(input_states, STATE_ERROR, sizeof(input_states)); #define ADD_STATE(c1, c2, s) input_states[((c1) << 7) + (c2)] = STATE_##s##_START for (i = 'm'; i <= 't'; i++) for (j = '1'; j <= '8'; j++) ADD_STATE(i, j, MOVE); ADD_STATE('b', 'l', BLACK); ADD_STATE('c', 'g', CGCBOARD); ADD_STATE('d', 'r', DRAW); ADD_STATE('f', 'o', FORCE); ADD_STATE('g', 'o', GO); ADD_STATE('n', 'e', NEW); ADD_STATE('q', 'u', QUIT); ADD_STATE('r', 'a', RANDOM); ADD_STATE('r', 'e', REMOVE_OR_RESULT); ADD_STATE('s', 'd', SD); ADD_STATE('u', 'n', UNDO); ADD_STATE('w', 'h', WHITE); ADD_STATE('?', '\n', PLAY); #undef ADD_STATE } int cgc_read_all(char *buf, cgc_size_t n) { while (n > 0) { cgc_size_t bytes; if (cgc_receive(STDIN, buf, n, &bytes) != 0 || bytes == 0) // bytes = cgc_read(0, buf, n); if (bytes == 0) return 0; n -= bytes; } return 1; } void cgc_write_string(const char *str) { cgc_size_t bytes; cgc_transmit(STDOUT, str, cgc_strlen(str), &bytes); // cgc_write(1, str, cgc_strlen(str)); } void cgc_send_move(move_t m) { char *buf = cgc_malloc(32); if (buf == NULL) return; cgc_strcpy(buf, "move "); move_to_str(m, buf + 5); cgc_strcat(buf, "\n"); cgc_write_string(buf); cgc_free(buf); } void cgc_send_result(int result) { if (result == WHITE_WON) cgc_write_string("1-0\n"); if (result == BLACK_WON) cgc_write_string("0-1\n"); if (result == DRAW) cgc_write_string("1/2-1/2\n"); if (result == UNFINISHED) cgc_write_string("*\n"); } void cgc_send_draw() { cgc_write_string("offer draw\n"); } void cgc_send_resign() { cgc_write_string("resign\n"); } void cgc_send_illegal(const char *reason, move_t move) { char movestr[8]; move_to_str(move, movestr); cgc_write_string("Illegal move ("); cgc_write_string(reason); cgc_write_string("): "); cgc_write_string(movestr); cgc_write_string("\n"); } void cgc_send_error(const char *error, const char *command) { cgc_write_string("Error ("); cgc_write_string(error); 
cgc_write_string("): "); cgc_write_string(command); cgc_write_string("\n"); } int cgc_sink_error(const char *buf) { char tmp[2]; cgc_size_t n = cgc_strlen(buf); cgc_write_string("Error (invalid command): "); cgc_write_string(buf); if (n >= 1 && buf[n-1] == '\n') return 1; tmp[1] = 0; do { if (!cgc_read_all(&tmp[0], 1)) return 0; cgc_write_string(tmp); } while (tmp[0] != '\n'); return 1; } /* implements state machine to parse input */ int cgc_read_keyword(char *input, cgc_size_t n) { int i, state; cgc_memset(input, 0, n); if (!cgc_read_all(input, 2)) return 0; state = input_states[(input[0] << 7) | input[1]]; if (state == STATE_ERROR) return cgc_sink_error(input); if (state == STATE_PLAY_START) return state; for (i = 2; i < n - 1; i++) { char c; if (!cgc_read_all(&c, 1)) return 0; input[i] = c; #define PARSE_CHAR(st, ch) \ if (state == st) { \ if (ch == c) { state++; continue; } \ else { state = STATE_ERROR; break; } \ } #define PARSE_END(st) \ if (state == st) { \ if (c == '\n') break; \ else { state = STATE_ERROR; break; }\ } PARSE_CHAR(STATE_BLACK_START+0, 'a') PARSE_CHAR(STATE_BLACK_START+1, 'c') PARSE_CHAR(STATE_BLACK_START+2, 'k') PARSE_END(STATE_BLACK) PARSE_CHAR(STATE_CGCBOARD_START+0, 'c') PARSE_CHAR(STATE_CGCBOARD_START+1, 'b') PARSE_CHAR(STATE_CGCBOARD_START+2, 'o') PARSE_CHAR(STATE_CGCBOARD_START+3, 'a') PARSE_CHAR(STATE_CGCBOARD_START+4, 'r') PARSE_CHAR(STATE_CGCBOARD_START+5, 'd') PARSE_END(STATE_CGCBOARD) PARSE_CHAR(STATE_DRAW_START+0, 'a') PARSE_CHAR(STATE_DRAW_START+1, 'w') PARSE_END(STATE_DRAW) PARSE_CHAR(STATE_FORCE_START+0, 'r') PARSE_CHAR(STATE_FORCE_START+1, 'c') PARSE_CHAR(STATE_FORCE_START+2, 'e') PARSE_END(STATE_FORCE) PARSE_END(STATE_GO) PARSE_CHAR(STATE_NEW_START+0, 'w') PARSE_END(STATE_NEW) PARSE_CHAR(STATE_QUIT_START+0, 'i') PARSE_CHAR(STATE_QUIT_START+1, 't') PARSE_END(STATE_QUIT) PARSE_CHAR(STATE_RANDOM_START+0, 'n') PARSE_CHAR(STATE_RANDOM_START+1, 'd') PARSE_CHAR(STATE_RANDOM_START+2, 'o') PARSE_CHAR(STATE_RANDOM_START+3, 'm') PARSE_END(STATE_RANDOM) if (state == STATE_SD) { if (c == ' ') { break; } else { state = STATE_ERROR; break; } } PARSE_CHAR(STATE_UNDO_START+0, 'd') PARSE_CHAR(STATE_UNDO_START+1, 'o') PARSE_END(STATE_UNDO) PARSE_CHAR(STATE_WHITE_START+0, 'i') PARSE_CHAR(STATE_WHITE_START+1, 't') PARSE_CHAR(STATE_WHITE_START+2, 'e') PARSE_END(STATE_WHITE) if (state == STATE_REMOVE_OR_RESULT_START) { if (c == 'm') { state = STATE_REMOVE_START; continue; } else if (c == 's') { state = STATE_RESULT_START; continue; } else { state = STATE_ERROR; break; } } PARSE_CHAR(STATE_REMOVE_START+0, 'o') PARSE_CHAR(STATE_REMOVE_START+1, 'v') PARSE_CHAR(STATE_REMOVE_START+2, 'e') PARSE_END(STATE_REMOVE) PARSE_CHAR(STATE_RESULT_START+0, 'u') PARSE_CHAR(STATE_RESULT_START+1, 'l') PARSE_CHAR(STATE_RESULT_START+2, 't') if (state == STATE_RESULT) { if (c == ' ') { break; } else { state = STATE_ERROR; break; } } if (state == STATE_MOVE_START+0) { if ('m' <= c && c <= 't') { state++; continue; } else { state = STATE_ERROR; break;; } } else if (state == STATE_MOVE_START+1) { #ifdef PATCHED if (c < '1' || c > '8') { state = STATE_ERROR; break;; } #endif if (cgc_isdigit(c)) { state++; continue; } else { state = STATE_ERROR; break;; } } else if (state == STATE_MOVE_START+2) { if (c == '\n') { state = STATE_MOVE; break; } else if (cgc_islower(c)) { state = STATE_MOVE; continue; } else { state = STATE_ERROR; break; } } else if (state == STATE_MOVE) { if (c == '\n') { break; } else { state = STATE_ERROR; break; } } state = STATE_ERROR; break; } if (state == STATE_ERROR) return 
cgc_sink_error(input); else input[i] = 0; return state; } move_t cgc_str_to_move(const char *str) { move_t move; move.sc = str[0] - 'm'; move.sr = str[1] - '1'; move.dc = str[2] - 'm'; move.dr = str[3] - '1'; switch (str[4]) { case 'k': move.p = KNIGHT; break; case 'r': move.p = ROOK; break; case 'q': move.p = QUEEN; break; case 'b': move.p = BISHOP; break; default: move.p = 0; break; } return move; }
package seeder import ( "math/rand" "time" "github.com/go-goyave/goyave-blog-example/database/model" "goyave.dev/goyave/v3/database" ) const ( // ArticleCount the number of articles generated by the User seeder ArticleCount = 40 ) // Article seeder for articles. Generate and save articles with a random // author in the database. func Article() { rand.Seed(time.Now().UTC().UnixNano()) users := make([]uint, 0, 10) db := database.Conn() if err := db.Model(&model.User{}).Select("id").Find(&users).Error; err != nil { panic(err) } factory := database.NewFactory(model.ArticleGenerator) articles := make([]*model.Article, 0, ArticleCount) for i := 0; i < ArticleCount; i++ { o := &model.Article{ AuthorID: uint(users[rand.Intn(len(users))]), } generated := factory.Override(o).Generate(1).([]*model.Article)[0] articles = append(articles, generated) } if err := db.Create(articles).Error; err != nil { panic(err) } }
#include <bits/stdc++.h> using namespace std; const int MAXN = 55; int t, n, m; char s[MAXN][MAXN]; bool ans, flg[MAXN][MAXN]; int dir[] = {0, -1, 0, 1, 0}; void dfs(int i, int j) { flg[i][j] = true; for(int k = 0; k < 4; ++k) { int ii = i + dir[k]; int jj = j + dir[k + 1]; if(1 <= ii and ii <= n and 1 <= jj and jj <= m) { if(s[ii][jj] != '#' and !flg[ii][jj]) dfs(ii, jj); } } } int main() { for(scanf("%d", &t); t--;) { scanf("%d %d", &n, &m); ans = true; for(int i = 1; i <= n; ++i) scanf("%s", s[i] + 1); for(int i = 1; i <= n and ans; ++i) for(int j = 1; j <= m and ans; ++j) { flg[i][j] = false; if(s[i][j] == 'B') { for(int k = 0; k < 4; ++k) { int ii = i + dir[k]; int jj = j + dir[k + 1]; if(1 <= ii and ii <= n and 1 <= jj and jj <= m) { if(s[ii][jj] == 'G') ans = false; else if(s[ii][jj] == '.') s[ii][jj] = '#'; } } } } if(s[n][m] == 'B') ans = false; else if(s[n][m] != '#') dfs(n, m); for(int i = 1; i <= n; ++i) for(int j = 1; j <= m; ++j) if(s[i][j] == 'G' and not flg[i][j]) ans = false; printf(ans ? "Yes\n" : "No\n"); } }
import * as THREE from 'three' import SpriteText from 'three-spritetext'; import OrbitControls from 'three-orbitcontrols'; import {ViewComponent} from './view.component'; import {CsvParser, AstrometryRecord} from './csv-parser'; import {linspace} from './utils' export class ThreeDView implements ViewComponent{ /** Speed factor of the orbit */ protected speed : number = 1.0; /** Current index of the animation */ protected index : number = 0; /** Scale of the animation */ protected scale : number; /** Scaling function */ protected scalinFun; /** frustrum size */ protected frustumSize = 400; protected aspect; /** width of the canvas */ protected width : number = 400; /** height of the canvas */ protected height : number = 400; /** Portion of the canvas used for graph */ protected portion : number = 0.7; /* Maximum distance from the origin */ protected maxDis : number = this.portion * this.width / 2; /** Scene for the orbit */ protected scene; /** Camera of the view */ protected camera; /** Renderer of the scene */ protected renderer; /** Camera control */ protected controls; /** Colors for the stars, first is primary, second is secondary of the main system */ protected starColorArray = [ 0x0000cc, // blue 0xf39c12, // orange 'rgb(203, 67, 53)', // dark red 'rgb(125, 60, 152)', // purple 'rgb(19, 141, 117)' // dark green ] /** Maximum value for position in absolute value */ protected maxAbsPos : number; /** Name of the div where it should be drawn */ protected divName : string; /** Raycasting */ protected raycaster; protected mouse; /** Initial and final time for the orbit */ protected initT : number; protected finalT : number; /* Scale factor for the plane size according to the axes size */ planeFactor : number = 5; /* Plane Transparency */ planeTrans : number = 0.2; /* Plane color */ planeColor : string = 'rgb(178, 186, 187)'; /* Line color */ lineColor : string = 'rgb(127, 140, 141)'; dataColor : string = 'rgb(255, 165, 0)'; dataSelectedColor : string ='rgb(255, 69, 0)'; /** card class for the component view card */ cardClass : string; /** State of the animation */ isRunning : boolean; /** Dictionary relating the object names and their associated data */ objectDataDict : { [name : string] : { [id : string] : string} } = {}; /** Selected data point information dict */ selectedData : { [id : string] : any}; /** Selected object */ selectedObj; /** Mesh groups */ dataGroup = new THREE.Group(); // Group for the data toFixed( num, precision ) { var multiplicator = Math.pow(10, precision); num = num * multiplicator; return Math.round(num) / multiplicator; } /** Not implemented on the parent class! 
*/ moveFrames(f : number) : void{ } updateRotations(){ return; } constructor(divName : string){ /** Name of the div for the given component */ this.divName = divName; if(window.innerWidth > 1200){ // lg screen this.width = 500; } if(1200 >= window.innerWidth && window.innerWidth > 500){ // sm screen this.width = 400; } if(500 >= window.innerWidth ){ // xs screen this.width = 3 * window.innerWidth/4; } this.height = this.width; } clean(obj = this.scene) { this.isRunning = false; if (obj instanceof THREE.Mesh || obj instanceof THREE.LineLoop || obj instanceof THREE.Line || obj instanceof THREE.Sprite) { obj.geometry.dispose(); obj.geometry = null; if (obj.material instanceof THREE.Material || obj.material instanceof THREE.SpriteMaterial || obj.material instanceof THREE.LineBasicMaterial || obj.material instanceof THREE.MeshBasicMaterial ) { obj.material.dispose(); } obj.material = null; this.scene.remove(obj); obj = null; } else { if (obj.children !== undefined) { while (obj.children.length > 0) { this.clean(obj.children[0]); obj.remove(obj.children[0]); } } } this.renderer.renderLists.dispose(); } initScene() { let elem = document.getElementById(this.divName); /* Set up renderer */ this.renderer = new THREE.WebGLRenderer({alpha : true}); this.renderer.setPixelRatio( window.devicePixelRatio ); this.renderer.setSize( this.width, this.height ); elem.appendChild(this.renderer.domElement); /* Set up camera */ this.aspect = this.width / this.height; this.camera = new THREE.OrthographicCamera( this.frustumSize * this.aspect / - 2, this.frustumSize * this.aspect / 2, this.frustumSize / 2, this.frustumSize / - 2, 1, 1000 ); this.controls = new OrbitControls(this.camera, this.renderer.domElement); this.camera.position.set( 0, 0, 200 ); this.controls.update(); /* Set up scene */ this.scene = new THREE.Scene(); // Windows resize let onWindowResize = () => { this.aspect = this.width / this.height; this.camera.left = - this.frustumSize * this.aspect / 2; this.camera.right = this.frustumSize * this.aspect / 2; this.camera.top = this.frustumSize / 2; this.camera.bottom = - this.frustumSize / 2; this.camera.updateProjectionMatrix(); this.renderer.setSize(this.width, this.height); } window.addEventListener( 'resize', onWindowResize, false ); this.initInteractive(); } buildScaling(pathx, pathy, pathz) { let maxPos = Math.max(...pathx, ...pathy, ...pathz); let minPos = Math.min(...pathx, ...pathy, ...pathz); this.maxAbsPos = Math.max(Math.abs(maxPos), Math.abs(minPos)) this.scale = this.maxDis / this.maxAbsPos; this.scalinFun = (x) => {return this.scale * x;} } initInteractive() { /** Set up UI */ let onButtonClick = (event) => { this.camera = new THREE.OrthographicCamera( this.frustumSize * this.aspect / - 2, this.frustumSize * this.aspect / 2, this.frustumSize / 2, this.frustumSize / - 2, 1, 1000 ); this.controls = new OrbitControls(this.camera, this.renderer.domElement); this.camera.position.set( 0, 0, 200 ); this.controls.update(); } let buttons : any = document.getElementsByClassName("reset-view"); for(let button of buttons) { button.addEventListener("click", onButtonClick, false); button.style.width = this.width + 'px'; } //Raycaster for click events this.raycaster = new THREE.Raycaster(); this.mouse = new THREE.Vector2(); let rayCasterOnClick = (e) => { this.raycaster.setFromCamera(this.mouse, this.camera); // Data intersections var intersectsData = this.raycaster.intersectObjects(this.dataGroup.children); //array for ( var i = 0; i < intersectsData.length; i++ ) { // Set the infocard's visibility let 
infocard = <HTMLElement>document.querySelector('#selected-info-' + this.cardClass); infocard.style.display = "block"; // Get the object's data let objName = intersectsData[i].object.name; let data = this.objectDataDict[objName]; // Display the data this.selectedData = data; // Return previus selected to original color if(this.selectedObj) this.selectedObj.material.color.set(this.dataColor); // Update selected this.selectedObj = intersectsData[i].object; // Indicate the clicked object this.selectedObj.material.color.set(this.dataSelectedColor); } }; this.renderer.domElement.addEventListener("click", rayCasterOnClick, true); // Mouse tracker let onMouseMove = (e) => { // calculate mouse position in normalized device coordinates // (-1 to +1) for both components this.mouse.x = (e.offsetX / this.width) * 2 - 1; this.mouse.y = - (e.offsetY / this.height) * 2 + 1; } this.renderer.domElement.addEventListener( 'mousemove', onMouseMove, false ); } /** * Draws the axis, ticks and labels * @param {number} tickLenght The length of each tick on the axes * @param {number} scale The 3D scale for the actual space * @param {number} length The length of the axes * @param {number} steps Number of ticks on the axis */ drawAxisLabels(tickLength, scale, length, steps){ let fontsize = 0.45 * length/steps; /* Texto */ let labels = ['E ["]', 'N ["]', 'Z ["]']; let colors = ['blue', 'red', 'green']; let labelPos = [ new THREE.Vector3(-(length + 10), 10, 0), new THREE.Vector3(0, (length + 10), 0), new THREE.Vector3(0, 10, -(length + 10))]; let axisEndPoints = [ new THREE.Vector3(-length, 0, 0), new THREE.Vector3(0, length, 0), new THREE.Vector3(0, 0, -length) ]; let labelsDistances = linspace(0, length, steps) for(let i = 0; i < 3; i++) { var axisGeometry = new THREE.Geometry(); axisGeometry.vertices.push(new THREE.Vector3(0, 0,0)); axisGeometry.vertices.push(axisEndPoints[i]); var axisMaterial = new THREE.LineBasicMaterial( { color: colors[i] , linewidth : 2} ); let axis = new THREE.Line(axisGeometry, axisMaterial) this.scene.add(axis); var text = new SpriteText(labels[i]); text.color = colors[i]; text.position.copy(labelPos[i]); this.scene.add(text); let tickSpacing = +this.toFixed(((length/scale) / steps), 4); for(let j = 1; j < labelsDistances.length; j++) { let tickPosReal = j * tickSpacing; let tickTextLen = String(tickPosReal).length; switch(i){ case 0: var pos = new THREE.Vector3(-labelsDistances[j], -tickTextLen/2*fontsize, 0); var tickStart = new THREE.Vector3(-labelsDistances[j], -tickLength/2, 0); var tickEnd = new THREE.Vector3(-labelsDistances[j], tickLength/2, 0); var rotation = -Math.PI/2; break; case 1: var pos = new THREE.Vector3(tickTextLen/2*fontsize, labelsDistances[j], 0); var tickStart = new THREE.Vector3(-tickLength/2, labelsDistances[j], 0); var tickEnd = new THREE.Vector3(tickLength/2, labelsDistances[j], 0); var rotation = 0; break; case 2: var pos = new THREE.Vector3(0, -tickTextLen/2*fontsize, -labelsDistances[j]); var tickStart = new THREE.Vector3(0, -tickLength/2, -labelsDistances[j]); var tickEnd = new THREE.Vector3(0, tickLength/2, -labelsDistances[j]); var rotation = -Math.PI/2; break; } var text = new SpriteText(tickPosReal); text.color = colors[i]; text.position.copy(pos); text.textHeight = fontsize; text.material.rotation = rotation; this.scene.add(text); var tickGeometry = new THREE.Geometry(); tickGeometry.vertices.push(tickStart); tickGeometry.vertices.push(tickEnd); var tickMaterial = new THREE.LineBasicMaterial( { color: colors[i] , linewidth : tickLength/4} ); let tick = 
new THREE.Line(tickGeometry, tickMaterial) this.scene.add(tick); } } } /** * Draws the dotted line representing the orbit * @param {number[]} xPath x positions for the orbit * @param {number[]} yPath y positions for the orbit * @param {number[]} zPath z positions for the orbit * @param {number} color Color for the line * @param {number} seglen Segment length for the dashed line */ drawOrbitLine(xPath, yPath, zPath, color='black', segLen = undefined) { let material = new THREE.LineBasicMaterial( { color: color, linewidth : 1} ); let geometry = new THREE.Geometry(); segLen = segLen || Math.ceil(xPath.length/50); //let segLen = xPath.length; for(let i = 0; i < xPath.length; i++){ if(i%segLen===0 && i!=0){ let line = new THREE.Line( geometry, material ); this.scene.add(line); geometry = new THREE.Geometry(); i+= parseInt(String(5)); } geometry.vertices.push(new THREE.Vector3(xPath[i], yPath[i], zPath[i])); } } /** * Draws a line between the start and end spots * @param {THREE.Vector3} start * @param {THREE.Vector3} stop * @param {string} color * @return {THREE.Line} The line object */ drawLine(start : THREE.Vector3, stop : THREE.Vector3, color : string) { let material = new THREE.LineBasicMaterial( { color: color, linewidth : 1} ); let geometry = new THREE.Geometry(); geometry.vertices.push(start); geometry.vertices.push(stop); let line = new THREE.Line( geometry, material ); this.scene.add(line); return line; } /** * Draws a star with given properties, and returns the mesh * @param {number} color Color for the mesh * @param {number} size Size for the star * @return {Mesh} primary The mesh for the primary */ drawStar(color, size){ let starGeometry = new THREE.CircleGeometry(size, 16) let starMaterial = new THREE.MeshBasicMaterial({ color: color, side : THREE.DoubleSide }); let star = new THREE.Mesh(starGeometry, starMaterial); this.scene.add(star); return star; } /** * Draws a star projection with given properties as a circle, and returns the mesh * @param {Scene} scene Scene to drawn the primary * @param {number} color Color for the mesh * @param {number} size Size for the star * @return {Mesh} primary The mesh for the primary */ drawStarProjection(color, size){ let starGeometry = new THREE.CircleGeometry(size, 16) starGeometry.vertices.shift(); let starMaterial = new THREE.LineBasicMaterial({ color: color, side : THREE.DoubleSide, linewidth : 2 }); let star = new THREE.LineLoop(starGeometry, starMaterial); this.scene.add(star); return star; } /** * Draws a node as a filled square * @param {number} color Color for the mesh * @param {number} size Size for the marker */ drawNode(color, size){ let nodeGeometry = new THREE.PlaneGeometry(2 * size, 2 * size, 1, 1) let nodeMaterial = new THREE.MeshBasicMaterial({ color: color, side : THREE.DoubleSide }); let node = new THREE.Mesh(nodeGeometry, nodeMaterial); this.scene.add(node); return node; } /** * Draws a node projection as an empty square * @param {number} color Color for the mesh * @param {number} size Size for the marker */ drawNodeProjection(color, size){ let nodeGeometry = new THREE.EdgesGeometry(new THREE.PlaneGeometry(2 * size, 2 * size, 1, 1)); let nodeMaterial = new THREE.LineBasicMaterial({ color: color, side : THREE.DoubleSide, linewidth : 2 }); let node = new THREE.LineSegments(nodeGeometry, nodeMaterial); this.scene.add(node); return node; } /** * Draws the X-Y plane * @param {number} length The length of the axes * @param {number} steps Number of steps for the plane */ drawXYPlane(length, steps){ let planeProps = { color : 
this.planeColor, transparent : true, opacity : this.planeTrans, side : THREE.DoubleSide, depthWrite : false}; let planeLength = this.planeFactor * length; let planeGeometry = new THREE.PlaneGeometry( 2 * planeLength, 2 * planeLength); let planeMaterial = new THREE.MeshBasicMaterial(planeProps); let plane = new THREE.Mesh(planeGeometry, planeMaterial); this.scene.add(plane); // Funcion auxiliar let lineSteps = 2 * this.planeFactor * steps; let linesDistances = linspace(-planeLength, planeLength, lineSteps); let lineProps = { color : this.lineColor, transparent : true, opacity : this.planeTrans, depthWrite : false}; /* Lines for the grid */ for(let i = 0; i <= lineSteps; i++) { /* X axis */ var lineStart = new THREE.Vector3(-planeLength, linesDistances[i], 0); var lineEnd = new THREE.Vector3(planeLength, linesDistances[i], 0); var lineGeometry = new THREE.Geometry(); lineGeometry.vertices.push(lineStart); lineGeometry.vertices.push(lineEnd); var lineMaterial = new THREE.LineBasicMaterial(lineProps); var line = new THREE.Line(lineGeometry, lineMaterial) this.scene.add(line); var lineStart = new THREE.Vector3(linesDistances[i], -planeLength, 0); var lineEnd = new THREE.Vector3(linesDistances[i], planeLength, 0); var lineGeometry = new THREE.Geometry(); lineGeometry.vertices.push(lineStart); lineGeometry.vertices.push(lineEnd); var lineMaterial = new THREE.LineBasicMaterial(lineProps); var line = new THREE.Line(lineGeometry, lineMaterial) this.scene.add(line); } } animate = () => { if(this.isRunning) this.moveFrames(this.speed); this.updateRotations(); this.controls.update(); this.renderer.render( this.scene, this.camera ); requestAnimationFrame( this.animate ); } /** * Sends the data in the recieved file to be drawn * @param {{[type : string] : File | undefined}} fileDict sent file */ showData(fileDict : {[type : string] : File | undefined}) : void{ let astroFile = fileDict['astrometry']; if(!astroFile) return; let reader = new FileReader(); reader.onload = () => { var astroData = reader.result; let csvRecordsArray = (<string>astroData).split(/\r\n|\n/); let parser = new CsvParser(); let headersRow = parser.getHeaderArray(csvRecordsArray, " "); let records : AstrometryRecord[] = parser.getDataRecordsArrayFromCSVFile(csvRecordsArray, headersRow.length, false, " ", 'astrometry'); this.drawData(records); } reader.onerror = function () { console.log('error has occured while reading file!'); } reader.readAsText(astroFile); } /** * Template of the DrawData function */ drawData(records : AstrometryRecord[]){ return; } }
<reponame>nickgros/SynapseWebClient<filename>src/main/java/org/sagebionetworks/web/client/widget/accessrequirements/createaccessrequirement/CreateAccessRequirementStep1ViewImpl.java<gh_stars>10-100 package org.sagebionetworks.web.client.widget.accessrequirements.createaccessrequirement; import org.gwtbootstrap3.client.ui.Button; import org.gwtbootstrap3.client.ui.FormGroup; import org.gwtbootstrap3.client.ui.InputGroup; import org.gwtbootstrap3.client.ui.Radio; import org.gwtbootstrap3.client.ui.TextArea; import org.gwtbootstrap3.client.ui.TextBox; import org.gwtbootstrap3.client.ui.html.Div; import com.google.gwt.event.dom.client.ClickEvent; import com.google.gwt.event.dom.client.ClickHandler; import com.google.gwt.uibinder.client.UiBinder; import com.google.gwt.uibinder.client.UiField; import com.google.gwt.user.client.ui.IsWidget; import com.google.gwt.user.client.ui.Widget; import com.google.inject.Inject; public class CreateAccessRequirementStep1ViewImpl implements CreateAccessRequirementStep1View { public interface Binder extends UiBinder<Widget, CreateAccessRequirementStep1ViewImpl> { } Widget widget; @UiField Div subjectsContainer; @UiField TextBox entityIds; @UiField Button synapseMultiIdButton; @UiField TextBox teamIds; @UiField Button teamMultiIdButton; @UiField FormGroup arTypeUI; @UiField TextBox descriptionField; @UiField Radio managedActTypeButton; @UiField Radio actTypeButton; @UiField Radio termsOfUseButton; Presenter presenter; @UiField InputGroup teamUI; @UiField InputGroup entityUI; @Inject public CreateAccessRequirementStep1ViewImpl(Binder binder) { widget = binder.createAndBindUi(this); synapseMultiIdButton.addClickHandler(new ClickHandler() { @Override public void onClick(ClickEvent event) { presenter.onAddEntities(); } }); teamMultiIdButton.addClickHandler(new ClickHandler() { @Override public void onClick(ClickEvent event) { presenter.onAddTeams(); } }); } private void showEntityUI() { entityUI.setVisible(true); teamUI.setVisible(false); } private void showTeamUI() { entityUI.setVisible(false); teamUI.setVisible(true); } @Override public Widget asWidget() { return widget; } @Override public void setSubjects(IsWidget w) { subjectsContainer.clear(); subjectsContainer.add(w); } @Override public String getEntityIds() { return entityIds.getText(); } @Override public String getShortDescription() { return descriptionField.getText(); } @Override public void setShortDescription(String description) { descriptionField.setText(description); } @Override public void setEntityIdsString(String ids) { entityIds.setText(ids); showEntityUI(); } @Override public String getTeamIds() { return teamIds.getText(); } @Override public void setTeamIdsString(String ids) { teamIds.setText(ids); showTeamUI(); } @Override public boolean isACTAccessRequirementType() { return actTypeButton.getValue(); } @Override public boolean isManagedACTAccessRequirementType() { return managedActTypeButton.getValue(); } @Override public boolean isTermsOfUseAccessRequirementType() { return termsOfUseButton.getValue(); } @Override public void setAccessRequirementTypeSelectionVisible(boolean visible) { arTypeUI.setVisible(visible); } @Override public void setPresenter(Presenter p) { this.presenter = p; } }
def _get_enable_toolkit(self):
    # Deprecated property getter kept for backwards compatibility; it warns
    # and delegates to the current 'toolkit' attribute.
    from warnings import warn
    warn("Use of the enable_toolkit attribute is deprecated.",
         DeprecationWarning, stacklevel=2)
    return self.toolkit
#pragma once

// Includes added for self-containment; the original header presumably relied
// on a project-wide precompiled header. FRAME_BUFFER_COUNT is assumed to be
// defined elsewhere in the project.
#include <string>
#include <d3d12.h>
#include <dxgi.h>

class ShaderResource
{
public:
    ShaderResource();
    ~ShaderResource();

    HRESULT Init(
        const std::wstring & name,
        const UINT & width = 0,
        const UINT & height = 0,
        const UINT & arraySize = 1,
        const DXGI_FORMAT & format = DXGI_FORMAT_R8G8B8A8_UNORM);

    void Bind(const UINT & rootParameterIndex, ID3D12GraphicsCommandList * commandList, UINT offset = 0);
    void BindComputeShader(const UINT & rootParameterIndex, ID3D12GraphicsCommandList * commandList, UINT offset = 0);
    void BindComputeShaderUAV(const UINT & rootParameterIndex, ID3D12GraphicsCommandList * commandList, UINT offset = 0);
    void Clear(ID3D12GraphicsCommandList * commandList);
    ID3D12Resource * GetResource() const;
    void Release();

private:
    UINT m_width = 0;
    UINT m_height = 0;
    UINT m_arraySize = 1;
    ID3D12Resource * m_resource[FRAME_BUFFER_COUNT] = { nullptr };
    ID3D12Resource * m_clearResource = nullptr;
    SIZE_T m_descriptorHeapOffset[FRAME_BUFFER_COUNT] = { 0 };

private:
    int _GetDXGIFormatBitsPerPixel(DXGI_FORMAT & dxgiFormat);
};
main = getLine >>= putStr . solve

solve [x, y, z, w]
  | x == y && y == z = "Yes"
  | y == z && z == w = "Yes"
  | otherwise        = "No"
import React from 'react';
import {ActivityIndicator} from 'react-native';
import {useTheme} from '@sbf-providers/theme';

// Large activity indicator tinted with the theme's primary color.
const Loading = () => {
  const theme = useTheme();
  return <ActivityIndicator size="large" color={theme.colors.primary} />;
};

export default Loading;
// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package buildutil // This logic was copied from stringsFlag from $GOROOT/src/cmd/go/build.go. import "fmt" const TagsFlagDoc = "a list of `build tags` to consider satisfied during the build. " + "For more information about build tags, see the description of " + "build constraints in the documentation for the go/build package" // TagsFlag is an implementation of the flag.Value and flag.Getter interfaces that parses // a flag value in the same manner as go build's -tags flag and // populates a []string slice. // // See $GOROOT/src/go/build/doc.go for description of build tags. // See $GOROOT/src/cmd/go/doc.go for description of 'go build -tags' flag. // // Example: // // flag.Var((*buildutil.TagsFlag)(&build.Default.BuildTags), "tags", buildutil.TagsFlagDoc) type TagsFlag []string func (v *TagsFlag) Set(s string) error { var err error *v, err = splitQuotedFields(s) if *v == nil { *v = []string{} } return err } func (v *TagsFlag) Get() interface{} { return *v } func splitQuotedFields(s string) ([]string, error) { // Split fields allowing '' or "" around elements. // Quotes further inside the string do not count. var f []string for len(s) > 0 { for len(s) > 0 && isSpaceByte(s[0]) { s = s[1:] } if len(s) == 0 { break } // Accepted quoted string. No unescaping inside. if s[0] == '"' || s[0] == '\'' { quote := s[0] s = s[1:] i := 0 for i < len(s) && s[i] != quote { i++ } if i >= len(s) { return nil, fmt.Errorf("unterminated %c string", quote) } f = append(f, s[:i]) s = s[i+1:] continue } i := 0 for i < len(s) && !isSpaceByte(s[i]) { i++ } f = append(f, s[:i]) s = s[i:] } return f, nil } func (v *TagsFlag) String() string { return "<tagsFlag>" } func isSpaceByte(c byte) bool { return c == ' ' || c == '\t' || c == '\n' || c == '\r' }
// File 4 : Autoboxing and Unboxing
class AutoBoxingUnboxing {

    public static void main(String[] args) {

        /*
        // Autoboxing
        Integer i = 454;        // Integer i = new Integer(454);
        System.out.println("i -> " + i);

        Double d = 454.545;     // Double d = new Double(454.545);
        System.out.println("d -> " + d);
        */

        /*
        // Unboxing
        Integer i = 454;        // Autoboxing
        int x = i;              // Unboxing
        System.out.println("x -> " + x);

        Double d = 4544.5485;   // Autoboxing
        double y = d;           // Unboxing
        System.out.println("d -> " + d);
        */
    }
}
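Both demo blocks in the class above are commented out, so it compiles but prints nothing when run. A minimal, self-contained sketch of the same autoboxing and unboxing behaviour follows; the class name and values are illustrative and not part of the original file.

// Illustrative companion to the commented-out demos above.
class AutoBoxingUnboxingDemo {
    public static void main(String[] args) {
        Integer boxed = 454;        // autoboxing: int literal wrapped into an Integer
        int unboxed = boxed;        // unboxing: Integer converted back to int
        System.out.println(boxed + " / " + unboxed);

        Double boxedD = 454.545;    // autoboxing: double -> Double
        double unboxedD = boxedD;   // unboxing: Double -> double
        System.out.println(boxedD + " / " + unboxedD);

        // Pitfall: == on wrapper types compares references. Values in the
        // cached range -128..127 are shared; larger ones usually are not.
        Integer a = 127, b = 127;
        Integer c = 128, d = 128;
        System.out.println(a == b);      // true (cached)
        System.out.println(c == d);      // false with default JVM settings
        System.out.println(c.equals(d)); // true: compare by value instead
    }
}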
/**
 * Resume the actual playback.
 *
 * @return true if the command was executed, false if the command failed
 */
public boolean resume() {
    if (this.suppressed) {
        this.postSuppressionAction = PostSuppressionAction.NONE;
        return true;
    }
    return this.spotifyAPICalls.resume();
}
WASHINGTON — The security firm formerly known as Blackwater has agreed to pay a fine of $7.5 million to avoid US prosecution for smuggling arms, the Justice Department said in a statement Tuesday.

The company, now known as Academi, will pay the fine in addition to a previously agreed $42 million settlement with the State Department over violations of the Arms Export Control Act, the Justice Department said.

Under the agreement, the company previously known as Blackwater Worldwide and as Xe Services "admits certain facts" following a five-year, multi-agency federal investigation, said Thomas Walker, a prosecutor in North Carolina.

The probe "covered an array of criminal allegations," some "involving the manufacture and shipment of short-barreled rifles, fully automatic weapons, armored helicopters, armored personnel carriers," said the statement.

The company had also faced allegations under the Foreign Corrupt Practices Act regarding its conduct in Iraq and Sudan in relation to unlicensed training of foreign nationals and firearms violations.

Blackwater became notorious following a September 16, 2007 incident in which five of its guards protecting a US diplomatic convoy opened fire in Baghdad's busy Nisur Square, killing at least 14 Iraqi civilians.

The company was then the largest private security firm employed by the Americans in Iraq, but it pulled out of the country in May 2009 after the State Department refused to renew its contracts.

The Nisur Square incident became a running sore among the Iraqi population, but the company always maintained that its guards opened fire in self-defense.

Blackwater Worldwide first changed its name, to Xe Services, in February 2009, following what it said was a change of business focus. Critics suggested that the rebranding was an effort to polish an image tarnished by an alleged culture of lawlessness and lack of accountability among Blackwater staff.

The company then changed its name again, from Xe Services to Academi, in December 2011.
/* * Post-process DBCS state in the buffer. * This has two purposes: * * - Required post-processing validation, per the data stream spec, which can * cause the write operation to be rejected. * - Setting up the value of the all the db fields in ea_buf. * * This function is called at the end of every 3270 write operation, and also * after each batch of NVT write operations. It could also be called after * significant keyboard operations, but that might be too expensive. * * Returns 0 for success, -1 for failure. */ int ctlr_dbcs_postprocess(void) { int baddr; int faddr0; int faddr; int last_baddr; int pbaddr = -1; int dbaddr = -1; Boolean so = False, si = False; Boolean dbcs_field = False; int rc = 0; if (!dbcs) return 0; faddr0 = find_field_attribute(0); baddr = faddr0; INC_BA(baddr); if (faddr0 < 0) last_baddr = 0; else last_baddr = faddr0; faddr = faddr0; dbcs_field = (ea_buf[faddr].cs & CS_MASK) == CS_DBCS; do { if (ea_buf[baddr].fa) { faddr = baddr; ea_buf[faddr].db = DBCS_NONE; dbcs_field = (ea_buf[faddr].cs & CS_MASK) == CS_DBCS; if (dbcs_field) { dbaddr = baddr; INC_BA(dbaddr); } else { dbaddr = -1; } if (pbaddr >= 0 && ea_buf[pbaddr].db == DBCS_SI) ea_buf[pbaddr].db = DBCS_NONE; } else { switch (ea_buf[baddr].cc) { case EBC_so: if (so || dbcs_field) { trace_ds("DBCS postprocess: invalid SO " "found at %s\n", rcba(baddr)); rc = -1; } else { dbaddr = baddr; INC_BA(dbaddr); } ea_buf[baddr].db = DBCS_NONE; so = True; si = False; break; case EBC_si: if (si || dbcs_field) { trace_ds("Postprocess: Invalid SO found " "at %s\n", rcba(baddr)); rc = -1; ea_buf[baddr].db = DBCS_NONE; } else { ea_buf[baddr].db = DBCS_SI; } dbaddr = -1; si = True; so = False; break; default: if (so && ea_buf[baddr].cs != CS_BASE) { trace_ds("DBCS postprocess: invalid " "character set found at %s\n", rcba(baddr)); rc = -1; ea_buf[baddr].cs = CS_BASE; } if ((ea_buf[baddr].cs & CS_MASK) == CS_DBCS) { if (dbaddr < 0) { dbaddr = baddr; } } else if (!so && !dbcs_field) { dbaddr = -1; } if (dbaddr >= 0) { if ((baddr + ROWS*COLS - dbaddr) % 2) { if (!valid_dbcs_char( ea_buf[pbaddr].cc, ea_buf[baddr].cc)) { ea_buf[pbaddr].cc = EBC_space; ea_buf[baddr].cc = EBC_space; } MAKE_RIGHT(baddr); } else { MAKE_LEFT(baddr); } } else ea_buf[baddr].db = DBCS_NONE; break; } } if (pbaddr >= 0 && IS_LEFT(ea_buf[pbaddr].db) && !IS_RIGHT(ea_buf[baddr].db) && ea_buf[pbaddr].db != DBCS_DEAD) { if (!ea_buf[baddr].fa) { trace_ds("DBCS postprocess: dead position " "at %s\n", rcba(pbaddr)); rc = -1; } ea_buf[pbaddr].cc = EBC_null; ea_buf[pbaddr].db = DBCS_DEAD; } if (pbaddr >= 0 && ea_buf[pbaddr].db == DBCS_SI) ea_buf[baddr].db = DBCS_SB; pbaddr = baddr; INC_BA(baddr); } while (baddr != last_baddr); return rc; }
package com.damon.cqrs.sample.red_packet; import com.alibaba.fastjson.JSONObject; import com.damon.cqrs.event.EventCommittingService; import com.damon.cqrs.sample.red_packet.command.RedPacketCreateCommand; import com.damon.cqrs.sample.red_packet.command.RedPacketGetCommand; import com.damon.cqrs.sample.red_packet.command.RedPacketGrabCommand; import com.damon.cqrs.sample.red_packet.command.RedPacketTypeEnum; import com.damon.cqrs.sample.red_packet.domain_service.CqrsConfig; import com.damon.cqrs.sample.red_packet.domain_service.RedPacketDomainServcie; import com.damon.cqrs.sample.red_packet.dto.WeixinRedPacketDTO; import com.damon.cqrs.utils.IdWorker; import org.apache.rocketmq.client.exception.MQClientException; import java.util.concurrent.CountDownLatch; public class RedPacketServiceBootstrap { public static void main(String[] args) throws InterruptedException, MQClientException { EventCommittingService committingService = CqrsConfig.init(); RedPacketDomainServcie redPacketServcie = new RedPacketDomainServcie(committingService); Long id = IdWorker.getId(); RedPacketCreateCommand create = new RedPacketCreateCommand(IdWorker.getId(), id); create.setMoney(1000000L); create.setNumber(1000000); create.setSponsorId(1L); create.setType(RedPacketTypeEnum.AVG); redPacketServcie.createRedPackage(create); Long startDate = System.currentTimeMillis(); CountDownLatch latch = new CountDownLatch(1000000); for (int i = 0; i < 500; i++) { new Thread(() -> { for (int number = 0; number < 2000; number++) { RedPacketGrabCommand grabCommand = new RedPacketGrabCommand(IdWorker.getId(), id); grabCommand.setUserId(IdWorker.getId()); redPacketServcie.grabRedPackage(grabCommand); latch.countDown(); } }).start(); } latch.await(); Long endDate = System.currentTimeMillis(); System.out.println(endDate - startDate); RedPacketGetCommand getCommand = new RedPacketGetCommand(IdWorker.getId(), id); WeixinRedPacketDTO packet = redPacketServcie.get(getCommand); System.out.println(endDate - startDate); System.out.println(JSONObject.toJSONString(packet)); } }
/**
 * Determine if this event is one that starts a feed flow.
 * @param event the event
 * @return true if the event type is a starting type and the flow file is not linked to another flow file
 */
public static boolean isStartingFeedFlow(ProvenanceEventRecord event) {
    if (StringUtils.isBlank(event.getSourceSystemFlowFileIdentifier())) {
        return contains(STARTING_EVENT_TYPES, event.getEventType());
    } else {
        // Compare the identifiers by value; the original used !=, which only
        // checks reference equality on Strings.
        return contains(STARTING_EVENT_TYPES, event.getEventType())
                && !event.getFlowFileUuid().equals(event.getSourceSystemFlowFileIdentifier());
    }
}
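The switch from != to equals matters because Java's != on String operands compares object references, not contents. A standalone illustration follows; the class name and values are hypothetical.

// Illustrative only; not part of the original codebase.
public class StringEqualityDemo {
    public static void main(String[] args) {
        String flowFileUuid = "uuid-123";
        String sourceId = new String("uuid-123"); // same contents, distinct object

        System.out.println(flowFileUuid != sourceId);        // true: different references
        System.out.println(!flowFileUuid.equals(sourceId));  // false: identical contents
        // java.util.Objects.equals(a, b) is a null-safe alternative.
    }
}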
// appendTags fills the colBufs for the tag columns with the tag value.
func (t *table) appendTags() {
	for j := range t.cols {
		v := t.tags[j]
		if v != nil {
			if t.colBufs[j] == nil {
				t.colBufs[j] = make([]string, t.l)
			}
			colBuf := t.colBufs[j].([]string)
			if cap(colBuf) < t.l {
				colBuf = make([]string, t.l)
			} else {
				colBuf = colBuf[:t.l]
			}
			vStr := string(v)
			for i := range colBuf {
				colBuf[i] = vStr
			}
			t.colBufs[j] = colBuf
		}
	}
}
def isRecordInPDS(self, record, subject):
    er_rh = ServiceClient.get_rh_for(
        record_type=ServiceClient.EXTERNAL_RECORD)
    pds_records = er_rh.get(
        external_system_url=self.data_source.url,
        path=self.path,
        subject_id=subject.id)
    if record and pds_records:
        if record in pds_records:
            return True
    return False
Roger Federer has dropped to number five in the men's world rankings, his lowest position in a decade.

The 31-year-old Swiss fell two places after being knocked out of Wimbledon in the second round by Sergiy Stakhovsky.

Briton Andy Murray, 26, won Wimbledon but stays second, behind number one Novak Djokovic, who was runner-up.

Federer facts
- Federer's second-round defeat by Sergiy Stakhovsky was his earliest at Wimbledon since losing in the first round in 2002
- It was also his earliest defeat at a Grand Slam event since losing in the first round of the French Open in 2003
- The result ended his run of 36 consecutive Grand Slam quarter-final appearances
- It was his first defeat by a player ranked outside the top 100 since losing to number 101 Richard Gasquet at Monte Carlo in 2005
- And it was the earliest defeat for a defending Wimbledon men's singles champion since Lleyton Hewitt lost in the first round in 2003

Marion Bartoli, 28, claimed the women's title at the All England Club to move into seventh in the women's rankings, with Britain's Laura Robson up to 27th.

Serena Williams remains world number one despite being beaten by losing finalist Sabine Lisicki in the fourth round at Wimbledon.

British number one Robson also reached the fourth round, which helped move her from 38th to a career-high ranking. The 19-year-old becomes the first British woman inside the top 30 since Jo Durie in 1987.

British number two Heather Watson, 21, lost in the first round at Wimbledon and dropped 12 places to 68th.

Federer's new ranking is his lowest since he was fifth on 23 June 2003, two weeks before he won Wimbledon for the first of his record 17 Grand Slam victories.

Rafael Nadal, who won the French Open, was a shock first-round casualty at Wimbledon and is fourth in the men's rankings behind fellow Spaniard David Ferrer, who reached the quarter-finals.
package arpcnet import ( "code.cloudfoundry.org/bytefmt" "github.com/rektorphi/arpcnet/rpc" ) // A Node is the runtime data structure for a node in the Arpc network. type Node struct { group *rpc.Address core *rpc.Core gin *GRPCServer closeables []func() } // NewNode creates a Node from the parameters in a Config struct. // Returns an error if any configuration values are invalid or incompatible. Note that the Node is only fully operational when the Run()-function is called. func NewNode(config *Config) (n *Node, err error) { group, err := rpc.ParseAddress(config.Group) if err != nil { return } coremem := -1 if len(config.CoreMemory) > 0 { bs, err := bytefmt.ToBytes(config.CoreMemory) if err != nil { return nil, err } coremem = int(bs) } n = &Node{group, rpc.NewCore(group, coremem), nil, []func(){}} err = config.apply(n) if err != nil { return } n.gin = NewGRPCServer(n.core, config.GRPCPort) return } // Group returns the identifier of the group this node belongs to. func (n *Node) Group() *rpc.Address { return n.group } // ID returns the identifier of this node. It has the group as a prefix. func (n *Node) ID() *rpc.Address { return n.core.ID() } // Run blocks the goroutine and takes the node fully operational. func (n *Node) Run() { n.gin.Serve() } // AddCloseable adds a function call to this node that is called when the node is stopped. // It can be used to link the lifetime of any module that depends on this node with its lifetime. func (n *Node) AddCloseable(c func()) { n.closeables = append(n.closeables, c) } // Stop ends the operation of this node and terminates any associated modules. func (n *Node) Stop() { n.gin.Stop() for _, c := range n.closeables { c() } }