Using modeling to understand how athletes in different disciplines solve the same problem: swimming versus running versus speed skating.

Every new competitive season offers excellent examples of human locomotor abilities, regardless of the sport. As a natural consequence of competition, world records are broken every now and then. World record races not only offer spectators the pleasure of watching very talented and highly trained athletes performing muscular tasks with remarkable skill, but also represent natural models of the ultimate expression of human integrated muscle biology, through strength, speed, or endurance performance. Given that humans may be approaching our species' limit for muscular power output, interest in how athletes improve on world records has increasingly focused on the strategy by which limited energetic resources are best expended over a race. World record performances may also shed light on how athletes in different events solve exactly the same problem: minimizing the time required to reach the finish line. We have previously applied mathematical modeling to the understanding of world record performances in terms of improvements in facilities and equipment and improvements in the athletes' physical capacities. In this commentary, we attempt to demonstrate that differences in world record performances across various sports can be explained using a very simple modeling process.
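The commentary's model itself is not reproduced in this excerpt. As a flavor of how simple such a model can be, the sketch below implements a generic two-parameter energy-balance model (a sustainable aerobic power plus a finite anaerobic reserve). All parameter values are illustrative assumptions for an elite 70 kg runner, not figures from the commentary.

# Hypothetical sketch: solve d * c = CP * t + W' for the race time t,
# i.e. the energetic cost of the race (distance d times cost per meter c)
# is covered by sustainable aerobic power CP plus a finite anaerobic
# reserve W'. Parameter values are rough illustrative numbers only.

def race_time_s(distance_m, cp_w=1800.0, w_prime_j=50_000.0, cost_j_per_m=270.0):
    energy_needed = distance_m * cost_j_per_m  # total metabolic cost of the race
    return (energy_needed - w_prime_j) / cp_w  # time the aerobic system needs

for d in (1500, 5000, 10000):
    print(f"{d:>6} m -> {race_time_s(d) / 60:5.1f} min")

Even this two-parameter form lands in the neighborhood of real record times for running, and the same functional form could in principle be refit for swimming or speed skating by changing the cost term, which is the sense in which athletes in different events are solving the same problem.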
It was meant to be a relaxing break in the Bavarian Alps during a summer break from college. I was staying with a friend's aunt, who ran a hotel.

One Monday morning, I set out for a hike in the sunshine. But after an hour or so fog started to roll in and, as it thickened, I became lost in a web of trails. Each time I followed a track it led nowhere and the situation swiftly took on the sensation of a bad dream. I climbed on to a ledge overlooking a deep valley to peek over and orient myself, but could see nothing. I tried to scramble back up to the trail but slipped and fell 15ft back down again. I crashed against rock, barely stopping myself from rolling straight off the ledge into the valley below. I'd broken four ribs, fractured my ankle and dislocated my shoulder – the pain made this instantly obvious, and my arm hung limply at my side.

I had no phone, water or food – I had planned to be gone for only a few hours. I sat in the rain all night, calling for help, my neoprene top offering little protection against the cold. The ledge sloped up too steeply for me to climb out. But by dawn I had spotted a nearby waterfall, and above me a large cave. Leaning on a stick, I made my way carefully up the steep slope to the cave. Outside the entrance I could see a cable, typical of the looped systems used by logging companies in the area to haul wood up the mountain. I assumed it must be out of action, which is why half of it lay slack on the ground. The taut half of the pulley, heading back down the valley, was 30 feet above my head.

Inside the cave I found an empty plastic bottle which I used to collect water dripping from the roof. I spent the rest of the day sitting in front of the cave, hoping my yellow shirt would be bright enough to attract the attention of any potential rescuers – I knew the alarm would have been raised when I'd failed to return to the hotel by nightfall. Sure enough, I later heard helicopters whirring nearby. I tried to make myself as visible as possible, standing up and waving my good arm, but the sound receded into the distance and as darkness fell I went back to the cave.

During the second day, I was sitting outside again, waiting for the helicopters to return, when the cable lying along the ground suddenly jerked. Realising this might be my only opportunity to send a signal down into the valley, I did a quick stocktake. All I had was the clothes I stood up in – and the item that seemed most expendable and likely to have the most impact was my bra. I quickly undid it and tied it to the cable. Seconds later, the whole length rose into the air, way out of my reach, and my bra was swiftly carried away and out of sight.

That night I heard helicopters again, but rushed out of the cave too late to attract their attention – I later heard they'd been using infrared devices to try to detect body heat. By this time, I'd eaten nothing for three days and the cave had produced only about a cup and a half of water. I carefully made my way down to the waterfall, drank my fill and washed the blood and dirt out of my clothes. That's when I heard the helicopters again. Gripping the water bottle between my teeth, I used my stick to scramble back up to the plateau. It was about an hour later that I was spotted.

My body had partly shut down, I think, blocking out most of the pain I should have been feeling. In hospital, though, I felt everything. I struggled to breathe, due to a partly deflated lung, and walked with a stoop for weeks.
I had also become host to 40 hungry ticks, each of which had to be carefully removed. It was a fortnight before I was well enough to leave the country and, four years on, I'm still having operations to fix my ankle.

I learned that the rescue operation – 80 people on foot, supported by five helicopters – would soon have been called off. They'd been searching in the wrong area until the worker testing the pulley had discovered my bra and raised the alarm. After that, it was simply a case of following the cable up the mountain. I try not to dwell too much on the overwhelming coincidence that led to him testing the line that day and spotting my bra.

As told to Chris Broughton
module Main where

import ReadEvalPrint
import Dispatch
import TopMonad
import ConcreteSyntax

import System.Environment
import System.Exit
import Control.Exception hiding (TypeError)
import System.FilePath

-- | Entry point: locate the DPQ installation via the DPQ environment
-- variable, then start the read-eval-print loop.
main :: IO ()
main = do
  p <- getEnv "DPQ" `catches` handlers
  runTop p $ read_eval_print 1
  where
    mesg = "please set the environment variable DPQ to the DPQ installation directory.\n"
    handlers = [Handler handle1]
    handle1 :: IOException -> IO String
    handle1 ex = do
      putStrLn $ mesg ++ show ex
      exitWith (ExitFailure 1)

-- Report a top-level error without aborting the session.
error_handler e = do
  top_display_error e
  return ()

-- Load the standard prelude from the installation directory.
loadPrelude = do
  p <- getPath
  dispatch (Load True $ p </> "lib/Prelude.dpq")
  return ()
// Copyright 2016 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#![crate_name="rustcxx_codegen"]
#![feature(rustc_private)]

extern crate syntax;
extern crate rustc;
extern crate rustc_driver;
extern crate rustcxx_common;

#[cfg(feature="gcc")]
extern crate gcc;

use rustcxx_common::{Cxx, Rust, Function, parse_rust_macro, tokens_to_cpp};
use syntax::parse;
use syntax::ast;
use syntax::codemap::respan;
use syntax::ext::base::{DummyResolver, ExtCtxt};
use syntax::ext::expand;
use syntax::visit;
use std::fs::File;
use std::io::{self, Write};
use std::sync::{Arc, Mutex};

pub struct Codegen {
    code: String,
    deps: Vec<String>,
}

impl Codegen {
    /// Parse a rust file, and generate the corresponding C++ code
    pub fn generate(src: &std::path::Path) -> Codegen {
        let result: Arc<Mutex<Option<Codegen>>> = Arc::new(Mutex::new(None));
        let result_ = result.clone();
        let src = src.to_owned();

        rustc_driver::monitor(move || {
            let sess = parse::ParseSess::new();
            let krate = match parse::parse_crate_from_file(&src, Vec::new(), &sess) {
                Ok(krate) => krate,
                Err(mut err) => {
                    err.emit();
                    sess.span_diagnostic.abort_if_errors();
                    unreachable!();
                }
            };

            let cfg = expand::ExpansionConfig::default("foo".to_string());
            let mut resolver = DummyResolver;
            let ecx = ExtCtxt::new(&sess, Vec::new(), cfg, &mut resolver);

            let mut visitor = CodegenVisitor::new(&ecx);
            visit::walk_crate(&mut visitor, &krate);

            let code = visitor.code();
            let deps = sess.codemap().files.borrow().iter()
                .filter(|fmap| fmap.is_real_file())
                .filter(|fmap| !fmap.is_imported())
                .map(|fmap| fmap.name.clone())
                .collect();

            *result.lock().expect("lock poisoned") = Some(Codegen {
                code: code,
                deps: deps,
            });

            sess.span_diagnostic.abort_if_errors();
        });

        let value = result_.lock().expect("lock poisoned")
            .take().expect("missing result");
        value
    }

    pub fn write_code<W: Write>(&self, writer: &mut W) -> io::Result<()> {
        writer.write_all(self.code.as_bytes())
    }

    pub fn write_depfile<W: Write>(&self, writer: &mut W, out_path: &str) -> io::Result<()> {
        try!(writeln!(writer, "{}: {}", out_path, self.deps.join(" ")));

        // Include an empty rule for every source file.
        // Keeps make happy even if a file is deleted
        for dep in self.deps.iter() {
            try!(writeln!(writer, "{}:", dep));
        }

        Ok(())
    }

    pub fn write_code_to_path<P>(&self, path: P) -> io::Result<()>
        where P: AsRef<std::path::Path> {
        let mut file = try!(File::create(path));
        try!(self.write_code(&mut file));
        Ok(())
    }

    pub fn write_depfile_to_path<P>(&self, path: P, out_path: &str) -> io::Result<()>
        where P: AsRef<std::path::Path> {
        let mut file = try!(File::create(path));
        try!(self.write_depfile(&mut file, out_path));
        Ok(())
    }
}

/// Visitor which walks a crate looking for cxx! and cxx_inline! macros, and generates
/// the corresponding code.
struct CodegenVisitor<'a, 'b: 'a> {
    code: Vec<String>,
    ecx: &'a ExtCtxt<'b>,
}

impl<'a, 'b> CodegenVisitor<'a, 'b> {
    fn new(ecx: &'a ExtCtxt<'b>) -> CodegenVisitor<'a, 'b> {
        CodegenVisitor {
            code: Vec::new(),
            ecx: ecx,
        }
    }

    fn code(&self) -> String {
        self.code.join("\n")
    }
}

impl<'a, 'b> visit::Visitor for CodegenVisitor<'a, 'b> {
    fn visit_mac(&mut self, mac: &ast::Mac) {
        match mac.node.path.to_string().as_ref() {
            "cxx" => {
                let func = Function::<Cxx>::parse(
                    self.ecx, mac.span, &mac.node.tts
                ).and_then(|func| func.cxx_code(self.ecx));

                match func {
                    Ok(func) => self.code.push(func),
                    Err(mut err) => err.emit(),
                }
            }
            "cxx_inline" => {
                let tokens = parse_rust_macro(&mac.node.tts, &mut |span, tts| {
                    let func = Function::<Rust>::parse(self.ecx, span, tts).and_then(|func| {
                        let decl = try!(func.cxx_decl(self.ecx));
                        let call = try!(func.cxx_call(self.ecx));
                        Ok((decl, call))
                    });

                    match func {
                        Ok((decl, call)) => {
                            self.code.push(decl);
                            vec![respan(span, call)]
                        }
                        Err(mut err) => {
                            err.emit();
                            vec![]
                        }
                    }
                });

                let content = tokens_to_cpp(self.ecx, &tokens);
                self.code.push(content);
            }
            _ => {}
        }
    }
}

#[cfg(feature="gcc")]
pub fn build<P>(path: P) where P: AsRef<std::path::Path> {
    build_with_config(path, |_| {})
}

#[cfg(feature="gcc")]
pub fn build_with_config<P, F>(path: P, f: F)
    where P: AsRef<std::path::Path>, F: FnOnce(&mut gcc::Config)
{
    let out = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap())
        .join("rustcxx_generated.cpp");

    Codegen::generate(path.as_ref())
        .write_code_to_path(&out)
        .expect("Could not write generated source");

    let mut config = gcc::Config::new();
    config.cpp(true).file(&out);
    f(&mut config);
    config.compile("librustcxx_generated.a");
}
def with_additional_config(self, environment_dict):
    check.opt_nullable_dict_param(environment_dict, 'environment_dict')
    if environment_dict is None:
        return self
    else:
        return PresetDefinition(
            name=self.name,
            solid_subset=self.solid_subset,
            mode=self.mode,
            environment_dict=merge_dicts(self.environment_dict, environment_dict),
        )
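A small usage sketch of the method above. The preset name and config keys are hypothetical, and it assumes the other PresetDefinition constructor arguments are optional and that merge_dicts gives the added keys precedence over existing ones:

# Hypothetical illustration: extend a preset's config without mutating it.
base = PresetDefinition(
    name='dev',
    environment_dict={'resources': {'db': {'config': {'hostname': 'localhost'}}}},
)

# Returns a new PresetDefinition; `base` itself is left untouched.
extended = base.with_additional_config(
    {'loggers': {'console': {'config': {'log_level': 'DEBUG'}}}}
)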
/* Creates a new log for new friends. */
public void createLog(String user1, String user2) {
    try {
        if (!containsLog(user1, user2)) {
            // Use a parameterized statement so user names cannot inject SQL
            // (the original concatenated the names directly into the query).
            String insert = "insert into chatlog (user1, user2, log) values (?, ?, '')";
            try (PreparedStatement ps = statement.getConnection().prepareStatement(insert)) {
                ps.setString(1, user1);
                ps.setString(2, user2);
                ps.executeUpdate();
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
/**
 * Adds new listener to the listener list. Listener is notified of
 * any change in ordering of nodes.
 *
 * @param chl new listener
 */
public void addChangeListener(final ChangeListener chl) {
    if (listeners == null) {
        listeners = new HashSet<ChangeListener>();
    }
    listeners.add(chl);
}
// init runs during package initialization. This will only run during an
// instance's cold start.
func init() {
	ctx := context.Background()
	createTraceExporter()
	vault.GetSecrets(ctx, &env, envMap)
}
#ifndef SUNDOWN_SYS_SELECTOR_H
#define SUNDOWN_SYS_SELECTOR_H

#include <array>
#include <memory>
#include <vector>

#include <sys/epoll.h>

// FileDescriptor, Optional, and Range are project types; their headers are
// assumed to be provided elsewhere in the build (they are not shown here).

namespace Sundown {

class SelectorItem : public std::enable_shared_from_this<SelectorItem> {
public:
    typedef std::shared_ptr<SelectorItem> SP;

    virtual ~SelectorItem();

    virtual const FileDescriptor &fd() const = 0;
};

class SelectorEvent {
public:
    SelectorItem::SP item() const;

    bool readable() const;
    bool writable() const;
    bool urgent() const;
    bool hangup() const;
    bool error() const;

private:
    struct epoll_event event_;
};

static_assert(sizeof(SelectorEvent) == sizeof(struct epoll_event),
              "Check SelectorEvent layout");

class Selector {
public:
    typedef std::shared_ptr<Selector> SP;
    typedef std::weak_ptr<Selector> WP;

    enum Events {
        Readable = (1 << 0),
        Writable = (1 << 1),
    };

    static Optional<Selector> create();

    Selector(const Selector &) = delete;
    Selector(Selector &&) = default;
    Selector &operator=(const Selector &) = delete;
    Selector &operator=(Selector &&) = default;

    void add(const SelectorItem::SP &item, Events events);
    void modify(const SelectorItem::SP &item, Events events);
    void remove(const SelectorItem::SP &item);

    Range<const SelectorEvent *> select();

protected:
    Selector(FileDescriptor epollFd);

private:
    FileDescriptor epollFd_;
    std::vector<SelectorItem::SP> items_;
    std::array<SelectorEvent, 64> events_;
};

} // namespace Sundown

#endif // SUNDOWN_SYS_SELECTOR_H
// GetWorkflowRunLogs gets a redirect URL to download a plain text file of logs for a workflow run.
//
// GitHub API docs: https://developer.github.com/v3/actions/workflow-runs/#download-workflow-run-logs
func (s *ActionsService) GetWorkflowRunLogs(ctx context.Context, owner, repo string, runID int64, followRedirects bool) (*url.URL, *Response, error) {
	u := fmt.Sprintf("repos/%v/%v/actions/runs/%v/logs", owner, repo, runID)

	resp, err := s.getWorkflowLogsFromURL(ctx, u, followRedirects)
	if err != nil {
		return nil, nil, err
	}

	if resp.StatusCode != http.StatusFound {
		return nil, newResponse(resp), fmt.Errorf("unexpected status code: %s", resp.Status)
	}

	parsedURL, err := url.Parse(resp.Header.Get("Location"))
	return parsedURL, newResponse(resp), err
}
/**
 * Like the static method {@link #gallopLeft}, except that if the range contains an element
 * equal to the specified {@link Comparable} key, the static method {@link #gallopRight} returns
 * the index after the rightmost equal element.
 * <p>
 * @param key    the {@link Comparable} key whose insertion point to search for
 * @param array  the array in which to search
 * @param base   the index of the first element in the range
 * @param length the length of the range (must be greater than 0)
 * @param hint   the index at which to begin the search, {@code 0 <= hint < n} (the closer hint
 *               is to the result, the faster this method will run)
 * <p>
 * @return the integer {@code k}, {@code 0 <= k <= n} such that
 *         {@code a[b + k - 1] <= key < a[b + k]}
 * <p>
 * @throws ClassCastException if any {@code array} elements cannot be mutually compared
 */
protected static int gallopRight(final Comparable<Object> key, final Object[] array,
        final int base, final int length, final int hint) {
    assert length > 0 && hint >= 0 && hint < length;
    int ofs = 1, lastOfs = 0;
    if (Comparables.compare(key, array[base + hint]) < 0) {
        /*
         * Gallop left until {@code a[b + hint - ofs] <= key < a[b + hint - lastOfs]}.
         */
        final int maxOfs = hint + 1;
        while (ofs < maxOfs && Comparables.compare(key, array[base + hint - ofs]) < 0) {
            lastOfs = ofs;
            ofs = (ofs << 1) + 1;
            if (ofs <= 0) {
                ofs = maxOfs;
            }
        }
        if (ofs > maxOfs) {
            ofs = maxOfs;
        }
        final int temp = lastOfs;
        lastOfs = hint - ofs;
        ofs = hint - temp;
    } else {
        /*
         * Gallop right until {@code a[b + hint + lastOfs] <= key < a[b + hint + ofs]}.
         */
        final int maxOfs = length - hint;
        while (ofs < maxOfs && Comparables.compare(key, array[base + hint + ofs]) >= 0) {
            lastOfs = ofs;
            ofs = (ofs << 1) + 1;
            if (ofs <= 0) {
                ofs = maxOfs;
            }
        }
        if (ofs > maxOfs) {
            ofs = maxOfs;
        }
        lastOfs += hint;
        ofs += hint;
    }
    assert -1 <= lastOfs && lastOfs < ofs && ofs <= length;

    /*
     * Now {@code a[b + lastOfs] <= key < a[b + ofs]}, so key belongs somewhere to the right of
     * {@code lastOfs} but no farther right than {@code ofs}. Do a binary search, with invariant
     * {@code a[b + lastOfs - 1] <= key < a[b + ofs]}.
     */
    ++lastOfs;
    while (lastOfs < ofs) {
        final int m = lastOfs + ((ofs - lastOfs) >>> 1);
        if (Comparables.compare(key, array[base + m]) < 0) {
            ofs = m;
        } else {
            lastOfs = m + 1;
        }
    }
    assert lastOfs == ofs;
    return ofs;
}
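The galloping strategy above (exponential probing from a hint, then a binary search inside the last bracketing interval) is not specific to this Java implementation. A minimal Python sketch of the same idea; the function name and simplifications are illustrative, not from the source:

import bisect

def gallop_right(key, a, hint=0):
    """Return the insertion index after the rightmost element equal to key.

    Probes at exponentially growing offsets from `hint`, then finishes with
    a binary search inside the bracketing interval found.
    """
    n = len(a)
    assert 0 <= hint < n
    if a[hint] <= key:
        # Gallop right until a[hint + last_ofs] <= key < a[hint + ofs]
        last_ofs, ofs = 0, 1
        while hint + ofs < n and a[hint + ofs] <= key:
            last_ofs = ofs
            ofs = ofs * 2 + 1
        lo, hi = hint + last_ofs + 1, min(hint + ofs, n)
    else:
        # Gallop left symmetrically
        last_ofs, ofs = 0, 1
        while ofs <= hint and a[hint - ofs] > key:
            last_ofs = ofs
            ofs = ofs * 2 + 1
        lo, hi = hint - min(ofs, hint), hint - last_ofs
    return bisect.bisect_right(a, key, lo, hi)

assert gallop_right(5, [1, 3, 5, 5, 7, 9], hint=1) == 4

Because the probe offsets double, a key that lands k positions from the hint is bracketed in O(log k) comparisons, which is why TimSort uses this when merging runs whose elements cluster.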
Expanding the Clinico-Genetic Spectrum of Myofibrillar Myopathy: Experience From a Chinese Neuromuscular Center

Background: Myofibrillar myopathy is a group of hereditary neuromuscular disorders characterized by dissolution of myofibrils and abnormal intracellular accumulation of Z disc-related proteins. We aimed to characterize the clinical, physiological, pathohistological, and genetic features of Chinese myofibrillar myopathy patients from a single neuromuscular center.

Methods: A total of 18 patients were enrolled. Demographic and clinical data were collected. Laboratory investigations, electromyography, and cardiac evaluation were performed. Routine and immunohistochemical stainings of muscle specimens against desmin, αB-crystallin, and BAG3 were carried out. Finally, a next-generation sequencing panel for genes associated with hereditary neuromuscular disorders was performed.

Results: Twelve pathogenic variants in DES, BAG3, FLNC, FHL1, and TTN were identified, of which seven were novel mutations. The novel DES c.1256C>T substitution is a high-frequency mutation. The combined recessively/dominantly transmitted c.19993G>T and c.107545delG mutations in the TTN gene cause a limb girdle muscular dystrophy phenotype with the classical myofibrillar myopathy histological changes.

Conclusions: We report for the first time that a patient with hereditary myopathy with early respiratory failure can have peripheral nerve and severe spine involvement. The mutation in Ig-like domain 16 of FLNC is associated with the limb girdle type of filaminopathy, and the mutation in Ig-like domain 18 with the distal myopathy type. These findings expand the phenotypic and genotypic correlation spectrum of myofibrillar myopathy.

INTRODUCTION

Myofibrillar myopathy (MFM) is a group of hereditary neuromuscular disorders characterized by dissolution of myofibrils and abnormal intracellular accumulation of proteins that are the constitutive or functional components of the Z disc. The defining morphological features of MFM are streaming, thickening, or dissolution of the Z disc on electron microscopy. Other characteristic light microscopic changes are eosinophilic materials, rimmed vacuoles, amorphous deposits, and rubbed-out fibers. Despite the common histological features, significant variation exists within each subtype in terms of clinical manifestation and molecular basis. There is an ever-expanding panel of genes associated with myofibrillar myopathies, including DES, CRYAB, MYOT, LDB3, FLNC, BAG3, FHL1, TTN, PYROXD1, and KY, which encode proteins that are integral parts of, or functionally associated with, the Z disc (1). Meanwhile, there are case reports in which other mutated genes cause histological changes compatible with myofibrillar myopathy. These genes include ACTA1, HSPB8, PLEC, DNAJB6, and LMNA (2-6). The majority of MFM patients follow an autosomal dominant inheritance pattern; less frequently, the disease is transmitted in an autosomal recessive or X-linked dominant/recessive pattern (7)(8)(9). There are several case reports of Chinese MFM patients, and one study identified a founder mutation in the FLNC gene in 34 patients from the Hong Kong area (10)(11)(12)(13). In this study, we present the clinical, histological, immunohistochemical, and genetic analysis of 18 Chinese MFM patients diagnosed in our neuromuscular center. Seven novel mutations in DES, FLNC, FHL1, and TTN have been identified.
METHODS

Patients

Between 2012 and 2019, in the Department of Neurology, Xiangya Hospital, 17 patients were diagnosed with MFM based on myopathological findings including the following: eosinophilic bodies on hematoxylin and eosin (HE) staining; cytoplasmic bodies and rimmed vacuoles on modified Gomori trichrome staining; positive sarcoplasmic immunostaining for MFM-related proteins; and exclusion, on clinical features, of other diseases that can demonstrate eosinophilic bodies or rimmed vacuoles (e.g., inclusion body myositis). One patient was excluded because he declined genetic testing. Two additional patients were diagnosed with MFM based on genetic pedigree analysis (patient 2 was the younger brother of patient 1; patient 7 was patient 6's father) and clinical symptoms. As a result, 18 patients were recruited for this study. All recruited patients signed consent forms.

Clinical Data

Demographic and clinical data were collected. Laboratory investigations including blood routine, serum CK levels, electrocardiogram, echocardiography, and electromyography were performed.

Histology and Histochemistry

Muscle specimens of biceps brachii or quadriceps femoris from 16 patients were snap frozen in isopentane cooled in liquid nitrogen. Sections of 8 µm thickness were cut using a cryostat (Leica CM1900). Routine stainings were performed as follows: HE, modified Gomori trichrome, NADH, SDH, COX/SDH double stain, acid phosphatase, oil red, PAS, and ATPase (pH 4.2, 4.6, and 9.6). Histological changes such as necrotic and regenerating fibers, eosinophilic bodies, and amorphous deposits were counted in six random fields under 200× magnification.

Immunohistochemical Studies

Ten-micrometer-thick serial sections were cut for immunohistochemistry studies. Biopsies from subjects who were ultimately deemed to be free of muscle disease were used as normal controls. Sections were blocked with 0.3% hydrogen peroxide in methanol and 10% goat serum in PBS for 30 min each, then incubated in primary antibodies against desmin (Abcam, 1:400), αB-crystallin (Abcam, 1:500), and BCL2-associated athanogene 3 (BAG3, Abcam, 1:200) overnight at 4°C. After rinsing in PBS, sections were incubated in biotinylated secondary antibodies for 30 min. The Vectastain ABC kit (Vector Laboratories, CA) was used for immunodetection. After development with DAB, the tissues were counterstained with hematoxylin for 10 s, then dehydrated through graded ethanol, cleared in xylene, and finally mounted with resin. The numbers of fibers with focal areas of increased reactivity for desmin, αB-crystallin, and BAG3 were counted in six random fields under 200× magnification.

RESULTS

Overview

In the present study, 11 mutations were identified in DES, BAG3, FLNC, FHL1, and TTN (Table 1), of which 7 were not reported previously. The causative gene for patient 18 remained elusive despite extensive screening of the known genes for hereditary neuromuscular disorders. The clinical features are summarized in Table 2. There was a male predominance, with a male-to-female ratio of 1.6:1. The age of disease onset ranged from 1 to 48 years (mean ± SD = 25.0 ± 16.3 years), with duration from 1 to 27 years (10.6 ± 8.1 years). Except for one filaminopathy patient who presented with finger muscle atrophy, all cases demonstrated more severe involvement of the lower limbs. Half of the patients demonstrated a pattern of mixed proximal and distal weakness.

Of the 16 patients who underwent heart assessment, 13 exhibited cardiac involvement.
Both cardiac structural and electrophysiological abnormalities were found in 31.3% of cases; 37.5% had only structural changes, and 12.5% only arrhythmia. The types of arrhythmia included bundle branch block and atrial/ventricular premature beats. Structural heart abnormalities included ventricular thickening, atrial enlargement, and valve regurgitation. The atrial septal defect in patient 17 was considered incidental. NCS and EMG were performed in 17 patients. Nine patients (52.9%) demonstrated pure myogenic changes including small MUAPs, early recruitment, and polyphasia. Of these, one patient with an FLNC mutation and one with an FHL1 mutation also showed myotonic discharges. Four patients (23.5%) showed mixed myopathic and neuropathic features. Seven patients (41.2%) demonstrated peripheral nerve involvement, of which six were consistent with an axonal type. Motor nerves were preferentially involved in these patients (Table 3). Findings on muscle pathology are summarized in Table 4 and presented in the following sections.

Desminopathy

There were four pedigrees with DES mutations (Figures 1A-D), and the inheritance pattern was consistent with an autosomal dominant mode. The disease tended to present in adulthood (age of onset 35.1 ± 10.9 years). All eight patients demonstrated lower extremity weakness; four also had upper limb weakness. Three patients had heart pacemaker implantation. Patient 3 also had episodic palpitation and syncope. His maternal grandmother, mother, and aunt all died suddenly of presumed "heart problems." His maternal half-uncle had similar lower limb weakness in his thirties (II:2 of Figure 1B). Regardless of disease duration, none exhibited joint contractures. The seven patients who completed EMG studies all showed myogenic changes, and the one with mixed myogenic and neurogenic changes was shown to have axonal polyneuropathy with motor nerve involvement. Three cases showed significant structural changes of the heart (Table 2). Apart from the three patients with pacemaker implantation, another three showed arrhythmia, including atrial premature beats, right bundle branch block, and fascicular block. On muscle biopsy, fibers with eosinophilic bodies ranged from 0.1 to 6.6%, rimmed vacuoles from none to 3.5%, and rubbed-out fibers from 0.1 to 6.8%. Next-generation sequencing of patients 1 and 2 (brothers, Figure 1A) revealed two candidate mutations: c.772C>T in BAG3 and c.1256C>T in DES. The BAG3 variant was previously reported in an individual with long QT interval but no muscle symptoms (24). The DES c.1256C>T variant was also identified in patient 3 and his affected half-uncle (Figure 1B), as well as in patient 4. This variant was absent in the unaffected siblings and children of patients 1 and 2, as well as in the parents of patient 4. It causes replacement of a conserved proline by leucine. This substitution is listed as of uncertain significance in the ClinVar database and is predicted to be probably damaging by PolyPhen-2. Based on the homogeneous phenotype of these patients, we propose that the DES c.1256C>T substitution is more likely the causative mutation. It is worth mentioning that the BAG3 c.772C>T variant was also found as the only possible pathogenic variant in another patient from our department, who has proximal limb weakness, scoliosis, and scapular winging. His muscle morphology showed mild myopathic changes and lacked any characteristic MFM changes (data not shown). We could not definitively negate the pathogenicity of this variant.
BAG3opathy

The two BAG3opathy patients carried the same c.626C>T mutation, in accordance with most other BAG3opathy cases. Both presented in childhood and had severe lower limb weakness, especially of the distal muscles (MRC 2-3/5). Ten years into disease progression, patient 9 developed type 2 respiratory failure (pH 7.31, pO2 78 mmHg, pCO2 74 mmHg) induced by community-acquired pneumonia. Since then, she had been intermittently using a noninvasive ventilator (bilevel positive airway pressure mode) at night (5-7 h per night for 3-7 days per week), as suggested by her local pulmonologist. During hospitalization, she was also found to have obstructive hypertrophic cardiomyopathy. Both patients had axonal sensorimotor polyneuropathy confirmed by NCS and EMG (Table 3). Mildly to moderately increased positive sharp waves and large motor unit potentials (MUPs) were present in the upper and lower limb muscles of patient 9. Compound motor action potentials (CMAPs) of her tibial and common peroneal nerves were not elicitable, nor were sensory nerve action potentials (SNAPs) of the sural, median, and ulnar nerves. In patient 10, there were moderately to severely increased positive sharp waves and fibrillation potentials. She demonstrated large as well as small MUPs with minimal voluntary contraction, and recruitment was reduced. All tested sensory nerves were inexcitable, while CMAPs of motor nerves were of small amplitude. The nerve involvement was so extensive and severe that both patients were initially diagnosed with axonal CMT. Mild joint abnormalities, including contracted Achilles tendons and scoliosis, were noticed in both patients. Pathological changes in this group were similar to those with desminopathy, including eosinophilic bodies (0.5-2.4%), cytoplasmic bodies (1.6-6.7%), amorphous deposits (2.2-6.8%), and rubbed-out fibers (0.8-2.0%; Figure 2A). Another feature of the BAG3opathy patients was that these changes were conspicuous in focal areas, while in other fields the muscle could appear completely normal (Figures 2B-D).

Filaminopathy

The disease presented in the mid-thirties in both patients. Patient 11 first noticed atrophy of both hands with minimal difficulties in fine motor skills, and developed lower extremity weakness within 10 years. There was atrophy of his first dorsal interosseous muscles and tibialis anterior. He had mild contracture of the metacarpophalangeal and proximal interphalangeal joints and the elbows, as well as mild scoliosis. Patient 12 complained of progressive leg weakness. Both patients exhibited mixed myogenic and neurogenic changes on EMG, but NCS was unremarkable. The myopathological changes were minimal (Figure 2E). Immunohistochemical staining against the three Z band-associated proteins was unremarkable. An additional immunohistochemistry study using an antibody against filamin C demonstrated filamin C aggregation in myofibers in the two patients, but not in FHLopathy or desminopathy patients (data not shown). The intronic substitution c.6004+3G>A in patient 11 was not found in the general population according to the Human Gene Mutation Database, and the site is conserved among species (Figure 3). It is right next to the donor splice site and is likely to cause skipping of exon 36. The p.Thr1823Met missense mutation in patient 12 is present in 0.06% of the general population according to gnomAD, yet is listed as a variant of uncertain significance by ClinVar. It was predicted to be probably damaging by PolyPhen-2 (score 1.0), but tolerated by SIFT.
Blood samples of the parents were unfortunately unavailable, as both were deceased.

Titinopathy

Four titinopathy patients were included in the present study. The phenotypes of patients 14 and 15 were in line with hereditary myopathy with early respiratory failure (HMERF). Patient 14 presented with distal lower extremity weakness in his early forties. Four years after disease onset, he developed nocturnal dyspnea and soon required noninvasive ventilation. There was marked reduction of CMAP and SNAP amplitudes with normal conduction velocities. EMG demonstrated increased fibrillation potentials and large MUPs. The presenting symptoms of patient 15 were progressive scoliosis and mild walking difficulty at age 15. Subsequent spinal fusion surgery at age 16 did not ameliorate his leg weakness. He developed post-exercise dyspnea at age 19. Pulmonary function testing showed a severe restrictive ventilatory defect, and arterial blood gas analysis revealed type II respiratory failure. Noninvasive ventilation was recommended. On muscle biopsy, the characteristic fibers with necklace cytoplasmic bodies (Figures 2G,H) were found in both patients. Two missense mutations in exon 344 of TTN (c.95134T>C in patient 14, c.95185T>C in patient 15) were identified. Patients 16 and 17 were sisters presenting with similar lower limb weakness. They learnt to walk at one and a half years of age, and always ran more slowly than their peers. Despite the continuous progression of weakness, the two patients were still ambulatory at the time of biopsy. On physical examination, pronounced atrophy of the quadriceps femoris, hamstrings, and tibialis anterior was noted. Both had lordosis and talipes cavus. Their mother had similar yet much milder lower limb weakness presenting in her twenties; she was still capable of sedentary work and remained ambulatory in her fifties. The father did not complain of any muscle symptoms and showed no obvious muscle atrophy. In the third generation of this family, the second son of patient 17 (III:4), who was five years old, had frequent falls. The other children were asymptomatic. Muscle biopsies of biceps brachii from the two cases revealed pathological changes of different degrees. The main changes in patient 16 were increased central nuclei (10.5%) and selective type 1 fiber atrophy (Figures 2I,J). Overall, the titinopathy group had the highest numbers of central nuclei (52.0 ± 37.0%). The sisters harbored compound heterozygous mutations in TTN (Figure 1E). The allele with the p.Glu6665X nonsense mutation originated from their mother, whereas the other allele, with the p.A35849Qfs*16 frameshift mutation, came from the father.

Miscellaneous

Patient 13 reached her developmental milestones normally until early childhood. Her parents noticed her having frequent falls and demonstrating a waddling gait from age 6 years. Physical examination revealed marked weakness of the neck and lower limbs with preferential involvement of tibialis anterior. Biopsy of biceps (Figure 2F) showed centrally nucleated fibers (10%), eosinophilic bodies (2.4%), and cytoplasmic bodies (4.4%). Myofibers with desmin aggregates accounted for only 0.2% of total fibers. An unreported variant (c.386G>A) in the FHL1 gene was identified. This missense mutation causes substitution of a tyrosine for a cysteine at amino acid position 129, which was predicted to be probably damaging (score 0.999) according to PolyPhen-2.

Patient 18 was the only case with an unidentified mutation in this study. He presented with lower limb weakness in young adulthood.
He subsequently developed mild dysphagia and quadriceps atrophy. Nerve conduction studies revealed motor axonal neuropathy. His echocardiography at age 23 showed mild mitral and tricuspid regurgitation. Increased centrally nucleated fibers (36.7%), occasional eosinophilic bodies (1.8%), and fibers focally immunoreactive to desmin (1.5%), αB-crystallin (1.1%), and BAG3 (1.3%) were found on muscle biopsy.

DISCUSSION

Since the main pathological event in MFM is considered to be disintegration of the Z disc, we first present a brief summary of its physiological features, with emphasis on MFM-related proteins. The Z disc, whose core structure is formed by α-actinin homodimers, defines the boundaries of a sarcomere unit and provides an anchoring point for sarcomeres by crosslinking the neighboring actin thin filaments. The interaction between α-actinin and actin by itself does not suffice to maintain proper contractile function of sarcomeres; other integral Z disc proteins also play a part in the Z disc-thin filament connection. For example, ZASP and myotilin are associated with α-actinin and actin, respectively (25,26). The large protein titin binds to α-actinin and myosin at its respective termini, and thus serves as an elastic anchor for thick filaments to the Z disc. Another binding partner of titin is FHL1, which is associated with the I-band part of titin and acts as part of the mechanosensing machinery (27). Titin also interacts with filamin C to participate in stabilization of the Z disc (28). Filamin C in turn associates with actin, which strengthens the Z disc-thin filament connection. Desmin is one of the most important intermediate filament proteins in striated muscle. It links the Z disc to the costamere complex so as to stabilize the Z disc, and also links the nucleus to the cytoskeletal network (29). Not only do innate defects of the Z disc-associated proteins lead to MFM pathology; disturbance of the turnover homeostasis of these constitutive proteins shows similar pathogenicity. Under both physiological and stress conditions, the small heat shock protein αB-crystallin binds to titin to retain the appropriate conformation of the latter and prevent it from denaturation (30,31). The chaperone activity of αB-crystallin also enables it to assist desmin scaffold assembly (32). BAG3 is a co-chaperone molecule involved in the protein quality control system and is dedicated to the clearance of aberrant protein aggregates by means of chaperone-assisted selective autophagy (CASA) and macroautophagy (33)(34)(35). It has been shown that BAG3 interacts with αB-crystallin and prevents mutant αB-crystallin aggregation (36). Another co-chaperone, DNAJB6, interacts with the CASA complex that includes BAG3; the exact physiological significance of this requires further exploration (5). In this retrospective study, we have described the clinical, electrophysiological, pathological, and genetic characteristics of 18 MFM patients at our neuromuscular clinic. Symptoms of desminopathy and filaminopathy tend to present in adulthood. In comparison, BAG3opathy cases have childhood onset, while titinopathy patients demonstrate a wider range of onset age, from infancy to adulthood. Joint involvement is more prominent in BAG3opathy and titinopathy cases, yet is not a feature of desminopathy. Our desminopathy patients exhibited the most severe cardiac electrophysiological abnormalities, to the extent that three out of eight patients underwent pacemaker implantation, whilst no cases of other genotypes required such a procedure.
Regardless of genotype, motor or sensorimotor axonopathy was the predominant form of neuropathy in this cohort. BAG3opathy patients demonstrated the most severe peripheral nerve involvement. There are two reports of patients with the canonical MFM-related BAG3 mutation displaying a CMT-plus-rigid-spine phenotype (37,38). Neither patient manifested signs of cardiomyopathy, which is common in BAG3opathy. Moreover, despite the telltale finding of Z disc disarray on ultrastructural evaluation, no protein aggregation on light microscopy was reported in the two previous cases. In comparison, one of our BAG3opathy patients developed hypertrophic cardiomyopathy ten years after disease onset, and both of our BAG3opathy patients show the characteristic eosinophilic and cytoplasmic bodies, as well as Z disc protein aggregates. In the scenario of early-onset, slowly progressive symmetrical distal weakness with paresthesia and diffuse axonal changes on EMG/NCS, a diagnosis of CMT should be made with caution, as BAG3opathy can present with very similar manifestations. Proof of cardiac muscle involvement serves as a warning sign, and if present, muscle biopsy should be considered to look for the protein aggregation typical of BAG3opathy. In terms of the molecular genetics of our cohort, DES is the most common gene linked to MFM, accounting for 44.4% of all cases, followed by TTN (22.2%). The types of mutation in DES include missense, deletion, and deletion/insertion. The novel c.1256C>T missense substitution appears to be a high-frequency mutation, with three families and one sporadic case in this study. The inheritance pattern of TTN mutations can be autosomal dominant or recessive, or even a combination of both (9,16,39). In a large cohort of congenital titinopathy caused by autosomal recessive TTN mutations, axial weakness, early joint contractures, and progressive respiratory deficiency were the predominant clinical manifestations (40). The muscle pathology consisted of increased central nuclei and cores/minicores, which is more indicative of congenital myopathy than of MFM (40,41). In another titinopathy cohort with an autosomal recessive inheritance pattern, the patients presented with either childhood-onset generalized weakness or adult-onset distal lower limb weakness (7). The coexistence of one dominantly and one recessively inherited mutation has been reported in several titinopathy cases with infantile to adult onset (9). The weakness pattern of these semi-dominant/recessive titinopathy cases is proximal and/or distal limb weakness, whilst the pathology is myopathic with or without rimmed vacuoles. The dominant mutations are all tibial muscular dystrophy-related and are located in exon 363 (M-band exon 5), while the recessive mutations are all frameshifting. In the case of patients 16 and 17, the two sisters exhibited an infantile onset of limb girdle weakness and scoliosis plus the characteristic tibialis weakness, without significant respiratory or cardiac insufficiency. Whilst the elder sister demonstrates the full picture of MFM pathology, the younger one shows only changes consistent with centronuclear myopathy, resembling the autosomal recessive titinopathy cases (40). Considering the remarkably similar phenotype of the two sisters, a genetic background other than the compound heterozygous TTN mutations is unlikely. Again, we propose that different degrees of MFM pathology may coexist in the same patient, and that sampling bias may be the cause of the discrepant morphological findings.
According to the 2018 ENMC nomenclature consensus on limb girdle muscular dystrophy (LGMD) (42), patients 16 and 17 could be diagnosed as "LGMD R10 titin-related." Yet their inheritance pattern does not completely comply with an autosomal recessive manner; instead, it follows a combined autosomal recessive and dominant pattern. The frameshifting c.107545delG mutation is situated in exon 363 and is passed down in an autosomal recessive fashion, as carriers of this single variant in the family are all asymptomatic. That this variant is benign is unlikely, as it causes truncation of the titin protein by 128 amino acids, which includes the 152nd immunoglobulin domain of titin. The dominantly inherited c.19993G>T nonsense mutation results in the creation of a premature stop codon in the tandem immunoglobulin domain of the I-band part of titin. Whether this mutation is partially or completely penetrant needs further follow-up of the third generation of this pedigree, as only one of the three offspring carrying the nonsense mutation (III:4 of Figure 1E) is manifesting. So far, mutations associated with the HMERF phenotype have all been located in exon 344 of TTN, which encodes the 119th fibronectin type 3 domain of the A-band part of titin. The c.95134T>C missense mutation of patient 14 was first associated with HMERF in three Scandinavian families (16) and later in various ethnic groups, including the Chinese population (17)(18)(19)(20)(21)(22)(23). It is noteworthy that in over 100 reported cases of HMERF, the peripheral nerves have been considered spared in this phenotype (43). Our patient is the first HMERF patient to show peripheral nerve involvement, which is consistent with an axonal sensorimotor polyneuropathy. The recurrent c.95185T>C mutation carried by patient 15 was first reported in one German HMERF family presenting with both proximal and distal weakness (23). Contracture of the Achilles tendon and rigid spine have been reported in some HMERF cases (21); severe joint abnormalities are nevertheless not a predominant feature of HMERF. The early and severe involvement of the spine in patient 15 is reminiscent of an Emery-Dreifuss muscular dystrophy phenotype, which has been reported in recessive titinopathy cases (44). Taken together, the titinopathy patients in the present study illustrate that: 1. the inheritance pattern of TTN depends on the malignancy of the mutation, which is at least partially determined by factors such as its location and type, as well as the underlying pathomechanisms connected to the mutation; and 2. titinopathy is a disease spectrum with a plethora of combinations of involved systems, disease manifestations, temporal progression, and pathologies. At least three phenotypes have been associated with filaminopathy. Patients with the first form have adult-onset distal upper limb weakness with non-specific myopathic changes on muscle biopsy and a lack of conspicuous intramuscular protein aggregation (45)(46)(47). Mutations in the N-terminal and Ig-like domain 15 of FLNC are related to this collective group. The second form is characterized by adult-onset limb girdle weakness and the typical MFM pathology. So far, mutations in Ig-like domains 7, 22, and 24 have been linked to this phenotype (48)(49)(50). Recently, a third filaminopathy phenotype, clinically delineated by restrictive cardiomyopathy and congenital myopathy, was reported (51). The causative mutations are in Ig-like domain 10.
We report that the intronic variant possibly disrupting proper splicing of the region coding for Ig-like domain 18 is associated with the distal myopathy phenotype, and that the novel c.5468C>T missense mutation in Ig-like domain 16 can cause the limb girdle phenotype.

CONCLUSIONS

To conclude, in the present Chinese MFM cohort, desminopathy is the most common MFM subtype. The novel DES c.1256C>T substitution is a high-frequency mutation. Sensorimotor axonopathy is the most common form of peripheral neuropathy in MFM patients. We confirm that BAG3opathy has the most severe peripheral nerve involvement, which can mimic CMT both clinically and electromyographically. We also find that combined recessive/dominant TTN mutations can cause a limb girdle muscular dystrophy phenotype with the characteristic MFM pathology. Patients with HMERF can have peripheral nerve as well as severe spine involvement. The mutation in Ig-like domain 16 is associated with the limb girdle type of filaminopathy, and the mutation in Ig-like domain 18 with the distal myopathy type. The pathogenicity of the novel variants reported in this study requires further functional validation.

DATA AVAILABILITY STATEMENT

The datasets generated for this study can be found in the ClinVar database.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of Xiangya Hospital, Central South University. Written informed consent to participate in this study was provided by the participants and the participants' legal guardians. Written informed consent was obtained from the individuals, and the minor's legal guardian, for the publication of any potentially identifiable images or data included in this article.

AUTHOR CONTRIBUTIONS

Y-BL and HY designed this study. Y-BL wrote and HY revised this manuscript. YL and YP collected the data. FB, QL, and HD contributed to the muscle pathological evaluation. All authors contributed to the article and approved the submitted version.
import puppeteer from "puppeteer";

export const clickElement = async (selector: string, page: puppeteer.Page) => {
  await page.waitForSelector(selector);
  await page.click(selector);
};
// RunWithExitCode runs a command and returns the exit code.
func RunWithExitCode(cmd *exec.Cmd) int {
	if err := cmd.Run(); err != nil {
		if exitError, ok := err.(*exec.ExitError); ok {
			return exitError.Sys().(syscall.WaitStatus).ExitStatus()
		}
		return -1
	}
	return cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
}
package option

import (
	"fmt"
)

func ExampleMessage() {
	fmt.Println(Message(0), Message(1))
	// Output:
	// message0 message1
}

func ExampleArgs() {
	fmt.Println(Args(0), Args(1))
	// Output:
	// args0 args1
}
/* Create a GRPC_QUEUE_SHUTDOWN event without queuing it anywhere */
static event *create_shutdown_event(void) {
  event *ev = gpr_malloc(sizeof(event));
  ev->base.type = GRPC_QUEUE_SHUTDOWN;
  ev->base.call = NULL;
  ev->base.tag = NULL;
  ev->on_finish = null_on_finish;
  return ev;
}
import pg from "pg"

import { Logger } from "src/logging"
import { delayMilliseconds } from "src/utils"

type Tables<T extends string> = { [K in T]: `public.${K}` }
export const table: Tables<"issue_to_project_field_rule" | "token"> = {
  issue_to_project_field_rule: "public.issue_to_project_field_rule",
  token: "public.token",
}

const acquirePoolClient = (pool: pg.Pool, logger: Logger) => {
  return async () => {
    let client: pg.PoolClient | undefined = undefined

    acquireClientLoop: for (let i = 0; i < 3; i++) {
      const acquiredClient: pg.PoolClient & { hasTestQueryWorked?: true } =
        await pool.connect()

      if (acquiredClient.hasTestQueryWorked) {
        client = acquiredClient
        break acquireClientLoop
      } else {
        for (let j = 0; j < 3; j++) {
          try {
            await acquiredClient.query("SELECT 1")
            acquiredClient.hasTestQueryWorked = true
          } catch (error) {
            logger.warn(
              `Acquired client has failed the test query on try #${j}. Retrying...`,
            )
          }
          if (acquiredClient.hasTestQueryWorked) {
            client = acquiredClient
            break acquireClientLoop
          } else {
            // Wait before trying the test query again
            logger.info(
              "Test query failed upon acquiring database client. Retrying...",
            )
            await delayMilliseconds(1024)
          }
        }

        // We've tried the test query but it did not work; assume this client is broken
        try {
          // FIXME: We have to manually make the "end" method available in the
          // type here because the @types/pg package incorrectly does not
          // include it.
          await (acquiredClient as unknown as { end: () => Promise<void> }).end()
        } finally {
          acquiredClient.release()
        }
      }
    }

    if (client === undefined) {
      throw new Error("Failed to acquire a database client")
    }

    return client
  }
}

export type WithDatabaseClient<T, WrapperContext = undefined> = (
  client: pg.PoolClient,
  context: WrapperContext,
) => T | Promise<T>

export const withDatabaseClientCallback = function <
  T,
  WrapperContext = undefined,
>(
  pool: pg.Pool,
  logger: Logger,
  getQueryContext: (client: pg.PoolClient) => WrapperContext,
) {
  return async (fn: WithDatabaseClient<T, WrapperContext>) => {
    let result: T | Error

    const client = await acquirePoolClient(pool, logger)()
    try {
      result = await fn(client, getQueryContext(client))
    } catch (error) {
      result = error
    } finally {
      client.release()
    }

    if (result instanceof Error) {
      throw result
    }

    return result
  }
}

export type DynamicQueryParam = {
  column: string
  value: any
}

type WrapForDynamicQueryContext = {
  paramsPlaceholdersJoined: string
  updateStatementParamsJoined: string
  values: any[]
  columnsJoined: string
}

export type WithDatabaseClientForDynamicQuery<T> = (
  inputParams: DynamicQueryParam[],
  fn: WithDatabaseClient<T, WrapForDynamicQueryContext>,
) => T | Promise<T>

export const withDatabaseClientForDynamicQueryCallback = <T>(
  pool: pg.Pool,
  logger: Logger,
) => {
  return async (
    ...[inputParams, fn]: Parameters<WithDatabaseClientForDynamicQuery<T>>
  ) => {
    // Assign a positional placeholder ($1, $2, ...) to each input parameter.
    const queryParams = inputParams.map((change, i) => {
      return { ...change, placeholder: `$${i + 1}` }
    })

    const columns = queryParams.map(({ column }) => {
      return column
    })

    const paramsPlaceholdersJoined = queryParams
      .reduce((acc, { placeholder }) => {
        return `${acc}, ${placeholder}`
      }, "")
      .slice(1)
      .trim()

    const values = queryParams.map(({ value }) => {
      return value
    })

    return await withDatabaseClientCallback<T, WrapForDynamicQueryContext>(
      pool,
      logger,
      (client) => {
        const updateStatementParamsJoined = queryParams
          .reduce((acc, { placeholder, column }) => {
            return `${acc}, ${client.escapeIdentifier(column)} = ${placeholder}`
          }, "")
          .slice(1)
          .trim()

        const columnsJoined = columns
          .reduce((acc, v) => {
            return `${acc}, ${client.escapeIdentifier(v)}`
          }, "")
          .slice(1)
          .trim()

        return {
          paramsPlaceholdersJoined,
          updateStatementParamsJoined,
          values,
          columnsJoined,
        }
      },
    )(fn)
  }
}
import numpy as np

def distance_correlation(x, y):
    assert isinstance(x, np.ndarray) and isinstance(y, np.ndarray)
    if x.dtype not in [np.float32, np.float64]:
        x = x.astype(np.float32, copy=False)
    if y.dtype not in [np.float32, np.float64]:
        y = y.astype(np.float32, copy=False)
    if x.ndim == 1:
        x = x[:, np.newaxis]
    if y.ndim == 1:
        y = y[:, np.newaxis]
    assert (x.ndim == 2) and (y.ndim == 2)
    assert x.shape[0] == y.shape[0]
    n = x.shape[0]
    # Note: classical distance correlation requires doubly-centered distance
    # matrices, so dist_eucl (not shown in this snippet) is assumed to return
    # them rather than raw pairwise distances.
    a = dist_eucl(x)
    b = dist_eucl(y)
    denom = float(n * n)
    dcov2_xy = (a * b).sum() / denom
    dcov2_xx = (a * a).sum() / denom
    dcov2_yy = (b * b).sum() / denom
    dcor = np.sqrt(dcov2_xy) / np.sqrt(np.sqrt(dcov2_xx) * np.sqrt(dcov2_yy))
    return dcor
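dist_eucl is not defined in this snippet. A minimal sketch of what it is assumed to do for the statistic above to be the classical distance correlation: pairwise Euclidean distances between rows, followed by the double centering that distance covariance requires.

import numpy as np

def dist_eucl(z):
    """Doubly-centered pairwise Euclidean distance matrix (assumed behavior)."""
    # Pairwise Euclidean distances between rows of z (shape: n x d).
    d = np.sqrt(((z[:, np.newaxis, :] - z[np.newaxis, :, :]) ** 2).sum(axis=-1))
    # Double centering: subtract row and column means, add back the grand mean.
    return d - d.mean(axis=0) - d.mean(axis=1)[:, np.newaxis] + d.mean()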
<filename>src/bin/compression/dreamcoder/expr.rs //! The language of Dream&shy;Coder expressions. use super::{parse, util::parens}; use babble::{ ast_node::{Arity, AstNode, Expr}, teachable::{BindingExpr, Teachable}, }; use egg::{RecExpr, Symbol}; use internment::ArcIntern; use nom::error::convert_error; use ref_cast::RefCast; use serde::{Deserialize, Serialize}; use std::{ borrow::Cow, convert::TryFrom, fmt::{self, Display, Formatter}, ops::{Deref, DerefMut}, str::FromStr, }; /// A wrapper around a string, used as an intermediary for serializing other /// types as strings. #[allow(single_use_lifetimes)] #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)] #[serde(transparent)] struct RawStr<'a>(Cow<'a, str>); /// An expression in Dream&shy;Coder's generic programming language. #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize, RefCast)] #[serde(try_from = "RawStr<'_>")] #[serde(into = "RawStr<'_>")] #[repr(transparent)] pub struct DcExpr(Expr<DreamCoderOp>); impl Deref for DcExpr { type Target = Expr<DreamCoderOp>; fn deref(&self) -> &Self::Target { &self.0 } } impl DerefMut for DcExpr { fn deref_mut(&mut self) -> &mut Self::Target { &mut self.0 } } impl From<DcExpr> for Expr<DreamCoderOp> { fn from(expr: DcExpr) -> Self { expr.0 } } impl From<DcExpr> for RecExpr<AstNode<DreamCoderOp>> { fn from(expr: DcExpr) -> Self { expr.0.into() } } impl From<Expr<DreamCoderOp>> for DcExpr { fn from(expr: Expr<DreamCoderOp>) -> Self { Self(expr) } } impl From<DcExpr> for RawStr<'static> { fn from(expr: DcExpr) -> Self { Self(expr.to_string().into()) } } impl<'a> TryFrom<RawStr<'a>> for DcExpr { type Error = ParseExprError; fn try_from(raw_expr: RawStr<'a>) -> Result<Self, Self::Error> { raw_expr.0.parse() } } /// An AST node in the DreamCoder language. #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub enum DreamCoderOp { /// A variable. Var(usize), /// A symbol, typically one of the language's primitives. Symbol(Symbol), /// An "inlined" expression. This is how Dream&shy;Coder represents learned /// functions. Inlined(ArcIntern<Expr<Self>>), /// An anonymous function. Lambda, /// An application of a function to a variable. Dream&shy;Coder allows /// applying a function to multiple arguments, we translate these to nested /// applications. That is, `(foo bar baz quux)`, is interpreted as /// `(((foo bar) baz) quux)`. 
App, Lib, Shift, } impl Arity for DreamCoderOp { fn min_arity(&self) -> usize { match self { DreamCoderOp::Var(_) | DreamCoderOp::Symbol(_) | DreamCoderOp::Inlined(_) => 0, DreamCoderOp::Lambda | DreamCoderOp::Shift => 1, DreamCoderOp::App | DreamCoderOp::Lib => 2, } } } impl Teachable for DreamCoderOp { fn from_binding_expr<T>(binding_expr: BindingExpr<T>) -> AstNode<Self, T> { match binding_expr { BindingExpr::Var(index) => AstNode::leaf(DreamCoderOp::Var(index)), BindingExpr::Lambda(body) => AstNode::new(DreamCoderOp::Lambda, [body]), BindingExpr::Apply(fun, arg) => AstNode::new(DreamCoderOp::App, [fun, arg]), BindingExpr::Let(def, body) => AstNode::new(DreamCoderOp::Lib, [def, body]), BindingExpr::Shift(expr) => AstNode::new(DreamCoderOp::Shift, [expr]), } } fn as_binding_expr<T>(node: &AstNode<Self, T>) -> Option<BindingExpr<&T>> { let binding_expr = match node.as_parts() { (DreamCoderOp::Var(index), []) => BindingExpr::Var(*index), (DreamCoderOp::Lambda, [body]) => BindingExpr::Lambda(body), (DreamCoderOp::App, [fun, arg]) => BindingExpr::Apply(fun, arg), (DreamCoderOp::Lib, [def, body]) => BindingExpr::Let(def, body), (DreamCoderOp::Shift, [expr]) => BindingExpr::Shift(expr), _ => return None, }; Some(binding_expr) } } impl Display for DreamCoderOp { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { let s = match self { DreamCoderOp::Lambda => "lambda", DreamCoderOp::App => "@", DreamCoderOp::Lib => "lib", DreamCoderOp::Shift => "shift", DreamCoderOp::Var(index) => return write!(f, "${}", index), DreamCoderOp::Inlined(expr) => return write!(f, "#{}", DcExpr::ref_cast(expr)), DreamCoderOp::Symbol(symbol) => return write!(f, "{}", symbol), }; f.write_str(s) } } impl Display for DcExpr { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { let node: &AstNode<_, _> = self.as_ref(); match node.as_parts() { (DreamCoderOp::Symbol(name), []) => write!(f, "{}", name), (DreamCoderOp::Var(index), []) => { write!(f, "${}", index) } (DreamCoderOp::Inlined(expr), []) => { write!(f, "#{}", Self::ref_cast(expr)) } (DreamCoderOp::Lambda, [body]) => { write!(f, "(lambda {:.1})", Self::ref_cast(body)) } (DreamCoderOp::App, [fun, arg]) => parens(0, f, |f| { write!(f, "{:.0} {:.1}", Self::ref_cast(fun), Self::ref_cast(arg)) }), (op, args) => { write!(f, "({}", op)?; for arg in args { write!(f, " {}", Self::ref_cast(arg))?; } f.write_str(")") } } } } /// An error produced when a string can't be parsed as a valid [`DcExpr`]. 
#[derive(Debug, Clone)] pub struct ParseExprError { message: String, } impl Display for ParseExprError { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { f.write_str(&self.message) } } impl FromStr for DcExpr { type Err = ParseExprError; fn from_str(s: &str) -> Result<Self, Self::Err> { parse::parse(s).map(DcExpr).map_err(|e| ParseExprError { message: convert_error(s, e), }) } } #[cfg(test)] mod tests { use babble::ast_node::AstNode; use internment::ArcIntern; use super::{DcExpr, DreamCoderOp}; impl DcExpr { fn lambda(body: Self) -> Self { Self(AstNode::new(DreamCoderOp::Lambda, [body.0]).into()) } fn app(fun: Self, arg: Self) -> Self { Self(AstNode::new(DreamCoderOp::App, [fun.0, arg.0]).into()) } fn symbol(name: &str) -> Self { Self(AstNode::leaf(DreamCoderOp::Symbol(name.into())).into()) } fn var(index: usize) -> Self { Self(AstNode::leaf(DreamCoderOp::Var(index)).into()) } fn inlined(expr: Self) -> Self { Self(AstNode::leaf(DreamCoderOp::Inlined(ArcIntern::new(expr.0))).into()) } } #[test] fn parser_test() { let input = "(lambda (map #(lambda (+ $0 1)) $0))"; let expr = DcExpr::lambda(DcExpr::app( DcExpr::app( DcExpr::symbol("map"), DcExpr::inlined(DcExpr::lambda(DcExpr::app( DcExpr::app(DcExpr::symbol("+"), DcExpr::var(0)), DcExpr::symbol("1"), ))), ), DcExpr::var(0), )); let parsed: DcExpr = input.parse().unwrap(); assert_eq!(parsed, expr); assert_eq!(expr.to_string(), input); } }
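A note on the application encoding documented on DreamCoderOp::App above: multi-argument applications are curried into nested binary ones. The following is a hypothetical, language-neutral Python sketch of that desugaring, not part of the crate; "@" stands in for the App operator from the Display impl.

def curry(head, args):
    # (foo bar baz quux)  ==>  (((foo bar) baz) quux)
    expr = head
    for arg in args:
        expr = ("@", expr, arg)  # one binary application per argument
    return expr

expected = ("@", ("@", ("@", "foo", "bar"), "baz"), "quux")
assert curry("foo", ["bar", "baz", "quux"]) == expected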
# Solve a maze by recursive depth-first search.
# Convention: 0 = open cell, 1 = wall or visited, 3 = exit.
# (y, x) is assumed to lie inside the outer wall, so neighbour
# accesses never go out of bounds.
def solve_maze(maze, y, x):
    for yshift, xshift in [(-1, 0), (0, 1), (1, 0), (0, -1)]:
        if maze[y + yshift][x + xshift] == 3:   # reached the exit
            return True
        if not maze[y + yshift][x + xshift]:    # open cell: try it
            maze[y + yshift][x + xshift] = 1    # mark as visited
            if solve_maze(maze, y + yshift, x + xshift):
                return True
            maze[y + yshift][x + xshift] = 0    # dead end: unmark
    return False
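A tiny, made-up driver for the function above, using the same convention (0 = open, 1 = wall or visited, 3 = exit):

maze = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],
    [1, 3, 0, 0, 1],
    [1, 1, 1, 1, 1],
]
print(solve_maze(maze, 1, 1))  # True; the search reaches the exit at (3, 1)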
#!/usr/bin/env python3
#
# This file is part of the MicroPython project, http://micropython.org/
#
# The MIT License (MIT)
#
# Copyright (c) 2020 <NAME>
# Copyright (c) 2020 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

import argparse
import glob
import fnmatch
import itertools
import os
import pathlib
import re
import sys
import subprocess

# Relative to top-level repo dir.
PATHS = [
    # C
    "devices/**/*.[ch]",
    "drivers/bus/*.[ch]",
    "extmod/*.[ch]",
    "lib/netutils/*.[ch]",
    "lib/timeutils/*.[ch]",
    "lib/utils/*.[ch]",
    "mpy-cross/**/*.[ch]",
    "ports/**/*.[ch]",
    "py/**/*.[ch]",
    "shared-bindings/**/*.[ch]",
    "shared-module/**/*.[ch]",
    "supervisor/**/*.[ch]",
    # Python
    "extmod/*.py",
    "ports/**/*.py",
    "py/**/*.py",
    "tools/**/*.py",
    "tests/**/*.py",
]

EXCLUSIONS = [
    # STM32 build includes generated Python code.
    "ports/*/build*",
    # gitignore in ports/unix ignores *.py, so also do it here.
    "ports/unix/*.py",
    # not real python files
    "tests/**/repl_*.py",
    # needs careful attention before applying automatic formatting
    "tests/basics/*.py",
]

# None of the standard Python path matching routines implement the matching
# we want, which is most like git's "pathspec" version of globs.
# In particular, we want "**/" to match all directories.
# This routine is sufficient to work with the patterns we have, but
# subtle cases like character classes that contain meta-characters
# are not covered
def git_glob_to_regex(pat):
    def transform(m):
        m = m.group(0)
        if m == "*":
            return "[^/]*"
        if m == "**/":
            return "(.*/)?"
        if m == "?":
            return "[^/]"
        if m == ".":
            return r"\."
        return m

    result = [transform(part) for part in re.finditer(r"(\*\*/|[*?.]|[^*?.]+)", pat)]
    return "(^" + "".join(result) + "$)"

# Create a single, complicated regular expression that matches exactly the
# files we want, accounting for the PATHS as well as the EXCLUSIONS.
path_re = (
    ""
    # First a negative lookahead assertion that it doesn't match
    # any of the EXCLUSIONS
    + "(?!"
    + "|".join(git_glob_to_regex(pat) for pat in EXCLUSIONS)
    + ")"
    # Then a positive match for any of the PATHS
    + "(?:"
    + "|".join(git_glob_to_regex(pat) for pat in PATHS)
    + ")"
)
path_rx = re.compile(path_re)

# Path to repo top-level dir.
TOP = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) UNCRUSTIFY_CFG = os.path.join(TOP, "tools/uncrustify.cfg") C_EXTS = ( ".c", ".h", ) PY_EXTS = (".py",) def check_uncrustify_version(): version = subprocess.check_output( ["uncrustify", "--version"], encoding="utf-8", errors="replace" ) if version < "Uncrustify-0.71": raise SystemExit(f"codeformat.py requires Uncrustify 0.71 or newer, got {version}") # Transform a filename argument relative to the current directory into one # relative to the TOP directory, which is what we need when checking against # path_rx. def relative_filename(arg): return str(pathlib.Path(arg).resolve().relative_to(TOP)) def list_files(args): return sorted(arg for arg in args if path_rx.match(relative_filename(arg))) def fixup_c(filename): # Read file. with open(filename) as f: lines = f.readlines() # Write out file with fixups. with open(filename, "w", newline="") as f: dedent_stack = [] i = 0 while lines: # Get next line. i += 1 l = lines.pop(0) # Revert "// |" back to "//| " if l.startswith("// |"): l = "//|" + l[4:] # Dedent #'s to match indent of following line (not previous line). m = re.match(r"( +)#(if |ifdef |ifndef |elif |else|endif)", l) if m: indent = len(m.group(1)) directive = m.group(2) if directive in ("if ", "ifdef ", "ifndef "): l_next = lines[0] indent_next = len(re.match(r"( *)", l_next).group(1)) if indent - 4 == indent_next and re.match(r" +(} else |case )", l_next): # This #-line (and all associated ones) needs dedenting by 4 spaces. l = l[4:] dedent_stack.append(indent - 4) else: # This #-line does not need dedenting. dedent_stack.append(-1) elif dedent_stack: if dedent_stack[-1] >= 0: # This associated #-line needs dedenting to match the #if. indent_diff = indent - dedent_stack[-1] assert indent_diff >= 0 l = l[indent_diff:] if directive == "endif": dedent_stack.pop() # Write out line. f.write(l) assert not dedent_stack, filename def main(): cmd_parser = argparse.ArgumentParser( description="Auto-format C and Python files -- to be used via pre-commit only." ) cmd_parser.add_argument("-c", action="store_true", help="Format C code only") cmd_parser.add_argument("-p", action="store_true", help="Format Python code only") cmd_parser.add_argument("-v", action="store_true", help="Enable verbose output") cmd_parser.add_argument("--dry-run", action="store_true", help="Print, don't act") cmd_parser.add_argument("files", nargs="+", help="Run on specific globs") args = cmd_parser.parse_args() if args.dry_run: print(" ".join(sys.argv)) # Setting only one of -c or -p disables the other. If both or neither are set, then do both. format_c = args.c or not args.p format_py = args.p or not args.c # Expand the arguments passed on the command line, subject to the PATHS and EXCLUSIONS files = list_files(args.files) # Extract files matching a specific language. def lang_files(exts): for file in files: if os.path.splitext(file)[1].lower() in exts: yield file # Run tool on N files at a time (to avoid making the command line too long). def batch(cmd, files, N=200): while True: file_args = list(itertools.islice(files, N)) if not file_args: break if args.dry_run: print(" ".join(cmd + file_args)) else: subprocess.call(cmd + file_args) # Format C files with uncrustify. if format_c: check_uncrustify_version() command = ["uncrustify", "-c", UNCRUSTIFY_CFG, "-lC", "--no-backup"] if not args.v: command.append("-q") batch(command, lang_files(C_EXTS)) for file in lang_files(C_EXTS): fixup_c(file) # Format Python files with black. 
if format_py: command = ["black", "--fast", "--line-length=99"] if args.v: command.append("-v") else: command.append("-q") batch(command, lang_files(PY_EXTS)) if __name__ == "__main__": main()
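To make the pathspec-style glob translation in the script concrete, here is a small standalone sanity check; git_glob_to_regex is copied verbatim from the script so the snippet runs on its own, and the sample paths are invented:

import re

def git_glob_to_regex(pat):
    def transform(m):
        m = m.group(0)
        if m == "*": return "[^/]*"
        if m == "**/": return "(.*/)?"
        if m == "?": return "[^/]"
        if m == ".": return r"\."
        return m
    parts = re.finditer(r"(\*\*/|[*?.]|[^*?.]+)", pat)
    return "(^" + "".join(transform(p) for p in parts) + "$)"

rx = re.compile(git_glob_to_regex("ports/**/*.py"))
assert rx.match("ports/unix/main.py")       # "**/" spans one directory level
assert rx.match("ports/a/b/c/boot.py")      # ... or several
assert not rx.match("ports/unix/main.pyc")  # "." is literal, suffix must match
assert not rx.match("tools/main.py")        # prefix must match too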
package io.hektor.fsm.impl;

import io.hektor.fsm.Context;
import io.hektor.fsm.Data;
import io.hektor.fsm.State;
import io.hektor.fsm.Transition;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.function.BiConsumer;
import java.util.stream.Collectors;

/**
 * @author <EMAIL>
 */
public class StateImpl<S extends Enum<S>, C extends Context, D extends Data> implements State<S, C, D> {

    private final S state;
    private final boolean isInitial;
    private final boolean isFinal;
    private final boolean isTransient;
    private final List<Transition<?, S, C, D>> transitions;
    private final Optional<Transition<?, S, C, D>> defaultTransition;
    private final Optional<BiConsumer<C, D>> initialEnterAction;
    private final Optional<BiConsumer<C, D>> enterAction;
    private final Optional<BiConsumer<C, D>> exitAction;
    private final List<S> connectedNodes = new ArrayList<>();

    public StateImpl(final S state,
                     final boolean isInitial,
                     final boolean isFinal,
                     final boolean isTransient,
                     final List<Transition<?, S, C, D>> transitions,
                     final Optional<Transition<?, S, C, D>> defaultTransition,
                     final BiConsumer<C, D> initialEnterAction,
                     final BiConsumer<C, D> enterAction,
                     final BiConsumer<C, D> exitAction) {
        this.state = state;
        this.isInitial = isInitial;
        this.isFinal = isFinal;
        this.isTransient = isTransient;
        this.transitions = transitions;
        this.defaultTransition = defaultTransition;
        this.initialEnterAction = Optional.ofNullable(initialEnterAction);
        this.enterAction = Optional.ofNullable(enterAction);
        this.exitAction = Optional.ofNullable(exitAction);
        transitions.forEach(this::markConnectedNode);
        markConnectedNode(defaultTransition.orElse(null));
    }

    /**
     * We need to keep track of which other states we are connected to because when
     * we validate the FSM at build time, there are certain transitions that aren't
     * allowed.
     *
     * Note that this only happens when you build the FSM, which you will only really do
     * once (so don't confuse this with instantiating the FSM).
     *
     * @param transition
     */
    private void markConnectedNode(final Transition<?, S, C, D> transition) {
        if (transition == null) {
            return;
        }
        final S toState = transition.getToState();
        if (!connectedNodes.contains(toState)) {
            connectedNodes.add(toState);
        }
    }

    public List<Transition<?, S, C, D>> getTransitionsToState(final S state) {
        final List<Transition<?, S, C, D>> ts =
                transitions.stream().filter(t -> t.getToState() == state).collect(Collectors.toList());
        defaultTransition.ifPresent(d -> {
            if (d.getToState() == state) {
                ts.add(d);
            }
        });
        return ts;
    }

    @Override
    public S getState() {
        return state;
    }

    @Override
    public Optional<BiConsumer<C, D>> getInitialEnterAction() {
        return initialEnterAction;
    }

    @Override
    public Optional<BiConsumer<C, D>> getEnterAction() {
        return enterAction;
    }

    @Override
    public Optional<BiConsumer<C, D>> getExitAction() {
        return exitAction;
    }

    @Override
    public boolean isInital() {
        return isInitial;
    }

    @Override
    public boolean isFinal() {
        return isFinal;
    }

    @Override
    public boolean isTransient() {
        return isTransient;
    }

    @Override
    public List<S> getConnectedNodes() {
        return Collections.unmodifiableList(connectedNodes);
    }

    @Override
    public Optional<Transition<? extends Object, S, C, D>> accept(final Object event, final C ctx, final D data) {
        final Optional<Transition<? extends Object, S, C, D>> optional =
                transitions.stream().filter(t -> t.match(event, ctx, data)).findFirst();
        if (optional.isPresent()) {
            return optional;
        }
        return defaultTransition;
    }
}
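The dispatch rule in accept() above is simple enough to restate outside Java. A hedged, language-neutral Python sketch (the names are hypothetical stand-ins, not the Hektor API):

def accept(transitions, default_transition, event, ctx, data):
    # First transition whose guard matches the event wins; otherwise
    # fall back to the state's default transition (which may be None).
    for t in transitions:
        if t.match(event, ctx, data):
            return t
    return default_transition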
/* * Evaluate SubscriptingRef fetch for an array slice. * * Source container is in step's result variable (it's known not NULL, since * we set fetch_strict to true), and indexes have already been evaluated into * workspace array. */ static void array_subscript_fetch_slice(ExprState *state, ExprEvalStep *op, ExprContext *econtext) { SubscriptingRefState *sbsrefstate = op->d.sbsref.state; ArraySubWorkspace *workspace = (ArraySubWorkspace *) sbsrefstate->workspace; Assert(!(*op->resnull)); *op->resvalue = array_get_slice(*op->resvalue, sbsrefstate->numupper, workspace->upperindex, workspace->lowerindex, sbsrefstate->upperprovided, sbsrefstate->lowerprovided, workspace->refattrlength, workspace->refelemlength, workspace->refelembyval, workspace->refelemalign); }
import { Logger, Module } from '@nestjs/common'; import { NetworksService } from './networks.service'; import { NetworksController } from './networks.controller'; import { MongooseModule } from '@nestjs/mongoose'; import { Network, NetworkSchema } from './schemas/network.schema'; @Module({ imports: [ MongooseModule.forFeature([{ name: Network.name, schema: NetworkSchema }]), ], controllers: [NetworksController], providers: [NetworksService, Logger], }) export class NetworksModule {}
// searchMin returns the smallest index of the tables whose smallest
// key is at or after the given key.
func (tf tFiles) searchMin(icmp *iComparer, ikey internalKey) int {
	return sort.Search(len(tf), func(i int) bool {
		return icmp.Compare(tf[i].imin, ikey) >= 0
	})
}
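For readers less familiar with Go's sort.Search, here is a hedged Python sketch of the same "smallest index satisfying a monotone predicate" search; the tables/imin names mirror the Go code and are otherwise hypothetical:

def search_min(tables, ikey, compare):
    lo, hi = 0, len(tables)
    while lo < hi:
        mid = (lo + hi) // 2
        if compare(tables[mid].imin, ikey) >= 0:
            hi = mid          # predicate holds: answer is mid or earlier
        else:
            lo = mid + 1      # predicate fails: answer is after mid
    return lo                 # == len(tables) if no table qualifies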
Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification? Quantifying the importance of each training point to a learning task is a fundamental problem in machine learning, and the estimated importance scores have been leveraged to guide a range of data workflows such as data summarization and domain adaptation. One simple idea is to use the leave-one-out error of each training point to indicate its importance. Recent work has also proposed to use the Shapley value, as it defines a unique value-distribution scheme that satisfies a set of appealing properties. However, calculating Shapley values is often expensive, which limits their applicability in real-world applications at scale. Multiple heuristics to improve the scalability of calculating Shapley values have been proposed recently, with the potential risk of compromising their utility in real-world applications. How well do existing data quantification methods perform on existing workflows? How do these methods compare with each other, empirically and theoretically? Must we sacrifice scalability for utility in these workflows when using these methods? In this paper, we conduct a novel theoretical analysis comparing the utility of different importance quantification methods, and report extensive experimental studies on existing and proposed workflows such as noisy label detection, watermark removal, data summarization, data acquisition, and domain adaptation. We show that Shapley value approximation based on a KNN surrogate over pretrained feature embeddings obtains comparable utility with existing algorithms while achieving significant scalability improvements, often by orders of magnitude. Our theoretical analysis also justifies its advantage over the leave-one-out error. The code is available at https://github.com/AIsecure/Shapley-Study.
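As a concrete illustration of the leave-one-out baseline the abstract mentions, here is a minimal, hypothetical Python sketch; the train and accuracy callables are placeholders for any learner and metric, and this is not the paper's implementation:

def loo_scores(train, accuracy, train_set, test_set):
    # Leave-one-out importance: how much does test accuracy drop
    # when a single training point is removed?
    base = accuracy(train(train_set), test_set)
    scores = []
    for i in range(len(train_set)):
        reduced = train_set[:i] + train_set[i + 1:]
        scores.append(base - accuracy(train(reduced), test_set))
    return scores  # high score = removing the point hurts accuracy

Note that this requires retraining once per point, which is exactly the kind of cost that motivates the cheaper Shapley approximations the paper studies.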
/* { dg-additional-options "-Ofast -fno-common" } */ /* { dg-additional-options "-Ofast -fno-common -mavx" { target avx_runtime } } */ #include "tree-vect.h" __attribute__((noinline, noclone)) void foo (float *__restrict x, float *__restrict y, float *__restrict z) { float *__restrict p = __builtin_assume_aligned (x, 32); float *__restrict q = __builtin_assume_aligned (y, 32); float *__restrict r = __builtin_assume_aligned (z, 32); int i; for (i = 0; i < 1024; i++) { if (p[i] < 0.0f) q[i] = p[i] + 2.0f; else p[i] = r[i] + 3.0f; } } float a[1024] __attribute__((aligned (32))); float b[1024] __attribute__((aligned (32))); float c[1024] __attribute__((aligned (32))); int main () { int i; check_vect (); for (i = 0; i < 1024; i++) { a[i] = (i & 1) ? -i : i; b[i] = 7 * i; c[i] = a[i] - 3.0f; asm (""); } foo (a, b, c); for (i = 0; i < 1024; i++) if (a[i] != ((i & 1) ? -i : i) || b[i] != ((i & 1) ? a[i] + 2.0f : 7 * i) || c[i] != a[i] - 3.0f) abort (); return 0; } /* { dg-final { scan-tree-dump-times "note: vectorized 1 loops" 1 "vect" { target avx_runtime } } } */
/** \file SsdProcessor.h
 *
 * Header file for DSSD analysis
 */

#ifndef __SSD_PROCESSOR_H_
#define __SSD_PROCESSOR_H_

#include "EventProcessor.h"

class DetectorSummary;
class RawEvent;

/**
 * \brief Handles detectors of type dssd_front and dssd_back
 */
class SsdProcessor : public EventProcessor {
  private:
    DetectorSummary *ssdSummary; ///< all detectors of type dssd_front

  public:
    SsdProcessor(); // no virtual c'tors
    virtual void DeclarePlots(void) const;
    virtual bool Process(RawEvent &event);
};

#endif // __SSD_PROCESSOR_H_
/**
 * react-initials-avatar
 *
 * @author abhijithvijayan <https://abhijithvijayan.in>
 * @license MIT License
 */

export {default} from './avatar';
export type {InitialsAvatarProps} from './avatar';
// Create and initialize a singly linked list of length len,
// filling the nodes with the values from arr (arr must be non-empty).
void LinkedListCreate(Node **source, int len, int *arr) {
    *source = CreateNode(arr[0]);
    Node *one = *source;     // tail of the list built so far
    Node *two = NULL;
    for (int i = 1; i < len; i++) {
        two = CreateNode(arr[i]);
        one->next = two;     // append behind the tail pointer
        one = two;
    }
}
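For comparison, the same tail-pointer construction pattern sketched in Python (illustrative only):

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def linked_list_create(arr):
    head = Node(arr[0])          # arr is assumed non-empty, as in the C version
    tail = head
    for value in arr[1:]:
        tail.next = Node(value)  # append behind the tail pointer
        tail = tail.next
    return head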
/**
 * Compute scalar statistics over the given data.
 *
 * @param data the data
 * @return the scalar statistics
 */
@javax.annotation.Nonnull
public static com.simiacryptus.util.data.ScalarStatistics stats(@javax.annotation.Nonnull final double[] data) {
    @javax.annotation.Nonnull final com.simiacryptus.util.data.ScalarStatistics statistics = new PercentileStatistics();
    Arrays.stream(data).forEach(statistics::add);
    return statistics;
}
Fort McMurray evacuees making Kamloops a temporary home Alejandra, her husband Ryan, and their three boys: Anthony (10), Benjamin (9) and Maximus (4). Image Credit: Alejandra Carroll via Facebook May 10, 2016 - 8:30 PM KAMLOOPS - As evacuees from the wildfire in Fort McMurray look for a safe place to stay while they wait for news on their hometown, some are setting up in Kamloops. Alejandra Carroll and her family fled the flames on Tuesday, May 3, and have made their way to Kamloops. The Carrolls — Alejandra, her husband, three sons (aged 10, nine and four) and family dog — made it to Knutsford Friday night, where they’re staying with her husband’s mother. “We’re trying to make the boys' life as normal as possible,” she says. “We’re trying to just pretend we’re living here. It’s kind of weird to explain the feeling when you’re displaced.” She says the uncertainty of knowing when Fort McMurray will open for residents again makes it hard to plan the future. For now they’ve registered her two older sons in school, one at South Sahali Elementary and the other at Summit Elementary, and at this point plan on staying until at least the end of the school year. Her husband's employer, a company which serviced the oil sands, is being supportive but everything is uncertain, she says. She was a realtor in Fort McMurray, and is considering looking for work here in the same field. With a family of five and a dog it’s a lot to add to her mother-in-law’s house, Carroll says, so they’re looking for their own housing, but the uncertain timeline and pet make it difficult. Elly Grabner, who helped organize the Kamloops Pit Stop for Fort McMurray Evacuees Facebook group, says she’s spoken to other evacuees who are having similar issues. Through the pit stop group she’s spoken with roughly 25 individuals who are looking at Kamloops as their home for now, at least until they know more about what’s left in Fort McMurray. “Most of the people that I’ve spoken to, they really want to get back there and back to their life,” she says. “They’re here temporarily, and came because they have roots in Kamloops.” Grabner says she’s heard of similar issues as the Carrolls, with landlords typically wanting longer term rentals. She says furniture is also an issue because most evacuees have homes in Fort McMurray which did survive the fire and don’t want to buy a home’s worth of supplies for a temporary house. Carroll says they believe their stuff survived the wildfire, though she’s not sure. From the images she’s seen the fire got to the treehouse in her family’s back yard in the Thickwood neighbourhood, but to her best knowledge, the house remains standing. “We don’t want to pack a lot of things, our stuff is still there,” she says. “We’re trying to be proactive, stay positive.”
/**
 * A supported value in a WHERE clause.
 */
public class Condition {

    public static enum ConditionType {
        RANGE, EQUAL
    }

    final private ConditionType type;

    /** Field's name **/
    final private String fieldName;

    /** Supported types: Integer, Long, Float, String **/
    private SQLExpr value;

    /** Supported types: Integer, Long, Float, String **/
    private SQLExpr left;

    /** GreaterThan, GreaterThanOrEqual **/
    private SQLBinaryOperator leftOperator;

    /** Supported types: Integer, Long, Float, String **/
    private SQLExpr right;

    /** LessThan, LessThanOrEqual **/
    private SQLBinaryOperator rightOperator;

    /**
     * Constructor.
     *
     * @param fieldName
     * @param type
     * @param value
     */
    public Condition(String fieldName, ConditionType type, SQLExpr value) {
        this(fieldName, type, value, null);
    }

    /**
     * Constructor.
     *
     * @param fieldName
     * @param type
     * @param value
     * @param operator
     */
    public Condition(String fieldName, ConditionType type, SQLExpr value, SQLBinaryOperator operator) {
        this.fieldName = fieldName;
        this.type = type;
        if (operator == null) {
            this.value = value;
        } else if (operator == SQLBinaryOperator.GreaterThan || operator == SQLBinaryOperator.GreaterThanOrEqual) {
            this.left = value;
            this.leftOperator = operator;
        } else {
            this.right = value;
            this.rightOperator = operator;
        }
    }

    public ConditionType getType() {
        return type;
    }

    public String getFieldName() {
        return fieldName;
    }

    public SQLExpr getValue() {
        return value;
    }

    public SQLExpr getLeft() {
        return left;
    }

    public SQLBinaryOperator getLeftOperator() {
        return leftOperator;
    }

    public SQLExpr getRight() {
        return right;
    }

    public SQLBinaryOperator getRightOperator() {
        return rightOperator;
    }

    public void resetValue(SQLExpr value, SQLBinaryOperator operator) {
        if (operator == SQLBinaryOperator.GreaterThan || operator == SQLBinaryOperator.GreaterThanOrEqual) {
            this.left = value;
            this.leftOperator = operator;
        } else {
            this.right = value;
            this.rightOperator = operator;
        }
    }

    public Object convert(SQLExpr expr) {
        if (expr instanceof SQLIntegerExpr) { // Integer, Long
            Number number = ((SQLIntegerExpr) expr).getNumber();
            return number;
        } else if (expr instanceof SQLCharExpr) { // String
            String value = ((SQLCharExpr) expr).getText();
            return value;
        } else if (expr instanceof SQLNumberExpr) { // Float, Double
            Number number = ((SQLNumberExpr) expr).getNumber();
            return number;
        }
        return null;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        sb.append("Field ");
        sb.append(fieldName);
        sb.append(' ');
        sb.append(" Type ");
        sb.append(type);
        sb.append(" Value ");
        if (type == ConditionType.EQUAL) {
            sb.append(convert(value));
        } else {
            if (leftOperator == SQLBinaryOperator.GreaterThan) {
                sb.append("(");
            } else {
                sb.append("[");
            }
            sb.append(convert(left));
            sb.append(", ");
            sb.append(convert(right));
            if (rightOperator == SQLBinaryOperator.LessThan) {
                sb.append(")");
            } else {
                sb.append("]");
            }
        }
        return sb.toString();
    }
}
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <sys/user.h>
#include <errno.h>
#include <unistd.h>

#include "trace64.h"
#include "../logging.h"

void tracex64(const char *filename, char **argv) {
    pid_t pid = fork();
    switch (pid) {
    case -1:
        log_(L_ERROR, "Error happened while fork()...");
        return;
    case 0:
        ptrace(PTRACE_TRACEME, 0, 0, 0);
        execvp(filename, argv + 1);
        log_(L_ERROR, "execvp() failed...");  /* reached only if execvp fails */
        _exit(1);
    }

    waitpid(pid, 0, 0);
    ptrace(PTRACE_SETOPTIONS, pid, 0, PTRACE_O_EXITKILL);

    for (;;) {
        /* Run the tracee until the next syscall entry. */
        if (ptrace(PTRACE_SYSCALL, pid, 0, 0) == -1)
            log_(L_ERROR, "Error on ptrace syscall...");
        if (waitpid(pid, 0, 0) == -1)
            log_(L_ERROR, "Error...");

        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, pid, 0, &regs);
        if (regs.orig_rax < 335) {
            char *name = syscall_name[regs.orig_rax];
            printf("%s()\n", name);
        } else {
            printf("%llx\n", regs.orig_rax);
        }

        // if (regs.orig_rax == 231) exit(-1); // exit_group syscall

        /* Run until the syscall exits before looping again. */
        if (ptrace(PTRACE_SYSCALL, pid, 0, 0) == -1)
            log_(L_ERROR, "Syscall error...");
        waitpid(pid, 0, 0);
    }
}
Channel Treatment for Internet Addiction with Puskesmas Application: Design Approach with Usability Heuristics Developed since 1968, Puskesmas is the most important healthcare facility in Indonesia and stands at the forefront of providing basic healthcare services at the community level. Its role is very important, and it should be one of the keys to Indonesia’s success in improving the health and nutritional status of the community. Meanwhile, Internet addiction has become a potential threat to the community, one that may be more dangerous than drug and sex addiction in terms of its social consequences. This study offers an integrated solution through usability-heuristic design, with the purpose of redesigning the current apps after identifying the business processes of several Puskesmas in Bandung. It also aims to increase the quality of service in order to provide proper communication between health workers and patients. In the evaluation phase, 4 criteria were used to generate sketches in relation to contextual settings. The mockups were then assessed against 12 transaction features to ensure the readiness of the application’s financial management before launch.
package com.perscholas.application.views.login;

import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.button.ButtonVariant;
import com.vaadin.flow.component.html.H1;
import com.vaadin.flow.component.html.H2;
import com.vaadin.flow.component.html.Image;
import com.vaadin.flow.component.login.LoginForm;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.*;
import com.vaadin.flow.server.auth.AnonymousAllowed;

@AnonymousAllowed
@Route("login")
@PageTitle("Login | Vaadin CRM")
public class LoginView extends VerticalLayout implements BeforeEnterObserver {

    private final LoginForm login = new LoginForm();

    public LoginView() {
        addClassName("login-view");
        setSizeFull();
        setAlignItems(Alignment.CENTER);
        setJustifyContentMode(JustifyContentMode.CENTER);

        login.setAction("login");

        add(new H1("Camp Power"), login);
        //RouterLink register = new RouterLink("Register", RegistrationFormView.class);
        add(new H2("New Users Register Below"));
        Button register = new Button("Register");
        register.addThemeVariants(ButtonVariant.LUMO_PRIMARY);
        add(register);
        register.addClickListener(e ->
                register.getUI().ifPresent(ui -> ui.navigate("registration-form"))
        );
    }

    @Override
    public void beforeEnter(BeforeEnterEvent beforeEnterEvent) {
        // inform the user about an authentication error
        if (beforeEnterEvent.getLocation()
                .getQueryParameters()
                .getParameters()
                .containsKey("error")) {
            login.setError(true);
        }
    }
}
/** * Adds the policy to the user's list of policies if the user has no policy for the given insurance module. * <p> * If the user already has a matching policy, that policy is overwritten. */ public Customer updatePolicy(String customerId, PolicyDto policyDto) { log.info("Updating a policy for user {}", customerId); Customer customer = findCustomer(customerId); Optional<InsuranceModule> insuranceModuleOptional = insuranceRepository.findById(policyDto.getInsuranceModuleId()); if (!insuranceModuleOptional.isPresent()) { String message = "Couldn't find an insurance module with ID " + policyDto.getInsuranceModuleId(); log.error(message); throw new EntityNotFoundException(message); } InsuranceModule insuranceModule = insuranceModuleOptional.get(); ValidationErrors errors = policyValidator.validate(policyDto, insuranceModule); if (errors != null) { ValidationException exception = errors.toException(); log.error(exception.getMessage()); throw exception; } Policy policy = createPolicy(policyDto.getCoverage(), insuranceModule); return addOrReplacePolicy(customer, policy); }
// threadedCreateAndUploadFiles is a worker that creates and uploads files func threadedCreateAndUploadFiles(timestamp string, workerIndex int) { for { fileIndex, ok := createdNotDownloadedFiles.managedIncCountLimit(nFiles) if !ok { break } fileIndexStr := fmt.Sprintf("%03d", fileIndex) sizeStr := modules.FilesizeUnits(uint64(actualFileSize)) filename := "Randfile" + fileIndexStr + "_" + sizeStr + "_" + timestamp filename = strings.ReplaceAll(filename, " ", "") createFile(filename) uploadFile(filename) path := filepath.Join(upDir, filename) deleteLocalFile(path) createdNotDownloadedFiles.managedAddFile(filename) } log.Printf("Upload worker #%d finished", workerIndex) uploadWG.Done() }
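The coordination pattern in the Go worker above (a shared counter capped at nFiles deciding when workers stop) can be sketched in Python; the names here are hypothetical stand-ins, not the actual tool's API:

import threading

class CappedCounter:
    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self.lock = threading.Lock()

    def inc_with_limit(self):
        # Mirrors managedIncCountLimit: returns (index, ok).
        with self.lock:
            if self.count >= self.limit:
                return 0, False   # cap reached: worker should stop
            self.count += 1
            return self.count, True

def worker(counter, work, done):
    while True:
        index, ok = counter.inc_with_limit()
        if not ok:
            break
        work(index)
    done()  # plays the role of uploadWG.Done()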
/*******************************************************************************
  Board Support Package Implementation

  Company:
    Microchip Technology Inc.

  File Name:
    bsp.c

  Summary:
    Board Support Package implementation for PIC32MZ Embedded Connectivity (EC)
    Starter Kit.

  Description:
    This file contains routines that implement the board support package for
    PIC32MZ Embedded Connectivity (EC) Starter Kit.
*******************************************************************************/

// DOM-IGNORE-BEGIN
/*******************************************************************************
Copyright (c) 2012 released Microchip Technology Inc. All rights reserved.

Microchip licenses to you the right to use, modify, copy and distribute
Software only when embedded on a Microchip microcontroller or digital signal
controller that is integrated into your product or third party product
(pursuant to the sublicense terms in the accompanying license agreement).

You should refer to the license agreement accompanying this Software for
additional information regarding your rights and obligations.

SOFTWARE AND DOCUMENTATION ARE PROVIDED AS IS WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF
MERCHANTABILITY, TITLE, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE.
IN NO EVENT SHALL MICROCHIP OR ITS LICENSORS BE LIABLE OR OBLIGATED UNDER
CONTRACT, NEGLIGENCE, STRICT LIABILITY, CONTRIBUTION, BREACH OF WARRANTY, OR
OTHER LEGAL EQUITABLE THEORY ANY DIRECT OR INDIRECT DAMAGES OR EXPENSES
INCLUDING BUT NOT LIMITED TO ANY INCIDENTAL, SPECIAL, INDIRECT, PUNITIVE OR
CONSEQUENTIAL DAMAGES, LOST PROFITS OR LOST DATA, COST OF PROCUREMENT OF
SUBSTITUTE GOODS, TECHNOLOGY, SERVICES, OR ANY CLAIMS BY THIRD PARTIES
(INCLUDING BUT NOT LIMITED TO ANY DEFENSE THEREOF), OR OTHER SIMILAR COSTS.
*******************************************************************************/
// DOM-IGNORE-END

// *****************************************************************************
// *****************************************************************************
// Section: Included Files
// *****************************************************************************
// *****************************************************************************

#include "bsp.h"

// *****************************************************************************
/* Function:
    void BSP_USBVBUSSwitchStateSet(BSP_USB_VBUS_SWITCH_STATE state);

  Summary:
    This function enables or disables the USB VBUS switch on the board.

  Description:
    This function enables or disables the VBUS switch on the board.

  Remarks:
    Refer to bsp_config.h for usage information.
*/
void BSP_USBVBUSSwitchStateSet(BSP_USB_VBUS_SWITCH_STATE state)
{
    /* Enable the VBUS switch */
    /* NOTE: the generated pin parameters below are unconfigured placeholders. */
    PLIB_PORTS_PinWrite( PORTS_ID_0, PORT_CHANNEL_None, PORTS_BIT_POS_-1, state );
}

// *****************************************************************************
/* Function:
    bool BSP_USBVBUSSwitchOverCurrentDetect(uint8_t port)

  Summary:
    Returns true if the over current is detected on the VBUS supply.

  Description:
    This function returns true if over current is detected on the VBUS supply.

  Remarks:
    None.
*/
bool BSP_USBVBUSSwitchOverCurrentDetect(uint8_t port)
{
    return(false);
}

// *****************************************************************************
/* Function:
    void BSP_USBVBUSPowerEnable(uint8_t port, bool enable)

  Summary:
    This function controls the USB VBUS supply.
Description: This function controls the USB VBUS supply. Remarks: None. */ void BSP_USBVBUSPowerEnable(uint8_t port, bool enable) { /* Enable the VBUS switch */ PLIB_PORTS_PinWrite( PORTS_ID_0, PORT_CHANNEL_None, PORTS_BIT_POS_-1, enable ); } // ***************************************************************************** // ***************************************************************************** // ***************************************************************************** // Section: Interface Routines // ***************************************************************************** // ***************************************************************************** // ***************************************************************************** /* Function: void BSP_Initialize(void) Summary: Performs the necessary actions to initialize a board Description: This function initializes the LED, Switch and other ports on the board. This function must be called by the user before using any APIs present in this BSP. Remarks: Refer to bsp.h for usage information. */ void BSP_Initialize(void ) { /* Setup the USB VBUS Switch Control Pin */ BSP_USBVBUSSwitchStateSet(BSP_USB_VBUS_SWITCH_STATE_DISABLE); } /******************************************************************************* End of File */
Messaging with WhatsApp WhatsApp Messenger is a freeware and cross-platform messaging and Voice over IP (VoIP) service owned by Facebook.[45] The application allows the sending of text messages and voice calls, as well as video calls, images and other media, documents, and user location.[46][47] The application runs from a mobile device but is also accessible from desktop computers; the service requires[48] consumer users to provide a standard cellular mobile number. Originally, users could only communicate with others individually or in groups of individual users, but in September 2017, WhatsApp announced a forthcoming business platform that will enable companies to provide customer service to users at scale.[43] The client was created by WhatsApp Inc., based in Mountain View, California, which was acquired by Facebook in February 2014 for approximately US$19.3 billion.[49][50] By February 2018, WhatsApp had a user base of over one and a half billion,[51][52] making it the most popular messaging application at the time.[52][53] WhatsApp has grown in multiple countries, including Brazil, India, and large parts of Europe, including the United Kingdom and France.[52] History 2009–2014 WhatsApp was founded in 2009 by Brian Acton and Jan Koum, both former employees of Yahoo!. After Koum and Acton left Yahoo! in September 2007, the duo traveled to South America to take a break from work.[12] At one point, they applied for jobs at Facebook but were rejected.[12] For the rest of the following years Koum relied on his $400,000 savings from Yahoo!.[citation needed] In January 2009, after purchasing an iPhone and realizing the potential of the app industry on the App Store, Koum started visiting his friend Alex Fishman in West San Jose where the three would discuss "... having statuses next to individual names of the people", but this was not possible without an iPhone developer. Fishman found a Russian developer on RentACoder.com, Igor Solomennikov, and introduced him to Koum. Koum named the app "WhatsApp" to sound like "what's up". On February 24, 2009, he incorporated WhatsApp Inc. in California. However, because early versions of WhatsApp often crashed or got stuck at a particular point, Koum felt like giving up and looking for a new job, upon which Acton encouraged him to wait for a "few more months".[12] In June 2009, Apple launched push notifications, allowing users to be pinged when they were not using an app. Koum changed WhatsApp so that when a user's status is changed, everyone in the user's network would be notified.[12] WhatsApp 2.0 was released with a messaging component and the number of active users suddenly increased to 250,000. Acton was still unemployed and managing another startup, and he decided to join the company.[12] In October 2009, Acton persuaded five former friends in Yahoo! to invest $250,000 in seed funding, and Acton became a co-founder and was given a stake. He officially joined on November 1.[12] After months at beta stage, the application eventually launched in November 2009 exclusively on the App Store for the iPhone. Koum then hired a friend who lived in Los Angeles, Chris Peiffer, to develop the BlackBerry version, which arrived two months later.[12] WhatsApp was switched from a free to paid service to avoid growing too fast, mainly because the primary cost was sending verification texts to users. In December 2009, the ability to send photos was added to WhatsApp for the iPhone. By early 2011, WhatsApp was one of the top 20 apps in Apple's U.S. 
App Store.[12] In April 2011, Sequoia Capital invested approximately $8 million for more than 15 percent of the company, after months of negotiation with Sequoia partner Jim Goetz.[54][55][56] By February 2013, WhatsApp had about 200 million active users and 50 staff members. Sequoia invested another $50 million, and WhatsApp was valued at $1.5 billion.[12] In a December 2013 blog post, WhatsApp claimed that 400 million active users used the service each month.[57] Facebook subsidiary (2014–present) On February 19, 2014, months after a venture capital financing round at a $1.5 billion valuation,[58] Facebook announced it was acquiring WhatsApp for US$19 billion, its largest acquisition to date.[50] At the time, the acquisition was the largest purchase of a venture-backed company in history.[49] Sequoia Capital received an approximate 50x return on its initial investment.[59] Facebook, which was advised by Allen & Co, paid $4 billion in cash, $12 billion in Facebook shares, and (advised by Morgan Stanley) an additional $3 billion in restricted stock units granted to WhatsApp's founders, Koum and Acton.[60] Employee stock was scheduled to vest over four years subsequent to closing.[50] Days after the announcement, WhatsApp users experienced a loss of service, leading to anger across social media.[61] The acquisition caused a considerable number of users to move, or try out other message services as well. Telegram claimed to have seen 8 million additional downloads of its app.[62] Line claimed to have seen 2 million new users for its service.[63] At a keynote presentation at the Mobile World Congress in Barcelona in February 2014, Facebook CEO Mark Zuckerberg said that Facebook's acquisition of WhatsApp was closely related to the Internet.org vision.[64][65] According to a TechCrunch article, Zuckerberg's vision for Internet.org was as follows: The idea, he said, is to develop a group of basic internet services that would be free of charge to use – 'a 911 for the internet.' These could be a social networking service like Facebook, a messaging service, maybe search and other things like weather. Providing a bundle of these free of charge to users will work like a gateway drug of sorts – users who may be able to afford data services and phones these days just don’t see the point of why they would pay for those data services. This would give them some context for why they are important, and that will lead them to paying for more services like this – or so the hope goes.[64] Just three days after announcing that WhatsApp had been purchased by Facebook, Koum said they were working to introduce voice calls in the coming months. 
He also said that new mobile phones would be sold in Germany with the WhatsApp brand, as their main goal was to be on all smartphones.[66] In August 2014, WhatsApp was the most globally popular messaging app, with more than 600 million active users.[67] By early January 2015, WhatsApp had 700 million monthly active users with over 30 billion messages being sent every day.[68] In April 2015, Forbes predicted that between 2012 and 2018, the telecommunications industry would lose a combined total of $386 billion because of OTT services like WhatsApp and Skype.[69] That month, WhatsApp had over 800 million active users.[70][71] By September 2015, the user base had grown to 900 million,[72] and by February 2016 it had grown to one billion.[73] On November 30, 2015, the Android client for WhatsApp began making links to another messenger called Telegram unclickable and uncopyable.[74][75][76] This is an active block, as confirmed by multiple sources, rather than a bug,[76] and the Android source code which recognizes Telegram URLs has been identified.[76] URLs with "telegram" as the domain name are targeted actively and explicitly – the word "telegram" appears in the code.[76] This behavior risks being considered anti-competitive,[74][75][76] and has not been explained by WhatsApp. In response to the Facebook acquisition in 2014, Slate columnist Matthew Yglesias questioned whether the company's business model of charging users $1 a year was viable in the United States in the long term. It had prospered by exploiting a "loophole" in mobile phone carriers' pricing. "Mobile phone operators aren't really selling consumers some voice service, some data service, and some SMS service", he explained. "They are selling access to the network. The different pricing schemes they come up with are just different ways of trying to maximize the value they extract from consumers." As part of that, carriers sold SMS separately. That made it easy for WhatsApp to find a way to replicate SMS using data, and then sell that to mobile customers for $1 a year. "But if WhatsApp gets big enough, then carrier strategy is going to change", he predicted. "You stop selling separate SMS plans and just have a take-it-or-leave-it overall package. And then suddenly WhatsApp isn't doing anything."[77] The situation may have been different in countries other than the United States. Recent (2016–present) On January 18, 2016, WhatsApp's co-founder Jan Koum announced that the service would no longer charge its users a $1 annual subscription fee, in an effort to remove a barrier faced by some users who do not have a credit card to pay for the service.[78][79] He also explained that the app would not display any third-party advertisements and instead would bring new features such as the ability to communicate with business organizations.[73][80] By June 2016, more than 100 million voice calls were being made per day on WhatsApp, according to a post on the company's blog.[81] On November 10, 2016, WhatsApp launched a two-step verification feature in beta for Android users.
After enabling this feature, users can add their email address for further protection.[82] Also in November 2016, Facebook ceased collecting WhatsApp data for advertising in Europe.[83] On February 24, 2017 (WhatsApp's 8th birthday), WhatsApp launched a new Status feature similar to Snapchat and Facebook stories.[84] On May 18, 2017, it was reported that the European Commission was fining Facebook €110 million for "misleading" it during the 2014 takeover of WhatsApp. The Commission alleged that in 2014, when Facebook acquired the messaging app, it "falsely claimed it was technically impossible to automatically combine user information from Facebook and WhatsApp." However, in the summer of 2016, WhatsApp had begun sharing user information with its parent company, allowing information such as phone numbers to be used for targeted Facebook advertisements. Facebook acknowledged the breach, but said the errors in their 2014 filings were "not intentional."[83] In September 2017, WhatsApp's co-founder Brian Acton left the company to start a non-profit,[85] which was later revealed to be the Signal Foundation.[86] WhatsApp also announced a forthcoming business platform that would enable companies to provide customer service to users at scale.[43] Airlines KLM and Aeroméxico both announced their participation in the testing.[87][88][89][90] Both airlines had previously launched customer services on the Facebook Messenger platform. In January 2018, WhatsApp launched WhatsApp Business for small business use.[91] In April 2018, WhatsApp's co-founder and CEO Jan Koum announced that he would be leaving the company.[92] Facebook later announced that Koum's replacement as WhatsApp's CEO would be Chris Daniels.[10] Later, in September 2018, WhatsApp introduced a group audio and video call feature.[93][94] In October, a "Swipe to Reply" option was made available for the Android beta version, 16 months after it was introduced for iOS.[95] SMB and Enterprise platforms Until 2017, WhatsApp positioned itself as a solution for a single party with a single smartphone to communicate with another such party, enabling small businesses to use the platform to communicate with customers,[96] but not at scale (e.g. in a contact center environment). However, in September 2017 WhatsApp announced what had long been rumored:[97][98] it was building and testing new tools for businesses to use WhatsApp.[90] These comprise a free WhatsApp Business app for small companies[99] and an Enterprise Solution for bigger companies operating at a large scale with a global base of customers, like airlines, e-commerce retailers, and banks, which for the first time could offer customer service and conversational commerce (e-commerce) via WhatsApp chat, through live agents or chatbots. Some companies, such as Meteordesk[100] as far back as 2015, had provided unofficial solutions for enterprises to attend to large numbers of users, but these setups were shut down by WhatsApp. Platform support After months at beta stage, the application eventually launched in November 2009 exclusively on the App Store for the iPhone. In January 2010, support for BlackBerry smartphones was added, and subsequently for Symbian OS in May 2010 and for Android OS in August 2010. In August 2011, a beta for Nokia's non-smartphone OS Series 40 was added.
A month later, support for Windows Phone was added, followed by BlackBerry 10 in March 2013.[101] In April 2015, support for Samsung's Tizen OS was added.[102] An unofficial port has been released for the MeeGo-based Nokia N9 called Wazapp,[103] as well as a port for the Maemo-based Nokia N900 called Yappari.[104] The oldest device that was capable of running WhatsApp was the Symbian-based Nokia N95 released in March 2007 (which is no longer functioning as of June 2017). In August 2014, WhatsApp released an update to its Android app, adding support for Android Wear smartwatches.[105] In 2014, an unofficial open source plug-in called whatsapp-purple was released for Pidgin, implementing its XMPP and making it possible to use WhatsApp on a Microsoft Windows or Linux PC.[106][third-party source needed] WhatsApp responded by automatically blocking phone numbers that connected to WhatsApp using this plug-in.[citation needed] On January 21, 2015, WhatsApp launched WhatsApp Web, a web client which can be used through a web browser by syncing with the mobile device's connection.[107] On February 26, 2016, WhatsApp announced they would cease support for BlackBerry (including BlackBerry 10), Series 40, and Symbian S60, as well as older versions of Android (2.2), Windows Phone (7.0), and iOS (6), by the end of 2016.[108] BlackBerry, Series 40, and Symbian support was since then extended further to June 30, 2017.[109] In June 2017, support for BlackBerry and Series 40 was once again extended until the end of 2017, while Symbian was dropped.[110] Support for BlackBerry and older (version 8.0) Windows Phone and older (version 6) iOS devices was dropped on January 1, 2018, but for Nokia Series 40 was extended again, until December 2018.[111] In July 2018, it was announced that WhatsApp will soon be available for KaiOS feature phones.[112][113] WhatsApp Web WhatsApp was officially made available for PCs through a web client, under the name WhatsApp Web, in late January 2015 through an announcement made by Koum on his Facebook page: "Our web client is simply an extension of your phone: the web browser mirrors conversations and messages from your mobile device—this means all of your messages still live on your phone". The WhatsApp user's handset must still be connected to the Internet for the browser application to function. All major desktop browsers are supported except for Internet Explorer. WhatsApp Web's user interface is based on the default Android one.[citation needed] As of January 21, 2015, the desktop version was only available to Android, BlackBerry, and Windows Phone users. Later on, it also added support for iOS, Nokia Series 40, and Nokia S60 (Symbian).[114][115] An unofficial derivative called WhatsAppTime has been developed, which is a standard Win32 application for PCs and supports notifications through the Windows notification area.[116] There are similar solutions for macOS, such as the open-source ChitChat,[117][118][119] and multiple wrappers available in the App Store.[citation needed] Microsoft Windows and Mac On May 10, 2016, the messaging service was introduced for both Microsoft Windows and macOS operating systems. WhatsApp currently does not allow audio or video calling from desktop operating systems. Similar to the WhatsApp Web format, the app, which will be synced with a user's mobile device, is available for download on the website. 
It supports OS versions of Windows 8 and OS X 10.9 and higher.[120][121] Technical WhatsApp uses a customized version of the open standard Extensible Messaging and Presence Protocol (XMPP).[122] Upon installation, it creates a user account using one's phone number as the username (Jabber ID: [phone number]@s.whatsapp.net). WhatsApp software automatically compares all the phone numbers from the device's address book with its central database of WhatsApp users to automatically add contacts to the user's WhatsApp contact list. Previously the Android and Nokia Series 40 versions used an MD5-hashed, reversed version of the phone's IMEI as the password,[123] while the iOS version used the phone's Wi-Fi MAC address instead of the IMEI.[124][125] A 2012 update now generates a random password on the server side.[126] Some Dual SIM devices may not be compatible with WhatsApp, though there are some workarounds for this.[127] In February 2015, WhatsApp introduced a voice calling feature; this helped WhatsApp to attract a completely different segment of the user population.[128][129] On November 14, 2016, WhatsApp added a video calling feature for users across Android, iPhone, and Windows Phone devices.[130][131] In November 2017, WhatsApp released a new feature that lets its users delete messages sent by mistake within a time frame of 7 minutes.[132] Multimedia messages are sent by uploading the image, audio or video to be sent to an HTTP server and then sending a link to the content along with its Base64-encoded thumbnail (if applicable).[133] WhatsApp follows a "store and forward" mechanism for exchanging messages between two users. When a user sends a message, it first travels to the WhatsApp server where it is stored. Then the server repeatedly requests the receiver acknowledge receipt of the message. As soon as the message is acknowledged, the server drops the message; it is no longer available in the database of the server. The WhatsApp server keeps the message only for 30 days in its database when it is not delivered (when the receiver is not active on WhatsApp for 30 days).[134][self-published source?]
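For illustration only, a short Python sketch of the legacy credential scheme reported earlier in this section; the device identifier is made up, and the assumption that iOS applied the same reversal-and-hash to the MAC address is mine, not the source's:

import hashlib

# Reported legacy scheme (long since replaced by random,
# server-generated passwords): MD5 of the reversed IMEI on
# Android/Series 40; iOS reportedly used the Wi-Fi MAC instead.
def legacy_password(device_id: str) -> str:
    return hashlib.md5(device_id[::-1].encode()).hexdigest()

print(legacy_password("123456789012345"))  # made-up IMEI-like string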
End-to-end encryption On November 18, 2014, Open Whisper Systems announced a partnership with WhatsApp to provide end-to-end encryption by incorporating the encryption protocol used in Signal into each WhatsApp client platform.[135] Open Whisper Systems said that they had already incorporated the protocol into the latest WhatsApp client for Android, and that support for other clients, group/media messages, and key verification would be coming soon after.[136] WhatsApp confirmed the partnership to reporters, but there was no announcement or documentation about the encryption feature on the official website, and further requests for comment were declined.[137] In April 2015, German magazine Heise Security used ARP spoofing to confirm that the protocol had been implemented for Android-to-Android messages, and that WhatsApp messages from or to iPhones running iOS were still not end-to-end encrypted.[138] They expressed the concern that regular WhatsApp users still could not tell the difference between end-to-end encrypted messages and regular messages.[138] On April 5, 2016, WhatsApp and Open Whisper Systems announced that they had finished adding end-to-end encryption to "every form of communication" on WhatsApp, and that users could now verify each other's keys.[39][139] Users were also given the option to enable a trust on first use mechanism in order to be notified if a correspondent's key changes.[140] According to a white paper that was released along with the announcement, WhatsApp messages are encrypted with the Signal Protocol.[141] WhatsApp calls are encrypted with SRTP, and all client-server communications are "layered within a separate encrypted channel".[141] The Signal Protocol library used by WhatsApp is open-source and published under the GPLv3 license.[141][142] Cade Metz, writing in Wired, said, "WhatsApp, more than any company before it, has taken encryption to the masses."[45] WhatsApp Payments WhatsApp Payments is a peer-to-peer money transfer feature that is set to launch in India. 
WhatsApp received permission from the National Payments Corporation of India (NPCI) in July 2017 to enter into partnership with multiple banks[143] to allow users to make in-app payments and money transfers using the Unified Payments Interface (UPI).[144] UPI enables account-to-account transfers from a mobile app without having any details of the beneficiary's bank.[145] Reception and criticism Hoaxes and fake news Mob murders in India In July 2018, WhatsApp took action to encourage people to report fraudulent or violent messages after a wave of murders carried out by mobs on people who were falsely accused (via WhatsApp messages) of intending to abduct children.[146] 2018 elections in Brazil In an investigation on the use of social media in politics, it was found that WhatsApp was being abused for the spread of fake news in the 2018 presidential elections in Brazil.[147] Furthermore, US$3 million in spending on illegal off-the-books contributions related to this practice has been reported.[148] Researchers and journalists have called on WhatsApp's parent company, Facebook, to adopt measures similar to those adopted in India and restrict the spread of hoaxes and fake news.[147] Security and privacy Alleged vulnerability of encryption On January 13, 2017, The Guardian reported that security researcher Tobias Boelter had found that WhatsApp's policy of forcing re-encryption of initially undelivered messages, without informing the recipient, constituted a serious loophole whereby WhatsApp could disclose, or be compelled to disclose, the content of these messages.[149] WhatsApp[150] and Open Whisper Systems[151] officials disagreed with this assessment. A follow-up article by Boelter himself explains in greater detail what he considers to be the specific vulnerability.[152] In June 2017, The Guardian readers’ editor Paul Chadwick wrote, "The Guardian was wrong to report in January that the popular messaging service WhatsApp had a security flaw so serious that it was a huge threat to freedom of speech."[153] "In a detailed review I found that misinterpretations, mistakes and misunderstandings happened at several stages of the reporting and editing process. Cumulatively they produced an article that overstated its case." Paul Chadwick, The Guardian[153] Chadwick also noted that since the Guardian article, WhatsApp has been "better secured by the introduction of optional two-factor verification in February."[153] NHS In 2018, it was reported that around 500,000 NHS staff used WhatsApp and other instant messaging systems at work and around 29,000 had faced disciplinary action for doing so. Higher usage was reported by frontline clinical staff to keep up with care needs, even though NHS trust policies do not permit their use.[154] Terrorism In December 2015, it was reported that Islamic State terrorists had been using WhatsApp to plot the November 2015 Paris attacks.[155] ISIS also uses WhatsApp to traffic sex slaves.[156] In March 2017, U.K. Secretary of State Amber Rudd said encryption capabilities of messaging tools like WhatsApp are unacceptable, as news reported that Khalid Masood used the application several minutes before perpetrating the 2017 Westminster attack.
Rudd publicly called for police and intelligence agencies to be given access to WhatsApp and other encrypted messaging services to prevent future terror attacks.[157] In April 2017, the perpetrator of the Stockholm attack reportedly used WhatsApp to exchange messages with an ISIS supporter shortly before and after the attack. The messages involved discussion of how to make an explosive device and a confession of the perpetration of the attack.[158]

Scams and malware

It has been asserted that WhatsApp is plagued by scams that invite hackers to spread malicious viruses or malware.[159][160] In May 2016, some WhatsApp users were reported to have been tricked into downloading a third-party application called WhatsApp Gold, which was part of a scam that infected the users' phones with malware.[161] A message that promises to allow access to their WhatsApp friends' conversations, or their contact lists, has become the most widespread scam targeting users of the application in Brazil; since December 2016, more than 1.5 million people have clicked on it and lost money.[162] Another application, called GB Whatsapp, is considered malicious by the cybersecurity firm Symantec because it usually performs unauthorized operations on end-user devices.[163]

Bans

China

In 2017, security researchers reported to The New York Times that the WhatsApp service had been completely blocked in China.[164] WhatsApp is owned by Facebook, whose main social media service has been blocked in China since 2009.[165]

Iran

On May 9, 2014, the government of Iran announced that it had proposed to block access to the WhatsApp service for Iranian residents. "The reason for this is the assumption of WhatsApp by the Facebook founder Mark Zuckerberg, who is an American Zionist," said Abdolsamad Khorramabadi, head of the country's Committee on Internet Crimes. Subsequently, Iranian president Hassan Rouhani issued an order to the Ministry of ICT to stop filtering WhatsApp.[166][167]

Turkey

Turkey temporarily banned WhatsApp in 2016, following the assassination of the Russian ambassador to Turkey.[168]

Brazil

On March 1, 2016, Diego Dzodan, Facebook's vice-president for Latin America, was arrested in Brazil for not cooperating with an investigation in which WhatsApp conversations were requested.[169] At dawn the next day, March 2, 2016, Dzodan was released because the Court of Appeal held that the arrest was disproportionate and unreasonable.[170] On May 2, 2016, mobile providers in Brazil were ordered to block WhatsApp for 72 hours for the service's second failure to cooperate with criminal court orders.[171][172] Once again, the block was lifted following an appeal, after nearly 24 hours.[173]

Sri Lanka

WhatsApp, one of the most widely used messaging apps in the country, along with other social media networks such as Facebook and Instagram, was temporarily blocked and unavailable for about two days (7–8 March 2018) in certain parts of the country in an attempt to stop communal violence, especially the anti-Muslim riots.[174] This was probably the first such instance in which social media platforms had been banned in Sri Lanka.
The ban was finally lifted on 14 March 2018, around midnight local time in Sri Lanka.[175]

Uganda

The government of Uganda banned WhatsApp and Facebook.[176] Users are to be charged 200 shillings per day to use these services, according to a new law set by parliament.[177]

User statistics

As of April 22, 2014, WhatsApp had over 500 million monthly active users, 700 million photos and 100 million videos were being shared daily, and the messaging system was handling more than 10 billion messages each day.[178][179] On August 24, 2014, Koum announced on his Twitter account that WhatsApp had over 600 million active users worldwide. At that point WhatsApp was adding about 25 million new users every month, or 833,000 active users per day.[67][180] With 65 million active users, representing 10% of the total worldwide users at the time, India had the largest number of consumers.[181] In May 2017, it was reported that WhatsApp users spend over 340 million minutes on video calls each day on the app, the equivalent of roughly 646 years of video calls per day.[182] As of February 2017, WhatsApp had over 1.2 billion users globally,[183] reaching 1.5 billion monthly active users by the end of 2017.[184]

Specific markets

India is by far WhatsApp's largest market in terms of total number of users. WhatsApp crossed 50 million monthly active users in India in May 2014,[185] then 70 million in October 2014, making users in India 10% of WhatsApp's total user base.[186] In February 2017, WhatsApp reached 200 million monthly active users in India.[187] Israel is one of WhatsApp's strongest markets in terms of ubiquitous usage. According to Globes, by 2013 the application was already installed on 92% of all smartphones, with 86% of users reporting daily use.[188] WhatsApp's group chat feature is reportedly used by many Israeli families to stay in contact with each other.[189]

Competition

WhatsApp competes with a number of Asian-based messaging services, which as of 2014 included WeChat (468 million active users), Viber (209 million active users[190]) and LINE (170 million active users[191]). WhatsApp handled ten billion messages per day in August 2012,[192] growing from two billion in April 2012[193] and one billion the previous October.[194] On June 13, 2013, WhatsApp announced that they had reached a new daily record by processing 27 billion messages.[195] According to the Financial Times, WhatsApp "has done to SMS on mobile phones what Skype did to international calling on landlines."[196]
module Advent.Y2017.Day20 where

import Data.Function (on)
import Data.List (sortBy)
import Data.List.Split (splitOn)

import Math.Geometry.Vector
import Math.Geometry.Distance

day20a, day20b :: String -> String
day20a input = show $ minimum $ zip distances [0..(length distances - 1)]
  where
    positions = map position $ run tickA 1000 $ loadParticles input
    distances = map (manhattan origin) positions
day20b input = show $ length $ run tickB 1000 $ loadParticles input

data Particle = Particle
  { position     :: Vector
  , velocity     :: Vector
  , acceleration :: Vector
  } deriving (Show, Eq)

run :: ([Particle] -> [Particle]) -> Int -> [Particle] -> [Particle]
run fn n particles = head $ drop n $ iterate fn particles

tickA :: [Particle] -> [Particle]
tickA particles = map tick particles

tickB :: [Particle] -> [Particle]
tickB particles = removeCollisions $ sortBy (compare `on` position) $ map tick particles

tick :: Particle -> Particle
tick (Particle p v a) = Particle (plus p newV) newV a
  where newV = plus v a

removeCollisions :: [Particle] -> [Particle]
removeCollisions [] = []
removeCollisions (x:rest) =
  if numSameElements > 0
    then removeCollisions $ drop numSameElements rest
    else x : removeCollisions rest
  where numSameElements = length $ takeWhile (\y -> position x == position y) rest

loadParticles :: String -> [Particle]
loadParticles input = map parseParticle $ lines input

parseParticle :: String -> Particle
parseParticle str = Particle p v a
  where (p:v:a:_) = map parseVector $ words str

parseVector :: String -> Vector
parseVector (_:'=':'<':xs) = map read $ splitOn "," ys
  where (ys:_) = splitOn ">" xs
parseVector _ = error "Unable to read vector"
/**
 * @author Kin-man Chung
 */

// NOTE: the package declaration and most imports were missing from this
// snippet. The java.util imports below are certain; SimpleNode, Node and
// EvaluationContext are assumed to come from the enclosing EL parser package,
// and ELException from the javax.el API.
import java.util.HashMap;
import java.util.HashSet;

public class AstMapData extends SimpleNode {

    public AstMapData(int id) {
        super(id);
    }

    public Object getValue(EvaluationContext ctx) {
        HashSet<Object> set = new HashSet<Object>();
        HashMap<Object, Object> map = new HashMap<Object, Object>();
        int paramCount = this.jjtGetNumChildren();
        for (int i = 0; i < paramCount; i++) {
            Node entry = this.children[i];
            Object v1 = entry.jjtGetChild(0).getValue(ctx);
            if (entry.jjtGetNumChildren() > 1) {
                // expr : expr  -> a map entry
                map.put(v1, entry.jjtGetChild(1).getValue(ctx));
            } else {
                // a lone expr -> a set entry
                set.add(v1);
            }
        }
        // It is an error to have mixed set/map entries
        if (set.size() > 0 && map.size() > 0) {
            throw new ELException("Cannot mix set entry with map entry.");
        }
        if (map.size() > 0) {
            return map;
        }
        return set;
    }
}
package de.fraunhofer.iosb.svs.sae.db;

import com.fasterxml.jackson.annotation.JsonFilter;
import com.fasterxml.jackson.annotation.JsonIgnore;

import javax.persistence.*;

@Entity
public class PolicyAnalysisReport {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @JsonIgnore
    private Long id;

    @Column(columnDefinition = "TEXT")
    private String report;

    @ManyToOne
    @JoinColumn(name = "analysis_report_id")
    @JsonIgnore
    private AnalysisReport analysisReport;

    @ManyToOne
    @JoinColumn(name = "policy_analysis_id")
    @JsonFilter("nameOnly")
    private PolicyAnalysis policyAnalysis;

    public PolicyAnalysisReport(String report) {
        this.report = report;
    }

    public PolicyAnalysisReport() {
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public PolicyAnalysis getPolicyAnalysis() {
        return policyAnalysis;
    }

    public void setPolicyAnalysis(PolicyAnalysis policyAnalysis) {
        this.policyAnalysis = policyAnalysis;
    }

    public String getReport() {
        return report;
    }

    public void setReport(String report) {
        this.report = report;
    }

    public AnalysisReport getAnalysisReport() {
        return analysisReport;
    }

    public void setAnalysisReport(AnalysisReport analysisReport) {
        this.analysisReport = analysisReport;
    }
}
package uk.gov.ida.notification.stubconnector;

import net.shibboleth.utilities.java.support.component.ComponentInitializationException;
import net.shibboleth.utilities.java.support.security.SecureRandomIdentifierGenerationStrategy;
import org.joda.time.DateTime;
import org.opensaml.core.xml.Namespace;
import org.opensaml.messaging.context.MessageContext;
import org.opensaml.messaging.handler.MessageHandlerException;
import org.opensaml.saml.common.binding.impl.SAMLOutboundDestinationHandler;
import org.opensaml.saml.common.binding.security.impl.SAMLOutboundProtocolMessageSigningHandler;
import org.opensaml.saml.common.messaging.context.SAMLEndpointContext;
import org.opensaml.saml.common.messaging.context.SAMLPeerEntityContext;
import org.opensaml.saml.common.messaging.context.SAMLSelfEntityContext;
import org.opensaml.saml.saml2.core.Attribute;
import org.opensaml.saml.saml2.core.AuthnContextClassRef;
import org.opensaml.saml.saml2.core.AuthnContextComparisonTypeEnumeration;
import org.opensaml.saml.saml2.core.AuthnRequest;
import org.opensaml.saml.saml2.core.Extensions;
import org.opensaml.saml.saml2.core.Issuer;
import org.opensaml.saml.saml2.core.NameID;
import org.opensaml.saml.saml2.core.NameIDPolicy;
import org.opensaml.saml.saml2.core.NameIDType;
import org.opensaml.saml.saml2.core.RequestedAuthnContext;
import org.opensaml.saml.saml2.metadata.Endpoint;
import org.opensaml.xmlsec.SignatureSigningParameters;
import org.opensaml.xmlsec.context.SecurityParametersContext;
import se.litsec.eidas.opensaml.common.EidasConstants;
import se.litsec.eidas.opensaml.common.EidasLoaEnum;
import se.litsec.eidas.opensaml.ext.RequestedAttribute;
import se.litsec.eidas.opensaml.ext.RequestedAttributes;
import se.litsec.eidas.opensaml.ext.SPType;
import se.litsec.eidas.opensaml.ext.SPTypeEnumeration;
import uk.gov.ida.notification.saml.SamlBuilder;

import java.util.List;

public class EidasAuthnRequestContextFactory {

    private final SecureRandomIdentifierGenerationStrategy identifierGenerationStrategy =
            new SecureRandomIdentifierGenerationStrategy();

    public MessageContext generate(
            Endpoint destinationEndpoint,
            String connectorEntityId,
            SPTypeEnumeration spType,
            List<String> requestedAttributes,
            EidasLoaEnum loa,
            SignatureSigningParameters signingParameters) throws ComponentInitializationException, MessageHandlerException {

        AuthnRequest request = SamlBuilder.build(AuthnRequest.DEFAULT_ELEMENT_NAME);
        request.getNamespaceManager().registerNamespaceDeclaration(
                new Namespace(EidasConstants.EIDAS_NS, EidasConstants.EIDAS_PREFIX));

        // Add the request attributes.
        //
        request.setForceAuthn(true);
        request.setID(identifierGenerationStrategy.generateIdentifier(true));
        request.setIssueInstant(new DateTime());

        // Add the issuer element (the entity that issues this request).
        //
        Issuer issuer = SamlBuilder.build(Issuer.DEFAULT_ELEMENT_NAME);
        issuer.setFormat(NameIDType.ENTITY);
        issuer.setValue(connectorEntityId);
        request.setIssuer(issuer);
        request.setDestination(destinationEndpoint.getLocation());

        Extensions extensions = SamlBuilder.build(Extensions.DEFAULT_ELEMENT_NAME);

        // Add the type of SP as an extension.
        //
        SPType spTypeElement = SamlBuilder.build(SPType.DEFAULT_ELEMENT_NAME);
        spTypeElement.setType(spType);
        extensions.getUnknownXMLObjects().add(spTypeElement);

        // Add the eIDAS requested attributes as an extension.
        //
        if (requestedAttributes != null && !requestedAttributes.isEmpty()) {
            RequestedAttributes requestedAttributesElement = SamlBuilder.build(RequestedAttributes.DEFAULT_ELEMENT_NAME);
            // Also see the RequestedAttributeTemplates class ...
            for (String attr : requestedAttributes) {
                RequestedAttribute reqAttr = SamlBuilder.build(RequestedAttribute.DEFAULT_ELEMENT_NAME);
                reqAttr.setName(attr);
                reqAttr.setNameFormat(Attribute.URI_REFERENCE);
                reqAttr.setIsRequired(true);
                requestedAttributesElement.getRequestedAttributes().add(reqAttr);
            }
            extensions.getUnknownXMLObjects().add(requestedAttributesElement);
        }
        request.setExtensions(extensions);

        // Set the requested NameID policy to "persistent".
        //
        NameIDPolicy nameIDPolicy = SamlBuilder.build(NameIDPolicy.DEFAULT_ELEMENT_NAME);
        nameIDPolicy.setFormat(NameID.PERSISTENT);
        nameIDPolicy.setAllowCreate(true);
        request.setNameIDPolicy(nameIDPolicy);

        // Create the requested authentication context and assign the "level of assurance" that we require
        // the authentication to be performed under.
        //
        RequestedAuthnContext requestedAuthnContext = SamlBuilder.build(RequestedAuthnContext.DEFAULT_ELEMENT_NAME);
        requestedAuthnContext.setComparison(AuthnContextComparisonTypeEnumeration.MINIMUM); // Should be exact!
        AuthnContextClassRef authnContextClassRef = SamlBuilder.build(AuthnContextClassRef.DEFAULT_ELEMENT_NAME);
        authnContextClassRef.setAuthnContextClassRef(loa.getUri());
        requestedAuthnContext.getAuthnContextClassRefs().add(authnContextClassRef);
        request.setRequestedAuthnContext(requestedAuthnContext);

        MessageContext context = new MessageContext() {{
            setMessage(request);
            getSubcontext(SAMLPeerEntityContext.class, true)
                    .getSubcontext(SAMLEndpointContext.class, true)
                    .setEndpoint(destinationEndpoint);
            getSubcontext(SAMLSelfEntityContext.class, true)
                    .setEntityId(connectorEntityId);
            getSubcontext(SecurityParametersContext.class, true)
                    .setSignatureSigningParameters(signingParameters);
        }};

        SAMLOutboundProtocolMessageSigningHandler signingHandler = new SAMLOutboundProtocolMessageSigningHandler();
        signingHandler.initialize();
        signingHandler.invoke(context);

        SAMLOutboundDestinationHandler destinationHandler = new SAMLOutboundDestinationHandler();
        destinationHandler.initialize();
        destinationHandler.invoke(context);

        return context;
    }
}
import debug from "debug";
import { IOException } from "../../Utils/exceptions";
import UtpAlgConfiguration from "../UtpAlgConfiguration";
import OutPacketBuffer from "./OutPacketBuffer";
import fs from 'fs';

export interface StatisticLogger {
  currentWindow(currentWindow: number): void;
  ourDifference(difference: number): void;
  minDelay(minDelay: number): void;
  ourDelay(ourDelay: number): void;
  offTarget(offTarget: number): void;
  delayFactor(delayFactor: number): void;
  windowFactor(windowFactor: number): void;
  gain(gain: number): void;
  ackReceived(seqNrToAck: number): void;
  sACK(sackSeqNr: number): void;
  next(): void;
  maxWindow(maxWindow: number): void;
  end(bytesLength: number): void;
  microSecTimeStamp(logTimeStampMillisec: number): void;
  pktRtt(packetRtt: number): void;
  rttVar(rttVar: number): void;
  rtt(rtt: number): void;
  advertisedWindow(advertisedWindowSize: number): void;
  theirDifference(theirDifference: number): void;
  theirMinDelay(theirMinDelay: number): void;
  bytesSend(bytesSend: number): void;
}

export default class UtpDataLogger implements StatisticLogger {
  LOG_NAME: string;
  private _currentWindow: number;
  private _difference: number;
  private _minDelay: number;
  private _ourDelay: number;
  private _offTarget: number;
  private _delayFactor: number;
  private _windowFactor: number;
  private _gain: number;
  private _ackReceived: number;
  private _sACK: string | null;
  private _maxWindow: number;
  private _minimumTimeStamp: number;
  private _timeStamp: number;
  private _packetRtt: number;
  private _rttVar: number;
  private _rtt: number;
  private _advertisedWindow: number;
  private _theirDifference: number;
  private _theirMinDelay: number;
  private _bytesSend: number;
  private _loggerOn: boolean = true;
  // private _logFile: RandomAccessFile | undefined;
  // private _fileChannel: FileChannel | undefined;
  private _log: debug.Debugger;

  constructor() {
    this.LOG_NAME = "log.csv";
    this._currentWindow = 0;
    this._difference = 0;
    this._minDelay = 0;
    this._ourDelay = 0;
    this._offTarget = 0;
    this._delayFactor = 0;
    this._windowFactor = 0;
    this._gain = 0;
    this._ackReceived = 0;
    this._sACK = null;
    this._maxWindow = 0;
    this._minimumTimeStamp = 0;
    this._timeStamp = 0;
    this._packetRtt = 0;
    this._rttVar = 0;
    this._rtt = 0;
    this._advertisedWindow = 0;
    this._theirDifference = 0;
    this._theirMinDelay = 0;
    this._bytesSend = 0;
    // this._loggerOn = true;
    // this._logFile = undefined;
    // this._fileChannel = undefined;
    this._log = debug("OutPacketBuffer");
    // this.openFile();
    // this.writeEntry("TimeMicros;AckRecieved;CurrentWidow_Bytes;Difference_Micros;MinDelay_Micros;OurDelay_Micros;Their_Difference_Micros;"
    //     + "Their_MinDelay_Micros;Their_Delay_Micros;OffTarget_Micros;DelayFactor;WindowFactor;Gain_Bytes;PacketRtt_Millis;"
    //     + "RttVar_Millis;Rtt_Millis;AdvertisedWindow_Bytes;MaxWindow_Bytes;BytesSend;SACK\n");
    this._minimumTimeStamp = 0;
  }

  public currentWindow(currentWindow: number): void {
    this._currentWindow = currentWindow;
  }

  public ourDifference(difference: number): void {
    this._difference = difference;
  }

  public minDelay(minDelay: number): void {
    this._minDelay = minDelay;
  }

  public ourDelay(ourDelay: number): void {
    this._ourDelay = ourDelay;
  }

  public offTarget(offTarget: number): void {
    this._offTarget = offTarget;
  }

  public delayFactor(delayFactor: number): void {
    this._delayFactor = delayFactor;
  }

  public windowFactor(windowFactor: number): void {
    this._windowFactor = windowFactor;
  }

  public gain(gain: number): void {
    this._gain = gain;
  }

  public ackReceived(seqNrToAck: number): void {
    this._ackReceived = seqNrToAck;
  }

  public sACK(sackSeqNr: number): void {
    if (this._sACK != null) {
      this._sACK += " " + sackSeqNr;
    } else {
      this._sACK = "" + sackSeqNr;
    }
  }

  public next(): void {
    if (UtpAlgConfiguration.DEBUG && this._loggerOn) {
      let logEntry = "" + (this._timeStamp - this._minimumTimeStamp) + ";";
      logEntry += this._ackReceived + ";";
      logEntry += this._currentWindow + ";";
      logEntry += this._difference + ";";
      logEntry += this._minDelay + ";";
      logEntry += this._ourDelay + ";";
      logEntry += this._theirDifference + ";";
      logEntry += this._theirMinDelay + ";";
      logEntry += (this._theirDifference - this._theirMinDelay) + ";";
      logEntry += this._offTarget + ";";
      logEntry += this._delayFactor + ";";
      logEntry += this._windowFactor + ";";
      logEntry += this._gain + ";";
      logEntry += this._packetRtt + ";";
      logEntry += this._rttVar + ";";
      logEntry += this._rtt + ";";
      logEntry += this._advertisedWindow + ";";
      logEntry += this._maxWindow + ";";
      logEntry += this._bytesSend;
      if (this._sACK != null) {
        logEntry += ";(" + this._sACK + ")\n";
      } else {
        logEntry += "\n";
      }
      this._log(logEntry);
      this._sACK = null;
    }
  }

  public maxWindow(maxWindow: number): void {
    this._maxWindow = maxWindow;
  }

  // private openFile(): void {
  //   if (UtpAlgConfiguration.DEBUG && this._loggerOn) {
  //     try {
  //       this._logFile = fs.writeFileSync("testData/" + this.LOG_NAME, "rw");
  //       this._fileChannel = this._logFile.getChannel();
  //       this._fileChannel.truncate(0);
  //     } catch (e: any) {
  //       e.printStackTrace();
  //     }
  //   }
  // }

  // private closeFile(): void {
  //   if (UtpAlgConfiguration.DEBUG && this._loggerOn) {
  //     try {
  //       this._logFile.close();
  //       this._fileChannel.close();
  //     } catch (exp: any) {
  //       exp.printStackTrace();
  //     }
  //   }
  // }

  public end(bytesLength: number): void {
    if (UtpAlgConfiguration.DEBUG && this._loggerOn) {
      // this.closeFile();
      let seconds = (this._timeStamp - this._minimumTimeStamp) / 1000000;
      let sendRate = 0;
      if (seconds != 0) {
        sendRate = (bytesLength / 1024) / seconds;
      }
      console.debug("SENDRATE: " + sendRate + "kB/sec");
    }
  }

  // private writeEntry(entry: string): void {
  //   if (UtpAlgConfiguration.DEBUG && this._loggerOn && entry != null) {
  //     let bbuffer: Buffer = Buffer.alloc(entry.length);
  //     bbuffer.write(entry);
  //     bbuffer.reverse();
  //     try {
  //       this._fileChannel.write(bbuffer);
  //     } catch (e) {
  //       console.error("failed to write: " + entry);
  //     }
  //   }
  // }

  public microSecTimeStamp(logTimeStampMillisec: number): void {
    if (this._minimumTimeStamp == 0) {
      this._minimumTimeStamp = logTimeStampMillisec;
    }
    this._timeStamp = logTimeStampMillisec;
  }

  public pktRtt(packetRtt: number): void {
    this._packetRtt = packetRtt;
  }

  public rttVar(rttVar: number): void {
    this._rttVar = rttVar;
  }

  public rtt(rtt: number): void {
    this._rtt = rtt;
  }

  public advertisedWindow(advertisedWindowSize: number): void {
    this._advertisedWindow = advertisedWindowSize;
  }

  public theirDifference(theirDifference: number): void {
    this._theirDifference = theirDifference;
  }

  public theirMinDelay(theirMinDelay: number): void {
    this._theirMinDelay = theirMinDelay;
  }

  public bytesSend(bytesSend: number): void {
    this._bytesSend = bytesSend;
  }
}
import Data.List (unfoldr)

myIterate :: (a -> a) -> a -> [a]
myIterate f x = x : myIterate f (f x)

myUnfoldr :: (b -> Maybe (a, b)) -> b -> [a]
myUnfoldr f s = case f s of
  Nothing      -> []
  Just (x, s') -> x : myUnfoldr f s'

dict :: [(Integer, (Char, Integer))]
dict =
  [ (5, ('k', 11))
  , (7, ('l', 3))
  , (0, ('H', 9))
  , (17, ('s', 5))
  , (9, ('a', 17))
  , (11, ('e', 7))
  , (3, ('l', 20))
  ]

takeTo :: (a -> Bool) -> [a] -> [a]
takeTo p = unfoldr $ \s -> case s of
  [] -> Nothing
  x : xs
    | p x       -> Just (x, [])
    | otherwise -> Just (x, xs)
#ifndef OPENPOSE_HAND_HAND_GPU_RENDERER_HPP
#define OPENPOSE_HAND_HAND_GPU_RENDERER_HPP

#include <openpose/core/common.hpp>
#include <openpose/core/gpuRenderer.hpp>
#include <openpose/hand/handParameters.hpp>
#include <openpose/hand/handRenderer.hpp>

namespace op
{
    class OP_API HandGpuRenderer : public GpuRenderer, public HandRenderer
    {
    public:
        HandGpuRenderer(const float renderThreshold,
                        const float alphaKeypoint = HAND_DEFAULT_ALPHA_KEYPOINT,
                        const float alphaHeatMap = HAND_DEFAULT_ALPHA_HEAT_MAP);

        ~HandGpuRenderer();

        void initializationOnThread();

        void renderHand(Array<float>& outputData, const std::array<Array<float>, 2>& handKeypoints);

    private:
        float* pGpuHand; // GPU aux memory

        DELETE_COPY(HandGpuRenderer);
    };
}

#endif // OPENPOSE_HAND_HAND_GPU_RENDERER_HPP
/// Creates a new node with the passed dot, no payload, no context and stage SLT.
///
/// # Arguments
///
/// * `dot` - Message dot
pub fn new(dot: Dot) -> Node {
    let predecessors = SmallVec::new();
    let successors = SmallVec::new();
    let bits = BV::default();
    Node {
        payload: None,
        dot,
        context: None,
        predecessors,
        successors,
        stage: Stage::SLT,
        bits,
    }
}
def validate_key(config, schema, key):
    if schema[key].get('required') and config.get(key) is None:
        raise ValidationFail('Required key "%s" missing.' % key)
    if config.get(key) is None:
        return
    value = config[key]
    field_type = schema[key].get('type')
    if field_type == 'string':
        _validate_string(value, key)
    elif field_type == 'integer':
        _validate_integer(value, key)
    elif field_type == 'revision':
        _validate_revision(value, key)
    elif field_type == 'boolean':
        _validate_boolean(value, key)
    if 'choices' in schema[key] and value not in schema[key]['choices']:
        _fail(value, key)
from srsly import json_loads

from ..util import partition


def stack_exchange(loc=None):
    if loc is None:
        raise ValueError("No default path for Stack Exchange yet")
    rows = []
    with loc.open("r", encoding="utf8") as file_:
        for line in file_:
            eg = json_loads(line)
            rows.append(((eg["text1"], eg["text2"]), int(eg["label"])))
    train, dev = partition(rows, 0.7)
    return train, dev
Generating infinite digraphs by derangements

A set $\mathcal{S}$ of derangements (fixed-point-free permutations) of a set $V$ generates a digraph with vertex set $V$ and arcs $(x,x^\sigma)$ for $x\in V$ and $\sigma\in\mathcal{S}$. We address the problem of characterising those infinite (simple loopless) digraphs which are generated by finite sets of derangements. The case of finite digraphs was addressed in earlier work by the second and third authors. A criterion is given for derangement generation which resembles the criterion given by De Bruijn and Erdős for vertex colourings of graphs in that the property for an infinite digraph is determined by properties of its finite sub-digraphs. The derangement generation property for a digraph is linked with the existence of a finite 1-factor cover for an associated bipartite (undirected) graph.

Introduction

A derangement of a set V is a bijection σ : V → V such that x^σ ≠ x for all x ∈ V. A digraph D = (V, E) consists of a (possibly infinite) vertex set V = V(D) and an arc set E = E(D) ⊆ {(x, y) : x, y ∈ V, x ≠ y}. Thus our digraphs are simple and loopless by definition. Each set S of derangements of V corresponds to a digraph, namely (V, E(S)) where E(S) := {(x, x^σ) : x ∈ V, σ ∈ S}. Moreover we say that a set S of derangements of V generates a digraph D = (V, E) if E = E(S). Digraphs of this kind were introduced in earlier work by the second and third authors, where they were called derangement action digraphs and denoted $\overrightarrow{DA}(V,S) = (V, E(S))$. Special attention was paid there to those where the arc set E is symmetric in the sense that E is equal to E* := {(y, x) : (x, y) ∈ E}, and in this case the digraph is viewed as a simple undirected graph. It was proved there that the class of derangement action digraphs contains several families of finite simple graphs and digraphs, including all regular digraphs, and all regular graphs which are either of even valency, or are bipartite, or are vertex-transitive. In that work it is always assumed that the generating set S of derangements is finite. This requirement is always satisfied in the case of finite digraphs, but becomes important when infinite digraphs are considered. Accordingly the authors asked which infinite regular simple graphs and digraphs are generated by finitely many derangements. Here we characterise such digraphs.

We introduce some standard terminology. Let D = (V, E) be a digraph. For x ∈ V, the out-neighbourhood of x is N⁺_D(x) = {y : (x,y) ∈ E} and the in-neighbourhood of x is N⁻_D(x) = {y : (y,x) ∈ E}; the out-degree and in-degree of x are deg⁺_D(x) = |N⁺_D(x)| and deg⁻_D(x) = |N⁻_D(x)|, and for a subset T of V we write N⁺_D(T) = ∪_{x∈T} N⁺_D(x).

Theorem 1. Let D be a (possibly infinite) digraph and let k be a positive integer. Then D can be generated by at most k derangements if and only if:
(i) deg⁺_D(x) ≤ k and deg⁻_D(x) ≤ k for each x ∈ V(D);
(ii) for every finite subset T of V(D), $\sum_{x\in N^+_D(T)}\deg^-_D(x)-\sum_{x\in T}\deg^+_D(x)\le k\,\big(|N^+_D(T)|-|T|\big)$;
(iii) the condition obtained from (ii) by reversing the direction of every arc.

The conditions (i), (ii) and (iii) can be seen to be necessary for D to be generated by at most k derangements as follows. Suppose that D is a digraph generated by at most k derangements. Because a derangement in the generating set of D induces exactly one arc (x, x^σ) from each vertex x, and exactly one arc (x^{σ⁻¹}, x) to each vertex x, condition (i) must hold. Let T be a finite subset of V. Any derangement in the generating set of D must induce |T| arcs from vertices in T to vertices in N⁺_D(T) and hence |N⁺_D(T)| − |T| arcs from vertices outside of T to vertices in N⁺_D(T). However, there are $\sum_{x\in N^+_D(T)}\deg^-_D(x)-\sum_{x\in T}\deg^+_D(x)$ arcs from vertices outside of T to vertices in N⁺_D(T), and hence condition (ii) must hold. A similar argument with the direction of the arcs reversed shows that condition (iii) must also hold.
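To make the conditions concrete, here is a short Python sketch (ours, not from the paper) that checks conditions (i) and (ii) by brute force for a small finite digraph given as a dictionary of out-neighbour sets; the encoding and function names are assumptions for illustration.

```python
# Checks conditions (i) and (ii) of Theorem 1 for a *finite* digraph,
# represented as {vertex: set of out-neighbours}. Brute force over all
# non-empty vertex subsets, so only suitable for tiny examples.
from itertools import chain, combinations

def in_degree(D, x):
    return sum(1 for u in D if x in D[u])

def condition_i(D, k):
    return all(len(D[x]) <= k and in_degree(D, x) <= k for x in D)

def condition_ii(D, k):
    verts = list(D)
    subsets = chain.from_iterable(combinations(verts, r) for r in range(1, len(verts) + 1))
    for T in subsets:
        NT = set().union(*(D[x] for x in T))          # N^+_D(T)
        lhs = sum(in_degree(D, y) for y in NT) - sum(len(D[x]) for x in T)
        if lhs > k * (len(NT) - len(T)):
            return False
    return True

# A directed 3-cycle is 1-regular, so a single derangement suffices:
C3 = {0: {1}, 1: {2}, 2: {0}}
print(condition_i(C3, 1), condition_ii(C3, 1))  # True True
```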
We prove Theorem 1 by representing digraphs as bipartite graphs and proving an analogous result concerning (possibly infinite) bipartite graphs. A graph G = (V, E) consists of a (possibly infinite) vertex set V = V(G) and an edge set E = E(G) ⊆ {{x, y} : x, y ∈ V, x ≠ y}. For a vertex x ∈ V, we define the neighbourhood of x to be N_G(x) = {y ∈ V : {x, y} ∈ E} and the degree of x to be deg_G(x) = |N_G(x)|. The graph G is said to be k-regular if all vertices have degree k. For a subset S of V we define N_G(S) := ∪_{x∈S} N_G(x). A 1-factor F of G is a set of edges of G such that each vertex of G is incident with exactly one edge in F. A 1-factor cover of G is a set F of 1-factors of G such that each edge of G is in at least one 1-factor in F.

Theorem 2. Let G be a (possibly infinite) bipartite graph with bipartition {V₁, V₂} and let k be a positive integer. Then G has a 1-factor cover with at most k 1-factors if and only if:
(i) deg_G(x) ≤ k for each x ∈ V(G);
(ii) for every finite subset T of V₁ and every finite subset T of V₂, $\sum_{x\in N_G(T)}\deg_G(x)-\sum_{x\in T}\deg_G(x)\le k\,\big(|N_G(T)|-|T|\big)$.

Similar arguments to those given above establish the necessity of these conditions for the existence of a 1-factor cover of G with at most k 1-factors. Bonisoli and Cariolaro introduced the study of 1-factor covers, and Cariolaro and Rizzi gave a characterisation of the family of finite bipartite graphs which have a 1-factor cover with at most k 1-factors. Theorems 1 and 2 resemble the De Bruijn–Erdős theorem in that they characterise a property of an infinite graph in terms of properties of its finite subgraphs. The proofs of Theorems 1 and 2 depend on the axiom of choice, in the guise of Zorn's lemma (see Lemma 9). We state one consequence of Theorems 1 and 2 that generalises earlier finite results and gives some explicit families of infinite digraphs and graphs that can be generated by derangements.

Corollary 3. Let k be a positive integer.
(a) A k-regular digraph can be generated by k derangements but no fewer.
(b) A k-regular graph can be generated by k derangements but no fewer.

The rest of this paper is organised as follows. In the next section we describe the notion of the bipartite double of a digraph and show that Theorem 2 implies Theorem 1, and that these two results imply Corollary 3. In Section 3 we characterise those digraphs with finite in- and out-degrees that can be generated by finitely many derangements. This characterisation follows without too much effort from well known results. In Section 4 we undertake the more substantial task of proving Theorem 2. We conclude with Section 5 in which we exhibit examples of digraphs with low maximum degree that require many (including infinitely many) derangements to generate them.

The bipartite double of a digraph D is the bipartite graph G with vertex set V(D) × {1, 2}, with parts V(D) × {1} and V(D) × {2}, in which {(x,1), (y,2)} is an edge if and only if (x,y) is an arc of D.

Let D be a digraph generated by some set S of derangements of a set V. The arc subset generated by a single derangement σ ∈ S is E(σ) := {(x, x^σ) : x ∈ V} and, by the definition of a derangement, for each x ∈ V, E(σ) contains exactly one arc of the form (x, y) and one arc of the form (y′, x) (for some y, y′ ∈ V \ {x}). Thus E(σ) comprises exactly one arc of D into each vertex, and also comprises exactly one arc of D out of each vertex. The edge subset of the bipartite double G of D corresponding to E(σ) is F(σ) := {{(x,1), (x^σ,2)} : x ∈ V}, and each vertex of G is incident with exactly one edge of F(σ). Thus F(σ) is a 1-factor of G. Since each arc of D is generated by some derangement in S, the 1-factors F(σ), for σ ∈ S, form a 1-factor cover of G with |S| 1-factors. Conversely, every 1-factor cover F of G corresponds to a set of |F| derangements that generates D. With this equivalence established it is easy to see that Theorem 2 implies Theorem 1.
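The correspondence between derangements and 1-factors of the bipartite double can be sketched in a few lines of Python (again our own illustration; the dictionary encoding is an assumption):

```python
# Sketch of the derangement <-> 1-factor correspondence on the bipartite
# double, for a finite digraph stored as {vertex: set of out-neighbours}.

def bipartite_double(D):
    """Edges {(x,1),(y,2)} of the double, one per arc (x,y) of D."""
    return {((x, 1), (y, 2)) for x in D for y in D[x]}

def one_factor(sigma):
    """The 1-factor F(sigma) induced by a derangement given as a dict x -> x^sigma."""
    assert all(x != y for x, y in sigma.items()), "not fixed-point-free"
    return {((x, 1), (sigma[x], 2)) for x in sigma}

# The directed 3-cycle again, generated by the single derangement x -> x+1 mod 3:
C3 = {0: {1}, 1: {2}, 2: {0}}
sigma = {0: 1, 1: 2, 2: 0}
print(one_factor(sigma) == bipartite_double(C3))  # True: one derangement covers all arcs
```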
Proof that Theorem 2 implies Theorem 1. Let D be a digraph and let k be a positive integer. Let G be the bipartite double of D and recall that G has parts V(D) × {1} and V(D) × {2}. By our discussion immediately above, D can be generated by at most k derangements if and only if G has a 1-factor cover with at most k 1-factors. Assuming Theorem 2 holds, G has a 1-factor cover with at most k 1-factors if and only if conditions (i) and (ii) of Theorem 2 hold for G, and these conditions for G translate directly into conditions (i), (ii) and (iii) of Theorem 1 for D.

We also show that Theorem 1 implies Corollary 3.

Proof that Theorem 1 implies Corollary 3. Let D be a k-regular digraph. By Theorem 1(i), D cannot be generated by fewer than k derangements. Let T be a finite subset of V(D). Then $\sum_{x\in N^+_D(T)}\deg^-_D(x)-\sum_{x\in T}\deg^+_D(x)=k|N^+_D(T)|-k|T|$, so condition (ii) of Theorem 1 holds with equality, and condition (iii) holds by the symmetric argument; hence D can be generated by at most k derangements, proving part (a). Part (b) is proved in the same way.

Possibly infinite sets of derangements

In this section we characterise the locally finite digraphs which can be generated by some (possibly infinite) set of derangements. The characterisation follows without too much difficulty from well known results. By considering bipartite doubles, it suffices to characterise those locally finite bipartite graphs that have a 1-factor cover. Graphs G such that each edge of G is in some 1-factor of G are commonly called 1-extendable. Any graph with a 1-factor cover is clearly 1-extendable. Conversely, for any 1-extendable graph G and each edge {u, v} of G, let F_{u,v} denote a 1-factor of G containing {u, v}. Then {F_{u,v} : {u, v} ∈ E(G)} is a 1-factor cover of G. Thus it suffices, in fact, to characterise those locally finite bipartite graphs that are 1-extendable. For a set X, let P(X) denote the set of all subsets of X. We will make use of the following result of Rado which extends a famous theorem of Hall concerning finite bipartite graphs to the case of locally finite graphs.

Theorem 4 (Rado). A locally finite bipartite graph G with bipartition {V₁, V₂} has a 1-factor if and only if |N_G(T)| ≥ |T| for every finite subset T of V₁ and every finite subset T of V₂.

In Lemma 5 below we use Theorem 4 to obtain a criterion for 1-extendability of locally finite bipartite graphs. A version of Lemma 5 for finite graphs was proved in earlier work. Our proof of Theorem 2 will rely heavily on multigraphs. We now introduce some notation concerning them.

Multigraph definitions and notation

A multigraph G = (V, E) has vertex set V = V(G) and edge set E = E(G) such that each edge is incident with exactly two distinct vertices (that is, G has no loops). Two vertices incident with some edge are called adjacent. For x ∈ V(G), we denote by N_G(x) the set of vertices adjacent to x, and for a subset T of V(G), let N_G(T) = ∪_{x∈T} N_G(x). For distinct vertices x, y ∈ V(G), let µ_G(x, y) denote the number of edges between (incident with) x and y. The degree deg_G(x) of a vertex x in a multigraph G is the number of edges incident with it, so deg_G(x) = Σ_{y∈N_G(x)} µ_G(x, y) ≥ |N_G(x)|, but equality need not hold. A multigraph is k-regular if each of its vertices has degree k. A multigraph G is a (simple) graph if µ_G(x, y) ≤ 1 for all x, y ∈ V(G). (Note that in this case we can identify each edge with the pair of vertices it is incident with and so recover the definition of graph given in the introduction.) A multigraph G₁ is said to be a subgraph of a multigraph G₂ if V(G₁) ⊆ V(G₂) and µ_{G₁}(x, y) ≤ µ_{G₂}(x, y) for all distinct x, y ∈ V(G₁). As in the case of graphs, a 1-factor F of a multigraph G is a set of edges of G such that each vertex of G is incident with exactly one edge in F. For a set X, recall that P(X) denotes the set of all subsets of X. We also denote by P_fin(X) the set of all finite subsets of X.

Thickenings of multigraphs

To study 1-factor covers of graphs it is convenient to be able to 'add further edges between pairs of already adjacent vertices'. The following concepts allow us to do this formally.

1. A multigraph H is a thickening of a multigraph G if V(H) = V(G), µ_H(x, y) ≥ µ_G(x, y) for all distinct x, y ∈ V(G), and µ_H(x, y) = 0 whenever µ_G(x, y) = 0.
2. For a multigraph G and a subset S of V(G), we say that a thickening H of G is a k-thickening of G on S if deg_H(x) ≤ k for every x ∈ S, and deg_H(x) = k for every x ∈ S with N_G(x) ⊆ S.

The following result is critical to our approach. Part (i) of it can be obtained from well known results in several ways.
Here, we sketch a proof based on Theorem 4.

Lemma 7. (i) The edge set of any k-regular bipartite multigraph can be partitioned into k 1-factors.
(ii) A bipartite (simple) graph G has a 1-factor cover with at most k 1-factors if and only if G has a k-regular thickening.

Proof. (i) Let G* be a k-regular bipartite multigraph with bipartition {V₁, V₂}. For any T ∈ P(V₁) ∪ P(V₂), the number of edges incident with vertices in N_{G*}(T) is k|N_{G*}(T)|, and since every edge incident with a vertex in T is among these edges, k|N_{G*}(T)| ≥ k|T| and hence |N_{G*}(T)| ≥ |T|. So by applying Theorem 4 to G* (or, more precisely, to the unique simple graph of which G* is a thickening) we see that G* has a 1-factor. Removing the edges of this 1-factor from G* results in a (k − 1)-regular bipartite multigraph, and so we can proceed inductively to prove part (i).

(ii) Let G be a simple bipartite graph. If G has a k-regular thickening G* then, by part (i), there is a partition F* = {F*₁, ..., F*ₖ} of the edge set of G* into k 1-factors. For each i ∈ {1, ..., k}, let Fᵢ be the 1-factor of G obtained by replacing each edge of F*ᵢ with the edge of G that is incident with the same two vertices. Then F = {F₁, ..., Fₖ} is a 1-factor cover of G with at most k 1-factors (note F₁, ..., Fₖ may not all be distinct).

We establish some basic properties of thickenings in our next result.

Lemma 8. Let G be a multigraph, let S be a subset of V(G), and let S′ be a subset of S. Then (i) any k-regular thickening of G is a k-thickening of G on S, and (ii) any k-thickening G* of G on S is a k-thickening of G on S′. Then, because G* is a thickening of G, every edge of G* that is incident with x is also in G*, and (ii) is proved.

Lemma 8 is the motivation for the definition of a 'k-thickening on a set S'. It implies that having a k-thickening on S for each S ⊆ V(G) is a necessary condition for a multigraph G to have a k-regular thickening. In our next result we show, in the style of the De Bruijn–Erdős theorem, that possessing this property on all finite vertex subsets is sufficient to guarantee that a multigraph has a k-regular thickening.

Lemma 9. Let G be a bipartite (simple) graph and k be a positive integer. Then G has a 1-factor cover with at most k 1-factors if and only if, for each finite subset S of V(G), there exists a k-thickening of G on S.

Proof. Let V = V(G). Suppose first that G has a 1-factor cover with at most k 1-factors. Then, by Lemma 7(ii), G has a k-regular thickening G*. Therefore, by Lemma 8(i), for any S ∈ P_fin(V), G* is a k-thickening of G on S.

Conversely suppose that, for every S ∈ P_fin(V), there exists a k-thickening of G on S. Consider the set 𝒢 of all thickenings G′ of G with the property that there is a k-thickening of G′ on S for every finite subset S of V. Note that 𝒢 is non-empty (since G ∈ 𝒢), and each multigraph in 𝒢 has maximum degree at most k (for if deg_{G′}(x) > k for some multigraph G′ and x ∈ V(G′), then there is no k-thickening of G′ on {x}). Let (𝒢, ≼) be the poset formed by 𝒢 under subgraph inclusion. Let C be a chain in (𝒢, ≼) and let G_C be the union of the multigraphs in C, so µ_{G_C}(x, y) = max{µ_{G′}(x, y) : G′ ∈ C} for all distinct x, y ∈ V. In particular, if {x, y} ∉ E(G), then µ_{G′}(x, y) = 0 for all G′ ∈ C and hence µ_{G_C}(x, y) = 0. It follows that G_C is a thickening of G. We will show that G_C ∈ 𝒢 and hence that G_C is an upper bound for C in (𝒢, ≼). Let S ∈ P_fin(V). Because S is finite and G_C has maximum degree at most k, the set E_S of edges of G_C that are incident with at least one vertex in S is finite. By definition of G_C, for each {y, z} ∈ E_S, there is some G′_{y,z} ∈ C such that µ_{G′_{y,z}}(y, z) = µ_{G_C}(y, z).
So, since C is a chain, there exists a single G′_S ∈ C such that µ_{G′_S}(y, z) = µ_{G_C}(y, z) for every {y, z} ∈ E_S. Because G′_S ∈ 𝒢, there is a k-thickening H′ of G′_S on S, and it follows from our definition of G′_S that H′ is also a k-thickening of G_C on S. Thus G_C ∈ 𝒢 and G_C is an upper bound for C in (𝒢, ≼). So every chain in (𝒢, ≼) has an upper bound, and by Zorn's lemma, (𝒢, ≼) contains a maximal element, say G*.

We claim that G* is a k-regular thickening of G. Note that, if this is true, then by Lemma 7(ii), G has a 1-factor cover with at most k 1-factors. Since G* is a thickening of G (by the definition of 𝒢), it only remains to prove that G* is k-regular. As noted above, since G* ∈ 𝒢, each vertex of G* has degree at most k. Suppose for a contradiction that G* has a vertex x with deg_{G*}(x) < k. Let N_G(x) = {y₁, ..., yₜ}, where t < k, and for each i ∈ {1, ..., t}, let G*ᵢ be the multigraph obtained from G* by adding one additional edge between x and yᵢ. Because G* is maximal in (𝒢, ≼), for each i ∈ {1, ..., t} we have G*ᵢ ∉ 𝒢 and hence for some Sᵢ ∈ P_fin(V) there is no k-thickening of G*ᵢ on Sᵢ. Let S = N_G(x) ∪ S₁ ∪ ... ∪ Sₜ, and note that S is finite. Because G* ∈ 𝒢, there is a k-thickening H* of G* on S. By definition of a 'k-thickening of G* on S' it follows that deg_{H*}(x) = k, since N_{G*}(x) = N_G(x) ⊆ S. Thus, for some j ∈ {1, ..., t}, we have µ_{H*}(x, yⱼ) ≥ µ_{G*}(x, yⱼ) + 1 = µ_{G*ⱼ}(x, yⱼ), and so H* is a k-thickening of G*ⱼ on S. But then, by Lemma 8(ii), H* is a k-thickening of G*ⱼ on Sⱼ, and this is a contradiction to the definition of Sⱼ. So G* is indeed k-regular and the proof is complete.

Flow networks and the proof of Theorem 2

Following the approach used in the finite case, our proof of Theorem 2 will make use of flow networks. We give the basic definitions here and refer the reader to the literature for a more detailed treatment. A flow network is a finite digraph where every arc has a nonnegative capacity associated with it and where two special vertices are distinguished as a source and a sink, such that the source has no arcs into it and the sink has no arcs out from it. A flow in such a network is an assignment of a nonnegative value to each arc such that no value exceeds the capacity of its arc and, at each vertex other than the source and sink, the total flow in equals the total flow out. The sum of the flows on arcs emerging from the source is the magnitude of the flow (and is necessarily equal to the sum of the flows on arcs going into the sink). A cut in such a network is a bipartition (A, B) of the vertices with the source in A and the sink in B. The capacity of a cut (A, B) is the total capacity of the arcs that emerge from vertices in A and go into vertices in B. The max-flow min-cut theorem states that the maximum magnitude of a flow through such a network is equal to the minimum capacity of a cut over all possible cuts of the network. Furthermore, the integer flow theorem states that if a flow network has integer capacities on all of its arcs then it has an integer-valued maximum flow.

Proof of Theorem 2. Let V = V(G) and let {V₁, V₂} be the bipartition of V. We first prove the 'only if' direction. Suppose that G has a 1-factor cover with at most k 1-factors. Because a 1-factor contains exactly one edge incident with a given vertex, condition (i) must hold. For any set T ∈ P_fin(V₁) ∪ P_fin(V₂), each 1-factor of G contains |T| edges between vertices in T and vertices in N_G(T), and hence contains |N_G(T)| − |T| edges that are incident with a vertex in N_G(T) but not with a vertex in T.
Since there are at most k 1-factors, there are at most k(|N_G(T)| − |T|) edges of G that are incident with a vertex in N_G(T) but not with a vertex in T. On the other hand, there are exactly $\sum_{x\in N_G(T)}\deg_G(x)-\sum_{x\in T}\deg_G(x)$ such edges (noting that no edge is incident with two vertices in N_G(T), since N_G(T) ⊆ V₁ or N_G(T) ⊆ V₂). Thus condition (ii) holds.

So it remains to prove the 'if' direction. Fix k and suppose that conditions (i) and (ii) of Theorem 2 hold for G. We will use Lemma 9 to show that G has a 1-factor cover with at most k 1-factors. Let S* ∈ P_fin(V). We wish to find a k-thickening of G on S*. We may assume that S* ∩ Vᵢ ≠ ∅ for each i ∈ {1, 2}, for otherwise G is an empty graph and so the empty graph on S* is the unique k-thickening of G on S*. Let S₁ = S* ∩ V₁, let S = S* ∪ N_G(S₁), let S₂ = S ∩ V₂, and note that S is finite and S₁ = S ∩ V₁. It suffices to find a k-thickening H of G on S, because then, by Lemma 8(ii), H will be a k-thickening of G on S*. For each x ∈ S, let c_x = k − deg_G(x) and note that c_x ≥ 0 because (i) holds. Because (ii) holds, for any T ∈ P_fin(S₁) ∪ P_fin(S₂) we have

$$\sum_{x\in N_G(T)}\deg_G(x)-\sum_{x\in T}\deg_G(x)\le k\,\big(|N_G(T)|-|T|\big). \qquad (1)$$

Let S″₂ = {y ∈ S₂ : N_G(y) ⊆ S₁} and S′₂ = S₂ \ S″₂. If either T ⊆ S₁ or T ⊆ S″₂, then N_G(T) ⊆ S, and hence (1) is equivalent to

$$\sum_{x\in T}c_x\le\sum_{x\in N_G(T)}c_x. \qquad (2)$$

Let m = Σ_{x∈S₁} c_x and m″ = Σ_{y∈S″₂} c_y. Because (2) holds with T = S″₂ and because N_G(S″₂) ⊆ S₁, we have m″ ≤ Σ_{x∈N_G(S″₂)} c_x ≤ m. We will show that a k-thickening of G on S must exist if there is an integer flow of magnitude m through the flow network D defined as follows.

• D has vertex set the disjoint union S ∪ {a, b, b′}, where a is the source, b is the sink, each vertex of S is an internal vertex of D, and b′ is an additional internal vertex.
• The arcs of D and their capacities are as follows:
  – for each x ∈ S₁ and y ∈ S₂ with {x, y} ∈ E(G), (x, y) is an arc with infinite capacity;
  – for each x ∈ S₁, (a, x) is an arc with capacity c_x;
  – for each y ∈ S′₂, (y, b′) is an arc with capacity c_y;
  – for each y ∈ S″₂, (y, b) is an arc with capacity c_y;
  – (b′, b) is an arc with capacity m − m″.
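As a concrete illustration of this construction, the following self-contained Python sketch (ours, not part of the paper) builds the network D for a small finite bipartite graph and computes a maximum flow with a textbook Edmonds–Karp routine; the dictionary encoding and the 4-cycle example with k = 3 are assumptions for illustration.

```python
# Sketch of the network D from the proof, for a finite bipartite G. G maps
# each x in S1 to its set of neighbours in S2; c maps each vertex to
# c_x = k - deg(x). Max flow is a tiny Edmonds-Karp (mutates its input).
from collections import deque

INF = float('inf')

def max_flow(cap, s, t):
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual network
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, r in cap.get(u, {}).items():
                if r > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # find bottleneck and augment along the path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += push
        flow += push

def build_network(G, S1, S2_prime, S2_dprime, c):
    m = sum(c[x] for x in S1)
    m2 = sum(c[y] for y in S2_dprime)
    cap = {'a': {x: c[x] for x in S1}, 'b_': {'b': m - m2}}
    for x in S1:
        cap[x] = {y: INF for y in G[x]}     # infinite-capacity arcs x -> y
    for y in S2_prime:
        cap[y] = {'b_': c[y]}               # y -> b'
    for y in S2_dprime:
        cap[y] = {'b': c[y]}                # y -> b
    return cap, m

# A 4-cycle x1-y1-x2-y2-x1 with k = 3: every c-value is 1 and both y's have
# all their neighbours in S1, so S2'' = {y1, y2} and S2' is empty.
G = {'x1': {'y1', 'y2'}, 'x2': {'y1', 'y2'}}
c = {'x1': 1, 'x2': 1, 'y1': 1, 'y2': 1}
cap, m = build_network(G, {'x1', 'x2'}, set(), {'y1', 'y2'}, c)
print(max_flow(cap, 'a', 'b'), '==', m)   # 2 == 2
```

In this example the maximum flow equals m, and adding one edge per unit of flow on the arcs (x, y) makes the 4-cycle a 3-regular multigraph, in line with Lemma 7(ii).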
Observe that, since Σ_{x∈S₁} c_x = m, any flow of magnitude m through D must use each arc incident with a at full capacity. Similarly, such a flow must use each arc incident with b at full capacity because m = (m − m″) + Σ_{y∈S″₂} c_y. For any integer flow of magnitude m through D we associate the k-thickening H of G on S obtained from G by adding, for each x ∈ S₁ and y ∈ S₂ such that {x, y} ∈ E(G), i_xy further edges between x and y, where i_xy is the flow along the arc (x, y) in D. To see that H is indeed a k-thickening of G on S, note the following (recalling the definition of a flow given immediately before this proof).

• For each x ∈ S₁, N_G(x) ⊆ S₂ ⊆ S and the total flow into x is c_x because the arc (a, x) is used at full capacity. So the total flow out of x, i_x = Σ_{y∈N_G(x)} i_xy, must be c_x. Thus, since N_G(x) ⊆ S, we have deg_H(x) = deg_G(x) + i_x = deg_G(x) + c_x = k.
• For each y ∈ S″₂, N_G(y) ⊆ S₁ ⊆ S and the total flow out of y is c_y because the arc (y, b) is used at full capacity. So the total flow into y, i_y = Σ_{x∈N_G(y)} i_xy, must be c_y. Thus, since N_G(y) ⊆ S, we have deg_H(y) = deg_G(y) + i_y = deg_G(y) + c_y = k.
• For each y ∈ S′₂, the total flow out of y is at most c_y, the capacity of the arc (y, b′). So the total flow into y, i_y = Σ_{x∈N_G(y)∩S₁} i_xy, must be at most c_y. Thus we have deg_H(y) = deg_G(y) + i_y ≤ deg_G(y) + c_y = k.

By the max-flow min-cut theorem and the integer flow theorem, such a flow exists provided that every cut of D has capacity at least m. So let (A, B) be a cut of D, write A₁ = S₁ ∩ A, B₁ = S₁ ∩ B, A′₂ = S′₂ ∩ A, A″₂ = S″₂ ∩ A and B″₂ = S″₂ ∩ B, and note that we may assume no arc of infinite capacity crosses from A to B (so that N_G(A₁) ⊆ A′₂ ∪ A″₂ and N_G(B″₂) ⊆ B₁), for otherwise c(A, B) is infinite. We will complete the proof by showing that c(A, B) − m is nonnegative, considering two cases according to whether or not b′ ∈ A.

First suppose that b′ ∉ A. Then

$$c(A,B)-m=\Big(\sum_{x\in B_1}c_x+\sum_{y\in A_2'\cup A_2''}c_y\Big)-m=\sum_{y\in A_2'\cup A_2''}c_y-\sum_{x\in A_1}c_x,$$

where the sums in the first expression come from arcs from a to vertices in B₁, and arcs from vertices in A′₂ ∪ A″₂ to vertices in {b, b′}, and where the last equality follows by applying the definition of m. Because (2) holds with T = A₁ and because N_G(A₁) ⊆ A′₂ ∪ A″₂, this last expression is nonnegative.

Now suppose that b′ ∈ A. Then

$$c(A,B)-m=\Big(\sum_{x\in B_1}c_x+\sum_{y\in A_2''}c_y+(m-m'')\Big)-m=\sum_{x\in B_1}c_x-\sum_{y\in B_2''}c_y,$$

where the positive terms in the first expression come from arcs from a to vertices in B₁, arcs from vertices in A″₂ to b, and the arc from b′ to b, and where the last equality follows by applying the definition of m″. Because (2) holds with T = B″₂ and because N_G(B″₂) ⊆ B₁, this last expression is nonnegative. Thus every cut of D has capacity at least m, and the proof is complete.

Low degree digraphs requiring many derangements

Considering Theorem 4 and Lemma 5, it is clear where we should look for examples of bipartite graphs that have no 1-factor, and examples of bipartite graphs that have a 1-factor but no 1-factor cover, and indeed it is not hard to find such examples. Correspondingly, considering Theorem 6, one can find examples of digraphs that cannot be generated by any (infinite or finite) set of derangements. Further, Corollary 3 makes it easy to find examples of digraphs that require k derangements to generate them for any positive integer k, but in these examples each in- and out-degree will be equal to k. Here we present examples of bipartite graphs with low maximum degree that do possess 1-factor covers but for which the number of 1-factors in any 1-factor cover must be arbitrarily large, or infinite. These lead to examples of digraphs with low maximum in- and out-degree whose generating sets of derangements must be arbitrarily large or infinite.

Example 10. Let G be the graph depicted in Figure 1. Note that G is connected, has maximum degree 3, and is bipartite with bipartition {{u_{2i+1}, v_{2i} : i ∈ Z}, {u_{2i}, v_{2i+1} : i ∈ Z}}. For any positive integer n, consider the finite subset T = {v_{2i} : 1 ≤ i ≤ n}; the inequality of Theorem 2(ii) fails for this T once n is large compared with k. Since n can be chosen to be any positive integer, by Theorem 2(ii), G does not have a 1-factor cover with finitely many 1-factors. Intuitively, this is because each 1-factor of G contains at most one of the vertical edges {{v_{2i+1}, u_{2i+1}} : i ∈ Z}. However, by Lemma 5 or by simple inspection, it can be seen that G is 1-extendable and so does admit a 1-factor cover with infinitely many 1-factors. In addition, for each positive integer k, G has a finite subgraph that requires exactly k 1-factors to cover it: consider the induced bipartite subgraph G_k of G with vertex set V_k = {vᵢ, uᵢ : 1 ≤ i ≤ 2k − 1}. It is readily seen that G_k has a unique 1-factor cover with k 1-factors and, by applying Theorem 2 with T = {v_{2i} : 1 ≤ i ≤ k − 1}, we see that G_k does not admit a 1-factor cover with fewer than k 1-factors.

Example 11. Example 10 can be generalised to produce, for any integer k ≥ 1, a bipartite graph with maximum degree k + 2 that can be covered with infinitely many 1-factors but not with finitely many. Consider the cartesian product P_∞ □ H where H is a finite k-regular bipartite graph and P_∞ is the infinite path with vertex set Z and edge set {{i, i+1} : i ∈ Z}. Thus the edge set of P_∞ □ H is {{(i, y), (i+1, y)} : i ∈ Z, y ∈ V(H)} ∪ {{(i, y), (i, z)} : i ∈ Z, {y, z} ∈ E(H)}. Now subdivide each edge {(i, y), (i+1, y)} such that i ∈ Z and y ∈ V(H) with a new vertex (i*, y). The resulting graph G has maximum degree k + 2, and has a 1-factor cover with infinitely many 1-factors, which we describe in the next paragraph.
However, applying Theorem 2 with T = {(i*, y) : 1 ≤ i ≤ n} for some fixed y ∈ V(H) and arbitrarily large positive integer n implies that G cannot be covered with finitely many 1-factors. When k = 1 and H is the complete graph K₂, we recover Example 10.

We can construct an infinite 1-factor cover of G as follows. For each i ∈ Z, let Hᵢ be the subgraph of G induced by the set Vᵢ = {(i, y) : y ∈ V(H)} and let Fᵢ be the unique 1-factor of G \ Vᵢ, where G \ Vᵢ is the graph obtained from G by deleting the vertices in Vᵢ (see Figure 2). For each i ∈ Z, Hᵢ is isomorphic to H and, by Lemma 7(i), there is a partition {H_{i,1}, ..., H_{i,k}} of E(Hᵢ) into k 1-factors. Then F_{i,j} = Fᵢ ∪ H_{i,j} is a 1-factor of G for any i ∈ Z and j ∈ {1, ..., k}, and it can be seen that {F_{i,j} : i ∈ Z, 1 ≤ j ≤ k} is a 1-factor cover of G.

Example 12. Let D be the digraph depicted in Figure 3. For any positive integer n, consider the subset T = {w_{2i} : 1 ≤ i ≤ n} of V. We have |N⁺_D(T)| = n + 1, |T| = n, and $\sum_{x\in N^+_D(T)}\deg^-_D(x)-\sum_{x\in T}\deg^+_D(x)=3(n+1)-2n=n+3$. Since n can be chosen to be any positive integer, by Theorem 1(ii), D cannot be generated by any finite set of derangements. Intuitively, this is because each derangement in a putative generating set for D generates at most one of the distance-2 arcs {(w_{2i−1}, w_{2i+1}) : i ∈ Z}. However, by Theorem 6 or by simple inspection, it can be seen that D can be generated by infinitely many derangements.

In addition, D has a (non-induced) subdigraph that requires exactly k derangements to generate it for each positive integer k. For any positive integer k consider the subdigraph D_k of D obtained by taking the induced subdigraph of D with vertex set V_k = {wᵢ : 1 ≤ i ≤ 2k + 1} and removing the arcs (w₂, w₃) and (w_{2k−1}, w_{2k}). (Note that D_k does not correspond to the subgraph G_k of G described in Example 10.) It can be seen that there is a unique set of k derangements that generates D_k. For example, for k = 2 this set is {(w₁ w₃ w₂)(w₄ w₅), (w₁ w₂)(w₃ w₅ w₄)}, and for k = 3 it is {(w₁ w₃ w₂)(w₄ w₅)(w₆ w₇), (w₁ w₂)(w₃ w₅ w₄)(w₆ w₇), (w₁ w₂)(w₃ w₄)(w₅ w₇ w₆)}. Furthermore, by applying Theorem 1(ii) with T = {w_{2i} : 1 ≤ i ≤ k}, we see that D_k cannot be generated by fewer than k derangements.

Figure 3: Digraph in Example 12 that can be generated by infinitely many derangements but not by finitely many.
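The computation in Example 12 can be replayed on a finite window of D. Since Figure 3 is not reproduced here, the arc set used below is our inference from the stated degrees — symmetric path arcs (wᵢ, wᵢ₊₁) and (wᵢ₊₁, wᵢ) plus the forward 'distance 2' arcs (w_{2i−1}, w_{2i+1}) — and should be treated as an assumption.

```python
# Sketch checking the Example 12 computation on a finite window of D.
# CAUTION: the arc set is inferred from the stated degree counts, not from
# Figure 3 itself: both path arcs in each direction, plus (w_{2i-1}, w_{2i+1}).
R = range(-20, 21)

def arcs():
    A = set()
    for i in R[:-1]:
        A.add((i, i + 1))
        A.add((i + 1, i))
    for i in R:
        if i % 2 and i + 2 in R:   # odd i: the forward distance-2 arc
            A.add((i, i + 2))
    return A

A = arcs()
out_deg = lambda x: sum(1 for (u, v) in A if u == x)
in_deg = lambda x: sum(1 for (u, v) in A if v == x)

n = 5
T = [2 * i for i in range(1, n + 1)]               # T = {w_2, w_4, ..., w_2n}
NT = {v for (u, v) in A if u in T}                 # N^+(T) = {w_1, w_3, ..., w_2n+1}
lhs = sum(in_deg(y) for y in NT) - sum(out_deg(x) for x in T)
print(len(NT), lhs)   # prints n + 1 and 3(n+1) - 2n = n + 3, as in the example
```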
export { createStore } from './createStore';
export { withStore } from './withStore';
export { composeHooks } from './composeHooks';
#include "p5.hpp" using namespace p5; void setup() { createCanvas(800, 600); } typedef float (*CFUNC)(float u); void draw() { clear(); // 0 - 2PI double minx = 0; double maxx = 720; double maxy = 200; double lastx = minx; double frequency = 3; // Cosine curve double lastcosy = map(cos(0), -1, 1, maxy - 1, 0); for (int x = (int)lastx+1; x <= maxx; x++) { double angx = map(x, minx, maxx, 0, frequency*2 * maths::Pi); double cosx = cos(angx); double cosy = map(cosx, -1, 1, (double)maxy - 1, 0); stroke(0, 0, 255); line(lastx, lastcosy, x, cosy); lastcosy = cosy; lastx = x; } // midway line stroke(0, 255, 0); line(minx, maxy / 2, (double)maxx - 1, maxy / 2); // Sine curve lastx = 0; double lastsiny = map(sin(0), -1, 1, (double)maxy - 1, 0); for (int x = (int)lastx+1; x < (int)maxx - 1; x++) { double angx = map(x, 0, (double)maxx - 1, 0, frequency * 2 * maths::Pi2); double sinx = sin(angx-(maths::PiOver2)); // shift phase to be opposite with cosx double siny = map(sinx, -1, 1, (double)maxy - 1, 0); stroke(255, 0, 0); line(lastx, lastsiny, x, siny); lastsiny = siny; lastx = x; } }
Tooth discolorations may cause serious damage to the aesthetic appearance of patients. These discolorations can be treated in several ways, but until recently tooth structure had to be removed in an irreversible manner in order to provide sufficient bulk for the new restorative material. Nowadays, thanks to the work of T. Croll and others, a method has been established in a scientific way whereby tooth discolorations confined to the outermost layer of enamel can be removed through the application of a compound called PREMA, which simultaneously erodes and abrades the enamel surface. If cases are carefully selected, the discoloration can be removed with only a limited loss of tooth structure. The authors describe the historical background of the method. Furthermore, patient selection and the specific instruments and materials are described in detail. The clinical procedure is explained step by step, followed by a complete report of the effects of the procedure on the different tooth structures. Finally, the long-term results are discussed, together with the possibility of enhancing the result by combining this method with home bleaching.
use crate::context::Context;
use anyhow::Context as AnyhowContext;
use clap::Parser;
use m10_protos::sdk::transaction_error::Code;
use m10_sdk::sdk;
use serde::{Deserialize, Serialize};
use std::fs::File;
use std::io::Read;
use std::path::PathBuf;

#[derive(Clone, Parser, Debug, Serialize, Deserialize)]
#[clap(about)]
pub(crate) struct ActionOptions {
    /// Name of the registered action
    #[clap(short, long)]
    name: String,
    /// Account ID invoking the action
    #[clap(long)]
    from: String,
    /// Target account ID
    #[clap(short, long)]
    target: String,
    /// Opaque payload. Interpreted as raw string
    #[clap(short, long, conflicts_with = "file")]
    payload: Option<String>,
    /// Read payload from file
    #[clap(short, long, conflicts_with = "payload")]
    file: Option<PathBuf>,
}

impl ActionOptions {
    pub(crate) async fn invoke(&self, config: &crate::Config) -> anyhow::Result<()> {
        let mut context = Context::new(config).await?;
        let from = hex::decode(&self.from).context("Invalid <from> format")?;
        let target = hex::decode(&self.target).context("Invalid <target> format")?;

        let mut buf = vec![];
        let payload = if let Some(payload) = self.payload.as_ref() {
            // Use string as UTF-8
            payload.as_bytes().to_vec()
        } else if let Some(path) = self.file.as_ref() {
            let mut file = File::open(path).context("Could not read payload file")?;
            file.read_to_end(&mut buf)?;
            buf
        } else {
            eprintln!("Reading payload from STDIN. Press ENTER to continue..");
            let mut value = String::new();
            std::io::stdin()
                .read_line(&mut value)
                .context("Could not read from STDIN")?;
            value.as_bytes().to_vec()
        };

        let action = sdk::InvokeAction {
            name: self.name.clone(),
            payload,
            from_account: from,
            target: Some(sdk::Target {
                target: Some(sdk::target::Target::AccountId(target)),
            }),
        };

        let response = context
            .submit_transaction(action, config.context_id.clone())
            .await?;
        if let Err(err) = response {
            let err_type = Code::from_i32(err.code).unwrap_or(Code::Unknown);
            eprintln!("Could not invoke action: {:?} {}", err_type, err.message);
            Err(anyhow::anyhow!("failed action"))
        } else {
            eprintln!("Invoked action {}:", self.name);
            Ok(())
        }
    }
}
//=============================================================================
//
// Adventure Game Studio (AGS)
//
// Copyright (C) 1999-2011 <NAME> and 2011-20xx others
// The full list of copyright holders can be found in the Copyright.txt
// file, which is part of this source code distribution.
//
// The AGS source code is provided under the Artistic License 2.0.
// A copy of this license can be found in the file License.txt and at
// http://www.opensource.org/licenses/artistic-license-2.0.php
//
//=============================================================================

#include <string.h>
#include "ac/view.h"
#include "util/alignedstream.h"

using AGS::Common::AlignedStream;
using AGS::Common::Stream;

ViewFrame::ViewFrame()
    : pic(0)
    , xoffs(0)
    , yoffs(0)
    , speed(0)
    , flags(0)
    , sound(0)
{
    reserved_for_future[0] = 0;
    reserved_for_future[1] = 0;
}

void ViewFrame::ReadFromFile(Stream *in)
{
    pic = in->ReadInt32();
    xoffs = in->ReadInt16();
    yoffs = in->ReadInt16();
    speed = in->ReadInt16();
    flags = in->ReadInt32();
    sound = in->ReadInt32();
    reserved_for_future[0] = in->ReadInt32();
    reserved_for_future[1] = in->ReadInt32();
}

void ViewFrame::WriteToFile(Stream *out)
{
    out->WriteInt32(pic);
    out->WriteInt16(xoffs);
    out->WriteInt16(yoffs);
    out->WriteInt16(speed);
    out->WriteInt32(flags);
    out->WriteInt32(sound);
    out->WriteInt32(reserved_for_future[0]);
    out->WriteInt32(reserved_for_future[1]);
}

ViewLoopNew::ViewLoopNew()
    : numFrames(0)
    , flags(0)
    , frames(nullptr)
{
}

bool ViewLoopNew::RunNextLoop()
{
    return (flags & LOOPFLAG_RUNNEXTLOOP);
}

void ViewLoopNew::Initialize(int frameCount)
{
    numFrames = frameCount;
    flags = 0;
    frames = (ViewFrame*)calloc(numFrames + 1, sizeof(ViewFrame));
}

void ViewLoopNew::Dispose()
{
    if (frames != nullptr)
    {
        free(frames);
        frames = nullptr;
        numFrames = 0;
    }
}

void ViewLoopNew::WriteToFile_v321(Stream *out)
{
    out->WriteInt16(numFrames);
    out->WriteInt32(flags);
    WriteFrames_Aligned(out);
}

void ViewLoopNew::WriteFrames_Aligned(Stream *out)
{
    AlignedStream align_s(out, Common::kAligned_Write);
    for (int i = 0; i < numFrames; ++i)
    {
        frames[i].WriteToFile(&align_s);
        align_s.Reset();
    }
}

void ViewLoopNew::ReadFromFile_v321(Stream *in)
{
    Initialize(in->ReadInt16());
    flags = in->ReadInt32();
    ReadFrames_Aligned(in);

    // an extra frame is allocated in memory to prevent
    // crashes with empty loops -- set its picture to the BLUE CUP!!
    frames[numFrames].pic = 0;
}

void ViewLoopNew::ReadFrames_Aligned(Stream *in)
{
    AlignedStream align_s(in, Common::kAligned_Read);
    for (int i = 0; i < numFrames; ++i)
    {
        frames[i].ReadFromFile(&align_s);
        align_s.Reset();
    }
}

ViewStruct::ViewStruct()
    : numLoops(0)
    , loops(nullptr)
{
}

void ViewStruct::Initialize(int loopCount)
{
    numLoops = loopCount;
    if (numLoops > 0)
    {
        loops = (ViewLoopNew*)calloc(numLoops, sizeof(ViewLoopNew));
    }
}

void ViewStruct::Dispose()
{
    if (numLoops > 0)
    {
        free(loops);
        numLoops = 0;
    }
}

void ViewStruct::WriteToFile(Stream *out)
{
    out->WriteInt16(numLoops);
    for (int i = 0; i < numLoops; i++)
    {
        loops[i].WriteToFile_v321(out);
    }
}

void ViewStruct::ReadFromFile(Stream *in)
{
    Initialize(in->ReadInt16());
    for (int i = 0; i < numLoops; i++)
    {
        loops[i].ReadFromFile_v321(in);
    }
}

ViewStruct272::ViewStruct272()
    : numloops(0)
{
    memset(numframes, 0, sizeof(numframes));
    memset(loopflags, 0, sizeof(loopflags));
}

void ViewStruct272::ReadFromFile(Stream *in)
{
    numloops = in->ReadInt16();
    for (int i = 0; i < 16; ++i)
    {
        numframes[i] = in->ReadInt16();
    }
    in->ReadArrayOfInt32(loopflags, 16);
    for (int j = 0; j < 16; ++j)
    {
        for (int i = 0; i < 20; ++i)
        {
            frames[j][i].ReadFromFile(in);
        }
    }
}

void Convert272ViewsToNew(const std::vector<ViewStruct272> &oldv, ViewStruct *newv)
{
    for (size_t a = 0; a < oldv.size(); a++)
    {
        newv[a].Initialize(oldv[a].numloops);
        for (int b = 0; b < oldv[a].numloops; b++)
        {
            newv[a].loops[b].Initialize(oldv[a].numframes[b]);
            if ((oldv[a].numframes[b] > 0) &&
                (oldv[a].frames[b][oldv[a].numframes[b] - 1].pic == -1))
            {
                newv[a].loops[b].flags = LOOPFLAG_RUNNEXTLOOP;
                newv[a].loops[b].numFrames--;
            }
            else
                newv[a].loops[b].flags = 0;

            for (int c = 0; c < newv[a].loops[b].numFrames; c++)
                newv[a].loops[b].frames[c] = oldv[a].frames[b][c];
        }
    }
}
def all(self, *a, to_dict=False):
    """Execute a query and lazily yield its rows.

    Single-column rows are unpacked to their scalar value unless
    ``to_dict`` is set, in which case each row is yielded as a dict.
    """
    if to_dict:
        # Temporarily swap the row factory so this cursor produces dicts.
        with patch_object(self, 'row_factory', dict_factory):
            cursor = self.execute(*a)
    else:
        cursor = self.execute(*a)
    fetchone = cursor.fetchone
    row = fetchone()
    if not row:
        return
    if not to_dict and len(row) == 1:
        while row:
            yield row[0]
            row = fetchone()
    else:
        while row:
            yield row
            row = fetchone()
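A minimal, runnable sketch of how this helper could be exercised. The snippet above assumes a surrounding connection wrapper plus `patch_object` and `dict_factory` helpers; the `DB` class and both helpers below are plausible stand-ins written for illustration, not the original project's definitions.

import sqlite3
from contextlib import contextmanager

def dict_factory(cursor, row):
    # Map column names to values, the usual sqlite3 dict row factory.
    return {d[0]: row[i] for i, d in enumerate(cursor.description)}

@contextmanager
def patch_object(obj, name, value):
    # Temporarily replace an attribute, restoring it on exit.
    old = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        setattr(obj, name, old)

class DB(sqlite3.Connection):
    # Same shape as the all() method shown above, condensed for the demo.
    def all(self, *a, to_dict=False):
        if to_dict:
            with patch_object(self, 'row_factory', dict_factory):
                cursor = self.execute(*a)
        else:
            cursor = self.execute(*a)
        row = cursor.fetchone()
        while row:
            yield row[0] if not to_dict and len(row) == 1 else row
            row = cursor.fetchone()

db = sqlite3.connect(':memory:', factory=DB)
db.execute("CREATE TABLE t (id INTEGER, name TEXT)")
db.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b')")
print(list(db.all("SELECT name FROM t")))                    # ['a', 'b']
print(list(db.all("SELECT id, name FROM t", to_dict=True)))  # dicts per row

Note that the cursor snapshots the connection's row_factory when it is created, so rows fetched after the patch context exits still come back as dicts; that is what makes the temporary-patch design work.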
// Copyright 2018 Oath, Inc.
// Licensed under the terms of the Apache version 2.0 license. See LICENSE file for terms.

package kubernetes

import (
	"bytes"
	"errors"
	"text/template"

	"github.com/spf13/viper"
	batchv1 "k8s.io/api/batch/v1"
	"k8s.io/client-go/kubernetes/scheme"
)

var jobTemplateHead = `
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Name }}
spec:
  parallelism: {{ .JobConfig.GetString "parallelism" }}
  template:
    metadata:
      name: {{ .Name }}
    spec:
      containers:
      - name: yfuzz
        image: {{ .Image }}
        resources:
          requests:
            memory: {{ .JobConfig.GetString "memory.request" }}
            cpu: {{ .JobConfig.GetString "cpu.request" }}
          limits:
            memory: {{ .JobConfig.GetString "memory.limit" }}
            cpu: {{ .JobConfig.GetString "cpu.limit" }}
        command: ["/bin/sh"]
        args: ["-c", "/run_fuzzer.sh"]
        volumeMounts:
        - name: yfuzz-volume
          mountPath: /shared_data
        - name: varlog
          mountPath: /var/log/yfuzz`

var logContainerTemplate = `
      - name: logging
        image: {{ .JobConfig.GetString("log.image") }}
        volumeMounts:
        - name: "varlog"
          mountPath: "/var/log/yfuzz"
        envFrom:
        - configMapRef:
            name: {{ .JobConfig.GetString("log.config-map") }}
        - secretRef:
            name: {{ .JobConfig.GetString("log.secret") }}`

var jobTemplateTail = `
      restartPolicy: Never
      volumes:
      - name: yfuzz-volume
        persistentVolumeClaim:
          claimName: {{ .JobConfig.GetString "persistent-volume-claim" }}
      - name: varlog
        emptyDir: {}`

type options struct {
	JobConfig *viper.Viper
	Name      string
	Image     string
}

// CreateJob creates a new job in the Kubernetes cluster.
func (k API) CreateJob(name, registryLink string) (*batchv1.Job, error) {
	var jobTemplateString string
	if viper.IsSet("kubernetes.job-config.log.image") {
		jobTemplateString = jobTemplateHead + logContainerTemplate + jobTemplateTail
	} else {
		jobTemplateString = jobTemplateHead + jobTemplateTail
	}

	jobTemplate, err := template.New("jobTemplate").Parse(jobTemplateString)
	if err != nil {
		return nil, err
	}

	opts := options{
		JobConfig: viper.Sub("kubernetes.job-config"),
		Name:      name,
		Image:     registryLink,
	}

	var rawJob bytes.Buffer
	err = jobTemplate.Execute(&rawJob, opts)
	if err != nil {
		return nil, err
	}

	decode := scheme.Codecs.UniversalDeserializer().Decode
	obj, _, err := decode(rawJob.Bytes(), nil, nil)
	if err != nil {
		return nil, err
	}

	jobSpec, ok := obj.(*batchv1.Job)
	if !ok {
		return nil, errors.New("could not parse job template")
	}

	return k.client.BatchV1().Jobs(viper.GetString("kubernetes.namespace")).Create(jobSpec)
}
package client

import (
	"context"

	pb "github.com/textileio/fil-tools/wallet/pb"
)

// Wallet provides an API for managing filecoin wallets
type Wallet struct {
	client pb.APIClient
}

// NewWallet creates a new filecoin wallet [bls|secp256k1]
func (w *Wallet) NewWallet(ctx context.Context, typ string) (string, error) {
	resp, err := w.client.NewWallet(ctx, &pb.NewWalletRequest{Typ: typ})
	if err != nil {
		return "", err
	}
	return resp.GetAddress(), nil
}

// WalletBalance gets a filecoin wallet's balance
func (w *Wallet) WalletBalance(ctx context.Context, address string) (int64, error) {
	resp, err := w.client.WalletBalance(ctx, &pb.WalletBalanceRequest{Address: address})
	if err != nil {
		return -1, err
	}
	return resp.GetBalance(), nil
}
import { ApiProperty } from "@nestjs/swagger";

export class findStuinfoDto {
  @ApiProperty({ description: 'Number of items per page' })
  limit: number;

  @ApiProperty({ description: 'Current page number' })
  position: number;
}
package types

import (
	"fmt"

	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/x/params"
)

// Parameter keys and default values
var (
	KeyAllowedPools     = []byte("AllowedPools")
	KeySwapFee          = []byte("SwapFee")
	DefaultAllowedPools = AllowedPools{}
	DefaultSwapFee      = sdk.ZeroDec()
	MaxSwapFee          = sdk.OneDec()
)

// Params are governance parameters for the swap module
type Params struct {
	AllowedPools AllowedPools `json:"allowed_pools" yaml:"allowed_pools"`
	SwapFee      sdk.Dec      `json:"swap_fee" yaml:"swap_fee"`
}

// NewParams returns a new params object
func NewParams(pairs AllowedPools, swapFee sdk.Dec) Params {
	return Params{
		AllowedPools: pairs,
		SwapFee:      swapFee,
	}
}

// DefaultParams returns default params for swap module
func DefaultParams() Params {
	return NewParams(
		DefaultAllowedPools,
		DefaultSwapFee,
	)
}

// String implements fmt.Stringer
func (p Params) String() string {
	return fmt.Sprintf(`Params:
	AllowedPools: %s
	SwapFee: %s`, p.AllowedPools, p.SwapFee)
}

// ParamKeyTable Key declaration for parameters
func ParamKeyTable() params.KeyTable {
	return params.NewKeyTable().RegisterParamSet(&Params{})
}

// ParamSetPairs implements the ParamSet interface and returns all the key/value pairs
func (p *Params) ParamSetPairs() params.ParamSetPairs {
	return params.ParamSetPairs{
		params.NewParamSetPair(KeyAllowedPools, &p.AllowedPools, validateAllowedPoolsParams),
		params.NewParamSetPair(KeySwapFee, &p.SwapFee, validateSwapFee),
	}
}

// Validate checks that the parameters have valid values.
func (p Params) Validate() error {
	if err := validateAllowedPoolsParams(p.AllowedPools); err != nil {
		return err
	}
	return validateSwapFee(p.SwapFee)
}

func validateAllowedPoolsParams(i interface{}) error {
	p, ok := i.(AllowedPools)
	if !ok {
		return fmt.Errorf("invalid parameter type: %T", i)
	}
	return p.Validate()
}

func validateSwapFee(i interface{}) error {
	swapFee, ok := i.(sdk.Dec)
	if !ok {
		return fmt.Errorf("invalid parameter type: %T", i)
	}
	if swapFee.IsNil() || swapFee.IsNegative() || swapFee.GT(MaxSwapFee) {
		return fmt.Errorf("invalid swap fee: %s", swapFee)
	}
	return nil
}
Identification of VLDLR as a novel endothelial cell receptor for fibrin that modulates fibrin-dependent transendothelial migration of leukocytes. While testing the effect of the (β15-66)₂ fragment, which mimics a pair of fibrin βN-domains, on the morphology of endothelial cells, we found that this fragment induces redistribution of vascular endothelial cadherin in a process that is inhibited by the receptor-associated protein (RAP). Based on this finding, we hypothesized that fibrin may interact with members of the RAP-dependent low-density lipoprotein (LDL) receptor family. To test this hypothesis, we examined the interaction of (β15-66)₂, fibrin, and several fibrin-derived fragments with 2 members of this family by ELISA and surface plasmon resonance. The experiments showed that the very-low-density lipoprotein (VLDL) receptor (VLDLR) interacts with fibrin with high affinity through its βN-domains, and this interaction is inhibited by RAP and (β15-66)₂. Furthermore, RAP inhibited transendothelial migration of neutrophils induced by the fibrin-derived NDSK-II fragment containing βN-domains, suggesting the involvement of VLDLR in fibrin-dependent leukocyte transmigration. Our experiments with VLDLR-deficient mice confirmed this suggestion by showing that, in contrast to wild-type mice, fibrin-dependent leukocyte transmigration does not occur in such mice. Altogether, the present study identified VLDLR as a novel endothelial cell receptor for fibrin that promotes fibrin-dependent leukocyte transmigration and thereby inflammation. Establishing the molecular mechanism underlying this interaction may result in the development of novel inhibitors of fibrin-dependent inflammation.
# Make N divisible by M at minimum cost: each of the r surplus units costs b
# to remove, or each of the (M - r) missing units costs a to add.
str1 = input()
N, M, a, b = map(int, str1.split(" "))
r = N % M
if r == 0:
    print(0)
else:
    # Cheaper of: removing the remainder r, or topping up to the next multiple of M.
    print(min(r * b, (M - r) * a))
Have you ever tried to explain to a 14-year-old girl that she does not have to have sex with all her boyfriend's friends to show that she loves him? That she has, in fact, been raped? Have you taken her on the bus to get her contraception, only to watch her throw the pills out of the window on the way back?

I had to do this, when young myself and working as a residential care worker. It was my duty to report a child missing if he or she did not come back to the home at night. For some girls, that was most nights. The police and my co-workers cheerily referred to these girls as "being on the game". If you want to know about ethnicity – as everyone appears to think this is key – these girls were of Caribbean descent, as were their pimps. The men who paid to rape these children, they said, were mostly white.

That was London in the 80s, so the whole "child protection is in tatters" number is not news. Child protection services have not worn down: they have been torn apart. Care has never been a place of safety, and anyone who wanted to know that could do so. Just look at who is in prison, who is homeless, who is an addict and ask how good our care system has ever been.

I had wanted to stay in social work, but after a placement answering calls on what was known as the frontline I realised that most of my work would be sorting out emergency payments for food and heating. People needed money, not cod psychoanalysis. It was also obvious that social work systems were not only failing, but under attack. First they came for the social workers (bearded do-gooders), then they came for the teachers (the blob) … this is how neoliberal ideology has been so effective in running down the public sector. Now we are to feign surprise that the victims of this failure emerge, and they turn out to be girls of the underclass. Slags, skets, skanks, hos: every day I hear a new word for them.

The report on Rotherham is clear-eyed about who targeted the girls: men of Pakistani and Kashmiri descent, working in gangs to rape and torture girls. The men called the girls "white trash", but white girls were not their only victims. They also abused women in their own community who had pressure put on them never to name names. Certain journalists, including Julie Bindel, have been covering this story for years and have never shied away from describing the men's ethnic origin.

Ethnicity is a factor but there is also a shared assumption beneath the police inaction and the council workers' negligence: all of them deemed the girls worthless. The police described them as "undesirables" while knowing they were indeed "desired" by both Pakistani and white men for sex. They were never seen as children at all, but as somehow unrapeable, capable of consensual sex with five men at the age of 11. Heroin use, self-harm, attempted suicide, unwanted pregnancies, all of this was reported to the authorities. Meanwhile, "care" was being outsourced and some of these girls were moved to homes outside the area. This just meant the rapists' taxis had to go a bit further.

The running down of children's services to a skeletal organisation in an already deprived area is spelled out in the report, which talks of "the dramatic reduction of resources available … By 2016 Rotherham will have lost 33% of its spending power" compared with 2010. Buckinghamshire, by contrast, will have suffered a 4.5% reduction. It is as if everyone has agreed who is worthless and who isn't; who can be saved and who can't.
The police, the local authority, the government, and indeed the grooming gangs, appear to share the same ideology about sexual purity – and its value. The rightwing likes the cheap thrill of an underclass woman, drunk and showing her knickers, and now blames rape on political correctness gone mad, as though a bit of robust racism is the answer to misogyny. OK. So let's join the dots to Savile and the other recent sex-abuse scandals. We have the police in on the case; we have institutions basically offering up the most vulnerable as victims; we have a protection racket centred around fame rather than ethnicity. At the top we have abusive men, at the bottom powerless young girls and boys. So the bigger picture is the systematic rape of poor children by men. Not all men – I have to say this to be politically correct, don't I? The right can make it only about race. I have no problem in calling certain attitudes of certain Muslims appalling. I just can't see them in isolation from class and gender. The macho environment in which the girls were not listened to, or even seen as children, is part of a continuum of thought in which girls, once deemed sexually active, even if it is against their will, are seen as damaged goods. Thus they can be bought and sold in a market that has made it apparent it no longer considers them worth protecting. Where is the profit in that? Whatever resignations are proffered, what is horrifying is this wholesale resignation to an economic caste system. Our untouchables turn out to be little girls raped by powerful men.
package com.coolioasjulio.ev3;

import trclib.TrcDriveBase;
import trclib.TrcMotor;
import trclib.TrcUtil;

import java.util.LinkedList;
import java.util.Queue;

public class DifferentialDriveBase extends TrcDriveBase {
    private static final int ROT_VEL_SMOOTH_WINDOW = 6;

    private TrcMotor leftMotor, rightMotor;
    private double trackWidth;
    private double lastLeftPos, lastRightPos;
    private double inchesPerTick;
    private Double lastTime;
    private Queue<double[]> headingQueue;
    private double xPos;
    private double yPos;
    private double xVel;
    private double yVel;
    private double heading;
    private double rotVel;

    public DifferentialDriveBase(TrcMotor leftMotor, TrcMotor rightMotor, double trackWidth, double inchesPerTick) {
        super(leftMotor, rightMotor);
        this.leftMotor = leftMotor;
        this.rightMotor = rightMotor;
        this.trackWidth = trackWidth;
        this.inchesPerTick = inchesPerTick;
        this.setPositionScales(1, 1, 1);
        headingQueue = new LinkedList<>();

        new Thread(() -> {
            while (!Thread.interrupted()) {
                updateOdometry();
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    break;
                }
            }
        }).start();
    }

    @Override
    public void resetOdometry(boolean hardware, boolean resetHeading) {
        if (leftMotor != null) {
            leftMotor.resetPosition(hardware);
        }
        if (rightMotor != null) {
            rightMotor.resetPosition(hardware);
        }
        // Clear the cached encoder readings as well, so the next update
        // doesn't compute a spurious jump against the pre-reset positions.
        lastLeftPos = 0.0;
        lastRightPos = 0.0;
        xPos = 0.0;
        yPos = 0.0;
        xVel = 0.0;
        yVel = 0.0;
        if (resetHeading) {
            heading = 0.0;
        }
        rotVel = 0.0;
    }

    @Override
    protected void updateOdometry(Odometry odometry) {
        // Odometry is maintained by the background thread instead.
    }

    @Override
    public double getRawYPosition() {
        return yPos;
    }

    @Override
    public double getYPosition() {
        return getRawYPosition();
    }

    @Override
    public double getRawXPosition() {
        return xPos;
    }

    @Override
    public double getXPosition() {
        return getRawXPosition();
    }

    @Override
    public double getYVelocity() {
        return yVel;
    }

    @Override
    public double getXVelocity() {
        return xVel;
    }

    @Override
    public double getHeading() {
        return heading;
    }

    @Override
    public double getGyroTurnRate() {
        return rotVel;
    }

    private void updateOdometry() {
        double currTime = TrcUtil.getCurrentTime();
        double leftPos = leftMotor.getMotorPosition();
        double rightPos = rightMotor.getMotorPosition();
        if (lastTime != null) {
            double left = inchesPerTick * (leftPos - lastLeftPos);
            double right = inchesPerTick * (rightPos - lastRightPos);
            double dTheta = (left - right) / trackWidth; // in radians
            double turningRadius = (trackWidth / 2) * (left + right) / (left - right);
            double dx, dy;
            if (Double.isFinite(turningRadius)) {
                // Turning: the robot travels along an arc of the computed radius.
                dx = turningRadius * (1 - Math.cos(dTheta));
                dy = turningRadius * Math.sin(dTheta);
            } else {
                // Driving straight: no lateral displacement, forward by the average travel.
                dx = 0.0;
                dy = TrcUtil.average(left, right);
            }
            // Rotate the robot-frame displacement into the field frame.
            double headingRad = Math.toRadians(heading);
            double x = dx * Math.cos(headingRad) + dy * Math.sin(headingRad);
            double y = -dx * Math.sin(headingRad) + dy * Math.cos(headingRad);
            xPos += x;
            yPos += y;

            double dt = currTime - lastTime;
            xVel = x / dt;
            yVel = y / dt;

            double dThetaDeg = Math.toDegrees(dTheta);
            heading += dThetaDeg;

            headingQueue.add(new double[]{currTime, heading});
            while (headingQueue.size() > ROT_VEL_SMOOTH_WINDOW) {
                headingQueue.remove();
            }
            if (headingQueue.size() == ROT_VEL_SMOOTH_WINDOW) {
                double[] prevHeading = headingQueue.remove();
                rotVel = (heading - prevHeading[1]) / (currTime - prevHeading[0]);
            } else {
                rotVel = 0;
            }
        }
        lastLeftPos = leftPos;
        lastRightPos = rightPos;
        lastTime = currTime;
    }

    @Override
    public void tankDrive(double leftPower, double rightPower, boolean inverted) {
        if (inverted) {
            double temp = -leftPower;
            leftPower = -rightPower;
            rightPower = temp;
        }
        leftMotor.set(leftPower);
        rightMotor.set(rightPower);
    }
}
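The displacement model used in updateOdometry above (arc geometry from left/right wheel travel) can be stated compactly. The following is an illustrative Python restatement of the same equations, not part of the original project:

import math

def odometry_step(x, y, heading_deg, d_left, d_right, track_width):
    """One dead-reckoning step for a differential drive.

    d_left/d_right are wheel travel distances since the last update,
    track_width is the wheel separation (same length units).
    """
    d_theta = (d_left - d_right) / track_width  # radians, as in the Java code
    if abs(d_left - d_right) > 1e-9:
        # Turning: the robot moves along an arc of this radius.
        r = (track_width / 2) * (d_left + d_right) / (d_left - d_right)
        dx = r * (1 - math.cos(d_theta))  # lateral displacement
        dy = r * math.sin(d_theta)        # forward displacement
    else:
        # Straight line: no lateral motion, forward by the average travel.
        dx, dy = 0.0, (d_left + d_right) / 2
    # Rotate the robot-frame displacement into the field frame.
    h = math.radians(heading_deg)
    x += dx * math.cos(h) + dy * math.sin(h)
    y += -dx * math.sin(h) + dy * math.cos(h)
    return x, y, heading_deg + math.degrees(d_theta)

# Straight-line sanity check: both wheels travel 10 inches.
print(odometry_step(0.0, 0.0, 0.0, 10.0, 10.0, 12.0))  # (0.0, 10.0, 0.0)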
import click
from aiohttp import web


async def index(request):
    return web.Response(text='index')


@click.group()
def cli():
    pass


@cli.command()
def run():
    app = web.Application()
    app.router.add_get('/', index)
    web.run_app(app)


if __name__ == '__main__':
    cli()
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.25.0
// 	protoc        v3.14.0
// source: common/common.proto

package common

import (
	proto "github.com/golang/protobuf/proto"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

// This is a compile-time assertion that a sufficiently up-to-date version
// of the legacy proto package is being used.
const _ = proto.ProtoPackageIsVersion4

type ErrorCode int32

const (
	ErrorCode_SUCCESS ErrorCode = 0
	// Parameter errors
	ErrorCode_INVALID_PARAMETERS    ErrorCode = 1
	ErrorCode_UNSUPPORTED_OPERATION ErrorCode = 2
	ErrorCode_UNAUTHORIZED          ErrorCode = 3
	// Internal errors
	ErrorCode_INTERNAL_ERROR ErrorCode = 10
	// External errors, numbered from 20
	ErrorCode_EXTERNAL_ERROR ErrorCode = 20
)

// Enum value maps for ErrorCode.
var (
	ErrorCode_name = map[int32]string{
		0:  "SUCCESS",
		1:  "INVALID_PARAMETERS",
		2:  "UNSUPPORTED_OPERATION",
		3:  "UNAUTHORIZED",
		10: "INTERNAL_ERROR",
		20: "EXTERNAL_ERROR",
	}
	ErrorCode_value = map[string]int32{
		"SUCCESS":               0,
		"INVALID_PARAMETERS":    1,
		"UNSUPPORTED_OPERATION": 2,
		"UNAUTHORIZED":          3,
		"INTERNAL_ERROR":        10,
		"EXTERNAL_ERROR":        20,
	}
)

func (x ErrorCode) Enum() *ErrorCode {
	p := new(ErrorCode)
	*p = x
	return p
}

func (x ErrorCode) String() string {
	return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}

func (ErrorCode) Descriptor() protoreflect.EnumDescriptor {
	return file_common_common_proto_enumTypes[0].Descriptor()
}

func (ErrorCode) Type() protoreflect.EnumType {
	return &file_common_common_proto_enumTypes[0]
}

func (x ErrorCode) Number() protoreflect.EnumNumber {
	return protoreflect.EnumNumber(x)
}

// Deprecated: Use ErrorCode.Descriptor instead.
func (ErrorCode) EnumDescriptor() ([]byte, []int) {
	return file_common_common_proto_rawDescGZIP(), []int{0}
}

type BaseRequest struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	RequestID string `protobuf:"bytes,1,opt,name=requestID,proto3" json:"requestID,omitempty"`
}

func (x *BaseRequest) Reset() {
	*x = BaseRequest{}
	if protoimpl.UnsafeEnabled {
		mi := &file_common_common_proto_msgTypes[0]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *BaseRequest) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*BaseRequest) ProtoMessage() {}

func (x *BaseRequest) ProtoReflect() protoreflect.Message {
	mi := &file_common_common_proto_msgTypes[0]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use BaseRequest.ProtoReflect.Descriptor instead.
func (*BaseRequest) Descriptor() ([]byte, []int) { return file_common_common_proto_rawDescGZIP(), []int{0} } func (x *BaseRequest) GetRequestID() string { if x != nil { return x.RequestID } return "" } type BaseResponse struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache unknownFields protoimpl.UnknownFields Code ErrorCode `protobuf:"varint,1,opt,name=code,proto3,enum=common.ErrorCode" json:"code,omitempty"` Msg string `protobuf:"bytes,2,opt,name=msg,proto3" json:"msg,omitempty"` } func (x *BaseResponse) Reset() { *x = BaseResponse{} if protoimpl.UnsafeEnabled { mi := &file_common_common_proto_msgTypes[1] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } } func (x *BaseResponse) String() string { return protoimpl.X.MessageStringOf(x) } func (*BaseResponse) ProtoMessage() {} func (x *BaseResponse) ProtoReflect() protoreflect.Message { mi := &file_common_common_proto_msgTypes[1] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { ms.StoreMessageInfo(mi) } return ms } return mi.MessageOf(x) } // Deprecated: Use BaseResponse.ProtoReflect.Descriptor instead. func (*BaseResponse) Descriptor() ([]byte, []int) { return file_common_common_proto_rawDescGZIP(), []int{1} } func (x *BaseResponse) GetCode() ErrorCode { if x != nil { return x.Code } return ErrorCode_SUCCESS } func (x *BaseResponse) GetMsg() string { if x != nil { return x.Msg } return "" } var File_common_common_proto protoreflect.FileDescriptor var file_common_common_proto_rawDesc = []byte{ 0x0a, 0x13, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x2f, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x22, 0x2b, 0x0a, 0x0b, 0x42, 0x61, 0x73, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1c, 0x0a, 0x09, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x49, 0x44, 0x22, 0x47, 0x0a, 0x0c, 0x42, 0x61, 0x73, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x25, 0x0a, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x11, 0x2e, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x2e, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x43, 0x6f, 0x64, 0x65, 0x52, 0x04, 0x63, 0x6f, 0x64, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x6d, 0x73, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6d, 0x73, 0x67, 0x2a, 0x85, 0x01, 0x0a, 0x09, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x43, 0x6f, 0x64, 0x65, 0x12, 0x0b, 0x0a, 0x07, 0x53, 0x55, 0x43, 0x43, 0x45, 0x53, 0x53, 0x10, 0x00, 0x12, 0x16, 0x0a, 0x12, 0x49, 0x4e, 0x56, 0x41, 0x4c, 0x49, 0x44, 0x5f, 0x50, 0x41, 0x52, 0x41, 0x4d, 0x45, 0x54, 0x45, 0x52, 0x53, 0x10, 0x01, 0x12, 0x19, 0x0a, 0x15, 0x55, 0x4e, 0x53, 0x55, 0x50, 0x50, 0x4f, 0x52, 0x54, 0x45, 0x44, 0x5f, 0x4f, 0x50, 0x45, 0x52, 0x41, 0x54, 0x49, 0x4f, 0x4e, 0x10, 0x02, 0x12, 0x10, 0x0a, 0x0c, 0x55, 0x4e, 0x41, 0x55, 0x54, 0x48, 0x4f, 0x52, 0x49, 0x5a, 0x45, 0x44, 0x10, 0x03, 0x12, 0x12, 0x0a, 0x0e, 0x49, 0x4e, 0x54, 0x45, 0x52, 0x4e, 0x41, 0x4c, 0x5f, 0x45, 0x52, 0x52, 0x4f, 0x52, 0x10, 0x0a, 0x12, 0x12, 0x0a, 0x0e, 0x45, 0x58, 0x54, 0x45, 0x52, 0x4e, 0x41, 0x4c, 0x5f, 0x45, 0x52, 0x52, 0x4f, 0x52, 0x10, 0x14, 0x42, 0x28, 0x5a, 0x26, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x77, 0x69, 0x6c, 0x65, 0x6e, 0x63, 0x65, 0x79, 0x61, 0x6f, 0x2f, 0x68, 0x75, 0x6d, 0x6f, 0x72, 0x2d, 0x61, 0x70, 0x69, 0x2f, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x62, 0x06, 
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( file_common_common_proto_rawDescOnce sync.Once file_common_common_proto_rawDescData = file_common_common_proto_rawDesc ) func file_common_common_proto_rawDescGZIP() []byte { file_common_common_proto_rawDescOnce.Do(func() { file_common_common_proto_rawDescData = protoimpl.X.CompressGZIP(file_common_common_proto_rawDescData) }) return file_common_common_proto_rawDescData } var file_common_common_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_common_common_proto_msgTypes = make([]protoimpl.MessageInfo, 2) var file_common_common_proto_goTypes = []interface{}{ (ErrorCode)(0), // 0: common.ErrorCode (*BaseRequest)(nil), // 1: common.BaseRequest (*BaseResponse)(nil), // 2: common.BaseResponse } var file_common_common_proto_depIdxs = []int32{ 0, // 0: common.BaseResponse.code:type_name -> common.ErrorCode 1, // [1:1] is the sub-list for method output_type 1, // [1:1] is the sub-list for method input_type 1, // [1:1] is the sub-list for extension type_name 1, // [1:1] is the sub-list for extension extendee 0, // [0:1] is the sub-list for field type_name } func init() { file_common_common_proto_init() } func file_common_common_proto_init() { if File_common_common_proto != nil { return } if !protoimpl.UnsafeEnabled { file_common_common_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*BaseRequest); i { case 0: return &v.state case 1: return &v.sizeCache case 2: return &v.unknownFields default: return nil } } file_common_common_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*BaseResponse); i { case 0: return &v.state case 1: return &v.sizeCache case 2: return &v.unknownFields default: return nil } } } type x struct{} out := protoimpl.TypeBuilder{ File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: file_common_common_proto_rawDesc, NumEnums: 1, NumMessages: 2, NumExtensions: 0, NumServices: 0, }, GoTypes: file_common_common_proto_goTypes, DependencyIndexes: file_common_common_proto_depIdxs, EnumInfos: file_common_common_proto_enumTypes, MessageInfos: file_common_common_proto_msgTypes, }.Build() File_common_common_proto = out.File file_common_common_proto_rawDesc = nil file_common_common_proto_goTypes = nil file_common_common_proto_depIdxs = nil }
""" A subpackage dedicated to descriptions of internal halo properties, such as their mass. See ``halomod`` for more extended quantities in this regard. """ from . import mass_definitions from .mass_definitions import MassDefinition
/**
 * <NAME> <<EMAIL>>
 * Engineering Department
 * University of Oxford
 * Copyright (C) 2006. All rights reserved.
 *
 * Use and modify all you like, but do NOT redistribute. No warranty is
 * expressed or implied. No liability or responsibility is assumed.
 */
#ifndef __JP_STATS_HPP
#define __JP_STATS_HPP

#include <cstddef> // for size_t

template<class T>
class jp_stats_mean_var {
public:
    typedef T value_type;

private:
    size_t count_;
    value_type sum_x_;
    value_type sum_xx_;

public:
    jp_stats_mean_var() : count_(0), sum_x_(0), sum_xx_(0) { }

    // Accumulate one sample into the running sums.
    template<class F>
    void operator()(const F& val) {
        sum_x_ += val;
        sum_xx_ += (val*val);
        count_++;
    }

    value_type mean() const { return sum_x_/count_; }

    // Unbiased sample variance (n - 1 denominator).
    value_type variance() const {
        if (count_ == 1)
            return value_type(0);
        else
            return (sum_xx_ - (value_type(1)/count_)*sum_x_*sum_x_)/(count_ - 1);
        //return std::max((value_type(1)/(count_ - 1)) * (sum_xx_ - (value_type(1)/count_)*sum_x_*sum_x_), value_type(0.0));
    }

    size_t count() const { return count_; }
};

#endif
/** * Package: PACKAGE_NAME * Created by Ben Zhao on 2021/7/14 * Project: Ben-leetcode-practice */ public class leetcode1846 { public static void main(String[] args) { readMeSet.addnewline("https://leetcode.com/problems/maximum-element-after-decreasing-and-rearranging/", 1846); } public static int maximumElementAfterDecrementingAndRearranging(int[] arr) { int[] map = new int[arr.length]; int ans = 0; int miss = 0; for (int n: arr) { if (n >= arr.length) { map[arr.length - 1]++; }else if (n > 0) { map[n - 1]++; } } for (int i = map.length - 1; i >= 0 ; i--) { if (map[i] == 0) { if (miss == 0) { ans++; }else { miss--; } }else { miss += map[i] - 1; } } return map.length - ans; } }
import os


def CompileProtoBuf(env, input_proto_files):
  """SCons builder: run protoc over input_proto_files and return the
  generated .pb.cc targets."""
  proto_compiler_path = '%s/protoc.exe' % os.getenv('OMAHA_PROTOBUF_BIN_DIR')
  proto_path = env['PROTO_PATH']
  cpp_out = env['CPP_OUT']
  # One generated .pb.cc target per input .proto, mirrored under cpp_out.
  targets = [os.path.join(cpp_out, os.path.splitext(base)[0] + '.pb.cc')
             for base in [RelativePath(in_file, proto_path)
                          for in_file in input_proto_files]]
  proto_arguments = (' --proto_path=%s --cpp_out=%s %s ' %
                     (proto_path, cpp_out, ' '.join(input_proto_files)))
  proto_cmd_line = proto_compiler_path + proto_arguments
  compile_proto_buf = env.Command(
      target=targets,
      source=input_proto_files,
      action=proto_cmd_line,
  )
  return compile_proto_buf
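A minimal sketch of how this builder function might be invoked from a SConscript. The environment keys and paths below are assumptions for illustration, not from the original build files (RelativePath is a project helper assumed to be in scope):

# Hypothetical SConscript usage (paths and keys are illustrative).
env = env.Clone()
env['PROTO_PATH'] = 'proto'
env['CPP_OUT'] = 'out/proto'

generated = CompileProtoBuf(env, ['proto/update_request.proto',
                                  'proto/update_response.proto'])
# The returned nodes can then feed a library build.
env.StaticLibrary('proto_messages', generated)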
// This test checks if an expired declined lease can be reused in SOLICIT (fake allocation) TEST_F(AllocEngine6Test, solicitReuseDeclinedLease6) { AllocEnginePtr engine(new AllocEngine(AllocEngine::ALLOC_ITERATIVE, 100)); ASSERT_TRUE(engine); string addr_txt("2001:db8:1::ad"); IOAddress addr(addr_txt); initSubnet(IOAddress("2001:db8:1::"), addr, addr); Lease6Ptr declined = generateDeclinedLease(addr_txt, 100, -10); ASSERT_TRUE(declined->expired()); Lease6Ptr assigned; testReuseLease6(engine, declined, "::", true, SHOULD_PASS, assigned); ASSERT_TRUE(assigned); EXPECT_EQ(addr, assigned->addr_); checkLease6(duid_, assigned, Lease::TYPE_NA, 128); testReuseLease6(engine, declined, addr_txt, true, SHOULD_PASS, assigned); ASSERT_TRUE(assigned); EXPECT_EQ(addr, assigned->addr_); }
// WriteToRequest writes these params to a swagger request func (o *GetAlertGroupsParams) WriteToRequest(r runtime.ClientRequest, reg strfmt.Registry) error { if err := r.SetTimeout(o.timeout); err != nil { return err } var res []error if o.Active != nil { var qrActive bool if o.Active != nil { qrActive = *o.Active } qActive := swag.FormatBool(qrActive) if qActive != "" { if err := r.SetQueryParam("active", qActive); err != nil { return err } } } valuesFilter := o.Filter joinedFilter := swag.JoinByFormat(valuesFilter, "multi") if err := r.SetQueryParam("filter", joinedFilter...); err != nil { return err } if o.Inhibited != nil { var qrInhibited bool if o.Inhibited != nil { qrInhibited = *o.Inhibited } qInhibited := swag.FormatBool(qrInhibited) if qInhibited != "" { if err := r.SetQueryParam("inhibited", qInhibited); err != nil { return err } } } if o.Receiver != nil { var qrReceiver string if o.Receiver != nil { qrReceiver = *o.Receiver } qReceiver := qrReceiver if qReceiver != "" { if err := r.SetQueryParam("receiver", qReceiver); err != nil { return err } } } if o.Silenced != nil { var qrSilenced bool if o.Silenced != nil { qrSilenced = *o.Silenced } qSilenced := swag.FormatBool(qrSilenced) if qSilenced != "" { if err := r.SetQueryParam("silenced", qSilenced); err != nil { return err } } } if len(res) > 0 { return errors.CompositeValidationError(res...) } return nil }
class QueuedAfter: """QueuedAfter objects are objects representing a call to '.after'.""" def __init__(self, ms, func, args): self.ms = ms self.func = func self.args = args
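A small illustrative use: a hypothetical test double that records tkinter-style .after calls as QueuedAfter objects and flushes them on demand. FakeScheduler and its API are invented for this sketch, not from the original project:

class FakeScheduler:
    """Test double that queues .after calls instead of scheduling them
    on a real event loop."""
    def __init__(self):
        self.queue = []

    def after(self, ms, func, *args):
        self.queue.append(QueuedAfter(ms, func, args))

    def flush(self):
        # Run queued callbacks in delay order, then clear the queue.
        for call in sorted(self.queue, key=lambda q: q.ms):
            call.func(*call.args)
        self.queue.clear()

sched = FakeScheduler()
sched.after(100, print, "second")
sched.after(50, print, "first")
sched.flush()  # prints "first" then "second"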
//g++ 7.4.0 #include <iostream> #include <bits/stdc++.h> using namespace std; typedef long long ll; void solve(string s[],ll N,ll M) { ll getA[N]; ll getB[N]; for(ll i=0;i<N;++i) { getA[i] = -1; getB[i] = -1; } for(ll i=0;i<N;++i) { for(ll j=0;j<N;++j) { if(s[i][j] == 'a') getA[i] = j; if(s[i][j] == 'b') getB[i] = j; } } for(ll i=0;i<N;++i) { for(ll j = (i + 1);j<N;++j) { if(s[i][j] == s[j][i]) { cout<<"YES"<<endl; for(ll k=0;k<=M;++k) { if(k%2) cout<<(j + 1)<<" "; else cout<<(i + 1)<<" "; } cout<<endl; return; } } } if(M%2) { cout<<"YES"<<endl; for(ll k=0;k<=M;++k) { if(k%2) cout<<1<<" "; else cout<<2<<" "; } cout<<endl; return; } for(ll i=0;i<N;++i) { for(ll j=0;j<N;++j) { ll K = -1; if(((s[i][j] == 'a') && (getA[j] != -1))) { K = getA[j]; } else if(((s[i][j] == 'b') && (getB[j] != -1))) { K = getB[j]; } if(K == -1) continue; cout<<"YES"<<endl; if((M/2) % 2) { for(ll k=0;k<=M;++k) { if(k%4 == 0) cout<<(i + 1)<<" "; else if(k%4 == 1) cout<<(j + 1)<<" "; else if(k%4 == 2) cout<<(K + 1)<<" "; else cout<<(j + 1)<<" "; } cout<<endl; return; } for(ll k=0;k<=M;++k) { if(k%4 == 0) cout<<(j + 1)<<" "; else if(k%4 == 1) cout<<(i + 1)<<" "; else if(k%4 == 2) cout<<(j + 1)<<" "; else cout<<(K + 1)<<" "; } return; } } cout<<"NO"<<endl; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); ll T; cin>>T; while(T--) { ll N,M; cin>>N>>M; string s[N]; for(ll i=0;i<N;++i) cin>>s[i]; solve(s,N,M); } }
// reset is called by program creator between each processor's emit code. It increments the
// stage offset for variable name mangling, and also ensures verification variables in the
// fragment shader are cleared.
void reset() {
    this->enterStage();
    this->addStage();
    fFS.reset();
}
module Foo (f) where

-- export food

f x = x

-- !!! weird patterns with no variables
1 = f 1
[] = f []
1 = f (f 1)
[] = f (f [])
/** * Tests the parameters class verifies the getters and setters. Verifies that it * can serialize/deserialize and verifies that the xml format is as expected. * */ @SuppressWarnings("boxing") public class ParametersTest { /** * Tests the parameters class. * * @throws JAXBException Bad xml. * @throws IOException Bad stream. */ @Test public void test() throws JAXBException, IOException { Parameters expected = new Parameters(); expected.setBgColor("bgColor"); expected.setCustom("custom"); expected.setFormat("format"); expected.setHeight(10); expected.setSrs("srs"); expected.setStyle("style"); expected.setTransparent(true); expected.setWidth(11); expected.setLayerName("layerName"); InputStream stream = XMLUtilities.writeXMLObjectToInputStreamSync(expected); StringBuilder builder = new StringBuilder(); int character = stream.read(); while (character > 0) { builder.append((char)character); character = stream.read(); } String xmlString = builder.toString(); assertTrue(xmlString.contains("<BGCOLOR>bgColor</BGCOLOR>")); assertTrue(xmlString.contains("<CUSTOM>custom</CUSTOM>")); assertTrue(xmlString.contains("<FORMAT>format</FORMAT>")); assertTrue(xmlString.contains("<HEIGHT>10</HEIGHT>")); assertTrue(xmlString.contains("<SRS>srs</SRS>")); assertTrue(xmlString.contains("<STYLE>style</STYLE>")); assertTrue(xmlString.contains("<TRANSPARENT>true</TRANSPARENT>")); assertTrue(xmlString.contains("<WIDTH>11</WIDTH>")); assertTrue(xmlString.contains("<LAYERS>layerName</LAYERS>")); stream.reset(); Parameters actual = XMLUtilities.readXMLObject(stream, Parameters.class); assertEquals(expected.getBgColor(), actual.getBgColor()); assertEquals(expected.getCustom(), actual.getCustom()); assertEquals(expected.getFormat(), actual.getFormat()); assertEquals(expected.getHeight(), actual.getHeight()); assertEquals(expected.getSrs(), actual.getSrs()); assertEquals(expected.getStyle(), actual.getStyle()); assertEquals(expected.isTransparent(), actual.isTransparent()); assertEquals(expected.getWidth(), actual.getWidth()); assertEquals(expected.getLayerName(), actual.getLayerName()); } }
import math

# Pick the integer x that minimises sum((a_i - x)^2); the optimum is at
# the floor or the ceiling of the mean, so it suffices to try both.
n = int(input())
li = list(map(int, input().split()))
my_mean = sum(li) / len(li)
ll = math.floor(my_mean)
lll = math.ceil(my_mean)

c = sum((v - ll) ** 2 for v in li)
cc = sum((v - lll) ** 2 for v in li)
print(min(c, cc))
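A quick way to sanity-check the floor/ceil-of-mean shortcut is to compare it against a brute-force scan over all candidate integers (illustrative, not part of the original solution):

import math

def cost(values, x):
    return sum((v - x) ** 2 for v in values)

vals = [1, 2, 9]
mean = sum(vals) / len(vals)  # 4.0
shortcut = min(cost(vals, math.floor(mean)), cost(vals, math.ceil(mean)))
brute = min(cost(vals, x) for x in range(min(vals), max(vals) + 1))
assert shortcut == brute  # both are 38 here
print(shortcut)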
/* * Copyright (c) 2021 Samsung Electronics Co., Ltd. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ // CLASS HEADER #include <dali/internal/adaptor/common/framework.h> // EXTERNAL INCLUDES #include <android_native_app_glue.h> #include <unistd.h> #include <queue> #include <unordered_set> #include <dali/devel-api/events/key-event-devel.h> #include <dali/devel-api/events/touch-point.h> #include <dali/integration-api/adaptor-framework/adaptor.h> #include <dali/integration-api/adaptor-framework/android/android-framework.h> #include <dali/integration-api/debug.h> #include <dali/public-api/actors/actor.h> #include <dali/public-api/actors/layer.h> #include <dali/public-api/events/key-event.h> // INTERNAL INCLUDES #include <dali/internal/adaptor/android/android-framework-impl.h> #include <dali/internal/system/common/callback-manager.h> namespace Dali { namespace Internal { namespace Adaptor { namespace { // Copied from x server static unsigned int GetCurrentMilliSeconds(void) { struct timeval tv; struct timespec tp; static clockid_t clockid; if(!clockid) { #ifdef CLOCK_MONOTONIC_COARSE if(clock_getres(CLOCK_MONOTONIC_COARSE, &tp) == 0 && (tp.tv_nsec / 1000) <= 1000 && clock_gettime(CLOCK_MONOTONIC_COARSE, &tp) == 0) { clockid = CLOCK_MONOTONIC_COARSE; } else #endif if(clock_gettime(CLOCK_MONOTONIC, &tp) == 0) { clockid = CLOCK_MONOTONIC; } else { clockid = ~0L; } } if(clockid != ~0L && clock_gettime(clockid, &tp) == 0) { return (tp.tv_sec * 1000) + (tp.tv_nsec / 1000000L); } gettimeofday(&tv, NULL); return (tv.tv_sec * 1000) + (tv.tv_usec / 1000); } /// Recursively removes constraints from an actor and all it's children. void RemoveAllConstraints(Dali::Actor actor) { if(actor) { const auto childCount = actor.GetChildCount(); for(auto i = 0u; i < childCount; ++i) { Dali::Actor child = actor.GetChildAt(i); RemoveAllConstraints(child); } actor.RemoveConstraints(); } } /// Removes constraints from all actors in all windows. 
void RemoveAllConstraints(const Dali::WindowContainer& windows) { for(auto& window : windows) { RemoveAllConstraints(window.GetRootLayer()); } } } // Unnamed namespace /** * Impl to hide android data members */ struct Framework::Impl { struct IdleCallback { int timestamp; int timeout; int id; void* data; bool (*callback)(void* data); IdleCallback(int timeout, int id, void* data, bool (*callback)(void* data)) : timestamp(GetCurrentMilliSeconds() + timeout), timeout(timeout), id(id), data(data), callback(callback) { } bool operator()() { return callback(data); } bool operator<(const IdleCallback& rhs) const { return timestamp > rhs.timestamp; } }; // Constructor Impl(Framework* framework) : mAbortCallBack(nullptr), mCallbackManager(CallbackManager::New()), mLanguage("NOT_SUPPORTED"), mRegion("NOT_SUPPORTED"), mFinishRequested(false), mIdleId(0), mIdleReadPipe(-1), mIdleWritePipe(-1) { AndroidFramework::GetImplementation(AndroidFramework::Get()).SetFramework(framework); } ~Impl() { AndroidFramework::GetImplementation(AndroidFramework::Get()).SetFramework(nullptr); delete mAbortCallBack; mAbortCallBack = nullptr; // we're quiting the main loop so // mCallbackManager->RemoveAllCallBacks() does not need to be called // to delete our abort handler delete mCallbackManager; mCallbackManager = nullptr; } std::string GetLanguage() const { return mLanguage; } std::string GetRegion() const { return mRegion; } void OnIdle() { // Dequeue the pipe int8_t msg = -1; read(mIdleReadPipe, &msg, sizeof(msg)); unsigned int ts = GetCurrentMilliSeconds(); if(!mIdleCallbacks.empty()) { IdleCallback callback = mIdleCallbacks.top(); if(callback.timestamp <= ts) { mIdleCallbacks.pop(); // Callback wasn't removed if(mRemovedIdleCallbacks.find(callback.id) == mRemovedIdleCallbacks.end()) { if(callback()) // keep the callback { AddIdle(callback.timeout, callback.data, callback.callback, callback.id); } } // Callback cane be also removed during the callback call auto i = mRemovedIdleCallbacks.find(callback.id); if(i != mRemovedIdleCallbacks.end()) { mRemovedIdleCallbacks.erase(i); } } } if(mIdleCallbacks.empty()) { mRemovedIdleCallbacks.clear(); } } unsigned int AddIdle(int timeout, void* data, bool (*callback)(void* data), unsigned int existingId = 0) { unsigned int chosenId; if(existingId) { chosenId = existingId; } else { ++mIdleId; if(mIdleId == 0) { ++mIdleId; } chosenId = mIdleId; } mIdleCallbacks.push(IdleCallback(timeout, chosenId, data, callback)); // To wake up the idle pipe and to trigger OnIdle int8_t msg = 1; write(mIdleWritePipe, &msg, sizeof(msg)); return chosenId; } void RemoveIdle(unsigned int id) { if(id != 0) { mRemovedIdleCallbacks.insert(id); } } int GetIdleTimeout() { int timeout = -1; if(!mIdleCallbacks.empty()) { IdleCallback idleTimeout = mIdleCallbacks.top(); timeout = idleTimeout.timestamp - GetCurrentMilliSeconds(); if(timeout < 0) { timeout = 0; } } return timeout; } // Data CallbackBase* mAbortCallBack; CallbackManager* mCallbackManager; std::string mLanguage; std::string mRegion; bool mFinishRequested; int mIdleReadPipe; int mIdleWritePipe; unsigned int mIdleId; std::priority_queue<IdleCallback> mIdleCallbacks; std::unordered_set<int> mRemovedIdleCallbacks; // Static methods /** * Called by the native activity loop when the application APP_CMD_INIT_WINDOW event is processed. 
*/ static void NativeWindowCreated(Framework* framework, ANativeWindow* window) { if(framework) { framework->AppStatusHandler(APP_WINDOW_CREATED, window); } } /** * Called by the native activity loop when the application APP_CMD_DESTROY event is processed. */ static void NativeWindowDestroyed(Framework* framework, ANativeWindow* window) { if(framework) { framework->AppStatusHandler(APP_WINDOW_DESTROYED, window); } } /** * Called by the native activity loop when the application APP_CMD_INIT_WINDOW event is processed. */ static void NativeAppPaused(Framework* framework) { if(framework) { framework->AppStatusHandler(APP_PAUSE, nullptr); } } /** * Called by the native activity loop when the application APP_CMD_TERM_WINDOW event is processed. */ static void NativeAppResumed(Framework* framework) { if(framework) { framework->AppStatusHandler(APP_RESUME, nullptr); } } /** * Called by the native activity loop when the application input touch event is processed. */ static void NativeAppTouchEvent(Framework* framework, Dali::TouchPoint& touchPoint, int64_t timeStamp) { Dali::Adaptor::Get().FeedTouchPoint(touchPoint, timeStamp); } /** * Called by the native activity loop when the application input key event is processed. */ static void NativeAppKeyEvent(Framework* framework, Dali::KeyEvent& keyEvent) { Dali::Adaptor::Get().FeedKeyEvent(keyEvent); } /** * Called by the native activity loop when the application APP_CMD_DESTROY event is processed. */ static void NativeAppDestroyed(Framework* framework) { if(framework) { framework->AppStatusHandler(APP_DESTROYED, nullptr); } } /* Order of events: APP_CMD_START APP_CMD_RESUME APP_CMD_INIT_WINDOW APP_CMD_GAINED_FOCUS APP_CMD_PAUSE APP_CMD_LOST_FOCUS APP_CMD_SAVE_STATE APP_CMD_STOP APP_CMD_TERM_WINDOW */ static void HandleAppCmd(struct android_app* app, int32_t cmd) { Framework* framework = AndroidFramework::GetImplementation(AndroidFramework::Get()).GetFramework(); switch(cmd) { case APP_CMD_SAVE_STATE: break; case APP_CMD_START: break; case APP_CMD_STOP: break; case APP_CMD_RESUME: break; case APP_CMD_PAUSE: break; case APP_CMD_INIT_WINDOW: // The window is being shown, get it ready. AndroidFramework::Get().SetApplicationWindow(app->window); Dali::Internal::Adaptor::Framework::Impl::NativeWindowCreated(framework, app->window); Dali::Internal::Adaptor::Framework::Impl::NativeAppResumed(framework); break; case APP_CMD_TERM_WINDOW: // The window is being hidden or closed, clean it up. 
AndroidFramework::Get().SetApplicationWindow(nullptr); Dali::Internal::Adaptor::Framework::Impl::NativeAppPaused(framework); Dali::Internal::Adaptor::Framework::Impl::NativeWindowDestroyed(framework, app->window); break; case APP_CMD_GAINED_FOCUS: break; case APP_CMD_LOST_FOCUS: break; case APP_CMD_DESTROY: Dali::Internal::Adaptor::Framework::Impl::NativeAppPaused(framework); Dali::Internal::Adaptor::Framework::Impl::NativeAppDestroyed(framework); break; } } static int32_t HandleAppInput(struct android_app* app, AInputEvent* event) { Framework* framework = AndroidFramework::GetImplementation(AndroidFramework::Get()).GetFramework(); if(AInputEvent_getType(event) == AINPUT_EVENT_TYPE_MOTION) { int32_t deviceId = AInputEvent_getDeviceId(event); float x = AMotionEvent_getX(event, 0); float y = AMotionEvent_getY(event, 0); Dali::PointState::Type state = Dali::PointState::DOWN; int32_t action = AMotionEvent_getAction(event); int64_t timeStamp = AMotionEvent_getEventTime(event); switch(action & AMOTION_EVENT_ACTION_MASK) { case AMOTION_EVENT_ACTION_DOWN: break; case AMOTION_EVENT_ACTION_UP: state = Dali::PointState::UP; break; case AMOTION_EVENT_ACTION_MOVE: state = Dali::PointState::MOTION; break; case AMOTION_EVENT_ACTION_CANCEL: state = Dali::PointState::INTERRUPTED; break; case AMOTION_EVENT_ACTION_OUTSIDE: state = Dali::PointState::LEAVE; break; } Dali::TouchPoint point(deviceId, state, x, y); Dali::Internal::Adaptor::Framework::Impl::NativeAppTouchEvent(framework, point, timeStamp); return 1; } else if(AInputEvent_getType(event) == AINPUT_EVENT_TYPE_KEY) { int32_t deviceId = AInputEvent_getDeviceId(event); int32_t keyCode = AKeyEvent_getKeyCode(event); int32_t action = AKeyEvent_getAction(event); int64_t timeStamp = AKeyEvent_getEventTime(event); Dali::KeyEvent::State state = Dali::KeyEvent::DOWN; switch(action) { case AKEY_EVENT_ACTION_DOWN: break; case AKEY_EVENT_ACTION_UP: state = Dali::KeyEvent::UP; break; } std::string keyName = ""; switch(keyCode) { case 4: keyName = "XF86Back"; break; default: break; } Dali::KeyEvent keyEvent = Dali::DevelKeyEvent::New(keyName, "", "", keyCode, 0, timeStamp, state, "", "", Device::Class::NONE, Device::Subclass::NONE); Dali::Internal::Adaptor::Framework::Impl::NativeAppKeyEvent(framework, keyEvent); return 1; } return 0; } static void HandleAppIdle(struct android_app* app, struct android_poll_source* source) { Framework* framework = AndroidFramework::GetImplementation(AndroidFramework::Get()).GetFramework(); if(framework && framework->mImpl) { framework->mImpl->OnIdle(); } } }; Framework::Framework(Framework::Observer& observer, int* argc, char*** argv, Type type) : mObserver(observer), mInitialised(false), mPaused(false), mRunning(false), mArgc(argc), mArgv(argv), mBundleName(""), mBundleId(""), mAbortHandler(MakeCallback(this, &Framework::AbortCallback)), mImpl(NULL) { mImpl = new Impl(this); } Framework::~Framework() { if(mRunning) { Quit(); } delete mImpl; mImpl = nullptr; } void Framework::Run() { struct android_app* app = AndroidFramework::Get().GetNativeApplication(); app->onAppCmd = Framework::Impl::HandleAppCmd; app->onInputEvent = Framework::Impl::HandleAppInput; struct android_poll_source* source; struct android_poll_source idlePollSource; idlePollSource.id = LOOPER_ID_USER; idlePollSource.app = app; idlePollSource.process = Impl::HandleAppIdle; int idlePipe[2]; if(pipe(idlePipe)) { DALI_LOG_ERROR("Failed to open idle pipe\n"); return; } mImpl->mIdleReadPipe = idlePipe[0]; mImpl->mIdleWritePipe = idlePipe[1]; ALooper_addFd(app->looper, 
idlePipe[0], LOOPER_ID_USER, ALOOPER_EVENT_INPUT, NULL, &idlePollSource); mRunning = true; // Read all pending events. int events; int idleTimeout = -1; while(true) { if(mImpl) { idleTimeout = mImpl->GetIdleTimeout(); } int id = ALooper_pollAll(idleTimeout, NULL, &events, (void**)&source); // Process the error. if(id == ALOOPER_POLL_ERROR) { DALI_LOG_ERROR("ALooper error\n"); Quit(); std::abort(); } // Process the timeout, trigger OnIdle. if(id == ALOOPER_POLL_TIMEOUT) { int8_t msg = 1; write(mImpl->mIdleWritePipe, &msg, sizeof(msg)); } // Process the application event. if(id >= 0 && source != NULL) { source->process(app, source); } // Check if we are exiting. if(app->destroyRequested) { break; } } while(!mImpl->mIdleCallbacks.empty()) { mImpl->mIdleCallbacks.pop(); } mImpl->mRemovedIdleCallbacks.clear(); mImpl->mIdleId = 0; ALooper_removeFd(app->looper, idlePipe[0]); if(mImpl) { mImpl->mIdleReadPipe = -1; mImpl->mIdleWritePipe = -1; } close(idlePipe[0]); close(idlePipe[1]); mRunning = false; } unsigned int Framework::AddIdle(int timeout, void* data, bool (*callback)(void* data)) { if(mImpl) { return mImpl->AddIdle(timeout, data, callback); } return 0; } void Framework::RemoveIdle(unsigned int id) { if(mImpl) { mImpl->RemoveIdle(id); } } void Framework::Quit() { struct android_app* app = AndroidFramework::Get().GetNativeApplication(); if(app && !app->destroyRequested && !mImpl->mFinishRequested) { mImpl->mFinishRequested = true; ANativeActivity_finish(app->activity); } } bool Framework::IsMainLoopRunning() { return mRunning; } void Framework::AddAbortCallback(CallbackBase* callback) { mImpl->mAbortCallBack = callback; } std::string Framework::GetBundleName() const { return mBundleName; } void Framework::SetBundleName(const std::string& name) { mBundleName = name; } std::string Framework::GetBundleId() const { return mBundleId; } std::string Framework::GetResourcePath() { return DALI_DATA_RO_DIR; } std::string Framework::GetDataPath() { return ""; } void Framework::SetBundleId(const std::string& id) { mBundleId = id; } void Framework::AbortCallback() { // if an abort call back has been installed run it. if(mImpl->mAbortCallBack) { CallbackBase::Execute(*mImpl->mAbortCallBack); } else { Quit(); } } bool Framework::AppStatusHandler(int type, void* data) { Dali::Adaptor* adaptor = nullptr; switch(type) { case APP_WINDOW_CREATED: { if(!mInitialised) { mObserver.OnInit(); mInitialised = true; } mObserver.OnSurfaceCreated(data); break; } case APP_RESET: { mObserver.OnReset(); break; } case APP_RESUME: { mObserver.OnResume(); adaptor = &Dali::Adaptor::Get(); adaptor->Resume(); break; } case APP_WINDOW_DESTROYED: { mObserver.OnSurfaceDestroyed(data); break; } case APP_PAUSE: { adaptor = &Dali::Adaptor::Get(); adaptor->Pause(); mObserver.OnPause(); break; } case APP_LANGUAGE_CHANGE: { mObserver.OnLanguageChanged(); break; } case APP_DESTROYED: { adaptor = &Dali::Adaptor::Get(); // Need to remove constraints before Terminate is called as the constraint function // can be destroyed before the constraints get a chance to clean up. RemoveAllConstraints(adaptor->GetWindows()); mObserver.OnTerminate(); mInitialised = false; break; } default: { break; } } return true; } void Framework::InitThreads() { } std::string Framework::GetLanguage() const { return mImpl->GetLanguage(); } std::string Framework::GetRegion() const { return mImpl->GetRegion(); } } // namespace Adaptor } // namespace Internal } // namespace Dali
package jenkins.plugins.logstash.pipeline; import java.io.PrintStream; import java.util.HashSet; import java.util.Set; import org.jenkinsci.plugins.workflow.steps.Step; import org.jenkinsci.plugins.workflow.steps.StepContext; import org.jenkinsci.plugins.workflow.steps.StepDescriptor; import org.jenkinsci.plugins.workflow.steps.StepExecution; import org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution; import org.kohsuke.stapler.DataBoundConstructor; import edu.umd.cs.findbugs.annotations.SuppressFBWarnings; import hudson.Extension; import hudson.model.Run; import hudson.model.TaskListener; import jenkins.plugins.logstash.LogstashWriter; import jenkins.plugins.logstash.Messages; /** * Sends the tail of the log in a single event to a logstash indexer. * Pipeline counterpart of the LogstashNotifier. */ public class LogstashSendStep extends Step { private int maxLines; private boolean failBuild; private String logFile; @DataBoundConstructor public LogstashSendStep(int maxLines, boolean failBuild, String logFile) { this.maxLines = maxLines; this.failBuild = failBuild; this.logFile = logFile; } public int getMaxLines() { return maxLines; } public boolean isFailBuild() { return failBuild; } @Override public StepExecution start(StepContext context) throws Exception { return new Execution(context, maxLines, failBuild, logFile); } @SuppressFBWarnings(value="SE_TRANSIENT_FIELD_NOT_RESTORED", justification="Only used when starting.") private static class Execution extends SynchronousNonBlockingStepExecution<Void> { private static final long serialVersionUID = 1L; private transient final int maxLines; private transient final boolean failBuild; private transient final String logFile; Execution(StepContext context, int maxLines, boolean failBuild, String logFile) { super(context); this.maxLines = maxLines; this.failBuild = failBuild; this.logFile = logFile; } @Override protected Void run() throws Exception { Run<?, ?> run = getContext().get(Run.class); TaskListener listener = getContext().get(TaskListener.class); PrintStream errorStream = listener.getLogger(); LogstashWriter logstash = new LogstashWriter(run, errorStream, listener, run.getCharset()); logstash.writeBuildLog(maxLines, logFile); if (failBuild && logstash.isConnectionBroken()) { throw new Exception("Failed to send data to Indexer"); } return null; } } @Extension public static class DescriptorImpl extends StepDescriptor { /** {@inheritDoc} */ @Override public String getDisplayName() { return Messages.DisplayName(); } @Override public String getFunctionName() { return "logstashSend"; } @Override public Set<? extends Class<?>> getRequiredContext() { Set<Class<?>> contexts = new HashSet<>(); contexts.add(TaskListener.class); contexts.add(Run.class); return contexts; } } }
import { TypeScriptType } from "./TypeScriptType"; export interface EdmxBase { ReferencedBy: FunctionType[]; Name: string; Properties: EntityTypeProperty[]; NavigationProperties: EntityTypeNavigationProperty[]; ReferencedByRoot?: ComplexType[]; } export interface EntityTypeProperty { TypescriptType?: TypeScriptType; IsMultiSelect: boolean; Name: string; Type: string; Nullable?: boolean; IsRequired?: boolean; Format: string; DisplayName: string; Description: string; IsEnum: boolean; } export interface EntityTypeNavigationProperty { LogicalName: string; FullName: string; Type: string; Name: string; Relationship?: string; FromRole?: string; ToRole?: string; IsCollection?: boolean; ReferentialConstraint?: string; ReferencedProperty?: string; } export interface EntityType extends EdmxBase { SchemaName?: string; OpenType: boolean; EntitySetName?: string; BaseType: string; Abstract?: boolean; KeyName?: string; Key: Array<{ PropertyRef: { Name: string; }; }>; } export interface EntitySet { Name: string; EntitySetName: string; EntityType: string; CustomActions: ActionType[]; CustomFunctions: FunctionType[]; } export type ComplexType = EdmxBase; export interface Association { Name: string; End: Array<{ Role: string; Multiplicity: string; }>; } export interface EnumType { Name: string; UnderlyingType?: string; Value?: string; StringMembers?: boolean; Members?: EnumMember[]; ReferencedBy?: FunctionType[]; ReferencedByRoot?: ComplexType[]; } export interface EnumMember { Name: string; Value: string; } export type ActionType = FunctionType; export interface FunctionParameterType { structuralTypeName?: string; TypescriptTypes?: TypeScriptType[]; Name: string; Type: string; } export interface FunctionType extends EdmxBase { Name: string; IsBound: boolean; BindingParameter: string; IsBindable: boolean; IsSideEffecting: boolean; ReturnType?: string; IsCollectionAction: boolean; ReturnsCollection: boolean; Parameters: FunctionParameterType[]; }
Image-based Bone Density Classification Using Fractal Dimensions and Histological Analysis of Implant Recipient Site

Background: The success of dental implants is affected by the quality and density of the alveolar bone. These parameters are essential for implant stability and influence its load-bearing capacity. Their assessment is usually based on preoperative radiographs, used as a tool prior to implant procedures.

Objective: The aim of the study was to compare the bone density of bone specimens surgically harvested from implant recipient sites in the maxillary and mandibular posterior regions, assessed by histological analysis, with the radiographic bone density assessed by fractal dimension, in order to test the reliability of the radiographic measure and to derive an image-based classification of bone density prior to surgery.

Methods: Fifty implants were placed in the posterior region of male patients (twenty-five implants in the maxilla and twenty-five in the mandible). The edentulous regions were assessed presurgically using photostimulable phosphor (PSP) plate intraoral radiographs, and the box-counting fractal dimension of a region of interest at the implant recipient site was calculated. During surgery, bone core specimens were trephined, and bone density and mineral parameters were evaluated by histological analysis using scanning electron microscopy (SEM) and atomic absorption spectrometry.

Results: Fractal dimension (FD) values for the same region of interest (ROI) selected on the radiographs of the bone blocks and of the edentulous sites were different but varied proportionally in the molar and premolar regions of the maxilla and mandible. Bone density, calculated as the ratio of bone mass (BM) to bone volume (BV) of the bone core specimen (D = M/V), was higher in the mandibular bone blocks and lower in the maxillary specimens. Moreover, the fractal dimension values of preoperative radiographs at implant recipient sites and the densities of the trephined cores showed a statistically similar distribution. However, scanning electron microscopy showed no significant difference between the maxilla and the mandible in the percentage of mineral content or the mass of calcium phosphate of the bone specimens. Four types of bone density were classified according to the distribution of the FD values from the preoperative radiographs and the calculated densities of the bone cores.

Conclusion: Radiographic estimation of bone quality by fractal dimension could be a useful, non-invasive tool for predicting bone density at implant recipient sites from preoperative intraoral radiographs, with caution regarding the type of digital radiograph and the size of the region of interest, especially when the results are referenced against bone specimens harvested from the implant site.

INTRODUCTION

The success rate of dental implants is considered to be influenced by the quantity of bone surrounding the implant and by the quality of the bone, which is one of the most critical parameters for successful implant placement (1,2). Comparing the two jaws, the literature reports a higher implant success rate in the mandible than in the maxilla (3)(4)(5); moreover, a survey of studies on implant planning and treatment showed a diversity of classification systems and measurement units for bone quality (6).
Generally, the quality of bone is considered the sum of all the characteristics of bone that influence its resistance to fracture (7), including its structural aspects and the degree of bone tissue mineralization (8)(9)(10)(11). Density is the quality of radiopacity; the concept of mass per volume (D = M/V) rests on the fact that X-ray absorption is proportional to the mass of calcium in that unit of bone volume. Bone density is a key factor in determining the treatment plan, surgical approach and healing time at an edentulous site (12). Many clinicians therefore use bone density, evaluated at implant sites on preoperative radiographs, as an objective indicator to differentiate bone types (13)(14)(15).

In the field of dentistry, several classifications of bone density have been proposed for assessing bone quality. Lekholm and Zarb (16) classified alveolar bone quality from type 1 to type 4 according to the radiographic morphology and the amount of cortical versus trabecular bone. Misch et al. (17) adjusted this classification based on the subjective tactile sensation detected by the surgeon during implant drilling; yet these classifications still lack the analysis of bone tissue specimens as a reference and for additional observation.

Fractal analysis (FA) is a method by which complex and irregular structures can be evaluated mathematically, and the fractal dimension (FD) is the quantitative outcome of this method (18). Many studies have used fractal analysis to explore bone structures (19)(20)(21)(22). In dentistry, FA has been applied to panoramic and periapical radiographs, which are the most frequently used images (23), and to assess and quantify the trabecular bone pattern of the jaw by counting the bone marrow and trabecular bone interface (24). The method has been employed by many researchers to investigate bone mineral density (BMD). When the box-counting value is high, the trabecular and medullary bone boundary is more complex; conversely, a reduction in bone density corresponds to a diminishing fractal dimension (25). Fractal analysis can be calculated from digitized images using a computer program described by White & Rudolph, in which the morphological features of the trabecular bone are measured (26). The box-counting algorithm, considered easy and accessible (27,28) and frequently used by radiologists to predict bone density (29), was chosen for the calculation; it facilitates quantitative evaluation of the bone microarchitecture given the high precision of the digital images available (21). Very few studies have estimated bone density by analyzing the mineral content and the mass-to-volume ratio of bone biopsies to provide supplementary information.

AIM

The aim of this study is to compare the fractal dimension values of two similar regions of interest, one from a preoperative implant site and the other from a trephined bone block. Another aim is to propose a suitable image-based classification of bone density type by correlating the bone density values calculated from bone biopsies and the mass of calcium phosphate with the fractal dimension values from preoperative radiographs, in order to assess bone quality before implant placement.
Ethical aspects

This study was conducted in the Division of Maxillofacial Radiology and approved by the Institutional Ethical Board of the Lebanese University (no 52018/117). All patients were given detailed information about the study objectives and procedures, and their written informed consent was obtained.

Inclusion criteria

The study included only healthy male patients aged between 20 and 50 years, requiring at least one implant in the premolar/molar region of the maxilla or mandible, with more than 10 mm of residual alveolar bone crest height and more than 5 mm of residual bone crest width based on cone beam computed tomography (CBCT) images.

Exclusion criteria

The following patients were excluded from the study: (1) patients with systemic conditions, (2) patients under medications affecting bone metabolism, (3) patients with alveolar bone lesions, (4) patients having received any type of bone graft, (5) smokers.

Surgical procedures

All individuals underwent full-mouth prophylaxis sessions prior to intraoral radiography and surgery. We placed 50 implants equally in the premolar and molar areas of the maxilla and the mandible. All implants were 10 mm long with a 4 mm diameter. Implant surgeries were performed by one experienced surgeon using a standardized protocol. Cylindrical bone specimens were harvested during surgery using a trephine bur (Trepan Bur, 2.0 mm diameter, 7 mm long; Komet Dental, Gebr. Brasseler GmbH & Co, Germany), and the dental implant fixtures were then inserted into the edentulous sites.

Digital periapical image acquisition

Size 2 photostimulable phosphor (PSP) plates from the VistaScan digital radiographic system (Dürr Dental) were used and scanned at 1270 dpi (25 lp/mm) spatial resolution. The Kodak dental X-ray unit was set to 70 kVp, 7 mA, an exposure time of 0.16 s, and a focus-receptor distance of 30 cm. Two radiographs were obtained, one of the maxillary or mandibular molar/premolar region and a second of the harvested bone block, using the paralleling technique with a posterior paralleling device and standard exposure. Images were stored on a personal computer and given to the investigator for fractal dimension (FD) calculation.

Region of interest selection

Two identical rectangular regions of interest (ROI) measuring 25 × 50 pixels were marked on the radiographic images: the first ROI was drawn at the edentulous implant site on the preoperative radiograph, and the second was placed within the bone specimen radiograph (Figure 1). All ROIs were saved in the computer memory.

Assessment of fractal dimension

ImageJ software (version 1.36b, U.S. National Institutes of Health) was used to analyze the ROIs. The sequence includes cropping of the region of interest (ROI), duplication of the ROI, application of a Gaussian blur filter to remove large-scale variations in brightness, subtraction of the blurred image from the original, binarization, and skeletonization (Figure 2).

Bone core specimen analysis

The analytical examination of the bone biopsies was performed using scanning electron microscopy (SEM) (Seron Technologies). Before SEM processing, the mass of each bone specimen was measured using a scientific electronic balance.
The sample was placed on a holder using carbon tape, and a thin film of gold was deposited on the sample to make its surface conductive (deposition time 20-30 s). The bone specimen was then placed in a high-vacuum chamber, and finally an electron beam was directed at the sample surface to obtain various signals. Two signals were used: the secondary electrons, which produce an image of the sample's surface, and the characteristic X-rays, which allow quantification of the elements present in the sample. Each of these two signals has its own detector embedded in the energy-dispersive X-ray (EDX) unit. This technique was performed by a blinded examiner using a standard protocol to measure the percentage of mineral content found in each bone biopsy.

For the descriptive histological analysis, the parameters evaluated were the percentages of calcium and phosphorus in the bone specimen and the mass of the specimen; the density of the bone biopsy was then calculated from the ratio of the mass to the volume of the specimen (D = M/V). The mass of calcium phosphate was determined as follows: 10 ml of HCl was added to the specimen, the mixture was heated for 5 hours on a hot plate at 60°C, and it was then diluted up to 100 ml to obtain a liquid solution. The solution was filtered through a 45 µm syringe filter and analyzed by atomic absorption spectrometry to measure the concentration of calcium phosphate in mg/L, from which the mass of calcium phosphate in the bone specimen was calculated (for example, a measured concentration of c mg/L in the 100 ml solution corresponds to a mass of 0.1 × c mg in the specimen).

Statistical analysis

All results were recorded in an Excel sheet, and the data were statistically analyzed using IBM® SPSS® software version 20.0 (SPSS Inc, Chicago, Illinois). Box-counting values based on the radiographs were compared with the bone specimen densities and mineral contents. The correlations of bone specimen densities based on histological analysis with fractal dimension values based on radiographs were examined using uni- and/or multivariate regression analyses. Results were considered statistically significant if p < 0.05.

RESULTS

The data distribution was normal. Of the 50 implants placed, 25 were inserted in the posterior region of the maxilla and 25 in the posterior region of the mandible. In the maxilla, seven implants were installed in molar sites (three on the left side and four on the right) and 18 in premolar sites (12 on the left side and six on the right). In the mandible, 17 of the 25 implants were placed in molar sites (11 on the right side and six on the left) and eight in premolar sites (two on the left side and six on the right). The distribution of implant insertions is listed in Table 1.

The fractal dimension of the implant sites calculated from the preoperative radiographs showed an increase in the average values from molar to premolar regions in both the maxilla and the mandible, and between the two jaws considered separately (Table 2). On the bone block radiographs, the average FD values were lower than those at the implant sites even though the same ROIs were used. In other words, all results showed a distribution proportional to that obtained from the preoperative radiographs, across the maxillary and mandibular molar and premolar regions and between the two arches, but with lower values (Table 3). Significant increases in bone mass, and therefore in bone density, were noted in the mandibular bone core specimens compared with the maxillary specimens.
However, using multivariate regression analysis, no correlation was noted between the amount and percentage of the mineral components (calcium, phosphorus) and bone densities, and these results showed no significant variability within the mandibular and maxillary specimens considered separately; only the mass of calcium phosphate increased with bone density (Table 4). Significantly higher bone densities were calculated in the posterior region of the mandible than in the maxilla, and in the premolar regions of each jaw compared with the molar regions (Table 5).

[Figure 1. The ImageJ program was used to select two identical ROIs of 25 × 50 pixels on two radiographs, one of the preoperative edentulous site and the other of the bone core specimen.]

However, when the image-based bone type assessed by FD values on preoperative radiographs of the posterior regions of the maxilla and mandible was compared with the densities of the corresponding bone blocks, a significant positive correlation was noted. The correlation persisted when the maxilla and mandible were examined separately. A classification was proposed by the authors according to the distribution of FD values and bone densities, dividing bone density into four types: Type 1: FD ≥ 1.60; Type 2: 1.60 > FD > 1.55; Type 3: 1.55 ≥ FD ≥ 1.50; Type 4: FD < 1.50. In our study, Types 2-3 were noted in the posterior regions of the mandible, with FD values between 1.550 and 1.563, whereas Types 3-4 were found in the maxilla, with FD values between 1.451 and 1.544 (Table 5).

DISCUSSION

Bone quality is one of the keys to success in implant treatment planning, but it is also, unfortunately, one of the variables that cannot be accurately determined prior to implant placement. Primary implant stability depends on the amount of cortical bone, whereas long-term stability depends on the cancellous bone. The architecture and constituents of cancellous bone are therefore of considerable interest in evaluating bone quality. Assessment of bone quality and bone density on preoperative radiographs has frequently been used as a tool prior to implant procedures. This study was undertaken to assess the bone density at 50 implant sites distributed equally across the posterior regions of the maxilla and mandible, using fractal dimension on periapical digital radiographs, and at 50 bone cores, taken where the implants would engage, obtained from the central part of the implant sites excluding the buccal and lingual cortical plates and then radiographed.

[Table 5. Correlations of the histological characteristics of bone specimen density with the fractal dimensions assessed on radiographs, and the types of bone density.]

The bone blocks were examined to calculate bone densities from mass and volume and from the calcium phosphate mass. It has been reported that variations in the thickness of the buccal and lingual cortical plates at different sites may affect the radiographic diagnosis, but not when the bone core specimens are used as a reference (30). Fractal analysis provides the clinician with box-counting values, a subjective method of evaluating bone density at a proposed implant site using a preoperative intra-oral radiograph. Geraets and van der Stelt (31) stated that all stages in the "analytic chain" of FA influence the assessment of bone, owing to the wide variation in analysis methods.
In the present study, a specific methodology was followed using the ImageJ software to ensure that the ROIs were exactly similar on the radiographic images. Previous studies have stated that FA of PSP radiographic images of alveolar bone is influenced by certain digital enhancement filters and by the spatial resolution of the system (32). Therefore, the same image specifications and processing conditions should be used when comparing FD values. Our results showed that bone density assessed by FD on preoperative radiographs, as a primary tool, presents higher values in the mandible than in the maxilla, with differences between the molar and premolar regions in both arches. Results reported for human cadaver jaw bone specimens using micro-CT (33)(34) are consistent with ours. The FD values of the bone cores in Table 3 showed a similar and proportional distribution but with lower values than the FD values in Table 2 for the same ROIs; this is attributed to the presence of the buccal and lingual cortical plates in the preoperative radiographs.

Very few studies have investigated the local elemental composition of bone core specimens using SEM, which allows, for example, the percentages of calcium and phosphorus and the mass of calcium phosphate to be measured. In the current study, the quantitative measurements of the bone core components were significant. The ratio of mass to volume, and therefore bone density, was higher in the mandible than in the maxilla: the mass and volume of the specimens harvested from the mandible were larger than those obtained from the maxilla, and denser bone was harvested from the mandibular implant sites than from the maxillary implant sites. The mean bone density in the posterior region of the maxilla was 0.227 g/cm³, compared with 0.408 g/cm³ in the mandible. Conversely, no significant differences were observed in the mineral composition of the bone between the maxilla (22.384% calcium, 11.929% phosphorus) and the mandible (22.929% calcium, 11.784% phosphorus). However, the reason why the mass of calcium phosphate tracked the bone density values remains unclear.

In our study, we proposed a bone density classification according to the distribution of FD values from preoperative radiographs and the calculated densities of the bone cores, and we divided bone density into four types. A correlation was found between the bone density derived from the analytical characteristics of the bone specimens and the fractal dimension values derived from intra-oral radiographs of the posterior region of either the maxilla or the mandible. Moreover, the types of bone density in the molar and premolar regions, assessed from the fractal dimension values of each region and the corresponding bone densities, were analyzed and classified per region as follows:

Maxillary molar region: Type 4, with a mean FD value of 1.451 and an average bone density of 0.150 g/cm³
Maxillary premolar region: Type 3, with a mean FD value of 1.544 and an average bone density of 0.305 g/cm³
Mandibular molar region: Type 3, with a mean FD value of 1.550 and an average bone density of 0.379 g/cm³
Mandibular premolar region: Type 2, with a mean FD value of 1.563 and an average bone density of 0.489 g/cm³

In summary, Types 2-3 of bone were noted in the posterior regions of the mandible, with FD values between 1.550 and 1.563 and an average bone density of 0.408 g/cm³, while Types 3-4 were found in the maxilla, with FD values between 1.451 and 1.544 and an average bone density of 0.227 g/cm³.
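To make the proposed cut-offs concrete, the mapping from an FD value to a bone type can be written as a small function. The sketch below (in Python, for illustration; the study itself used ImageJ and SPSS) adopts one consistent reading of the boundaries, since the printed ranges do not fully determine which side each boundary belongs to, and it reproduces the regional types reported above.

def classify_bone_type(fd: float) -> int:
    """Map a box-counting fractal dimension to the proposed bone type (1-4).

    Boundary placement is an assumption chosen to be consistent with the
    reported regional means (e.g. FD 1.550 -> Type 3, FD 1.563 -> Type 2).
    """
    if fd >= 1.60:
        return 1
    elif fd > 1.55:
        return 2
    elif fd >= 1.50:
        return 3
    else:
        return 4

# Mean FD per region, as reported above:
for region, fd in [("maxillary molar", 1.451), ("maxillary premolar", 1.544),
                   ("mandibular molar", 1.550), ("mandibular premolar", 1.563)]:
    print(region, "-> Type", classify_bone_type(fd))
# maxillary molar -> Type 4, maxillary premolar -> Type 3,
# mandibular molar -> Type 3, mandibular premolar -> Type 2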
When comparing the fractal dimensions and the bone densities, based respectively on intra-oral radiographs and bone core specimens, a significant correlation was noted, not only for the mandible and the maxilla but also for the molar and premolar regions when Types 2, 3 and 4 were compared. According to these results, assessment of bone density from periapical radiographs using fractal dimension values may be useful, particularly when the values are benchmarked against the densities of bone cores harvested from the recipient implant sites as an absolute reference. Image-based classification using fractal dimension to predict the bone type prior to implant treatment may therefore save clinical time and inform implant design, the surgical approach, and the healing time.

CONCLUSION

Our study demonstrates a correlation between fractal dimension values and the densities of bone core specimens harvested from the recipient implant sites, and proposes an image-based classification of bone density in the posterior regions of the maxilla and mandible, and separately in the molar and premolar regions of each jaw. Fractal dimension provides an economical and easy way to measure density at implant sites, as opposed to more expensive and less feasible assessment techniques. An image-based bone density classification for the anterior regions of the maxilla and mandible remains to be established in further studies.
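As an illustration of the box-counting step described in the methods above (crop, blur-and-subtract, binarize, skeletonize, then count occupied boxes at several scales), a minimal sketch follows. It assumes a NumPy binary image and standard scientific-Python tooling rather than ImageJ, so it is a conceptual companion to the White & Rudolph pipeline, not a reimplementation of it.

import numpy as np

def box_counting_fd(binary_img: np.ndarray, box_sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the fractal dimension of a binarized (0/1) ROI by box counting.

    FD is the slope of log(N) versus log(1/s), where N is the number of
    s-by-s boxes containing at least one foreground pixel.
    """
    counts = []
    for s in box_sizes:
        # Trim the image so both dimensions divide evenly by the box size.
        h, w = (binary_img.shape[0] // s) * s, (binary_img.shape[1] // s) * s
        trimmed = binary_img[:h, :w]
        # Sum each s-by-s block; a nonzero sum means the box is occupied.
        blocks = trimmed.reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        counts.append(np.count_nonzero(blocks))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope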
import json
import traceback

# NOTE: `policy_m` is assumed to be a helper module imported elsewhere in this
# file, providing create_policy/delete_policy wrappers; it is not defined here.

def policy_action(module, state=None, policy_name=None, policy_arn=None,
                  policy_document=None, path=None, description=None):
    changed = False
    policy = None
    error = {}
    if state == 'present':
        try:
            # Accept the policy document as a dict and serialize it to JSON.
            if isinstance(policy_document, dict):
                policy_document = json.dumps(policy_document)
            response = policy_m.create_policy(
                policy_name=policy_name,
                path=path,
                policy_document=policy_document,
                description=description)
            if 'error' in response:
                error = response['error']
            else:
                # Only report a change when a new policy was created.
                if response['state'] == 'New':
                    changed = True
                policy = response['policy']
        except Exception as e:
            module.fail_json(msg='policy action {0} failed: {1} {2}'.format(
                'present', e, traceback.format_exc()))
    elif state == 'absent':
        try:
            response = policy_m.delete_policy(
                policy_name=policy_name,
                path=path)
            if 'error' in response:
                error = response['error']
            else:
                changed = True
                policy = response['policy']
        except Exception as e:
            module.fail_json(msg='policy action {0} failed: {1} {2}'.format(
                'absent', e, traceback.format_exc()))
    else:
        error = {"error": "state must be either 'present' or 'absent'"}
    if error:
        module.fail_json(msg='policy action failed: {0}'.format(error))
    return changed, policy
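# A hypothetical invocation sketch of policy_action. `FakeModule` and the
# policy document below are illustrative stand-ins (the real caller would be
# an AnsibleModule with a configured `policy_m` backend); they are not part
# of the original module.
class FakeModule:
    def fail_json(self, msg):
        raise SystemExit(msg)

document = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"}],
}

changed, policy = policy_action(
    FakeModule(),
    state='present',
    policy_name='example-read-policy',
    policy_document=document,  # dicts are serialized to JSON inside policy_action
    path='/',
)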
/**
 * Shows an alert dialog when the user tries to delete their account.
 */
private void dialogDeleteAccountsCopies() {
    AlertDialog.Builder builder = new AlertDialog.Builder(this);
    builder.setMessage(getString(R.string.dialoge_delete_all_account_cp));
    builder.setTitle("");
    // "Oui" / "Annuler" are the app's French UI labels ("Yes" / "Cancel").
    builder.setPositiveButton("Oui", new DialogInterface.OnClickListener() {
        @Override
        public void onClick(DialogInterface dialog, int which) {
            deleteUserCopies();
            deleteUserAccount();
            GridLayout gridLayout = findViewById(R.id.users_list);
            gridLayout.removeAllViews();
            SysApplication.getInstance().exit();
        }
    });
    builder.setNegativeButton("Annuler", new DialogInterface.OnClickListener() {
        @Override
        public void onClick(DialogInterface dialog, int which) {
            dialog.dismiss();
        }
    });
    builder.create().show();
}
Microsoft Corp.'s massive security update yesterday marked the completion of the sixth year of the company's move to a monthly patch release schedule. Since moving to a monthly schedule in October 2003, Microsoft has released about 400 security bulletins, based on an informal count of releases in its bulletin archives. The bulletins address about 745 vulnerabilities across almost every Microsoft product. More than half of the bulletins, or about 230, addressed security vulnerabilities that were described by Microsoft as "critical." The company typically uses this designation for vulnerabilities that allow attackers to take full administrative control of a system from a remote location. More vulnerabilities are being discovered in Microsoft products than when the company first moved to a monthly patch schedule. The total number of flaws disclosed and patched by the software maker so far this year stands at around 160, more than the 155 or so that Microsoft reported for all of 2008. The number of flaws reported in Microsoft products over the last two years is more than double the number disclosed in 2004 and 2005, the first two full years of Patch Tuesdays. The last time Microsoft did not release any patches on a Patch Tuesday was March 2007, more than 30 months ago. In the past six years, Microsoft has had just four patch-free months -- two of which were in 2005. In contrast, the company has issued patches for 10 or more vulnerabilities on more than 20 occasions, and patches for 20 or more flaws in a single month on about 10 occasions, including yesterday. The increase in the number of flaws being discovered comes at a time when attackers are getting much faster at exploiting them. A survey by security vendor Qualys earlier this year showed that 80% of vulnerability exploits are available within 10 days of the vulnerability's disclosure. Nearly 50% of the vulnerabilities patched by Microsoft in its security updates for April this year already had known exploits by the time the patches were available. The numbers highlight Microsoft's continuing challenges on the security front, said David Rice, president of the Monterey Group, a security consultancy in Monterey, Calif., and author of Geekonomics: The Real Cost of Insecure Software. But it is important to keep them in perspective, he added. Other major vendors, such as Oracle Corp. and Apple Inc., have also announced large numbers of vulnerabilities over the past few years, Rice said. But neither of these two vendors has invested anywhere near the money and resources that Microsoft has spent on security over the past several years, largely because of the lack of incentive for them to do so, he said. Unlike other sectors, such as the automobile industry, where a vehicle's safety rating has an impact on product sales, software vendors have not been penalized by consumers for buggy software. Many have therefore chosen not to invest in increased product security, he said. "Apple and Oracle are classic examples of companies that have not invested in security to the same level that Microsoft has," Rice said. David Jordan, chief information security officer at Virginia's Arlington County government, is growing impatient at the continuing number of flaws disclosed by Microsoft. "These updates are necessary, but the vulnerabilities they fix are most unwelcome," Jordan said. "Much like auto recalls, one has to wonder how some of this stuff got to production so many years down the road," he said.
"We've just recently retired our IBM mainframe, but back in the day, there was in software development a phrase known as 'zero tolerance for defects.'" Jordan said such an attitude toward software development would be welcome today. Microsoft's huge installed base and the popularity of its products make it a prime target for hackers looking for software vulnerabilities to exploit, which is one of the reasons so many flaws continue to be found. "Microsoft has gargantuan cash reserves, and they are doing all they need to do. If they are still meeting with so much failure, it means they may have reached a glass ceiling" in terms of their ability to reduce flaws, he said. The sheer number of patches Microsoft releases each month shows the company may have reached the "inherent limits" of the software debugging process, said Amichai Shulman, CTO at security vendor Imperva in a blog post. Microsoft could not be reached for comment at deadline. Microsoft has been investing more than any other company in secure coding practices with its software development life cycle process, Shulman said. Yet in the past year, the number of vulnerabilities is still on the rise. "There is a point in time in which any increase in QA resources (and time) has a negligible effect over software quality," he said. "This is giving us an excellent perspective about the inherent limitations of SDLC as the first and last line of defense when it comes to information security," he said. "The crooks tend to spend the majority of their effort on Windows," because of it huge market share, said Tim O'Pry, CTO at the Henssler Financial Group in Kennesaw, Ga. While the sheer number of patches released by Microsoft is "a royal PIA for system administrators," Microsoft is getting better at locking down some of the bigger holes in their operating systems, he said. One big reason why Microsoft is still reporting so many vulnerabilities is because they have "decided to drag the ball and chain of backward compatibility with them from the DOS days," O'Pry said. But overall, "I think Microsoft is doing a reasonable job considering the huge installed base and their attempts to break as little as possible," from a backward compatibility standpoint, he said. Matt Kesner, chief technology officer at Fenwick & West LLP in Mountain View, Calif., said it isn't surprising that during times of economic trouble there are more attempts to exploit systems. "So, on the one hand we applaud Microsoft's continuing efforts to patch its software in a timely manner," he said. "On the other hand, the number of patches shows that security still isn't a primary consideration when software is written."
package org.aio.activities.skills.smithing;

import org.aio.util.item_requirement.ItemReq;

public enum Bar {
    BRONZE("Bronze bar", 1, true, new ItemReq("Copper ore", 1), new ItemReq("Tin ore", 1)),
    BLURITE("Blurite bar", 8, false, new ItemReq("Blurite ore", 1)),
    IRON("Iron bar", 15, true, new ItemReq("Iron ore", 1)),
    SILVER("Silver bar", 20, false, new ItemReq("Silver ore", 1)),
    STEEL("Steel bar", 30, true, new ItemReq("Coal", 2), new ItemReq("Iron ore", 1)),
    GOLD("Gold bar", 40, false, new ItemReq("Gold ore", 1)),
    MITHRIL("Mithril bar", 50, true, new ItemReq("Coal", 4), new ItemReq("Mithril ore", 1)),
    ADAMANTITE("Adamantite bar", 70, true, new ItemReq("Coal", 6), new ItemReq("Adamantite ore", 1)),
    RUNITE("Runite bar", 85, true, new ItemReq("Coal", 8), new ItemReq("Runite ore", 1));

    String name;
    int levelRequired;
    public boolean smithable;
    public ItemReq[] oresRequired;

    Bar(final String name, final int levelRequired, final boolean smithable, final ItemReq... oresRequired) {
        this.name = name;
        this.smithable = smithable;
        this.levelRequired = levelRequired;
        this.oresRequired = oresRequired;
    }

    @Override
    public String toString() {
        return name;
    }
}
package nl.andrewl.railsignalapi.rest.dto.component.in;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import nl.andrewl.railsignalapi.model.component.Position;

import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;

@JsonIgnoreProperties(ignoreUnknown = true)
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", include = JsonTypeInfo.As.EXISTING_PROPERTY, visible = true)
@JsonSubTypes({
        @JsonSubTypes.Type(value = SignalPayload.class, name = "SIGNAL"),
        @JsonSubTypes.Type(value = SwitchPayload.class, name = "SWITCH"),
        @JsonSubTypes.Type(value = SegmentBoundaryPayload.class, name = "SEGMENT_BOUNDARY"),
        @JsonSubTypes.Type(value = LabelPayload.class, name = "LABEL")
})
public abstract class ComponentPayload {
    @NotNull
    @NotBlank
    public String name;

    @NotNull
    @NotBlank
    public String type;

    @NotNull
    public Position position;
}
import { MongoError } from 'mongodb'
import { Error as MongooseError } from 'mongoose'

import {
  DatabaseConflictError,
  DatabaseError,
  DatabasePayloadSizeError,
  DatabaseValidationError,
  PossibleDatabaseError,
} from '../modules/core/core.errors'

/**
 * Exported for testing.
 * Format error recovery message to be returned to client.
 * @param errMsg the error message
 */
export const formatErrorRecoveryMessage = (errMsg: string): string => {
  return `Error: [${errMsg}]. Please refresh and try again. If you still need help, email us at <EMAIL>.`
}

export const getMongoErrorMessage = (
  err?: unknown,
  // Default error message if no more specific error
  defaultErrorMessage = 'An unexpected error happened. Please try again.',
): string => {
  if (!err) {
    return ''
  }
  // Handle base Mongo engine errors
  if (err instanceof MongoError) {
    switch (err.code) {
      case 10334: // BSONObj size invalid error
        return formatErrorRecoveryMessage(
          'Your form is too large to be supported by the system.',
        )
      default:
        return formatErrorRecoveryMessage(defaultErrorMessage)
    }
  }
  // Handle mongoose errors
  if (err instanceof MongooseError.ValidationError) {
    // Deduplicate then join all error messages into a single message if available.
    const messages = Object.values(err.errors).map((error) => error.message)
    const dedupMessages = [...new Set(messages)]
    const joinedMessage = dedupMessages.join(', ')
    return formatErrorRecoveryMessage(
      (joinedMessage || err.message) ?? defaultErrorMessage,
    )
  }
  if (err instanceof MongooseError || err instanceof Error) {
    return formatErrorRecoveryMessage(err.message ?? defaultErrorMessage)
  }
  if (typeof err === 'string') {
    return formatErrorRecoveryMessage(err ?? defaultErrorMessage)
  }
  return formatErrorRecoveryMessage(defaultErrorMessage)
}

/**
 * Transforms mongo returned errors into ApplicationErrors
 * @param error the error thrown by database operations
 * @returns errors that extend from ApplicationError class
 */
export const transformMongoError = (error: unknown): PossibleDatabaseError => {
  const errorMessage = getMongoErrorMessage(error)
  if (!(error instanceof Error)) {
    return new DatabaseError(errorMessage)
  }
  if (error instanceof MongooseError.ValidationError) {
    return new DatabaseValidationError(errorMessage)
  }
  if (error instanceof MongooseError.VersionError) {
    return new DatabaseConflictError(errorMessage)
  }
  if (
    // Exception when Mongoose breaches Mongo 16MB size limit.
    error instanceof RangeError ||
    // MongoDB invalid BSON error.
    (error instanceof MongoError && error.code === 10334) ||
    // FormSG-imposed limit in pre-validate hook.
    error.name === 'FormSizeError'
  ) {
    return new DatabasePayloadSizeError(errorMessage)
  }
  return new DatabaseError(errorMessage)
}

export const isMongoError = (error: Error): boolean => {
  switch (error.constructor) {
    case DatabaseConflictError:
    case DatabaseError:
    case DatabasePayloadSizeError:
    case DatabaseValidationError:
      return true
    default:
      return false
  }
}
def increase_wave(lst):
    """Scale each [x, y] pair by 1.2, clamping the results to [MIN_INT, MAX_INT].

    NOTE: MAX_INT and MIN_INT are assumed to be module-level clamping bounds
    defined elsewhere; they are not part of this snippet.
    """
    result = []
    for item in lst:
        result.append([
            int(max(min(item[0] * 1.2, MAX_INT), MIN_INT)),
            int(max(min(item[1] * 1.2, MAX_INT), MIN_INT)),
        ])
    return result
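# Usage sketch for increase_wave. The 16-bit bounds below are illustrative
# assumptions, since MIN_INT and MAX_INT are not defined in this snippet.
MIN_INT, MAX_INT = -32768, 32767

samples = [[1000, -2000], [30000, -30000]]
print(increase_wave(samples))
# [[1200, -2400], [32767, -32768]] -- the second pair is clamped at the bounds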
package MyCar;

import android.opengl.GLSurfaceView;

/**
 * A car with high acceleration but a low top speed.
 * Created by SayouKouki
 */
public class AcceleCar extends SuperCar {

    public AcceleCar(GLSurfaceView glView) {
        super(glView);
    }

    public AcceleCar(String objname) {
        super(objname);
    }

    /**
     * Initializes the status fields.
     */
    @Override
    protected void initFields() {
        A = 0.4;                 // acceleration
        F = 0.35;                // friction (deceleration)
        TOP_SPEED = 2.5;         // top speed
        BACK_MAX_SPEED = -2.0;   // top speed when reversing
        CURVE_ANGLE = 1.5;       // steering angle added per frame
        MAX_CURVE_ANGLE = 11.0;  // maximum steering angle
    }
}
//
// DISCLAIMER
//
// Copyright 2017 ArangoDB GmbH, Cologne, Germany
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Copyright holder is ArangoDB GmbH, Cologne, Germany
//
// Author <NAME>
//
// This code is heavily inspired by the Go sources.
// See https://golang.org/src/encoding/json/

package velocypack

import (
	"bytes"
	"encoding"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"reflect"
	"runtime"
	"strconv"
)

// A Decoder decodes velocypack values into Go structures.
type Decoder struct {
	r io.Reader
}

// Unmarshaler is implemented by types that can convert themselves from Velocypack.
type Unmarshaler interface {
	UnmarshalVPack(Slice) error
}

// NewDecoder creates a new Decoder that reads data from the given reader.
func NewDecoder(r io.Reader) *Decoder {
	return &Decoder{
		r: r,
	}
}

// Unmarshal reads v from the given Velocypack encoded data slice.
//
// Unmarshal uses the inverse of the encodings that
// Marshal uses, allocating maps, slices, and pointers as necessary,
// with the following additional rules:
//
// To unmarshal VelocyPack into a pointer, Unmarshal first handles the case of
// the VelocyPack being the VelocyPack literal Null. In that case, Unmarshal sets
// the pointer to nil. Otherwise, Unmarshal unmarshals the VelocyPack into
// the value pointed at by the pointer. If the pointer is nil, Unmarshal
// allocates a new value for it to point to.
//
// To unmarshal VelocyPack into a value implementing the Unmarshaler interface,
// Unmarshal calls that value's UnmarshalVPack method, including
// when the input is a VelocyPack Null.
// Otherwise, if the value implements encoding.TextUnmarshaler
// and the input is a VelocyPack quoted string, Unmarshal calls that value's
// UnmarshalText method with the unquoted form of the string.
//
// To unmarshal VelocyPack into a struct, Unmarshal matches incoming object
// keys to the keys used by Marshal (either the struct field name or its tag),
// preferring an exact match but also accepting a case-insensitive match.
// Unmarshal will only set exported fields of the struct.
//
// To unmarshal VelocyPack into an interface value,
// Unmarshal stores one of these in the interface value:
//
//	bool, for VelocyPack Bool's
//	float64 for VelocyPack Double's
//	uint64 for VelocyPack UInt's
//	int64 for VelocyPack Int's
//	string, for VelocyPack String's
//	[]interface{}, for VelocyPack Array's
//	map[string]interface{}, for VelocyPack Object's
//	nil for VelocyPack Null.
//	[]byte for VelocyPack Binary.
//
// To unmarshal a VelocyPack array into a slice, Unmarshal resets the slice length
// to zero and then appends each element to the slice.
// As a special case, to unmarshal an empty VelocyPack array into a slice,
// Unmarshal replaces the slice with a new empty slice.
//
// To unmarshal a VelocyPack array into a Go array, Unmarshal decodes
// VelocyPack array elements into corresponding Go array elements.
// If the Go array is smaller than the VelocyPack array, // the additional VelocyPack array elements are discarded. // If the VelocyPack array is smaller than the Go array, // the additional Go array elements are set to zero values. // // To unmarshal a VelocyPack object into a map, Unmarshal first establishes a map to // use. If the map is nil, Unmarshal allocates a new map. Otherwise Unmarshal // reuses the existing map, keeping existing entries. Unmarshal then stores // key-value pairs from the VelocyPack object into the map. The map's key type must // either be a string, an integer, or implement encoding.TextUnmarshaler. // // If a VelocyPack value is not appropriate for a given target type, // or if a VelocyPack number overflows the target type, Unmarshal // skips that field and completes the unmarshaling as best it can. // If no more serious errors are encountered, Unmarshal returns // an UnmarshalTypeError describing the earliest such error. // // The VelocyPack Null value unmarshals into an interface, map, pointer, or slice // by setting that Go value to nil. Because null is often used in VelocyPack to mean // ``not present,'' unmarshaling a VelocyPack Null into any other Go type has no effect // on the value and produces no error. // func Unmarshal(data Slice, v interface{}) error { if err := unmarshalSlice(data, v); err != nil { return WithStack(err) } return nil } // Decode reads v from the decoder stream. func (e *Decoder) Decode(v interface{}) error { s, err := SliceFromReader(e.r) if err != nil { return WithStack(err) } if err := unmarshalSlice(s, v); err != nil { return WithStack(err) } return nil } // unmarshalSlice reads v from the given slice. func unmarshalSlice(data Slice, v interface{}) (err error) { defer func() { if r := recover(); r != nil { if _, ok := r.(runtime.Error); ok { panic(r) } err = r.(error) } }() rv := reflect.ValueOf(v) if rv.Kind() != reflect.Ptr || rv.IsNil() { return &InvalidUnmarshalError{reflect.TypeOf(v)} } d := &decodeState{} // We decode rv not rv.Elem because the Unmarshaler interface // test must be applied at the top level of the value. d.unmarshalValue(data, rv) return d.savedError } var ( textUnmarshalerType = reflect.TypeOf(new(encoding.TextUnmarshaler)).Elem() numberType = reflect.TypeOf(json.Number("")) ) type decodeState struct { useNumber bool errorContext struct { // provides context for type errors Struct string Field string } savedError error } // error aborts the decoding by panicking with err. func (d *decodeState) error(err error) { panic(d.addErrorContext(err)) } // saveError saves the first err it is called with, // for reporting at the end of the unmarshal. func (d *decodeState) saveError(err error) { if d.savedError == nil { d.savedError = d.addErrorContext(err) } } // addErrorContext returns a new error enhanced with information from d.errorContext func (d *decodeState) addErrorContext(err error) error { if d.errorContext.Struct != "" || d.errorContext.Field != "" { switch err := err.(type) { case *UnmarshalTypeError: err.Struct = d.errorContext.Struct err.Field = d.errorContext.Field return err } } return err } // unmarshalValue unmarshals any slice into given v. 
func (d *decodeState) unmarshalValue(data Slice, v reflect.Value) { if !v.IsValid() { return } switch data.Type() { case Array: d.unmarshalArray(data, v) case Object: d.unmarshalObject(data, v) case Bool, Int, SmallInt, UInt, Double, Binary, BCD, String: d.unmarshalLiteral(data, v) } } // indirect walks down v allocating pointers as needed, // until it gets to a non-pointer. // if it encounters an Unmarshaler, indirect stops and returns that. // if decodingNull is true, indirect stops at the last pointer so it can be set to nil. func (d *decodeState) indirect(v reflect.Value, decodingNull bool) (Unmarshaler, json.Unmarshaler, encoding.TextUnmarshaler, reflect.Value) { // If v is a named type and is addressable, // start with its address, so that if the type has pointer methods, // we find them. if v.Kind() != reflect.Ptr && v.Type().Name() != "" && v.CanAddr() { v = v.Addr() } for { // Load value from interface, but only if the result will be // usefully addressable. if v.Kind() == reflect.Interface && !v.IsNil() { e := v.Elem() if e.Kind() == reflect.Ptr && !e.IsNil() && (!decodingNull || e.Elem().Kind() == reflect.Ptr) { v = e continue } } if v.Kind() != reflect.Ptr { break } if v.Elem().Kind() != reflect.Ptr && decodingNull && v.CanSet() { break } if v.IsNil() { v.Set(reflect.New(v.Type().Elem())) } if v.Type().NumMethod() > 0 { if u, ok := v.Interface().(Unmarshaler); ok { return u, nil, nil, reflect.Value{} } if u, ok := v.Interface().(json.Unmarshaler); ok { return nil, u, nil, reflect.Value{} } if !decodingNull { if u, ok := v.Interface().(encoding.TextUnmarshaler); ok { return nil, nil, u, reflect.Value{} } } } v = v.Elem() } return nil, nil, nil, v } // unmarshalArray unmarshals an array slice into given v. func (d *decodeState) unmarshalArray(data Slice, v reflect.Value) { // Check for unmarshaler. u, ju, ut, pv := d.indirect(v, false) if u != nil { if err := u.UnmarshalVPack(data); err != nil { d.error(err) } return } if ju != nil { json, err := data.JSONString() if err != nil { d.error(err) } else { if err := ju.UnmarshalJSON([]byte(json)); err != nil { d.error(err) } } return } if ut != nil { d.saveError(&UnmarshalTypeError{Value: "array", Type: v.Type()}) return } v = pv // Check type of target. switch v.Kind() { case reflect.Interface: if v.NumMethod() == 0 { // Decoding into nil interface? Switch to non-reflect code. v.Set(reflect.ValueOf(d.arrayInterface(data))) return } // Otherwise it's invalid. fallthrough default: d.saveError(&UnmarshalTypeError{Value: "array", Type: v.Type()}) return case reflect.Array: case reflect.Slice: break } i := 0 it, err := NewArrayIterator(data) if err != nil { d.error(err) } for it.IsValid() { value, err := it.Value() if err != nil { d.error(err) } // Get element of array, growing if necessary. if v.Kind() == reflect.Slice { // Grow slice if necessary if i >= v.Cap() { newcap := v.Cap() + v.Cap()/2 if newcap < 4 { newcap = 4 } newv := reflect.MakeSlice(v.Type(), v.Len(), newcap) reflect.Copy(newv, v) v.Set(newv) } if i >= v.Len() { v.SetLen(i + 1) } } if i < v.Len() { // Decode into element. d.unmarshalValue(value, v.Index(i)) } i++ if err := it.Next(); err != nil { d.error(err) } } if i < v.Len() { if v.Kind() == reflect.Array { // Array. Zero the rest. z := reflect.Zero(v.Type().Elem()) for ; i < v.Len(); i++ { v.Index(i).Set(z) } } else { v.SetLen(i) } } if i == 0 && v.Kind() == reflect.Slice { v.Set(reflect.MakeSlice(v.Type(), 0, 0)) } } // unmarshalObject unmarshals an object slice into given v. 
func (d *decodeState) unmarshalObject(data Slice, v reflect.Value) { // Check for unmarshaler. u, ju, ut, pv := d.indirect(v, false) if u != nil { if err := u.UnmarshalVPack(data); err != nil { d.error(err) } return } if ju != nil { json, err := data.JSONString() if err != nil { d.error(err) } else { if err := ju.UnmarshalJSON([]byte(json)); err != nil { d.error(err) } } return } if ut != nil { d.saveError(&UnmarshalTypeError{Value: "object", Type: v.Type()}) return } v = pv // Decoding into nil interface? Switch to non-reflect code. if v.Kind() == reflect.Interface && v.NumMethod() == 0 { v.Set(reflect.ValueOf(d.objectInterface(data))) return } // Check type of target: // struct or // map[T1]T2 where T1 is string, an integer type, // or an encoding.TextUnmarshaler switch v.Kind() { case reflect.Map: // Map key must either have string kind, have an integer kind, // or be an encoding.TextUnmarshaler. t := v.Type() switch t.Key().Kind() { case reflect.String, reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: default: if !reflect.PtrTo(t.Key()).Implements(textUnmarshalerType) { d.saveError(&UnmarshalTypeError{Value: "object", Type: v.Type()}) return } } if v.IsNil() { v.Set(reflect.MakeMap(t)) } case reflect.Struct: // ok default: d.saveError(&UnmarshalTypeError{Value: "object", Type: v.Type()}) return } var mapElem reflect.Value it, err := NewObjectIterator(data) if err != nil { d.error(err) } for it.IsValid() { key, err := it.Key(true) if err != nil { d.error(err) } keyUTF8, err := key.GetStringUTF8() if err != nil { d.error(err) } value, err := it.Value() if err != nil { d.error(err) } // Figure out field corresponding to key. var subv reflect.Value destring := false // whether the value is wrapped in a string to be decoded first if v.Kind() == reflect.Map { elemType := v.Type().Elem() if !mapElem.IsValid() { mapElem = reflect.New(elemType).Elem() } else { mapElem.Set(reflect.Zero(elemType)) } subv = mapElem } else { var f *field fields := cachedTypeFields(v.Type()) for i := range fields { ff := &fields[i] if bytes.Equal(ff.nameBytes, key) { f = ff break } if f == nil && ff.equalFold(ff.nameBytes, keyUTF8) { f = ff } } if f != nil { subv = v destring = f.quoted for _, i := range f.index { if subv.Kind() == reflect.Ptr { if subv.IsNil() { subv.Set(reflect.New(subv.Type().Elem())) } subv = subv.Elem() } subv = subv.Field(i) } d.errorContext.Field = f.name d.errorContext.Struct = v.Type().Name() } } if destring { // Value should be a string that we'll decode as JSON valueUTF8, err := value.GetStringUTF8() if err != nil { d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, expected string, got %s in %v (%v)", value.Type(), subv.Type(), err)) } v, err := ParseJSONFromUTF8(valueUTF8) if err != nil { d.saveError(err) } else { d.unmarshalValue(v, subv) } } else { d.unmarshalValue(value, subv) } // Write value back to map; // if using struct, subv points into struct already. 
if v.Kind() == reflect.Map { kt := v.Type().Key() var kv reflect.Value switch { case kt.Kind() == reflect.String: kv = reflect.ValueOf(keyUTF8).Convert(kt) case reflect.PtrTo(kt).Implements(textUnmarshalerType): kv = reflect.New(v.Type().Key()) d.literalStore(key, kv, true) kv = kv.Elem() default: keyStr := string(keyUTF8) switch kt.Kind() { case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: n, err := strconv.ParseInt(keyStr, 10, 64) if err != nil || reflect.Zero(kt).OverflowInt(n) { d.saveError(&UnmarshalTypeError{Value: "number " + keyStr, Type: kt}) return } kv = reflect.ValueOf(n).Convert(kt) case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: n, err := strconv.ParseUint(keyStr, 10, 64) if err != nil || reflect.Zero(kt).OverflowUint(n) { d.saveError(&UnmarshalTypeError{Value: "number " + keyStr, Type: kt}) return } kv = reflect.ValueOf(n).Convert(kt) default: panic("json: Unexpected key type") // should never occur } } v.SetMapIndex(kv, subv) } d.errorContext.Struct = "" d.errorContext.Field = "" if err := it.Next(); err != nil { d.error(err) } } } // unmarshalLiteral unmarshals a literal slice into given v. func (d *decodeState) unmarshalLiteral(data Slice, v reflect.Value) { d.literalStore(data, v, false) } // The xxxInterface routines build up a value to be stored // in an empty interface. They are not strictly necessary, // but they avoid the weight of reflection in this common case. // valueInterface is like value but returns interface{} func (d *decodeState) valueInterface(data Slice) interface{} { switch data.Type() { case Array: return d.arrayInterface(data) case Object: return d.objectInterface(data) default: return d.literalInterface(data) } } // arrayInterface is like array but returns []interface{}. func (d *decodeState) arrayInterface(data Slice) []interface{} { l, err := data.Length() if err != nil { d.error(err) } v := make([]interface{}, 0, l) it, err := NewArrayIterator(data) if err != nil { d.error(err) } for it.IsValid() { value, err := it.Value() if err != nil { d.error(err) } v = append(v, d.valueInterface(value)) // Move to next field if err := it.Next(); err != nil { d.error(err) } } return v } // objectInterface is like object but returns map[string]interface{}. func (d *decodeState) objectInterface(data Slice) map[string]interface{} { m := make(map[string]interface{}) it, err := NewObjectIterator(data) if err != nil { d.error(err) } for it.IsValid() { key, err := it.Key(true) if err != nil { d.error(err) } keyStr, err := key.GetString() if err != nil { d.error(err) } value, err := it.Value() if err != nil { d.error(err) } // Read value. m[keyStr] = d.valueInterface(value) // Move to next field if err := it.Next(); err != nil { d.error(err) } } return m } // literalInterface is like literal but returns an interface value. func (d *decodeState) literalInterface(data Slice) interface{} { switch data.Type() { case Null: return nil case Bool: v, err := data.GetBool() if err != nil { d.error(err) } return v case String: v, err := data.GetString() if err != nil { d.error(err) } return v case Double: v, err := data.GetDouble() if err != nil { d.error(err) } return v case Int, SmallInt: v, err := data.GetInt() if err != nil { d.error(err) } intV := int(v) if int64(intV) == v { // Value fits in int return intV } return v case UInt: v, err := data.GetUInt() if err != nil { d.error(err) } return v case Binary: v, err := data.GetBinary() if err != nil { d.error(err) } return v default: // ?? 
d.error(fmt.Errorf("unknown literal type: %s", data.Type())) return nil } } // literalStore decodes a literal stored in item into v. // // fromQuoted indicates whether this literal came from unwrapping a // string from the ",string" struct tag option. this is used only to // produce more helpful error messages. func (d *decodeState) literalStore(item Slice, v reflect.Value, fromQuoted bool) { // Check for unmarshaler. if len(item) == 0 { //Empty string given d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal empty slice into %v", v.Type())) return } isNull := item.IsNull() // null u, ju, ut, pv := d.indirect(v, isNull) if u != nil { if err := u.UnmarshalVPack(item); err != nil { d.error(err) } return } if ju != nil { json, err := item.JSONString() if err != nil { d.error(err) } else { if err := ju.UnmarshalJSON([]byte(json)); err != nil { d.error(err) } } return } if ut != nil { if !item.IsString() { //if item[0] != '"' { if fromQuoted { d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal Slice of type %s into %v", item.Type(), v.Type())) } else { val := item.Type().String() d.saveError(&UnmarshalTypeError{Value: val, Type: v.Type()}) } return } s, err := item.GetStringUTF8() if err != nil { if fromQuoted { d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal slice of type %s into %v", item.Type(), v.Type())) } else { d.error(InternalError) // Out of sync } } if err := ut.UnmarshalText(s); err != nil { d.error(err) } return } v = pv switch item.Type() { case Null: // null // The main parser checks that only true and false can reach here, // but if this was a quoted string input, it could be anything. if fromQuoted /*&& string(item) != "null"*/ { d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) break } switch v.Kind() { case reflect.Interface, reflect.Ptr, reflect.Map, reflect.Slice: v.Set(reflect.Zero(v.Type())) // otherwise, ignore null for primitives/string } case Bool: // true, false value, err := item.GetBool() if err != nil { d.error(err) } // The main parser checks that only true and false can reach here, // but if this was a quoted string input, it could be anything. 
if fromQuoted /*&& string(item) != "true" && string(item) != "false"*/ { d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) break } switch v.Kind() { default: if fromQuoted { d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) } else { d.saveError(&UnmarshalTypeError{Value: "bool", Type: v.Type()}) } case reflect.Bool: v.SetBool(value) case reflect.Interface: if v.NumMethod() == 0 { v.Set(reflect.ValueOf(value)) } else { d.saveError(&UnmarshalTypeError{Value: "bool", Type: v.Type()}) } } case String: // string s, err := item.GetString() if err != nil { d.error(err) } switch v.Kind() { default: d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type()}) case reflect.Slice: if v.Type().Elem().Kind() != reflect.Uint8 { d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type()}) break } b, err := base64.StdEncoding.DecodeString(s) if err != nil { d.saveError(err) break } v.SetBytes(b) case reflect.String: v.SetString(string(s)) case reflect.Interface: if v.NumMethod() == 0 { v.Set(reflect.ValueOf(string(s))) } else { d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type()}) } } case Double: value, err := item.GetDouble() if err != nil { d.error(err) } switch v.Kind() { default: if v.Kind() == reflect.String && v.Type() == numberType { s, err := item.JSONString() if err != nil { d.error(err) } v.SetString(s) break } if fromQuoted { d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) } else { d.error(&UnmarshalTypeError{Value: "number", Type: v.Type()}) } case reflect.Interface: n, err := d.convertNumber(value) if err != nil { d.saveError(err) break } if v.NumMethod() != 0 { d.saveError(&UnmarshalTypeError{Value: "number", Type: v.Type()}) break } v.Set(reflect.ValueOf(n)) case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: n := int64(value) if err != nil || v.OverflowInt(n) { d.saveError(&UnmarshalTypeError{Value: fmt.Sprintf("number %v", value), Type: v.Type()}) break } v.SetInt(n) case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: n := uint64(value) if err != nil || v.OverflowUint(n) { d.saveError(&UnmarshalTypeError{Value: fmt.Sprintf("number %v", value), Type: v.Type()}) break } v.SetUint(n) case reflect.Float32, reflect.Float64: n := value v.SetFloat(n) } case Int, SmallInt: value, err := item.GetInt() if err != nil { d.error(err) } switch v.Kind() { default: if v.Kind() == reflect.String && v.Type() == numberType { s, err := item.JSONString() if err != nil { d.error(err) } v.SetString(s) break } if fromQuoted { d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) } else { d.error(&UnmarshalTypeError{Value: "number", Type: v.Type()}) } case reflect.Interface: var n interface{} intValue := int(value) if int64(intValue) == value { // When the value fits in an int, use int type. 
n, err = d.convertNumber(intValue) } else { n, err = d.convertNumber(value) } if err != nil { d.saveError(err) break } if v.NumMethod() != 0 { d.saveError(&UnmarshalTypeError{Value: "number", Type: v.Type()}) break } v.Set(reflect.ValueOf(n)) case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: n := value if err != nil || v.OverflowInt(n) { d.saveError(&UnmarshalTypeError{Value: fmt.Sprintf("number %v", value), Type: v.Type()}) break } v.SetInt(n) case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: n := uint64(value) if err != nil || v.OverflowUint(n) { d.saveError(&UnmarshalTypeError{Value: fmt.Sprintf("number %v", value), Type: v.Type()}) break } v.SetUint(n) case reflect.Float32, reflect.Float64: n := float64(value) if err != nil || v.OverflowFloat(n) { d.saveError(&UnmarshalTypeError{Value: fmt.Sprintf("number %v", value), Type: v.Type()}) break } v.SetFloat(n) } case UInt: value, err := item.GetUInt() if err != nil { d.error(err) } switch v.Kind() { default: if v.Kind() == reflect.String && v.Type() == numberType { s, err := item.JSONString() if err != nil { d.error(err) } v.SetString(s) break } if fromQuoted { d.error(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())) } else { d.error(&UnmarshalTypeError{Value: "number", Type: v.Type()}) } case reflect.Interface: n, err := d.convertNumber(value) if err != nil { d.saveError(err) break } if v.NumMethod() != 0 { d.saveError(&UnmarshalTypeError{Value: "number", Type: v.Type()}) break } v.Set(reflect.ValueOf(n)) case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: n := int64(value) if err != nil || v.OverflowInt(n) { d.saveError(&UnmarshalTypeError{Value: fmt.Sprintf("number %v", value), Type: v.Type()}) break } v.SetInt(n) case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: n := value if err != nil || v.OverflowUint(n) { d.saveError(&UnmarshalTypeError{Value: fmt.Sprintf("number %v", value), Type: v.Type()}) break } v.SetUint(n) case reflect.Float32, reflect.Float64: n := float64(value) if err != nil || v.OverflowFloat(n) { d.saveError(&UnmarshalTypeError{Value: fmt.Sprintf("number %v", value), Type: v.Type()}) break } v.SetFloat(n) } case Binary: value, err := item.GetBinary() if err != nil { d.error(err) } switch v.Kind() { default: d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type()}) case reflect.Slice: if v.Type().Elem().Kind() != reflect.Uint8 { d.saveError(&UnmarshalTypeError{Value: "binary", Type: v.Type()}) break } v.SetBytes(value) case reflect.Interface: if v.NumMethod() == 0 { v.Set(reflect.ValueOf(value)) } else { d.saveError(&UnmarshalTypeError{Value: "binary", Type: v.Type()}) } } default: // number d.error(fmt.Errorf("Unknown type %s", item.Type())) } } // convertNumber converts the number literal s to a float64 or a Number // depending on the setting of d.useNumber. func (d *decodeState) convertNumber(s interface{}) (interface{}, error) { if d.useNumber { return json.Number(fmt.Sprintf("%v", s)), nil } return s, nil }
async def simulate_id_sim_id(request: Request, sim_id: str):
    sim_status_url = app.url_path_for(
        "fronted_simulation_status", sim_id=sim_id
    )
    return templates.TemplateResponse(
        "simulation-id-or-status.html",
        {
            "request": request,
            "status": False,
            "sim_id": sim_id,
            "sim_status_url": sim_status_url,
        },
    )
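For context, here is a hypothetical, self-contained sketch of the pieces this fragment assumes: a FastAPI app, a Jinja2Templates instance, and a named route that url_path_for can resolve. The route path, decorator and template directory are illustrative assumptions; only the names taken from the fragment are real.

# Hypothetical scaffolding for the fragment above; the paths and template
# directory are assumptions, not taken from the original project.
from fastapi import FastAPI, Request
from fastapi.templating import Jinja2Templates

app = FastAPI()
templates = Jinja2Templates(directory="templates")

@app.get("/simulation/{sim_id}/status", name="fronted_simulation_status")
async def fronted_simulation_status(request: Request, sim_id: str):
    ...

# url_path_for builds a URL from a route's name plus its path parameters:
# app.url_path_for("fronted_simulation_status", sim_id="abc123")
# -> "/simulation/abc123/status"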
/*
 * PXT_M_and_M extension
 *
 * This is a MakeCode (pxt) extension for the M&M colour sorting machine of Project Recycle.
 * The extension includes blocks for the colour sensor type TCS34725 and the servo driver type PCA9685, connected to a micro:bit via the I2C bus.
 * The TCS34725 sensor is assumed to be an Adafruit TCS34725 colour sensor board with an inbuilt illumination LED. I2C bus address = 0x29.
 * The PCA9685 driver is assumed to be a Kitronik PCA9685 16-channel servo motor driver board. I2C bus address = 0x6A.
 *
 * Five blocks deal with the TCS34725:
 * get red light component, get green light component, get blue light component, get total light intensity and get the colour of an M & M confectionery (0-6).
 * The colour component readings are normalised against the total light reading.
 * Interrupts are disabled in the sensor and no provision is made to control the inbuilt white illumination LED. The illumination LED is permanently on.
 * The M & M colour block returns a number between 0 and 6. Encoding is shown below.
 * Refer to the Adafruit docs and tutorial for more information.
 *
 * Two blocks deal with the PCA9685:
 * setRange sets the specified servo motor output channel to a specified pulse range, and servoWrite sets the specified servo to the specified angle (0 - 180 degrees).
 * A private initialisation function is provided to initialise the PCA9685 chip; it is called by the first use of the servoWrite function.
 * The initialisation function sets all servo channels to the same default pulse range, currently R700_2400uS. Any call to the setRange function should be made after
 * a call to servoWrite, otherwise the results of the setRange call will be overwritten by the first use of servoWrite. Subsequent calls to servoWrite will not affect
 * any data set up by setRange.
 *
 * The RC servo motor industry default pulse width of 0.5mS (0deg) to 2.5mS (180deg) does not always function correctly. Any excursion outside this default range can
 * result in damage to the servo motor. However, some cheaper servo motors struggle to deal with this standard industry range and begin to growl, buzz, draw lots of current
 * and overheat, resulting in the ultimate failure of the servo motor. This problem mainly occurs at the 0.5mS end of the range. Some servo motors will not extend to 180deg
 * at 2.5mS and require a maximum pulse width of 2.7mS to reach 180deg. Caution should be exercised if extending the pulse range beyond 2.5mS.
 * The setRange block allows each of the 16 servo outputs of the PCA9685 to be individually configured to one of the following six pulse ranges:
 * 1.0mS - 2.0mS, 0.9mS - 2.1mS, 0.8mS - 2.2mS, 0.7mS - 2.3mS, 0.7mS - 2.4mS and 0.5mS - 2.5mS to help eliminate growling, buzzing and overheating.
 * The PWM frequency is set to 50Hz, making each bit of the PCA9685 4096 count equal to 4.88uS.
 */

//% color="#AA278D" icon="\uf06e"
namespace M_and_M {
    /*
     * TCS34725: Colour sensor register address and control bit definitions
     */
    const TCS34725_ADDRESS: number = 0x29;          // I2C bus address of TCS34725 sensor (0x39 for TCS34721)
    const REG_TCS34725_COMMAND_BIT: number = 0x80;  // Command register access bit
    const REG_TCS34725_ENABLE: number = 0x00;       // Enable register address
    const REG_TCS34725_TIMING: number = 0x01;       // RGBC timing register address
    const REG_TCS34725_WAIT: number = 0x03;         // Wait time register address
    const REG_TCS34725_CONFIG: number = 0x0D;       // Configuration register address
    const REG_TCS34725_CONTROL: number = 0x0F;      // Control register address, sets gain
    const REG_TCS34725_ID: number = 0x12;           // ID register address, should contain 0x44 for a TCS34725 or 0x4D for the TCS34727 variant
    const REG_TCS34725_STATUS: number = 0x13;       // Status register address
    const REG_CLEAR_CHANNEL_L: number = 0x14;       // Clear data low byte register address
    const REG_RED_CHANNEL_L: number = 0x16;         // Red data low byte register address
    const REG_GREEN_CHANNEL_L: number = 0x18;       // Green data low byte register address
    const REG_BLUE_CHANNEL_L: number = 0x1A;        // Blue data low byte register address

    const TCS34725_AIEN: number = 0x10;             // Enable register RGBC interrupt enable bit, 0 = IRQ not enabled, 1 = IRQ enabled
    const TCS34725_PON: number = 0x01;              // Enable register PON bit, 0 = power off, 1 = power on
    const TCS34725_AEN: number = 0x02;              // Enable register RGBC enable bit, 0 = disable AtoD conversion, 1 = enable AtoD conversion
    const TCS34725_ID: number = 0x44;               // Sensor ID = 0x44 or 68 decimal
    const TCS34729_ID: number = 0x4D;               // Sensor ID = 0x4D or 77 decimal

    /*
     * TCS34725: M and M colour encoding
     */
    const BLANK: number = 0;                        // Broken, missing, discoloured or chipped M & M
    const BROWN: number = 1;
    const RED: number = 2;
    const ORANGE: number = 3;
    const YELLOW: number = 4;
    const GREEN: number = 5;
    const BLUE: number = 6;
    const UNKNOWN: number = 9;                      // Not used, kept for testing purposes

    /*
     * TCS34725: Colour sensor data storage and flag definitions
     */
    let RGBC_C: number = 0;                         // Clear light raw data storage
    let RGBC_R: number = 0;                         // Red light raw data storage
    let RGBC_G: number = 0;                         // Green light raw data storage
    let RGBC_B: number = 0;                         // Blue light raw data storage
    let TCS34725_INIT: number = 0;                  // TCS34725 sensor initialisation flag, 0 = not initialised, 1 = initialised

    /*
     * TCS34725: I2C bus functions: Requires i2c.ts
     */
    function getInt8LE(addr: number, reg: number): number {      // Get 8 bit little-endian integer
        pins.i2cWriteNumber(addr, reg, NumberFormat.UInt8BE);
        return pins.i2cReadNumber(addr, NumberFormat.Int8LE);
    }

    function getUInt16LE(addr: number, reg: number): number {    // Get 16 bit little-endian unsigned integer
        pins.i2cWriteNumber(addr, reg, NumberFormat.UInt8LE);
        return pins.i2cReadNumber(addr, NumberFormat.UInt16LE);
    }

    function getInt16LE(addr: number, reg: number): number {     // Get 16 bit little-endian integer
        pins.i2cWriteNumber(addr, reg, NumberFormat.UInt8LE);
        return pins.i2cReadNumber(addr, NumberFormat.Int16LE);
    }

    function readReg(addr: number, reg: number): number {        // Read 8 bit unsigned integer
        pins.i2cWriteNumber(addr, reg, NumberFormat.UInt8LE);
        return pins.i2cReadNumber(addr, NumberFormat.UInt8LE);
    }

    function writeReg(addr: number, reg: number, dat: number) {  // Write an 8 bit value to the given register
        let buf = pins.createBuffer(2);
        buf[0] = reg;
        buf[1] = dat;
        pins.i2cWriteBuffer(addr, buf);
    }

    /**
     * TCS34725: Colour sensor initialisation
     */
    function tcs34725_begin() {
        let id = readReg(TCS34725_ADDRESS, REG_TCS34725_ID | REG_TCS34725_COMMAND_BIT);                              // Get TCS34725 ID
        if (id === TCS34725_ID) {                                                                                    // Valid ID?
            writeReg(TCS34725_ADDRESS, REG_TCS34725_TIMING | REG_TCS34725_COMMAND_BIT, 0xEB);                        // Yes, set integration time (21 cycles, approx. 50mS)
            writeReg(TCS34725_ADDRESS, REG_TCS34725_WAIT | REG_TCS34725_COMMAND_BIT, 0xFF);                          // Set wait time to 2.4mS
            writeReg(TCS34725_ADDRESS, REG_TCS34725_CONFIG | REG_TCS34725_COMMAND_BIT, 0x00);                        // Set WLONG to 0
            writeReg(TCS34725_ADDRESS, REG_TCS34725_CONTROL | REG_TCS34725_COMMAND_BIT, 0x01);                       // Set gain to 4x
            writeReg(TCS34725_ADDRESS, REG_TCS34725_ENABLE | REG_TCS34725_COMMAND_BIT, TCS34725_PON);                // Power on sensor, disable wait time, disable interrupts
            basic.pause(3);                                                                                          // Need minimum 2.4mS after power on
            writeReg(TCS34725_ADDRESS, REG_TCS34725_ENABLE | REG_TCS34725_COMMAND_BIT, TCS34725_PON | TCS34725_AEN); // Keep power on, enable RGBC ADC
            TCS34725_INIT = 1;                                                                                       // Sensor is connected and initialised
        } else {                                                                                                     // No
            TCS34725_INIT = 0;                                                                                       // Sensor is not connected
        }
    }

    /**
     * TCS34725: Colour sensor, read red, green, blue and clear raw data
     */
    function getRGBC() {
        if (!TCS34725_INIT) {                       // Is the TCS34725 sensor initialised?
            tcs34725_begin();                       // No, then initialise the sensor
        }
        let clear = getUInt16LE(TCS34725_ADDRESS, REG_CLEAR_CHANNEL_L | REG_TCS34725_COMMAND_BIT);  // Read natural (clear) light level
        if (clear == 0) {                           // Prevent divide by zero error if the sensor is in complete darkness
            clear = 1;
        }
        RGBC_C = clear;
        RGBC_R = getUInt16LE(TCS34725_ADDRESS, REG_RED_CHANNEL_L | REG_TCS34725_COMMAND_BIT);       // Read red component of clear light
        RGBC_G = getUInt16LE(TCS34725_ADDRESS, REG_GREEN_CHANNEL_L | REG_TCS34725_COMMAND_BIT);     // Read green component of clear light
        RGBC_B = getUInt16LE(TCS34725_ADDRESS, REG_BLUE_CHANNEL_L | REG_TCS34725_COMMAND_BIT);      // Read blue component of clear light
        basic.pause(50);
        let ret = readReg(TCS34725_ADDRESS, REG_TCS34725_ENABLE | REG_TCS34725_COMMAND_BIT);        // Get current contents of the enable register
        ret |= TCS34725_AIEN;                                                                       // Set the AIEN (RGBC interrupt enable) bit
        writeReg(TCS34725_ADDRESS, REG_TCS34725_ENABLE | REG_TCS34725_COMMAND_BIT, ret);            // Write it back to re-enable the RGBC interrupt
    }

    /**
     * TCS34725: getRed - Reporter block that returns the normalised red value from the TCS34725 colour sensor
     */
    //% block="red"
    //% weight=60
    export function getRed(): number {
        getRGBC();                                          // Get raw light and colour values
        let red = Math.round((RGBC_R / RGBC_C) * 255);      // Normalise red value
        return red;
    }

    /**
     * TCS34725: getGreen - Reporter block that returns the normalised green value from the TCS34725 colour sensor
     */
    //% block="green"
    //% weight=60
    export function getGreen(): number {
        getRGBC();                                          // Get raw light and colour values
        let green = Math.round((RGBC_G / RGBC_C) * 255);    // Normalise green value
        return green;
    }

    /**
     * TCS34725: getBlue - Reporter block that returns the normalised blue value from the TCS34725 colour sensor
     */
    //% block="blue"
    //% weight=60
    export function getBlue(): number {
        getRGBC();                                          // Get raw light and colour values
        let blue = Math.round((RGBC_B / RGBC_C) * 255);     // Normalise blue value
        return blue;
    }

    /**
     * TCS34725: getRaw - Reporter block that returns the raw (total) light value from the TCS34725 colour sensor
     */
    //% block="raw"
    //% weight=60
    export function getRaw(): number {
        getRGBC();                                          // Get raw light and colour values
        return Math.round(RGBC_C);                          // Return clear natural light level
    }

    /**
     * TCS34725: m_mColour - Reporter block that returns the colour of the M and M
     */
    //% block="m & m colour"
    //% weight=60
    export function m_mColour(): number {
        getRGBC();                                                  // Get colour / light information from the TCS34725 sensor
        let red: number = Math.round((RGBC_R / RGBC_C) * 255);      // Normalise red value
        let green: number = Math.round((RGBC_G / RGBC_C) * 255);    // Normalise green value
        let blue: number = Math.round((RGBC_B / RGBC_C) * 255);     // Normalise blue value
        let clear: number = RGBC_C;                                 // Get clear light level
        basic.showNumber(clear);                                    // Debug aid: show the clear light level on the LED matrix
        let colour: number = UNKNOWN;                               // Start with unknown colour
        if (clear < 580 && clear > 540 && red > 80 && green < 100 && blue < 85) {       // Brown M & M?
            colour = BROWN;                                         // Yes
        } else if (clear < 700 && red > 100 && green < 85 && blue < 70) {               // Red M & M?
            colour = RED;                                           // Yes
        } else if (clear > 820 && red > 120 && green < 80 && blue < 60) {               // Orange M & M?
            colour = ORANGE;                                        // Yes
        } else if (clear > 1100 && red > 110 && green > 80 && blue < 55) {              // Yellow M & M?
            colour = YELLOW;                                        // Yes
        } else if (clear > 700 && red < 80 && green > 100 && blue < 80) {               // Green M & M?
            colour = GREEN;                                         // Yes
        } else if (clear > 630 && red < 80 && green < 100 && blue > 85) {               // Blue M & M?
            colour = BLUE;                                          // Yes
        } else {
            colour = BLANK;                                         // Broken, missing, discoloured or chipped M & M
        }
        return colour;
    }

    /*
     * PCA9685 register address and control bit definitions
     */
    const CHIP_ADDRESS: number = 0x6A;          // Default chip address
    const REG_MODE1: number = 0x00;             // Mode 1 register address
    const REG_MODE2: number = 0x01;             // Mode 2 register address
    const REG_SUB_ADR1: number = 0x02;          // Sub address register 1 address
    const REG_SUB_ADR2: number = 0x03;          // Sub address register 2 address
    const REG_SUB_ADR3: number = 0x04;          // Sub address register 3 address
    const REG_ALL_CALL: number = 0x05;          // All call address register
    const REG_SERVO1_BASE: number = 0x06;       // Servo 1 base address
    const REG_SERVO_DISTANCE: number = 4;       // Four registers per servo
    const REG_ALL_LED_ON_L: number = 0xFA;      // All LED on low register address
    const REG_ALL_LED_ON_H: number = 0xFB;      // All LED on high register address
    const REG_ALL_LED_OFF_L: number = 0xFC;     // All LED off low register address
    const REG_ALL_LED_OFF_H: number = 0xFD;     // All LED off high register address
    const REG_PRE_SCALE: number = 0xFE;         // Pre-scaler register address
    const PWM_FREQUENCY: number = 0x79;         // Pre-scaler value for 50Hz

    let PCA9685_init: boolean = false;          // Flag to allow us to initialise without explicitly calling the initialisation function

    // List of the possible 16 servo motors
    export enum Servos {
        Servo1 = 1,
        Servo2 = 2,
        Servo3 = 3,
        Servo4 = 4,
        Servo5 = 5,
        Servo6 = 6,
        Servo7 = 7,
        Servo8 = 8,
        Servo9 = 9,
        Servo10 = 10,
        Servo11 = 11,
        Servo12 = 12,
        Servo13 = 13,
        Servo14 = 14,
        Servo15 = 15,
        Servo16 = 16,
    }

    // export enum BoardAddresses{ Board1 = 0x6A, }

    // List of possible output pulse ranges
    export enum PulseRange {
        R500_2500uS = 1,
        R700_2400uS = 2,    // Currently set as the default
        R700_2300uS = 3,
        R800_2200uS = 4,
        R900_2100uS = 5,
        R1000_2000uS = 6,
    }

    // Time (mS):            0.5  0.7  0.7  0.8  0.9  1.0
    const loPulseLim = [102, 143, 143, 164, 184, 204];      // Lower pulse limit in multiples of 4.88uS
    // Time (mS):            2.5  2.4  2.3  2.2  2.1  2.0
    const hiPulseLim = [512, 500, 471, 451, 430, 409];      // Upper pulse limit in multiples of 4.88uS
    // Time (mS):            2.0  1.7  1.6  1.4  1.2  1.0
    const range = [410, 357, 328, 287, 246, 205];           // Pulse width range in multiples of 4.88uS
    // Servo number:   1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
    let ServoRange = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2];  // Individual servo pulse range, default = R700_2400uS

    // Function to read an i2c register - kept for testing purposes
    //function readReg(addr: number, reg: number): number {     // Read 8 bit unsigned integer
    //    pins.i2cWriteNumber(addr, reg, NumberFormat.UInt8LE);
    //    return pins.i2cReadNumber(addr, NumberFormat.UInt8LE);
    //}

    // Function to map a value from one range into another range
    function map(value: number, fromLow: number, fromHigh: number, toLow: number, toHigh: number): number {
        return ((value - fromLow) * (toHigh - toLow)) / (fromHigh - fromLow) + toLow;
    }

    /*
     * This initialisation function sets up the PCA9685 servo driver chip.
     * The PCA9685 comes out of reset in low power mode, with the internal oscillator off and no output signals; this allows writes to the pre-scaler register.
     * The pre-scaler register is set for 50Hz, producing a refresh rate (frame period) of 20mS, which in turn makes each bit of the 4096 count equal to 4.88uS.
     * Sets the 16 LED ON registers to 0x000, which starts the high output pulse at the beginning of each 20mS frame period.
     * Sets the 16 LED OFF registers to 0x133 (307 counts x 4.88uS = 1.5mS), which ends the high output pulse 1.5mS into the frame period.
     * This places all servo motors at 90 degrees, or centre travel.
     * It is these LED OFF registers that are modified to vary the pulse high end time, and so the pulse width and the position of the attached servo motor.
     * Sets the mode 1 register to 0x01 to disable restart, use the internal clock, disable register auto increment, select normal (run) mode, disable sub addresses and allow LED all call addresses.
     * Finally the initialised flag is set true.
     * This function should not be called directly by a user; the first servo write will call it.
     * This function initialises all 16 LED ON and LED OFF registers by using a single block write to the 'all LED' addresses.
     */
    function init(): void {
        let buf = pins.createBuffer(2);                     // Create a buffer for i2c bus data
        buf[0] = REG_PRE_SCALE;                             // Point at pre-scaler register
        buf[1] = PWM_FREQUENCY;                             // Set PWM frequency to 50Hz, a repetition rate of 20mS
        pins.i2cWriteBuffer(CHIP_ADDRESS, buf, false);      // Write to PCA9685
        buf[0] = REG_ALL_LED_ON_L;                          // Point at ALL LED ON low byte register
        buf[1] = 0x00;                                      // Start the high pulse at count 0 of the 4096 count frame
        pins.i2cWriteBuffer(CHIP_ADDRESS, buf, false);      // Write to PCA9685
        buf[0] = REG_ALL_LED_ON_H;
        buf[1] = 0x00;                                      // Start each frame with the pulse high
        pins.i2cWriteBuffer(CHIP_ADDRESS, buf, false);      // Write to PCA9685
        buf[0] = REG_ALL_LED_OFF_L;
        buf[1] = 0x33;                                      // End high pulse at mid range: 1.5mS = 1500/4.88uS = 307 (0x133), low byte
        pins.i2cWriteBuffer(CHIP_ADDRESS, buf, false);      // Write to PCA9685
        buf[0] = REG_ALL_LED_OFF_H;
        buf[1] = 0x01;                                      // End high pulse at mid range: 307 (0x133), high byte
        pins.i2cWriteBuffer(CHIP_ADDRESS, buf, false);      // Write to PCA9685
        buf[0] = REG_MODE1;
        buf[1] = 0x01;                                      // Normal mode, start oscillator and allow LED all call registers
        pins.i2cWriteBuffer(CHIP_ADDRESS, buf, false);      // Write to PCA9685
        basic.pause(10);                                    // Let the oscillator start and settle
        PCA9685_init = true;                                // The PCA9685 is now initialised, no need to do it again
    }

    /**
     * Sets the requested servo to the requested angle.
     * If the PCA9685 has not yet been initialised, the initialisation routine is called first.
     *
     * @param Servo Which servo to set
     * @param degrees the angle to set the servo to
     */
    //% blockId=I2C_servo_write
    //% block="set%Servo|to%degrees"
    //% degrees.min=0 degrees.max=180
    export function servoWrite(Servo: Servos, degrees: number): void {
        if (PCA9685_init == false) {                        // PCA9685 initialised?
            init();                                         // No, then initialise it
        }
        let range: number = ServoRange[Servo - 1];          // Get the configured pulse range for the specified servo
        let lolim: number = loPulseLim[range - 1];          // Get the lower pulse limit for the pulse range
        let hilim: number = hiPulseLim[range - 1];          // Get the upper pulse limit for the pulse range
        let pulse: number = map(degrees, 0, 180, lolim, hilim);    // Map degrees 0-180 to the pulse range
        let final: number = Math.floor(pulse);              // No decimal points
        let buf = pins.createBuffer(2);                     // Create a buffer for i2c bus data
        buf[0] = REG_SERVO1_BASE + (REG_SERVO_DISTANCE * (Servo - 1)) + 2;  // Calculate address of LED OFF low byte register
        buf[1] = final % 256;                               // Calculate low byte value
        pins.i2cWriteBuffer(CHIP_ADDRESS, buf, false);      // Write low byte to PCA9685
        buf[0] = REG_SERVO1_BASE + (REG_SERVO_DISTANCE * (Servo - 1)) + 3;  // Calculate address of LED OFF high byte register
        buf[1] = Math.floor(final / 256);                   // Calculate high byte value
        pins.i2cWriteBuffer(CHIP_ADDRESS, buf, false);      // Write high byte to PCA9685
    }

    /**
     * Sets the specified servo to the specified pulse range.
     * On startup all 16 servos are set to the default pulse range of 0.7mS to 2.4mS (R700_2400uS).
     * This block is used to set the pulse range of a servo to a specific range other than the default.
     *
     * @param Servo Which servo to alter the pulse range of.
     * @param Range The new pulse range for the servo.
     */
    //% blockId=I2C_set_pulse_range
    //% block="set%Servo|pulse range%PulseRange"
    export function setRange(Servo: Servos, Range: PulseRange): void {
        ServoRange[Servo - 1] = Range;                      // Store the new pulse range in the ServoRange array
    }
}
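A short usage sketch (not part of the extension) may help tie the blocks together. It follows the ordering the header comment prescribes: the first servoWrite call initialises the PCA9685, and setRange is called afterwards. The servo choice, angles and pause duration are illustrative assumptions.

// Hypothetical usage sketch; servo choice, angles and timing are assumptions.
basic.forever(function () {
    let colour = M_and_M.m_mColour()                        // Classify the M & M under the sensor (0-6)
    if (colour > 0) {                                       // Skip BLANK (0) readings
        M_and_M.servoWrite(M_and_M.Servos.Servo1, 90)       // First write also initialises the PCA9685
        M_and_M.setRange(M_and_M.Servos.Servo1, M_and_M.PulseRange.R1000_2000uS)    // Then narrow the range for a fussy servo
        M_and_M.servoWrite(M_and_M.Servos.Servo1, colour * 30)                      // Steer the sorting chute by colour
    }
    basic.pause(500)
})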
// Code generated by docs2go. DO NOT EDIT.

package babylon

import (
	"syscall/js"
)

// PredicateCondition represents a babylon.js PredicateCondition.
// Defines a predicate condition as an extension of Condition
type PredicateCondition struct {
	*Condition
	ctx js.Value
}

// JSObject returns the underlying js.Value.
func (p *PredicateCondition) JSObject() js.Value {
	return p.p
}

// PredicateCondition returns a PredicateCondition JavaScript class.
func (ba *Babylon) PredicateCondition() *PredicateCondition {
	p := ba.ctx.Get("PredicateCondition")
	return PredicateConditionFromJSObject(p, ba.ctx)
}

// PredicateConditionFromJSObject returns a wrapped PredicateCondition JavaScript class.
func PredicateConditionFromJSObject(p js.Value, ctx js.Value) *PredicateCondition {
	return &PredicateCondition{Condition: ConditionFromJSObject(p, ctx), ctx: ctx}
}

// PredicateConditionArrayToJSArray returns a JavaScript Array for the wrapped array.
func PredicateConditionArrayToJSArray(array []*PredicateCondition) []interface{} {
	var result []interface{}
	for _, v := range array {
		result = append(result, v.JSObject())
	}
	return result
}

// NewPredicateCondition returns a new PredicateCondition object.
//
// https://doc.babylonjs.com/api/classes/babylon.predicatecondition#constructor
func (ba *Babylon) NewPredicateCondition(actionManager *ActionManager, predicate JSFunc) *PredicateCondition {
	args := make([]interface{}, 0, 2+0)

	args = append(args, actionManager.JSObject())
	args = append(args, js.FuncOf(predicate))

	p := ba.ctx.Get("PredicateCondition").New(args...)
	return PredicateConditionFromJSObject(p, ba.ctx)
}

// IsValid calls the IsValid method on the PredicateCondition object.
//
// https://doc.babylonjs.com/api/classes/babylon.predicatecondition#isvalid
func (p *PredicateCondition) IsValid() bool {
	retVal := p.p.Call("isValid")
	return retVal.Bool()
}

// Predicate returns the Predicate property of class PredicateCondition.
//
// https://doc.babylonjs.com/api/classes/babylon.predicatecondition#predicate
func (p *PredicateCondition) Predicate() js.Value {
	retVal := p.p.Get("predicate")
	return retVal
}

// SetPredicate sets the Predicate property of class PredicateCondition.
//
// https://doc.babylonjs.com/api/classes/babylon.predicatecondition#predicate
func (p *PredicateCondition) SetPredicate(predicate JSFunc) *PredicateCondition {
	p.p.Set("predicate", js.FuncOf(predicate))
	return p
}
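As a usage illustration (not generated code), the sketch below builds a countdown condition with NewPredicateCondition. It assumes an initialised *Babylon value and an *ActionManager from elsewhere in the application, and that JSFunc matches the func(this js.Value, args []js.Value) interface{} signature expected by js.FuncOf, since the constructor passes the predicate straight through.

// Hypothetical usage sketch; `ba` and `am` are assumed to come from the
// surrounding application, and JSFunc is assumed to match js.FuncOf's
// expected callback signature.
func newCountdownCondition(ba *Babylon, am *ActionManager, n int) *PredicateCondition {
	remaining := n
	// The predicate runs on the JavaScript side each time the action
	// manager evaluates the condition.
	pred := func(this js.Value, args []js.Value) interface{} {
		remaining--
		return remaining <= 0 // becomes valid once the count runs out
	}
	return ba.NewPredicateCondition(am, pred)
}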
Using Eye Tracking Technology to Analyze the Impact of Stylistic Inconsistency on Code Readability A number of research efforts have focused on the area of programming style. However, to the best of our knowledge, there is little sound and solid evidence of how, and to what extent, stylistic inconsistency can impact the readability and maintainability of source code. To bridge this research gap, we design an empirical experiment in which eye tracking technology is introduced to quantitatively reflect developers' cognitive effort and mental processes when they encounter stylistic inconsistency.
// implementations/ugene/src/include/U2Core/U2ModDbi.h
#include "../../corelibs/U2Core/src/dbi/U2ModDbi.h"
I’ve never been a ring wearer. I think it was because in my early 20s I put weight on and (as well as everywhere else) my fingers ballooned up like sausages. Finding rings in shops was a bit of a nightmare, and when I did find a ring that squeezed on, it rubbed against the insides of my podgy little digits and would subsequently end up lost in the bottom of my handbag for all eternity. Now, thanks to cycling to work, the occasional run and living with a vegan man, I have shed those excess pounds, my fingers are ring-ready again, and I have my happy claws on these Twisted, Knotted rings.

Materials: *I’ve done the hard work for you and found links so you can easily buy the materials I’ve used if you like. Click on the materials above to go straight to them. They are affiliate links, so if you choose to buy I make a tiny bit of dollar to put towards new projects!

Instructions:
1. Cut a length of wire 30cm/12″ and roughly fold it in half into a fish shape.
2. Twist the ‘tail’ and bend the wire ends outwards.
3. Begin to twist the body of the fish whilst holding the tail.
4. Continue to twist until you have a long wire with a little bobble at the end.
5. Wrap the wire around the ring mandrel so the ends cross.
6. Start to twist the ends around each other to create a spiral knot.
7. Cut off the excess wire.
8. File the ends of the wire with a nail file and squeeze them close to the rest of the knot with chain nose pliers.

What do you think? Definitely a reason to start wearing rings again, right!?
def write_log(self, log_path: str, outdf: pd.DataFrame, notes: pd.DataFrame) -> None:
    with open(log_path, 'w') as f:
        f.write('Columns: {}\n\n'.format(outdf.columns.values))
        for idx, row in outdf.iterrows():
            # Quote and tab-pad string keys; leave other index types bare.
            key_str = "'{}':\t" if isinstance(idx, str) else "{}: "
            full_line = key_str + "({}, '{}'),\n"
            row_list = _fix_dtypes(row.tolist())  # _fix_dtypes is defined elsewhere in this module
            f.write(full_line.format(idx, row_list, notes.loc[idx].squeeze()))
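A hypothetical, self-contained exercise of write_log is sketched below. Since _fix_dtypes is defined elsewhere in the module, a stand-in stub is supplied here; the real helper may do more than this.

# Hypothetical driver for write_log; _fix_dtypes below is a stand-in stub,
# not the module's real helper.
import pandas as pd

def _fix_dtypes(values):
    # Stand-in: unwrap numpy scalars so the logged repr stays clean.
    return [v.item() if hasattr(v, 'item') else v for v in values]

outdf = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]}, index=['x', 'y'])
notes = pd.DataFrame({'note': ['first row', 'second row']}, index=['x', 'y'])

# `self` is unused by the method body, so None suffices for a quick test.
write_log(None, 'example.log', outdf, notes)
# example.log then holds one "'idx':\t([...], 'note')," line per row.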
They traveled thousands of miles from Australia to the Israeli desert city of Be'er Sheva Tuesday to trace the footsteps of their ancestors – or the hoof prints of the horses they rode – in a reenactment of what is known as the "last successful cavalry charge" in history. As the late afternoon sun fell across a sandy open expanse that was once a battlefield, the horses of the Australian light cavalry brigade rode again, kicking up clouds of dust. Australian horsemen and horsewomen, some of them direct descendants of the original soldiers, rode in a reenactment of one of the most pivotal battles of World War I. Watching from the stands were the prime ministers of Israel and Australia, the governor general of New Zealand and hundreds of fellow Australians as well as Israelis. But alas, the "charge" a century later was not a charge at all, but "a walk on sacred ground," as the announcer described it from the stage. The reenactors on horseback had been told to walk their horses, not race across the sand at full gallop, on the instruction of the Shin Bet, Israel's internal security service, which feared a full-on charge with 100 horses might be dangerous, an Israeli organizer said. Australian Prime Minister Malcolm Turnbull, apparently unsatisfied with the walking horses, asked to see some real galloping. As the crowds thinned out, he and Israeli Prime Minister Benjamin Netanyahu and his wife, Sara Netanyahu, turned their chairs on the stage once again to the sandy stretch before them to watch a smaller contingent of horsemen and horsewomen gallop towards them. A ceremony to mark the centenary of the World War I battle for Be'er Sheva, October 31, 2017. Ilan Assayag "It's a way to remember our heritage and the sacrifices made by our forefathers," said John Welsch, 53, one of the reenactors, who had spent the past three days riding through southern Israel along the trail the soldiers rode. He was wearing the olive-colored uniform of the light horsemen, a leather ammunition belt strapped across his chest and, on his head, their trademark wide-brimmed hat with a plume of feathers. One hundred years ago to the day – and almost to the hour – the troops, as part of the British imperial forces, galloped at full charge over this same piece of land in a bold surprise attack, dismounting only to overtake Turkish troops in their trenches in bloody, hand-to-hand combat. The battle became the turning point in the British conquest of Palestine, breaking what had been 400 years of Ottoman rule. Just two days after the battle, the Balfour Declaration was issued, promising the burgeoning Zionist movement a "Jewish national homeland" and setting the stage for the creation of the state of Israel. On Tuesday Be'er Sheva hosted a day of events commemorating the Battle of Be'er Sheva, won by the British Army and the Australian and New Zealand Army Corps, known as ANZAC. Prime Minister Benjamin Netanyahu speaks at a ceremony to mark the centenary of the World War I battle for Be'er Sheva, Israel, October 31, 2017. Tomer Appelbaum The New Zealanders, earlier in the day on October 31, 1917, secured the hill of Tel el Saba, known today in Hebrew as Tel Be'er Sheva, with a cavalry charge and bayonets drawn. There were also official commemorations for their fallen soldiers Tuesday.
Among those who had made the pilgrimage to Be'er Sheva was Judith Estall, 78, who travelled here from Australia to pay homage to her father, who was among the light horsemen. She said he did not speak much about the war to his family, but did speak about the friends he lost there. She visited the graves of three of his friends Tuesday. "My dad saw horrible things the four years he was away fighting," Estall said. David Wood was also at the reenactment. His great-grandfather, Alfred Joseph March, was among the 800 soldiers in the battle. Wood, a musician in the Australian army, said that March appeared to have had PTSD from his years fighting in World War I. He too did not say much about his wartime experiences, but when the war was brought up he would grow quiet. Even in his great-grandfather's leather-bound wartime diary, the descriptions were quite terse. He described overtaking the Turks in Be'er Sheva this way: "We took Be'er Sheva after a bit of a scrap." Israelis welcome members of the Australian Light Horse association as they ride their horses in Be'er Sheva on October 31, 2017. Tomer Appelbaum In Australia and New Zealand there is great pride in the memory of the Battle of Be'er Sheva. It is also a welcome story of military triumph after the devastating losses suffered at Gallipoli, Turkey, earlier in the war. Amid mortar shells and machine-gun fire, 800 light horsemen charged some 20,000 feet over open ground towards the Turkish forces, who were caught by surprise. They had been expecting the British imperial troops to attack the holding line from Gaza. Nor did they expect the light horsemen, who were infantrymen who traveled by horse, to charge the whole way on horseback; usually they would dismount at a certain distance from the trenches at the enemy line. The subterfuge worked. The battle that began at dusk left Be'er Sheva in British imperial hands; 31 Australians were killed and over a hundred Turkish soldiers were left dead. Be'er Sheva was a strategic city to capture, for beyond it lay the prize of Jerusalem and other cities and towns of the Holy Land, including Jericho, Nazareth and Bethlehem. Damascus and other Syrian cities were also taken within the year. An Australian wears a World War I uniform as he walks among the graves during a ceremony to mark the centenary of the World War I battle for Be'er Sheva, Oct. 31, 2017. Tomer Appelbaum But victory that day one hundred years ago in Be'er Sheva was also driven by thirst: the thirst of the hundreds of horses that by that point had not drunk any water for as long as two days. Once they broke through the line at Be'er Sheva, the horses – and the men – could drink from its wells. "It's a very important part of Australian identity," Welsch, the reenactor, said.