import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int a = scan.nextInt();
        // Collect the distinct strings among the a inputs.
        List<String> list = new ArrayList<>();
        for (int i = 0; i < a; i++) {
            String str = scan.next();
            if (!list.contains(str)) {
                list.add(str);
            }
        }
        System.out.println(list.size() == 4 ? "Four" : "Three");
        scan.close();
    }
}
Luteinizing hormone pulse frequency and amplitude in azoospermic, oligozoospermic and normal fertile men in Turkey.

AIM To investigate the LH pulse frequency and amplitude in azoospermic and oligozoospermic patients and to compare them with normal fertile subjects.

METHODS In this controlled clinical study, 10 normal fertile male volunteers and 20 infertile patients (10 oligozoospermic and 10 azoospermic) were enrolled. Blood samples were taken every 30 minutes for 12 hours, and FSH, LH and T levels were determined. LH was measured in all blood samples, while FSH and testosterone were measured only in the first, middle and last samples.

RESULTS The mean LH levels were significantly different between all the groups, but there was no statistical difference in the FSH levels between the fertile and oligozoospermic groups. The mean LH levels increased from the fertile towards the azoospermic groups (P<0.01). The LH pulse amplitude and frequency were significantly different among all 3 groups: the amplitude increased while the frequency decreased from the fertile to the azoospermic group. The T levels differed statistically only between the fertile and the azoospermic groups.

CONCLUSION The more prominent the testicular defect, the lower the LH pulse frequency and the higher the amplitude.
Since Trump’s Election, Increased Attention to Politics – Especially Among Women

Most find it stressful to talk politics with those who differ on Trump

Survey Report

Following an election that had one of the largest gender gaps in history, women are more likely than men to say they are paying increased attention to politics. And while far more Democrats than Republicans say they have attended a political event, rally or protest since the election, Democratic women – especially younger women and those with postgraduate degrees – are among the most likely to have participated in such a political gathering.

The latest national survey by Pew Research Center, conducted June 27 to July 9 among 2,505 adults, finds that 52% of Americans say they are paying more attention to politics since Donald Trump’s election; 33% say they are paying about the same amount of attention, while 13% say they are paying less attention to politics. Nearly six-in-ten women (58%) say they are paying increased attention to politics since Trump’s election, compared with 46% of men.

Overall, more Democrats and Democratic-leaning independents than Republicans and Republican leaners say they have become more attentive to politics. But there are similarly wide gender gaps in heightened interest in politics among members of both parties: 63% of Democratic women say they are more attentive to politics, compared with 51% of Democratic men. Among Republicans, 54% of women and 43% of men say the same.

Among the public overall, 15% say they have attended a political event, rally or protest since the election – with two-thirds (67%) of this group saying they have done so to oppose Trump or his policies. Democrats are about three times as likely as Republicans to say they have attended a political event (22% vs. 7%). Among Democrats, there are gender, age, race and education differences in the shares saying they have participated in a political event, rally or protest.
And even within several groups of Democrats, there are sizable gender differences: For instance, while Democrats with postgraduate degrees are more likely than less educated Democrats to have attended a political event or protest, 43% of Democratic women postgraduates say they have done so, compared with 30% of Democratic men with advanced degrees.

The new survey also finds that, nearly nine months after the election, most people (59%) say it is “stressful and frustrating” to talk about politics with people who have a different opinion of Trump than they do; just 35% find such conversations “interesting and informative.” On the other hand, relatively few say that knowing that a friend had voted for Trump or Clinton would strain their friendship – just 19% say that knowing a friend backed Trump would strain their friendship, while only 7% say the same about learning a friend had voted for Hillary Clinton.

The survey also finds that, despite the nation’s deep political divisions, majorities of both Republicans (56%) and Democrats (59%) say that even though people in the opposing party feel differently about politics, they share “many of my other values and goals.”

How politics impacts conversations and friendships

A majority of the public finds talking with people who have a different opinion from their own about Donald Trump to be a stressful and frustrating experience: About six-in-ten (59%) say it is stressful and frustrating, while about a third (35%) say it is interesting and informative.

Democrats feel more negatively about talking politics with people who have a different opinion of the president than do Republicans. A large majority of Democrats and Democratic-leaning independents – nearly seven-in-ten (68%) – say they find it to be stressful and frustrating to talk to people with different opinions of Trump. Among Republicans and Republican leaners, fewer (52%) say they find this to be stressful and frustrating.
White Democrats and Democratic leaners are more likely than black and Hispanic Democrats to say it is stressful and frustrating to talk to people with different opinions of Trump. About three-quarters of white Democrats (74%) say it is frustrating, compared with 56% of black Democrats and 61% of Hispanic Democrats.

Overall, more women (64%) than men (54%) say talking to people with a different opinion of Trump is stressful and frustrating. And adults under 30 are more likely than those 30 and older to say they find these discussions interesting and informative (42% vs. 33%).

Most of the public says learning that a friend voted for Donald Trump or Hillary Clinton would not have any effect on their friendships. About one-in-five (19%) say that knowing a friend had voted for Trump would put a strain on their friendship; 7% say knowing a friend had voted for Clinton would strain their friendship.

About a third (35%) of Democrats and Democratic leaners say that, if a friend had voted for Trump, it would “put a strain on [the] friendship;” a smaller share of Republicans and Republican leaners (13%) say the same about learning a friend had voted for Clinton. Few Democrats and Republicans say a friend voting for their party’s candidate last fall would make a friendship stronger: 13% of Republicans and Republican leaners say a friend voting for Trump would make the friendship stronger, and 12% of Democrats and Democratic leaners say the same.

Among Democrats and Democratic leaners, whites, college graduates and liberals are among the most likely to say knowing a friend voted for Trump would strain their friendship. While 40% of white Democrats and Democratic leaners say this, fewer black (28%) and Hispanic (25%) Democrats say the same.
Similarly, there is a 17-percentage-point gap between the share of Democrats with a college degree or more education (44%) and the share with no more than a high school education (27%) saying a friend voting for Trump would put a strain on the friendship.

Democrats are also divided along ideological lines on whether a vote for Trump would strain a friendship. Liberal Democrats are about evenly divided between saying their friendship would be strained (47%) if a friend said they voted for Trump and saying it would not have any effect (51%). Far more conservative and moderate Democrats say a friend voting for Trump would not have any effect (73%) than say it would put a strain on the friendship (25%).

Republicans and Democrats see shared non-political values

Despite their political differences, most Republicans and Democrats stop short of saying that people in the other party do not share their other values and goals beyond politics. In both parties, about four-in-ten (41% of Republicans and 38% of Democrats) say that members of the opposing party “feel differently about politics, and they probably don’t share many of my other values and goals either.” (Note: these questions are based on partisans and do not include those who lean toward the parties.)

Majorities in both parties say the other side probably shares their other values and goals: Nearly six-in-ten Democrats (59%) say this about Republicans, while 56% of Republicans say it about Democrats. While these views are little changed from 2013, in 2007, 53% of Republicans and 51% of Democrats said that members of the opposing party did not share many of their goals and values outside of politics.

There is a significant ideological divide among Republicans over whether Democrats share their other values and goals. About half of conservative Republicans (47%) say Democrats don’t share their other values and goals. By contrast, about a quarter of moderate and liberal Republicans (26%) say the same.
Among Democrats, there is only a modest difference in these views by ideology: 35% of conservative and moderate Democrats say Republicans don’t share their other values and goals; 42% of liberal Democrats say the same.
def scale_iterable(iterable, scale_factor):
    """Scale every element of a list or tuple by scale_factor."""
    if isinstance(iterable, list):
        scaled_iterable = [val * scale_factor for val in iterable]
    elif isinstance(iterable, tuple):
        scaled_iterable = tuple(val * scale_factor for val in iterable)
    else:
        # Without this branch, scaled_iterable would be unbound for any
        # other input type and the return would raise UnboundLocalError.
        raise TypeError("scale_iterable expects a list or a tuple")
    return scaled_iterable
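A quick, hypothetical usage sketch of the function above. The body is restated so the example is self-contained, and the explicit `TypeError` for unsupported input types is an assumption added here (the original silently leaves the result unbound for other types):

```python
# Self-contained sketch of scale_iterable; the TypeError branch for
# unsupported inputs is an assumption, not part of the original.
def scale_iterable(iterable, scale_factor):
    if isinstance(iterable, list):
        return [val * scale_factor for val in iterable]
    if isinstance(iterable, tuple):
        return tuple(val * scale_factor for val in iterable)
    raise TypeError("scale_iterable expects a list or a tuple")

print(scale_iterable([1, 2, 3], 2))    # [2, 4, 6]
print(scale_iterable((1.5, 2.5), 10))  # (15.0, 25.0)
```

Note that the result type mirrors the input type: a list comes back as a list, a tuple as a tuple.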
package redis

import (
	"fmt"
	"os"
	"time"

	"github.com/go-redis/redis"
)

// Storage is the interface that needs to be implemented for caching capabilities.
type Storage interface {
	Get(key string) ([]byte, error)
	Set(key string, content []byte, duration time.Duration) error
	Delete(key string) error
	Exists(key string) (bool, error)
	Expire(key string, seconds time.Duration) error
}

// NewRedisStore creates new redis store
func NewRedisStore() (Storage, error) {
	r, err := newRedisClient()
	if err != nil {
		return nil, err
	}
	return &redisStore{r}, nil
}

// newRedisClient creates new redis client
func newRedisClient() (*redis.Client, error) {
	opts := &redis.Options{
		Addr: fmt.Sprintf("%s:%s", os.Getenv("REDIS_HOST"), os.Getenv("REDIS_PORT")),
		DB:   0,
	}
	if os.Getenv("REDIS_PASS") != "" {
		opts.Password = os.Getenv("REDIS_PASS")
	}
	client := redis.NewClient(opts)
	_, err := client.Ping().Result()
	if err != nil {
		// try without password
		opts.Password = ""
		client = redis.NewClient(opts)
		_, err = client.Ping().Result()
		if err != nil {
			return nil, err
		}
	}
	return client, nil
}
/* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 and * only version 2 as published by the Free Software Foundation. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. */ #include <linux/list_sort.h> #include <linux/msm-bus-board.h> #include <linux/msm_bus_rules.h> #include <linux/slab.h> #include <linux/types.h> #include <trace/events/trace_msm_bus.h> struct node_vote_info { int id; u64 ib; u64 ab; u64 clk; }; struct rules_def { int rule_id; int num_src; int state; struct node_vote_info *src_info; struct bus_rule_type rule_ops; bool state_change; struct list_head link; }; struct rule_node_info { int id; void *data; struct raw_notifier_head rule_notify_list; struct rules_def *cur_rule; int num_rules; struct list_head node_rules; struct list_head link; struct rule_apply_rcm_info apply; }; DEFINE_MUTEX(msm_bus_rules_lock); static LIST_HEAD(node_list); static struct rule_node_info *get_node(u32 id, void *data); static int node_rules_compare(void *priv, struct list_head *a, struct list_head *b); #define LE(op1, op2) (op1 <= op2) #define LT(op1, op2) (op1 < op2) #define GE(op1, op2) (op1 >= op2) #define GT(op1, op2) (op1 > op2) #define NB_ID (0x201) static struct rule_node_info *get_node(u32 id, void *data) { struct rule_node_info *node_it = NULL; struct rule_node_info *node_match = NULL; list_for_each_entry(node_it, &node_list, link) { if (node_it->id == id) { if ((id == NB_ID)) { if ((node_it->data == data)) { node_match = node_it; break; } } else { node_match = node_it; break; } } } return node_match; } static struct rule_node_info *gen_node(u32 id, void *data) { struct rule_node_info *node_it = NULL; struct rule_node_info 
*node_match = NULL; list_for_each_entry(node_it, &node_list, link) { if (node_it->id == id) { node_match = node_it; break; } } if (!node_match) { node_match = kzalloc(sizeof(struct rule_node_info), GFP_KERNEL); if (!node_match) { pr_err("%s: Cannot allocate memory", __func__); goto exit_node_match; } node_match->id = id; node_match->cur_rule = NULL; node_match->num_rules = 0; node_match->data = data; list_add_tail(&node_match->link, &node_list); INIT_LIST_HEAD(&node_match->node_rules); RAW_INIT_NOTIFIER_HEAD(&node_match->rule_notify_list); pr_debug("Added new node %d to list\n", id); } exit_node_match: return node_match; } static bool do_compare_op(u64 op1, u64 op2, int op) { bool ret = false; switch (op) { case OP_LE: ret = LE(op1, op2); break; case OP_LT: ret = LT(op1, op2); break; case OP_GT: ret = GT(op1, op2); break; case OP_GE: ret = GE(op1, op2); break; case OP_NOOP: ret = true; break; default: pr_info("Invalid OP %d", op); break; } return ret; } static void update_src_id_vote(struct rule_update_path_info *inp_node, struct rule_node_info *rule_node) { struct rules_def *rule; int i; list_for_each_entry(rule, &rule_node->node_rules, link) { for (i = 0; i < rule->num_src; i++) { if (rule->src_info[i].id == inp_node->id) { rule->src_info[i].ib = inp_node->ib; rule->src_info[i].ab = inp_node->ab; rule->src_info[i].clk = inp_node->clk; } } } } static u64 get_field(struct rules_def *rule, int src_id) { u64 field = 0; int i; for (i = 0; i < rule->num_src; i++) { switch (rule->rule_ops.src_field) { case FLD_IB: field += rule->src_info[i].ib; break; case FLD_AB: field += rule->src_info[i].ab; break; case FLD_CLK: field += rule->src_info[i].clk; break; } } return field; } static bool check_rule(struct rules_def *rule, struct rule_update_path_info *inp) { bool ret = false; if (!rule) return ret; switch (rule->rule_ops.op) { case OP_LE: case OP_LT: case OP_GT: case OP_GE: { u64 src_field = get_field(rule, inp->id); ret = do_compare_op(src_field, rule->rule_ops.thresh, 
rule->rule_ops.op); break; } default: pr_err("Unsupported op %d", rule->rule_ops.op); break; } return ret; } static void match_rule(struct rule_update_path_info *inp_node, struct rule_node_info *node) { struct rules_def *rule; int i; list_for_each_entry(rule, &node->node_rules, link) { for (i = 0; i < rule->num_src; i++) { if (rule->src_info[i].id == inp_node->id) { if (check_rule(rule, inp_node)) { trace_bus_rules_matches( (node->cur_rule ? node->cur_rule->rule_id : -1), inp_node->id, inp_node->ab, inp_node->ib, inp_node->clk); if (rule->state == RULE_STATE_NOT_APPLIED) rule->state_change = true; rule->state = RULE_STATE_APPLIED; } else { if (rule->state == RULE_STATE_APPLIED) rule->state_change = true; rule->state = RULE_STATE_NOT_APPLIED; } } } } } static void apply_rule(struct rule_node_info *node, struct list_head *output_list) { struct rules_def *rule; struct rules_def *last_rule; last_rule = node->cur_rule; node->cur_rule = NULL; list_for_each_entry(rule, &node->node_rules, link) { if ((rule->state == RULE_STATE_APPLIED) && !node->cur_rule) node->cur_rule = rule; if (node->id == NB_ID) { if (rule->state_change) { rule->state_change = false; raw_notifier_call_chain(&node->rule_notify_list, rule->state, (void *)&rule->rule_ops); } } else { if ((rule->state == RULE_STATE_APPLIED) && (node->cur_rule && (node->cur_rule->rule_id == rule->rule_id))) { node->apply.id = rule->rule_ops.dst_node[0]; node->apply.throttle = rule->rule_ops.mode; node->apply.lim_bw = rule->rule_ops.dst_bw; node->apply.after_clk_commit = false; if (last_rule != node->cur_rule) list_add_tail(&node->apply.link, output_list); if (last_rule) { if (node_rules_compare(NULL, &last_rule->link, &node->cur_rule->link) == -1) node->apply.after_clk_commit = true; } } rule->state_change = false; } } } int msm_rules_update_path(struct list_head *input_list, struct list_head *output_list) { int ret = 0; struct rule_update_path_info *inp_node; struct rule_node_info *node_it = NULL; 
mutex_lock(&msm_bus_rules_lock); list_for_each_entry(inp_node, input_list, link) { list_for_each_entry(node_it, &node_list, link) { update_src_id_vote(inp_node, node_it); match_rule(inp_node, node_it); } } list_for_each_entry(node_it, &node_list, link) apply_rule(node_it, output_list); mutex_unlock(&msm_bus_rules_lock); return ret; } static bool ops_equal(int op1, int op2) { bool ret = false; switch (op1) { case OP_GT: case OP_GE: case OP_LT: case OP_LE: if (abs(op1 - op2) <= 1) ret = true; break; default: ret = (op1 == op2); } return ret; } static bool is_throttle_rule(int mode) { bool ret = true; if (mode == THROTTLE_OFF) ret = false; return ret; } static int node_rules_compare(void *priv, struct list_head *a, struct list_head *b) { struct rules_def *ra = container_of(a, struct rules_def, link); struct rules_def *rb = container_of(b, struct rules_def, link); int ret = -1; int64_t th_diff = 0; if (ra->rule_ops.mode == rb->rule_ops.mode) { if (ops_equal(ra->rule_ops.op, rb->rule_ops.op)) { if ((ra->rule_ops.op == OP_LT) || (ra->rule_ops.op == OP_LE)) { th_diff = ra->rule_ops.thresh - rb->rule_ops.thresh; if (th_diff > 0) ret = 1; else ret = -1; } else if ((ra->rule_ops.op == OP_GT) || (ra->rule_ops.op == OP_GE)) { th_diff = rb->rule_ops.thresh - ra->rule_ops.thresh; if (th_diff > 0) ret = 1; else ret = -1; } } else ret = ra->rule_ops.op - rb->rule_ops.op; } else if (is_throttle_rule(ra->rule_ops.mode) && is_throttle_rule(rb->rule_ops.mode)) { if (ra->rule_ops.mode == THROTTLE_ON) ret = -1; else ret = 1; } else if ((ra->rule_ops.mode == THROTTLE_OFF) && is_throttle_rule(rb->rule_ops.mode)) { ret = 1; } else if (is_throttle_rule(ra->rule_ops.mode) && (rb->rule_ops.mode == THROTTLE_OFF)) { ret = -1; } return ret; } static void print_rules(struct rule_node_info *node_it) { struct rules_def *node_rule = NULL; int i; if (!node_it) { pr_err("%s: no node for found", __func__); return; } pr_info("\n Now printing rules for Node %d cur rule %d\n", node_it->id, 
(node_it->cur_rule ? node_it->cur_rule->rule_id : -1)); list_for_each_entry(node_rule, &node_it->node_rules, link) { pr_info("\n num Rules %d rule Id %d\n", node_it->num_rules, node_rule->rule_id); pr_info("Rule: src_field %d\n", node_rule->rule_ops.src_field); for (i = 0; i < node_rule->rule_ops.num_src; i++) pr_info("Rule: src %d\n", node_rule->rule_ops.src_id[i]); for (i = 0; i < node_rule->rule_ops.num_dst; i++) pr_info("Rule: dst %d dst_bw %llu\n", node_rule->rule_ops.dst_node[i], node_rule->rule_ops.dst_bw); pr_info("Rule: thresh %llu op %d mode %d State %d\n", node_rule->rule_ops.thresh, node_rule->rule_ops.op, node_rule->rule_ops.mode, node_rule->state); } } void print_all_rules(void) { struct rule_node_info *node_it = NULL; list_for_each_entry(node_it, &node_list, link) print_rules(node_it); } void print_rules_buf(char *buf, int max_buf) { struct rule_node_info *node_it = NULL; struct rules_def *node_rule = NULL; int i; int cnt = 0; list_for_each_entry(node_it, &node_list, link) { cnt += scnprintf(buf + cnt, max_buf - cnt, "\n Now printing rules for Node %d cur_rule %d\n", node_it->id, (node_it->cur_rule ? 
node_it->cur_rule->rule_id : -1)); list_for_each_entry(node_rule, &node_it->node_rules, link) { cnt += scnprintf(buf + cnt, max_buf - cnt, "\nNum Rules:%d ruleId %d STATE:%d change:%d\n", node_it->num_rules, node_rule->rule_id, node_rule->state, node_rule->state_change); cnt += scnprintf(buf + cnt, max_buf - cnt, "Src_field %d\n", node_rule->rule_ops.src_field); for (i = 0; i < node_rule->rule_ops.num_src; i++) cnt += scnprintf(buf + cnt, max_buf - cnt, "Src %d Cur Ib %llu Ab %llu\n", node_rule->rule_ops.src_id[i], node_rule->src_info[i].ib, node_rule->src_info[i].ab); for (i = 0; i < node_rule->rule_ops.num_dst; i++) cnt += scnprintf(buf + cnt, max_buf - cnt, "Dst %d dst_bw %llu\n", node_rule->rule_ops.dst_node[0], node_rule->rule_ops.dst_bw); cnt += scnprintf(buf + cnt, max_buf - cnt, "Thresh %llu op %d mode %d\n", node_rule->rule_ops.thresh, node_rule->rule_ops.op, node_rule->rule_ops.mode); } } } static int copy_rule(struct bus_rule_type *src, struct rules_def *node_rule, struct notifier_block *nb) { int i; int ret = 0; memcpy(&node_rule->rule_ops, src, sizeof(struct bus_rule_type)); node_rule->rule_ops.src_id = kzalloc( (sizeof(int) * node_rule->rule_ops.num_src), GFP_KERNEL); if (!node_rule->rule_ops.src_id) { pr_err("%s:Failed to allocate for src_id", __func__); return -ENOMEM; } memcpy(node_rule->rule_ops.src_id, src->src_id, sizeof(int) * src->num_src); if (!nb) { node_rule->rule_ops.dst_node = kzalloc( (sizeof(int) * node_rule->rule_ops.num_dst), GFP_KERNEL); if (!node_rule->rule_ops.dst_node) { pr_err("%s:Failed to allocate for src_id", __func__); return -ENOMEM; } memcpy(node_rule->rule_ops.dst_node, src->dst_node, sizeof(int) * src->num_dst); } node_rule->num_src = src->num_src; node_rule->src_info = kzalloc( (sizeof(struct node_vote_info) * node_rule->rule_ops.num_src), GFP_KERNEL); if (!node_rule->src_info) { pr_err("%s:Failed to allocate for src_id", __func__); return -ENOMEM; } for (i = 0; i < src->num_src; i++) node_rule->src_info[i].id = 
src->src_id[i]; return ret; } void msm_rule_register(int num_rules, struct bus_rule_type *rule, struct notifier_block *nb) { struct rule_node_info *node = NULL; int i, j; struct rules_def *node_rule = NULL; int num_dst = 0; if (!rule) return; mutex_lock(&msm_bus_rules_lock); for (i = 0; i < num_rules; i++) { if (nb) num_dst = 1; else num_dst = rule[i].num_dst; for (j = 0; j < num_dst; j++) { int id = 0; if (nb) id = NB_ID; else id = rule[i].dst_node[j]; node = gen_node(id, nb); if (!node) { pr_info("Error getting rule"); goto exit_rule_register; } node_rule = kzalloc(sizeof(struct rules_def), GFP_KERNEL); if (!node_rule) { pr_err("%s: Failed to allocate for rule", __func__); goto exit_rule_register; } if (copy_rule(&rule[i], node_rule, nb)) { pr_err("Error copying rule"); goto exit_rule_register; } node_rule->rule_id = node->num_rules++; if (nb) node->data = nb; list_add_tail(&node_rule->link, &node->node_rules); } } list_sort(NULL, &node->node_rules, node_rules_compare); if (nb) raw_notifier_chain_register(&node->rule_notify_list, nb); exit_rule_register: mutex_unlock(&msm_bus_rules_lock); return; } static int comp_rules(struct bus_rule_type *rulea, struct bus_rule_type *ruleb) { int ret = 1; if (rulea->num_src == ruleb->num_src) ret = memcmp(rulea->src_id, ruleb->src_id, (sizeof(int) * rulea->num_src)); if (!ret && (rulea->num_dst == ruleb->num_dst)) ret = memcmp(rulea->dst_node, ruleb->dst_node, (sizeof(int) * rulea->num_dst)); if (!ret && (rulea->dst_bw == ruleb->dst_bw) && (rulea->op == ruleb->op) && (rulea->thresh == ruleb->thresh)) ret = 0; return ret; } void msm_rule_unregister(int num_rules, struct bus_rule_type *rule, struct notifier_block *nb) { int i; struct rule_node_info *node = NULL; struct rule_node_info *node_tmp = NULL; struct rules_def *node_rule; struct rules_def *node_rule_tmp; bool match_found = false; if (!rule) return; mutex_lock(&msm_bus_rules_lock); if (nb) { node = get_node(NB_ID, nb); if (!node) { pr_err("%s: Can't find node", __func__); 
goto exit_unregister_rule; } list_for_each_entry_safe(node_rule, node_rule_tmp, &node->node_rules, link) { list_del(&node_rule->link); kfree(node_rule); node->num_rules--; } raw_notifier_chain_unregister(&node->rule_notify_list, nb); } else { for (i = 0; i < num_rules; i++) { match_found = false; list_for_each_entry(node, &node_list, link) { list_for_each_entry_safe(node_rule, node_rule_tmp, &node->node_rules, link) { if (comp_rules(&node_rule->rule_ops, &rule[i]) == 0) { list_del(&node_rule->link); kfree(node_rule); match_found = true; node->num_rules--; list_sort(NULL, &node->node_rules, node_rules_compare); break; } } } } } list_for_each_entry_safe(node, node_tmp, &node_list, link) { if (!node->num_rules) { pr_debug("Deleting Rule node %d", node->id); list_del(&node->link); kfree(node); } } exit_unregister_rule: mutex_unlock(&msm_bus_rules_lock); } bool msm_rule_are_rules_registered(void) { bool ret = false; if (list_empty(&node_list)) ret = false; else ret = true; return ret; }
""" this code is borrowed from https://github.com/ajbrock/BigGAN-PyTorch MIT License Copyright (c) 2019 Andy Brock Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
""" from os.path import dirname, exists, join, isfile import os from torch.utils.data import DataLoader from tqdm import tqdm import numpy as np import h5py as h5 from data_util import Dataset_ def make_hdf5(name, img_size, crop_long_edge, resize_size, data_dir, resizer, DATA, RUN): if resize_size is not None: file_name = "{dataset_name}_{size}_{resizer}_train.hdf5".format(dataset_name=name, size=img_size, resizer=resizer) else: file_name = "{dataset_name}_{size}_train.hdf5".format(dataset_name=name, size=img_size) file_path = join(data_dir, file_name) hdf5_dir = dirname(file_path) if not exists(hdf5_dir): os.makedirs(hdf5_dir) if os.path.isfile(file_path): print("{file_name} exist!\nThe file are located in the {file_path}.".format(file_name=file_name, file_path=file_path)) else: dataset = Dataset_(data_name=DATA.name, data_dir=RUN.data_dir, train=True, crop_long_edge=crop_long_edge, resize_size=resize_size, resizer=resizer, random_flip=False, normalize=False, hdf5_path=None, load_data_in_memory=False) dataloader = DataLoader(dataset, batch_size=500, shuffle=False, pin_memory=False, num_workers=RUN.num_workers, drop_last=False) print("Start to load {name} into an HDF5 file with chunk size 500.".format(name=name)) for i, (x, y) in enumerate(tqdm(dataloader)): x = np.transpose(x.numpy(), (0, 2, 3, 1)) y = y.numpy() if i == 0: with h5.File(file_path, "w") as f: print("Produce dataset of len {num_dataset}".format(num_dataset=len(dataset))) imgs_dset = f.create_dataset("imgs", x.shape, dtype="uint8", maxshape=(len(dataset), img_size, img_size, 3), chunks=(500, img_size, img_size, 3), compression=False) print("Image chunks chosen as {chunk}".format(chunk=str(imgs_dset.chunks))) imgs_dset[...] = x labels_dset = f.create_dataset("labels", y.shape, dtype="int64", maxshape=(len(dataloader.dataset), ), chunks=(500, ), compression=False) print("Label chunks chosen as {chunk}".format(chunk=str(labels_dset.chunks))) labels_dset[...] 
= y else: with h5.File(file_path, "a") as f: f["imgs"].resize(f["imgs"].shape[0] + x.shape[0], axis=0) f["imgs"][-x.shape[0]:] = x f["labels"].resize(f["labels"].shape[0] + y.shape[0], axis=0) f["labels"][-y.shape[0]:] = y return file_path, False, None
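The create-then-append pattern used by `make_hdf5` (create resizable datasets on the first batch, then `resize` and slice-assign for every later batch) can be sketched in isolation. This is a toy sketch with made-up shapes and a temporary file path, not the BigGAN pipeline itself:

```python
import os
import tempfile

import h5py as h5
import numpy as np

# Toy stand-in for the append pattern: three "batches" of 2 images each,
# 4x4 RGB. maxshape=(None, ...) makes the first axis unlimited.
path = os.path.join(tempfile.mkdtemp(), "toy.hdf5")
batches = [np.full((2, 4, 4, 3), i, dtype="uint8") for i in range(3)]

for i, x in enumerate(batches):
    if i == 0:
        with h5.File(path, "w") as f:
            f.create_dataset("imgs", x.shape, dtype="uint8",
                             maxshape=(None, 4, 4, 3), chunks=(2, 4, 4, 3))
            f["imgs"][...] = x
    else:
        with h5.File(path, "a") as f:
            f["imgs"].resize(f["imgs"].shape[0] + x.shape[0], axis=0)
            f["imgs"][-x.shape[0]:] = x  # write into the newly grown tail

with h5.File(path, "r") as f:
    print(f["imgs"].shape)  # (6, 4, 4, 3)
```

The same resize-and-slice idiom applies unchanged to the 1-D "labels" dataset.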
/**
 * @author Gidi Shabat
 */
public class MockArtifactoryServersCommonService implements ArtifactoryServersCommonService {

    public MockArtifactoryServersCommonService(ArtifactoryVersion version) {
    }

    @Nullable
    @Override
    public ArtifactoryServer getRunningHaPrimary() {
        return null;
    }

    @Override
    public ArtifactoryServer getCurrentMember() {
        return null;
    }

    @Override
    public List<ArtifactoryServer> getOtherActiveMembers() {
        return new ArrayList<>();
    }

    @Override
    public List<ArtifactoryServer> getActiveMembers() {
        return null;
    }

    @Override
    public List<ArtifactoryServer> getOtherRunningHaMembers() {
        return null;
    }

    @Override
    public ArtifactoryServer getArtifactoryServer(String serverId) {
        return null;
    }

    @Override
    public List<ArtifactoryServer> getAllArtifactoryServers() {
        return null;
    }

    @Override
    public void updateArtifactoryServerRole(String serverId, ArtifactoryServerRole newRole) {
    }

    @Override
    public void updateArtifactoryJoinPort(String serverId, int port) {
    }

    @Override
    public void updateArtifactoryServerState(ArtifactoryServer server, ArtifactoryServerState newState) {
    }

    @Override
    public void createArtifactoryServer(ArtifactoryServer artifactoryServer) {
    }

    @Override
    public void updateArtifactoryServer(ArtifactoryServer artifactoryServer) {
    }

    @Override
    public boolean removeServer(String serverId) {
        return false;
    }

    @Override
    public void updateArtifactoryServerHeartbeat(String serverId, long heartBeat) {
    }
}
/**
 * @author Nataliya Shurupova
 */
@Entity
@DiscriminatorValue(value = "2")
public class WeekdayBlackout extends BlackoutDate {
    // TODO: This ought to be the java.util.Calendar constant for the day, or a custom enum
    private String dayOfTheWeek;

    @Transient
    public String getDisplayName() {
        return getDayOfTheWeek();
    }

    @Transient
    public int getDayOfTheWeekInteger() {
        return mapDayNameToInteger(getDayOfTheWeek());
    }

    public String getDayOfTheWeek() {
        return this.dayOfTheWeek;
    }

    public void setDayOfTheWeek(String dayOfTheWeek) {
        this.dayOfTheWeek = dayOfTheWeek;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        WeekdayBlackout that = (WeekdayBlackout) o;
        if (dayOfTheWeek != null ? !dayOfTheWeek.equals(that.getDayOfTheWeek()) : that.getDayOfTheWeek() != null) {
            return false;
        }
        return true;
    }

    @Override
    public String toString() {
        StringBuffer sb = new StringBuffer();
        sb.append("Id = ");
        sb.append(getId());
        sb.append(" DayOfTheWeek = ");
        sb.append(getDayOfTheWeek());
        sb.append(super.toString());
        return sb.toString();
    }
}
/**
 * Select the previous collision object (in cyclical index order).
 */
public void selectPrevious() {
    if (isSelected()) {
        List<String> names = cgm.getPhysics().listPcoNames("");
        String selectedName = name();
        int index = names.indexOf(selectedName);
        assert index >= 0 : index;
        int numObjects = names.size();
        int newIndex = MyMath.modulo(index - 1, numObjects);
        selectedName = names.get(newIndex);
        select(selectedName);
    }
}
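The wrap-around step above depends on a modulo that stays non-negative when `index - 1` goes negative (which is why the Java code reaches for `MyMath.modulo` rather than `%`). Python's `%` already has that behavior, so the cyclic "previous" selection can be sketched as (function and variable names here are illustrative, not from the original codebase):

```python
# Cyclic "select previous" over a list of names: (index - 1) % n wraps
# from the first element back to the last. Python's % returns a
# non-negative result for a positive modulus, so no helper is needed.
def previous_name(names, selected):
    index = names.index(selected)
    return names[(index - 1) % len(names)]

names = ["floor", "ball", "wall"]
print(previous_name(names, "ball"))   # floor
print(previous_name(names, "floor"))  # wall (wraps around)
```

In Java, `%` can return a negative result for a negative dividend, so a floor-mod helper (or `Math.floorMod`) is the safe choice there.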
/* SPDX-License-Identifier: GPL-2.0+ */ /* * Copyright (C) 2016 Freescale Semiconductor, Inc. * Copyright 2017-2018 NXP */ #ifndef __DT_BINDINGS_RSCRC_IMX_H #define __DT_BINDINGS_RSCRC_IMX_H /* * These defines are used to indicate a resource. Resources include peripherals * and bus masters (but not memory regions). Note items from list should * never be changed or removed (only added to at the end of the list). */ #define IMX_SC_R_A53 0 #define IMX_SC_R_A53_0 1 #define IMX_SC_R_A53_1 2 #define IMX_SC_R_A53_2 3 #define IMX_SC_R_A53_3 4 #define IMX_SC_R_A72 5 #define IMX_SC_R_A72_0 6 #define IMX_SC_R_A72_1 7 #define IMX_SC_R_A72_2 8 #define IMX_SC_R_A72_3 9 #define IMX_SC_R_CCI 10 #define IMX_SC_R_DB 11 #define IMX_SC_R_DRC_0 12 #define IMX_SC_R_DRC_1 13 #define IMX_SC_R_GIC_SMMU 14 #define IMX_SC_R_IRQSTR_M4_0 15 #define IMX_SC_R_IRQSTR_M4_1 16 #define IMX_SC_R_SMMU 17 #define IMX_SC_R_GIC 18 #define IMX_SC_R_DC_0_BLIT0 19 #define IMX_SC_R_DC_0_BLIT1 20 #define IMX_SC_R_DC_0_BLIT2 21 #define IMX_SC_R_DC_0_BLIT_OUT 22 #define IMX_SC_R_PERF 23 #define IMX_SC_R_DC_0_WARP 25 #define IMX_SC_R_DC_0_VIDEO0 28 #define IMX_SC_R_DC_0_VIDEO1 29 #define IMX_SC_R_DC_0_FRAC0 30 #define IMX_SC_R_DC_0 32 #define IMX_SC_R_GPU_2_PID0 33 #define IMX_SC_R_DC_0_PLL_0 34 #define IMX_SC_R_DC_0_PLL_1 35 #define IMX_SC_R_DC_1_BLIT0 36 #define IMX_SC_R_DC_1_BLIT1 37 #define IMX_SC_R_DC_1_BLIT2 38 #define IMX_SC_R_DC_1_BLIT_OUT 39 #define IMX_SC_R_DC_1_WARP 42 #define IMX_SC_R_DC_1_VIDEO0 45 #define IMX_SC_R_DC_1_VIDEO1 46 #define IMX_SC_R_DC_1_FRAC0 47 #define IMX_SC_R_DC_1 49 #define IMX_SC_R_DC_1_PLL_0 51 #define IMX_SC_R_DC_1_PLL_1 52 #define IMX_SC_R_SPI_0 53 #define IMX_SC_R_SPI_1 54 #define IMX_SC_R_SPI_2 55 #define IMX_SC_R_SPI_3 56 #define IMX_SC_R_UART_0 57 #define IMX_SC_R_UART_1 58 #define IMX_SC_R_UART_2 59 #define IMX_SC_R_UART_3 60 #define IMX_SC_R_UART_4 61 #define IMX_SC_R_EMVSIM_0 62 #define IMX_SC_R_EMVSIM_1 63 #define IMX_SC_R_DMA_0_CH0 64 #define IMX_SC_R_DMA_0_CH1 65 
#define IMX_SC_R_DMA_0_CH2    66
#define IMX_SC_R_DMA_0_CH3    67
#define IMX_SC_R_DMA_0_CH4    68
#define IMX_SC_R_DMA_0_CH5    69
#define IMX_SC_R_DMA_0_CH6    70
#define IMX_SC_R_DMA_0_CH7    71
#define IMX_SC_R_DMA_0_CH8    72
#define IMX_SC_R_DMA_0_CH9    73
#define IMX_SC_R_DMA_0_CH10    74
#define IMX_SC_R_DMA_0_CH11    75
#define IMX_SC_R_DMA_0_CH12    76
#define IMX_SC_R_DMA_0_CH13    77
#define IMX_SC_R_DMA_0_CH14    78
#define IMX_SC_R_DMA_0_CH15    79
#define IMX_SC_R_DMA_0_CH16    80
#define IMX_SC_R_DMA_0_CH17    81
#define IMX_SC_R_DMA_0_CH18    82
#define IMX_SC_R_DMA_0_CH19    83
#define IMX_SC_R_DMA_0_CH20    84
#define IMX_SC_R_DMA_0_CH21    85
#define IMX_SC_R_DMA_0_CH22    86
#define IMX_SC_R_DMA_0_CH23    87
#define IMX_SC_R_DMA_0_CH24    88
#define IMX_SC_R_DMA_0_CH25    89
#define IMX_SC_R_DMA_0_CH26    90
#define IMX_SC_R_DMA_0_CH27    91
#define IMX_SC_R_DMA_0_CH28    92
#define IMX_SC_R_DMA_0_CH29    93
#define IMX_SC_R_DMA_0_CH30    94
#define IMX_SC_R_DMA_0_CH31    95
#define IMX_SC_R_I2C_0    96
#define IMX_SC_R_I2C_1    97
#define IMX_SC_R_I2C_2    98
#define IMX_SC_R_I2C_3    99
#define IMX_SC_R_I2C_4    100
#define IMX_SC_R_ADC_0    101
#define IMX_SC_R_ADC_1    102
#define IMX_SC_R_FTM_0    103
#define IMX_SC_R_FTM_1    104
#define IMX_SC_R_CAN_0    105
#define IMX_SC_R_CAN_1    106
#define IMX_SC_R_CAN_2    107
#define IMX_SC_R_CAN(x)    (IMX_SC_R_CAN_0 + (x))
#define IMX_SC_R_DMA_1_CH0    108
#define IMX_SC_R_DMA_1_CH1    109
#define IMX_SC_R_DMA_1_CH2    110
#define IMX_SC_R_DMA_1_CH3    111
#define IMX_SC_R_DMA_1_CH4    112
#define IMX_SC_R_DMA_1_CH5    113
#define IMX_SC_R_DMA_1_CH6    114
#define IMX_SC_R_DMA_1_CH7    115
#define IMX_SC_R_DMA_1_CH8    116
#define IMX_SC_R_DMA_1_CH9    117
#define IMX_SC_R_DMA_1_CH10    118
#define IMX_SC_R_DMA_1_CH11    119
#define IMX_SC_R_DMA_1_CH12    120
#define IMX_SC_R_DMA_1_CH13    121
#define IMX_SC_R_DMA_1_CH14    122
#define IMX_SC_R_DMA_1_CH15    123
#define IMX_SC_R_DMA_1_CH16    124
#define IMX_SC_R_DMA_1_CH17    125
#define IMX_SC_R_DMA_1_CH18    126
#define IMX_SC_R_DMA_1_CH19    127
#define IMX_SC_R_DMA_1_CH20    128
#define IMX_SC_R_DMA_1_CH21    129
#define IMX_SC_R_DMA_1_CH22    130
#define IMX_SC_R_DMA_1_CH23    131
#define IMX_SC_R_DMA_1_CH24    132
#define IMX_SC_R_DMA_1_CH25    133
#define IMX_SC_R_DMA_1_CH26    134
#define IMX_SC_R_DMA_1_CH27    135
#define IMX_SC_R_DMA_1_CH28    136
#define IMX_SC_R_DMA_1_CH29    137
#define IMX_SC_R_DMA_1_CH30    138
#define IMX_SC_R_DMA_1_CH31    139
#define IMX_SC_R_UNUSED1    140
#define IMX_SC_R_UNUSED2    141
#define IMX_SC_R_UNUSED3    142
#define IMX_SC_R_UNUSED4    143
#define IMX_SC_R_GPU_0_PID0    144
#define IMX_SC_R_GPU_0_PID1    145
#define IMX_SC_R_GPU_0_PID2    146
#define IMX_SC_R_GPU_0_PID3    147
#define IMX_SC_R_GPU_1_PID0    148
#define IMX_SC_R_GPU_1_PID1    149
#define IMX_SC_R_GPU_1_PID2    150
#define IMX_SC_R_GPU_1_PID3    151
#define IMX_SC_R_PCIE_A    152
#define IMX_SC_R_SERDES_0    153
#define IMX_SC_R_MATCH_0    154
#define IMX_SC_R_MATCH_1    155
#define IMX_SC_R_MATCH_2    156
#define IMX_SC_R_MATCH_3    157
#define IMX_SC_R_MATCH_4    158
#define IMX_SC_R_MATCH_5    159
#define IMX_SC_R_MATCH_6    160
#define IMX_SC_R_MATCH_7    161
#define IMX_SC_R_MATCH_8    162
#define IMX_SC_R_MATCH_9    163
#define IMX_SC_R_MATCH_10    164
#define IMX_SC_R_MATCH_11    165
#define IMX_SC_R_MATCH_12    166
#define IMX_SC_R_MATCH_13    167
#define IMX_SC_R_MATCH_14    168
#define IMX_SC_R_PCIE_B    169
#define IMX_SC_R_SATA_0    170
#define IMX_SC_R_SERDES_1    171
#define IMX_SC_R_HSIO_GPIO    172
#define IMX_SC_R_MATCH_15    173
#define IMX_SC_R_MATCH_16    174
#define IMX_SC_R_MATCH_17    175
#define IMX_SC_R_MATCH_18    176
#define IMX_SC_R_MATCH_19    177
#define IMX_SC_R_MATCH_20    178
#define IMX_SC_R_MATCH_21    179
#define IMX_SC_R_MATCH_22    180
#define IMX_SC_R_MATCH_23    181
#define IMX_SC_R_MATCH_24    182
#define IMX_SC_R_MATCH_25    183
#define IMX_SC_R_MATCH_26    184
#define IMX_SC_R_MATCH_27    185
#define IMX_SC_R_MATCH_28    186
#define IMX_SC_R_LCD_0    187
#define IMX_SC_R_LCD_0_PWM_0    188
#define IMX_SC_R_LCD_0_I2C_0    189
#define IMX_SC_R_LCD_0_I2C_1    190
#define IMX_SC_R_PWM_0    191
#define IMX_SC_R_PWM_1    192
#define IMX_SC_R_PWM_2    193
#define IMX_SC_R_PWM_3    194
#define IMX_SC_R_PWM_4    195
#define IMX_SC_R_PWM_5    196
#define IMX_SC_R_PWM_6    197
#define IMX_SC_R_PWM_7    198
#define IMX_SC_R_GPIO_0    199
#define IMX_SC_R_GPIO_1    200
#define IMX_SC_R_GPIO_2    201
#define IMX_SC_R_GPIO_3    202
#define IMX_SC_R_GPIO_4    203
#define IMX_SC_R_GPIO_5    204
#define IMX_SC_R_GPIO_6    205
#define IMX_SC_R_GPIO_7    206
#define IMX_SC_R_GPT_0    207
#define IMX_SC_R_GPT_1    208
#define IMX_SC_R_GPT_2    209
#define IMX_SC_R_GPT_3    210
#define IMX_SC_R_GPT_4    211
#define IMX_SC_R_KPP    212
#define IMX_SC_R_MU_0A    213
#define IMX_SC_R_MU_1A    214
#define IMX_SC_R_MU_2A    215
#define IMX_SC_R_MU_3A    216
#define IMX_SC_R_MU_4A    217
#define IMX_SC_R_MU_5A    218
#define IMX_SC_R_MU_6A    219
#define IMX_SC_R_MU_7A    220
#define IMX_SC_R_MU_8A    221
#define IMX_SC_R_MU_9A    222
#define IMX_SC_R_MU_10A    223
#define IMX_SC_R_MU_11A    224
#define IMX_SC_R_MU_12A    225
#define IMX_SC_R_MU_13A    226
#define IMX_SC_R_MU_5B    227
#define IMX_SC_R_MU_6B    228
#define IMX_SC_R_MU_7B    229
#define IMX_SC_R_MU_8B    230
#define IMX_SC_R_MU_9B    231
#define IMX_SC_R_MU_10B    232
#define IMX_SC_R_MU_11B    233
#define IMX_SC_R_MU_12B    234
#define IMX_SC_R_MU_13B    235
#define IMX_SC_R_ROM_0    236
#define IMX_SC_R_FSPI_0    237
#define IMX_SC_R_FSPI_1    238
#define IMX_SC_R_IEE    239
#define IMX_SC_R_IEE_R0    240
#define IMX_SC_R_IEE_R1    241
#define IMX_SC_R_IEE_R2    242
#define IMX_SC_R_IEE_R3    243
#define IMX_SC_R_IEE_R4    244
#define IMX_SC_R_IEE_R5    245
#define IMX_SC_R_IEE_R6    246
#define IMX_SC_R_IEE_R7    247
#define IMX_SC_R_SDHC_0    248
#define IMX_SC_R_SDHC_1    249
#define IMX_SC_R_SDHC_2    250
#define IMX_SC_R_ENET_0    251
#define IMX_SC_R_ENET_1    252
#define IMX_SC_R_MLB_0    253
#define IMX_SC_R_DMA_2_CH0    254
#define IMX_SC_R_DMA_2_CH1    255
#define IMX_SC_R_DMA_2_CH2    256
#define IMX_SC_R_DMA_2_CH3    257
#define IMX_SC_R_DMA_2_CH4    258
#define IMX_SC_R_USB_0    259
#define IMX_SC_R_USB_1    260
#define IMX_SC_R_USB_0_PHY    261
#define IMX_SC_R_USB_2    262
#define IMX_SC_R_USB_2_PHY    263
#define IMX_SC_R_DTCP    264
#define IMX_SC_R_NAND    265
#define IMX_SC_R_LVDS_0    266
#define IMX_SC_R_LVDS_0_PWM_0    267
#define IMX_SC_R_LVDS_0_I2C_0    268
#define IMX_SC_R_LVDS_0_I2C_1    269
#define IMX_SC_R_LVDS_1    270
#define IMX_SC_R_LVDS_1_PWM_0    271
#define IMX_SC_R_LVDS_1_I2C_0    272
#define IMX_SC_R_LVDS_1_I2C_1    273
#define IMX_SC_R_LVDS_2    274
#define IMX_SC_R_LVDS_2_PWM_0    275
#define IMX_SC_R_LVDS_2_I2C_0    276
#define IMX_SC_R_LVDS_2_I2C_1    277
#define IMX_SC_R_M4_0_PID0    278
#define IMX_SC_R_M4_0_PID1    279
#define IMX_SC_R_M4_0_PID2    280
#define IMX_SC_R_M4_0_PID3    281
#define IMX_SC_R_M4_0_PID4    282
#define IMX_SC_R_M4_0_RGPIO    283
#define IMX_SC_R_M4_0_SEMA42    284
#define IMX_SC_R_M4_0_TPM    285
#define IMX_SC_R_M4_0_PIT    286
#define IMX_SC_R_M4_0_UART    287
#define IMX_SC_R_M4_0_I2C    288
#define IMX_SC_R_M4_0_INTMUX    289
#define IMX_SC_R_M4_0_MU_0B    292
#define IMX_SC_R_M4_0_MU_0A0    293
#define IMX_SC_R_M4_0_MU_0A1    294
#define IMX_SC_R_M4_0_MU_0A2    295
#define IMX_SC_R_M4_0_MU_0A3    296
#define IMX_SC_R_M4_0_MU_1A    297
#define IMX_SC_R_M4_1_PID0    298
#define IMX_SC_R_M4_1_PID1    299
#define IMX_SC_R_M4_1_PID2    300
#define IMX_SC_R_M4_1_PID3    301
#define IMX_SC_R_M4_1_PID4    302
#define IMX_SC_R_M4_1_RGPIO    303
#define IMX_SC_R_M4_1_SEMA42    304
#define IMX_SC_R_M4_1_TPM    305
#define IMX_SC_R_M4_1_PIT    306
#define IMX_SC_R_M4_1_UART    307
#define IMX_SC_R_M4_1_I2C    308
#define IMX_SC_R_M4_1_INTMUX    309
#define IMX_SC_R_M4_1_MU_0B    312
#define IMX_SC_R_M4_1_MU_0A0    313
#define IMX_SC_R_M4_1_MU_0A1    314
#define IMX_SC_R_M4_1_MU_0A2    315
#define IMX_SC_R_M4_1_MU_0A3    316
#define IMX_SC_R_M4_1_MU_1A    317
#define IMX_SC_R_SAI_0    318
#define IMX_SC_R_SAI_1    319
#define IMX_SC_R_SAI_2    320
#define IMX_SC_R_IRQSTR_SCU2    321
#define IMX_SC_R_IRQSTR_DSP    322
#define IMX_SC_R_ELCDIF_PLL    323
#define IMX_SC_R_OCRAM    324
#define IMX_SC_R_AUDIO_PLL_0    325
#define IMX_SC_R_PI_0    326
#define IMX_SC_R_PI_0_PWM_0    327
#define IMX_SC_R_PI_0_PWM_1    328
#define IMX_SC_R_PI_0_I2C_0    329
#define IMX_SC_R_PI_0_PLL    330
#define IMX_SC_R_PI_1    331
#define IMX_SC_R_PI_1_PWM_0    332
#define IMX_SC_R_PI_1_PWM_1    333
#define IMX_SC_R_PI_1_I2C_0    334
#define IMX_SC_R_PI_1_PLL    335
#define IMX_SC_R_SC_PID0    336
#define IMX_SC_R_SC_PID1    337
#define IMX_SC_R_SC_PID2    338
#define IMX_SC_R_SC_PID3    339
#define IMX_SC_R_SC_PID4    340
#define IMX_SC_R_SC_SEMA42    341
#define IMX_SC_R_SC_TPM    342
#define IMX_SC_R_SC_PIT    343
#define IMX_SC_R_SC_UART    344
#define IMX_SC_R_SC_I2C    345
#define IMX_SC_R_SC_MU_0B    346
#define IMX_SC_R_SC_MU_0A0    347
#define IMX_SC_R_SC_MU_0A1    348
#define IMX_SC_R_SC_MU_0A2    349
#define IMX_SC_R_SC_MU_0A3    350
#define IMX_SC_R_SC_MU_1A    351
#define IMX_SC_R_SYSCNT_RD    352
#define IMX_SC_R_SYSCNT_CMP    353
#define IMX_SC_R_DEBUG    354
#define IMX_SC_R_SYSTEM    355
#define IMX_SC_R_SNVS    356
#define IMX_SC_R_OTP    357
#define IMX_SC_R_VPU_PID0    358
#define IMX_SC_R_VPU_PID1    359
#define IMX_SC_R_VPU_PID2    360
#define IMX_SC_R_VPU_PID3    361
#define IMX_SC_R_VPU_PID4    362
#define IMX_SC_R_VPU_PID5    363
#define IMX_SC_R_VPU_PID6    364
#define IMX_SC_R_VPU_PID7    365
#define IMX_SC_R_VPU_UART    366
#define IMX_SC_R_VPUCORE    367
#define IMX_SC_R_VPUCORE_0    368
#define IMX_SC_R_VPUCORE_1    369
#define IMX_SC_R_VPUCORE_2    370
#define IMX_SC_R_VPUCORE_3    371
#define IMX_SC_R_DMA_4_CH0    372
#define IMX_SC_R_DMA_4_CH1    373
#define IMX_SC_R_DMA_4_CH2    374
#define IMX_SC_R_DMA_4_CH3    375
#define IMX_SC_R_DMA_4_CH4    376
#define IMX_SC_R_ISI_CH0    377
#define IMX_SC_R_ISI_CH1    378
#define IMX_SC_R_ISI_CH2    379
#define IMX_SC_R_ISI_CH3    380
#define IMX_SC_R_ISI_CH4    381
#define IMX_SC_R_ISI_CH5    382
#define IMX_SC_R_ISI_CH6    383
#define IMX_SC_R_ISI_CH7    384
#define IMX_SC_R_MJPEG_DEC_S0    385
#define IMX_SC_R_MJPEG_DEC_S1    386
#define IMX_SC_R_MJPEG_DEC_S2    387
#define IMX_SC_R_MJPEG_DEC_S3    388
#define IMX_SC_R_MJPEG_ENC_S0    389
#define IMX_SC_R_MJPEG_ENC_S1    390
#define IMX_SC_R_MJPEG_ENC_S2    391
#define IMX_SC_R_MJPEG_ENC_S3    392
#define IMX_SC_R_MIPI_0    393
#define IMX_SC_R_MIPI_0_PWM_0    394
#define IMX_SC_R_MIPI_0_I2C_0    395
#define IMX_SC_R_MIPI_0_I2C_1    396
#define IMX_SC_R_MIPI_1    397
#define IMX_SC_R_MIPI_1_PWM_0    398
#define IMX_SC_R_MIPI_1_I2C_0    399
#define IMX_SC_R_MIPI_1_I2C_1    400
#define IMX_SC_R_CSI_0    401
#define IMX_SC_R_CSI_0_PWM_0    402
#define IMX_SC_R_CSI_0_I2C_0    403
#define IMX_SC_R_CSI_1    404
#define IMX_SC_R_CSI_1_PWM_0    405
#define IMX_SC_R_CSI_1_I2C_0    406
#define IMX_SC_R_HDMI    407
#define IMX_SC_R_HDMI_I2S    408
#define IMX_SC_R_HDMI_I2C_0    409
#define IMX_SC_R_HDMI_PLL_0    410
#define IMX_SC_R_HDMI_RX    411
#define IMX_SC_R_HDMI_RX_BYPASS    412
#define IMX_SC_R_HDMI_RX_I2C_0    413
#define IMX_SC_R_ASRC_0    414
#define IMX_SC_R_ESAI_0    415
#define IMX_SC_R_SPDIF_0    416
#define IMX_SC_R_SPDIF_1    417
#define IMX_SC_R_SAI_3    418
#define IMX_SC_R_SAI_4    419
#define IMX_SC_R_SAI_5    420
#define IMX_SC_R_GPT_5    421
#define IMX_SC_R_GPT_6    422
#define IMX_SC_R_GPT_7    423
#define IMX_SC_R_GPT_8    424
#define IMX_SC_R_GPT_9    425
#define IMX_SC_R_GPT_10    426
#define IMX_SC_R_DMA_2_CH5    427
#define IMX_SC_R_DMA_2_CH6    428
#define IMX_SC_R_DMA_2_CH7    429
#define IMX_SC_R_DMA_2_CH8    430
#define IMX_SC_R_DMA_2_CH9    431
#define IMX_SC_R_DMA_2_CH10    432
#define IMX_SC_R_DMA_2_CH11    433
#define IMX_SC_R_DMA_2_CH12    434
#define IMX_SC_R_DMA_2_CH13    435
#define IMX_SC_R_DMA_2_CH14    436
#define IMX_SC_R_DMA_2_CH15    437
#define IMX_SC_R_DMA_2_CH16    438
#define IMX_SC_R_DMA_2_CH17    439
#define IMX_SC_R_DMA_2_CH18    440
#define IMX_SC_R_DMA_2_CH19    441
#define IMX_SC_R_DMA_2_CH20    442
#define IMX_SC_R_DMA_2_CH21    443
#define IMX_SC_R_DMA_2_CH22    444
#define IMX_SC_R_DMA_2_CH23    445
#define IMX_SC_R_DMA_2_CH24    446
#define IMX_SC_R_DMA_2_CH25    447
#define IMX_SC_R_DMA_2_CH26    448
#define IMX_SC_R_DMA_2_CH27    449
#define IMX_SC_R_DMA_2_CH28    450
#define IMX_SC_R_DMA_2_CH29    451
#define IMX_SC_R_DMA_2_CH30    452
#define IMX_SC_R_DMA_2_CH31    453
#define IMX_SC_R_ASRC_1    454
#define IMX_SC_R_ESAI_1    455
#define IMX_SC_R_SAI_6    456
#define IMX_SC_R_SAI_7    457
#define IMX_SC_R_AMIX    458
#define IMX_SC_R_MQS_0    459
#define IMX_SC_R_DMA_3_CH0    460
#define IMX_SC_R_DMA_3_CH1    461
#define IMX_SC_R_DMA_3_CH2    462
#define IMX_SC_R_DMA_3_CH3    463
#define IMX_SC_R_DMA_3_CH4    464
#define IMX_SC_R_DMA_3_CH5    465
#define IMX_SC_R_DMA_3_CH6    466
#define IMX_SC_R_DMA_3_CH7    467
#define IMX_SC_R_DMA_3_CH8    468
#define IMX_SC_R_DMA_3_CH9    469
#define IMX_SC_R_DMA_3_CH10    470
#define IMX_SC_R_DMA_3_CH11    471
#define IMX_SC_R_DMA_3_CH12    472
#define IMX_SC_R_DMA_3_CH13    473
#define IMX_SC_R_DMA_3_CH14    474
#define IMX_SC_R_DMA_3_CH15    475
#define IMX_SC_R_DMA_3_CH16    476
#define IMX_SC_R_DMA_3_CH17    477
#define IMX_SC_R_DMA_3_CH18    478
#define IMX_SC_R_DMA_3_CH19    479
#define IMX_SC_R_DMA_3_CH20    480
#define IMX_SC_R_DMA_3_CH21    481
#define IMX_SC_R_DMA_3_CH22    482
#define IMX_SC_R_DMA_3_CH23    483
#define IMX_SC_R_DMA_3_CH24    484
#define IMX_SC_R_DMA_3_CH25    485
#define IMX_SC_R_DMA_3_CH26    486
#define IMX_SC_R_DMA_3_CH27    487
#define IMX_SC_R_DMA_3_CH28    488
#define IMX_SC_R_DMA_3_CH29    489
#define IMX_SC_R_DMA_3_CH30    490
#define IMX_SC_R_DMA_3_CH31    491
#define IMX_SC_R_AUDIO_PLL_1    492
#define IMX_SC_R_AUDIO_CLK_0    493
#define IMX_SC_R_AUDIO_CLK_1    494
#define IMX_SC_R_MCLK_OUT_0    495
#define IMX_SC_R_MCLK_OUT_1    496
#define IMX_SC_R_PMIC_0    497
#define IMX_SC_R_PMIC_1    498
#define IMX_SC_R_SECO    499
#define IMX_SC_R_CAAM_JR1    500
#define IMX_SC_R_CAAM_JR2    501
#define IMX_SC_R_CAAM_JR3    502
#define IMX_SC_R_SECO_MU_2    503
#define IMX_SC_R_SECO_MU_3    504
#define IMX_SC_R_SECO_MU_4    505
#define IMX_SC_R_HDMI_RX_PWM_0    506
#define IMX_SC_R_A35    507
#define IMX_SC_R_A35_0    508
#define IMX_SC_R_A35_1    509
#define IMX_SC_R_A35_2    510
#define IMX_SC_R_A35_3    511
#define IMX_SC_R_DSP    512
#define IMX_SC_R_DSP_RAM    513
#define IMX_SC_R_CAAM_JR1_OUT    514
#define IMX_SC_R_CAAM_JR2_OUT    515
#define IMX_SC_R_CAAM_JR3_OUT    516
#define IMX_SC_R_VPU_DEC_0    517
#define IMX_SC_R_VPU_ENC_0    518
#define IMX_SC_R_CAAM_JR0    519
#define IMX_SC_R_CAAM_JR0_OUT    520
#define IMX_SC_R_PMIC_2    521
#define IMX_SC_R_DBLOGIC    522
#define IMX_SC_R_HDMI_PLL_1    523
#define IMX_SC_R_BOARD_R0    524
#define IMX_SC_R_BOARD_R1    525
#define IMX_SC_R_BOARD_R2    526
#define IMX_SC_R_BOARD_R3    527
#define IMX_SC_R_BOARD_R4    528
#define IMX_SC_R_BOARD_R5    529
#define IMX_SC_R_BOARD_R6    530
#define IMX_SC_R_BOARD_R7    531
#define IMX_SC_R_MJPEG_DEC_MP    532
#define IMX_SC_R_MJPEG_ENC_MP    533
#define IMX_SC_R_VPU_TS_0    534
#define IMX_SC_R_VPU_MU_0    535
#define IMX_SC_R_VPU_MU_1    536
#define IMX_SC_R_VPU_MU_2    537
#define IMX_SC_R_VPU_MU_3    538
#define IMX_SC_R_VPU_ENC_1    539
#define IMX_SC_R_VPU    540
#define IMX_SC_R_DMA_5_CH0    541
#define IMX_SC_R_DMA_5_CH1    542
#define IMX_SC_R_DMA_5_CH2    543
#define IMX_SC_R_DMA_5_CH3    544
#define IMX_SC_R_ATTESTATION    545
#define IMX_SC_R_LAST    546

/*
 * Defines for SC PM CLK
 */
#define IMX_SC_PM_CLK_SLV_BUS    0    /* Slave bus clock */
#define IMX_SC_PM_CLK_MST_BUS    1    /* Master bus clock */
#define IMX_SC_PM_CLK_PER    2    /* Peripheral clock */
#define IMX_SC_PM_CLK_PHY    3    /* Phy clock */
#define IMX_SC_PM_CLK_MISC    4    /* Misc clock */
#define IMX_SC_PM_CLK_MISC0    0    /* Misc 0 clock */
#define IMX_SC_PM_CLK_MISC1    1    /* Misc 1 clock */
#define IMX_SC_PM_CLK_MISC2    2    /* Misc 2 clock */
#define IMX_SC_PM_CLK_MISC3    3    /* Misc 3 clock */
#define IMX_SC_PM_CLK_MISC4    4    /* Misc 4 clock */
#define IMX_SC_PM_CLK_CPU    2    /* CPU clock */
#define IMX_SC_PM_CLK_PLL    4    /* PLL */
#define IMX_SC_PM_CLK_BYPASS    4    /* Bypass clock */

/*
 * Defines for SC CONTROL
 */
#define IMX_SC_C_TEMP    0
#define IMX_SC_C_TEMP_HI    1
#define IMX_SC_C_TEMP_LOW    2
#define IMX_SC_C_PXL_LINK_MST1_ADDR    3
#define IMX_SC_C_PXL_LINK_MST2_ADDR    4
#define IMX_SC_C_PXL_LINK_MST_ENB    5
#define IMX_SC_C_PXL_LINK_MST1_ENB    6
#define IMX_SC_C_PXL_LINK_MST2_ENB    7
#define IMX_SC_C_PXL_LINK_SLV1_ADDR    8
#define IMX_SC_C_PXL_LINK_SLV2_ADDR    9
#define IMX_SC_C_PXL_LINK_MST_VLD    10
#define IMX_SC_C_PXL_LINK_MST1_VLD    11
#define IMX_SC_C_PXL_LINK_MST2_VLD    12
#define IMX_SC_C_SINGLE_MODE    13
#define IMX_SC_C_ID    14
#define IMX_SC_C_PXL_CLK_POLARITY    15
#define IMX_SC_C_LINESTATE    16
#define IMX_SC_C_PCIE_G_RST    17
#define IMX_SC_C_PCIE_BUTTON_RST    18
#define IMX_SC_C_PCIE_PERST    19
#define IMX_SC_C_PHY_RESET    20
#define IMX_SC_C_PXL_LINK_RATE_CORRECTION    21
#define IMX_SC_C_PANIC    22
#define IMX_SC_C_PRIORITY_GROUP    23
#define IMX_SC_C_TXCLK    24
#define IMX_SC_C_CLKDIV    25
#define IMX_SC_C_DISABLE_50    26
#define IMX_SC_C_DISABLE_125    27
#define IMX_SC_C_SEL_125    28
#define IMX_SC_C_MODE    29
#define IMX_SC_C_SYNC_CTRL0    30
#define IMX_SC_C_KACHUNK_CNT    31
#define IMX_SC_C_KACHUNK_SEL    32
#define IMX_SC_C_SYNC_CTRL1    33
#define IMX_SC_C_DPI_RESET    34
#define IMX_SC_C_MIPI_RESET    35
#define IMX_SC_C_DUAL_MODE    36
#define IMX_SC_C_VOLTAGE    37
#define IMX_SC_C_PXL_LINK_SEL    38
#define IMX_SC_C_OFS_SEL    39
#define IMX_SC_C_OFS_AUDIO    40
#define IMX_SC_C_OFS_PERIPH    41
#define IMX_SC_C_OFS_IRQ    42
#define IMX_SC_C_RST0    43
#define IMX_SC_C_RST1    44
#define IMX_SC_C_SEL0    45
#define IMX_SC_C_CALIB0    46
#define IMX_SC_C_CALIB1    47
#define IMX_SC_C_CALIB2    48
#define IMX_SC_C_IPG_DEBUG    49
#define IMX_SC_C_IPG_DOZE    50
#define IMX_SC_C_IPG_WAIT    51
#define IMX_SC_C_IPG_STOP    52
#define IMX_SC_C_IPG_STOP_MODE    53
#define IMX_SC_C_IPG_STOP_ACK    54
#define IMX_SC_C_SYNC_CTRL    55
#define IMX_SC_C_OFS_AUDIO_ALT    56
#define IMX_SC_C_DSP_BYP    57
#define IMX_SC_C_CLK_GEN_EN    58
#define IMX_SC_C_INTF_SEL    59
#define IMX_SC_C_RXC_DLY    60
#define IMX_SC_C_TIMER_SEL    61
#define IMX_SC_C_LAST    62

#endif /* __DT_BINDINGS_RSCRC_IMX_H */
/*
 * Copyright (c) 2007-2013 Concurrent, Inc. All Rights Reserved.
 *
 * Project and contact information: http://www.cascading.org/
 *
 * This file is part of the Cascading project.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package cascading.flow.hadoop;

import java.io.IOException;
import java.net.URI;

import cascading.PlatformTestCase;
import cascading.cascade.Cascade;
import cascading.cascade.CascadeConnector;
import cascading.flow.Flow;
import cascading.flow.hadoop.planner.HadoopPlanner;
import cascading.pipe.Pipe;
import cascading.platform.hadoop.HadoopPlatform;
import cascading.scheme.hadoop.TextLine;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;
import cascading.tuple.Fields;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;
import org.junit.Test;

import static data.InputData.inputFileApache;

/**
 *
 */
public class MapReduceFlowPlatformTest extends PlatformTestCase
  {
  public MapReduceFlowPlatformTest()
    {
    super( true );
    }

  @Test
  public void testFlow() throws IOException
    {
    getPlatform().copyFromLocal( inputFileApache );

    JobConf conf = new JobConf( ( (HadoopPlatform) getPlatform() ).getJobConf() );
    conf.setJobName( "mrflow" );

    conf.setOutputKeyClass( LongWritable.class );
    conf.setOutputValueClass( Text.class );

    conf.setMapperClass( IdentityMapper.class );
    conf.setReducerClass( IdentityReducer.class );

    conf.setInputFormat( TextInputFormat.class );
    conf.setOutputFormat( TextOutputFormat.class );

    FileInputFormat.setInputPaths( conf, new Path( inputFileApache ) );

    String outputPath = getOutputPath( "flowTest" );
    FileOutputFormat.setOutputPath( conf, new Path( outputPath ) );

    Flow flow = new MapReduceFlow( "mrflow", conf, true );

    validateLength( flow.openTapForRead( new Hfs( new TextLine(), inputFileApache ) ), 10 );

    flow.complete();

    validateLength( flow.openTapForRead( new Hfs( new TextLine(), outputPath ) ), 10 );
    }

  private String remove( String path, boolean delete ) throws IOException
    {
    FileSystem fs = FileSystem.get( URI.create( path ), HadoopPlanner.createJobConf( getProperties() ) );

    if( delete )
      fs.delete( new Path( path ), true );

    return path;
    }

  @Test
  public void testCascade() throws IOException
    {
    getPlatform().copyFromLocal( inputFileApache );

    // Setup two standard cascading flows that will generate the input for the first MapReduceFlow
    Tap source1 = new Hfs( new TextLine( new Fields( "offset", "line" ) ), remove( inputFileApache, false ) );
    String sinkPath4 = getOutputPath( "flow4" );
    Tap sink1 = new Hfs( new TextLine( new Fields( "offset", "line" ) ), remove( sinkPath4, true ), true );
    Flow firstFlow = new HadoopFlowConnector( getProperties() ).connect( source1, sink1, new Pipe( "first-flow" ) );

    String sinkPath5 = getOutputPath( "flow5" );
    Tap sink2 = new Hfs( new TextLine( new Fields( "offset", "line" ) ), remove( sinkPath5, true ), true );
    Flow secondFlow = new HadoopFlowConnector( getProperties() ).connect( sink1, sink2, new Pipe( "second-flow" ) );

    JobConf defaultConf = HadoopPlanner.createJobConf( getProperties() );

    JobConf firstConf = new JobConf( defaultConf );
    firstConf.setJobName( "first-mr" );

    firstConf.setOutputKeyClass( LongWritable.class );
    firstConf.setOutputValueClass( Text.class );

    firstConf.setMapperClass( IdentityMapper.class );
    firstConf.setReducerClass( IdentityReducer.class );

    firstConf.setInputFormat( TextInputFormat.class );
    firstConf.setOutputFormat( TextOutputFormat.class );

    FileInputFormat.setInputPaths( firstConf, new Path( remove( sinkPath5, true ) ) );
    String sinkPath1 = getOutputPath( "flow1" );
    FileOutputFormat.setOutputPath( firstConf, new Path( remove( sinkPath1, true ) ) );

    Flow firstMR = new MapReduceFlow( firstConf, true );

    JobConf secondConf = new JobConf( defaultConf );
    secondConf.setJobName( "second-mr" );

    secondConf.setOutputKeyClass( LongWritable.class );
    secondConf.setOutputValueClass( Text.class );

    secondConf.setMapperClass( IdentityMapper.class );
    secondConf.setReducerClass( IdentityReducer.class );

    secondConf.setInputFormat( TextInputFormat.class );
    secondConf.setOutputFormat( TextOutputFormat.class );

    FileInputFormat.setInputPaths( secondConf, new Path( remove( sinkPath1, true ) ) );
    String sinkPath2 = getOutputPath( "flow2" );
    FileOutputFormat.setOutputPath( secondConf, new Path( remove( sinkPath2, true ) ) );

    Flow secondMR = new MapReduceFlow( secondConf, true );

    JobConf thirdConf = new JobConf( defaultConf );
    thirdConf.setJobName( "third-mr" );

    thirdConf.setOutputKeyClass( LongWritable.class );
    thirdConf.setOutputValueClass( Text.class );

    thirdConf.setMapperClass( IdentityMapper.class );
    thirdConf.setReducerClass( IdentityReducer.class );

    thirdConf.setInputFormat( TextInputFormat.class );
    thirdConf.setOutputFormat( TextOutputFormat.class );

    FileInputFormat.setInputPaths( thirdConf, new Path( remove( sinkPath2, true ) ) );
    String sinkPath3 = getOutputPath( "flow3" );
    FileOutputFormat.setOutputPath( thirdConf, new Path( remove( sinkPath3, true ) ) );

    Flow thirdMR = new MapReduceFlow( thirdConf, true );

    CascadeConnector cascadeConnector = new CascadeConnector();

    // pass out of order
    Cascade cascade = cascadeConnector.connect( firstFlow, secondFlow, thirdMR, firstMR, secondMR );

    cascade.complete();

    validateLength( thirdMR.openTapForRead( new Hfs( new TextLine(), sinkPath3 ) ), 10 );
    }
  }
package io.nanovc.agentsim.simulations.memory; import io.nanovc.agentsim.*; import org.junit.jupiter.api.Test; /** * Tests the {@link MemorySimulationHandler}. */ public class MemorySimulationHandlerTests extends MemorySimulationHandlerTestsBase { @Test public void creationTest() { new MemorySimulationHandler(); } @Test public void one_model_one_renaming_agent() throws Exception { // Define the input model using code: ConsumerWithException<EnvironmentController> inputModelCreator = controller -> { //#region Input Model // Create a model: MockModel model = new MockModel(); model.name = "Model"; controller.addModel(model); // Create the agent configuration: MockRenameAgentConfig agentConfig = new MockRenameAgentConfig(); agentConfig.modelToRename = model.name; agentConfig.newModelName = "CHANGED MODEL"; controller.addAgentConfig(agentConfig); //#endregion }; // Make sure the model is as expected: //language=JSON String expectedInputJSON = "{\n" + " \"models\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"Model\"\n" + " }\n" + " ],\n" + " \"agentConfigs\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockRenameAgentConfig\",\n" + " \"modelToRename\" : \"Model\",\n" + " \"newModelName\" : \"CHANGED MODEL\",\n" + " \"enabled\" : true\n" + " }\n" + " ]\n" + "}"; // Make sure that the output model is as expected: // Make sure the model is as expected: //language=JSON String expectedOutputJSON = "{\n" + " \"models\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"CHANGED MODEL\"\n" + " }\n" + " ],\n" + " \"agentConfigs\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockRenameAgentConfig\",\n" + " \"modelToRename\" : \"Model\",\n" + " \"newModelName\" : \"CHANGED MODEL\",\n" + " \"enabled\" : 
true\n" + " }\n" + " ]\n" + "}"; assert_InputJSON_Simulation_OutputJSON(inputModelCreator, expectedInputJSON, expectedOutputJSON); } @Test public void two_models_one_renaming_agent() throws Exception { // Define the input model using code: ConsumerWithException<EnvironmentController> inputModelCreator = controller -> { //#region Input Model // Create models: MockModel model1 = new MockModel(); model1.name = "Model 1"; controller.addModel(model1); MockModel model2 = new MockModel(); model2.name = "Model 2"; controller.addModel(model2); // Create the agent configuration: MockRenameAgentConfig agentConfig = new MockRenameAgentConfig(); agentConfig.modelToRename = model2.name; agentConfig.newModelName = "CHANGED MODEL"; controller.addAgentConfig(agentConfig); //#endregion }; // Make sure the model is as expected: //language=JSON String expectedInputJSON = "{\n" + " \"models\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"Model 1\"\n" + " },\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"Model 2\"\n" + " }\n" + " ],\n" + " \"agentConfigs\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockRenameAgentConfig\",\n" + " \"modelToRename\" : \"Model 2\",\n" + " \"newModelName\" : \"CHANGED MODEL\",\n" + " \"enabled\" : true\n" + " }\n" + " ]\n" + "}"; // Make sure that the output model is as expected: // Make sure the model is as expected: //language=JSON String expectedOutputJSON = "{\n" + " \"models\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"CHANGED MODEL\"\n" + " },\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"Model 1\"\n" + " }\n" + " ],\n" + " \"agentConfigs\" : [\n" + " {\n" + " 
\"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockRenameAgentConfig\",\n" + " \"modelToRename\" : \"Model 2\",\n" + " \"newModelName\" : \"CHANGED MODEL\",\n" + " \"enabled\" : true\n" + " }\n" + " ]\n" + "}"; assert_InputJSON_Simulation_OutputJSON(inputModelCreator, expectedInputJSON, expectedOutputJSON); } @Test public void two_models_two_renaming_agents() throws Exception { // Define the input model using code: ConsumerWithException<EnvironmentController> inputModelCreator = controller -> { //#region Input Model // Create models: MockModel model1 = new MockModel(); model1.name = "Model 1"; controller.addModel(model1); MockModel model2 = new MockModel(); model2.name = "Model 2"; controller.addModel(model2); // Create the agent configurations: MockRenameAgentConfig agentConfig1 = new MockRenameAgentConfig(); agentConfig1.modelToRename = model1.name; agentConfig1.newModelName = "CHANGED MODEL 1"; controller.addAgentConfig(agentConfig1); MockRenameAgentConfig agentConfig2 = new MockRenameAgentConfig(); agentConfig2.modelToRename = model2.name; agentConfig2.newModelName = "CHANGED MODEL 2"; controller.addAgentConfig(agentConfig2); //#endregion }; // Make sure the model is as expected: //language=JSON String expectedInputJSON = "{\n" + " \"models\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"Model 1\"\n" + " },\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"Model 2\"\n" + " }\n" + " ],\n" + " \"agentConfigs\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockRenameAgentConfig\",\n" + " \"modelToRename\" : \"Model 1\",\n" + " \"newModelName\" : \"CHANGED MODEL 1\",\n" + " \"enabled\" : true\n" + " },\n" + " {\n" + " \"type\" : 
\"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockRenameAgentConfig\",\n" + " \"modelToRename\" : \"Model 2\",\n" + " \"newModelName\" : \"CHANGED MODEL 2\",\n" + " \"enabled\" : true\n" + " }\n" + " ]\n" + "}"; // Make sure that the output model is as expected: // Make sure the model is as expected: //language=JSON String expectedOutputJSON = "{\n" + " \"models\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"CHANGED MODEL 1\"\n" + " },\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"CHANGED MODEL 2\"\n" + " }\n" + " ],\n" + " \"agentConfigs\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockRenameAgentConfig\",\n" + " \"modelToRename\" : \"Model 1\",\n" + " \"newModelName\" : \"CHANGED MODEL 1\",\n" + " \"enabled\" : true\n" + " },\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockRenameAgentConfig\",\n" + " \"modelToRename\" : \"Model 2\",\n" + " \"newModelName\" : \"CHANGED MODEL 2\",\n" + " \"enabled\" : true\n" + " }\n" + " ]\n" + "}"; assert_InputJSON_Simulation_OutputJSON(inputModelCreator, expectedInputJSON, expectedOutputJSON); } @Test public void one_model_one_value_changer_agent() throws Exception { // Define the input model using code: ConsumerWithException<EnvironmentController> inputModelCreator = controller -> { //#region Input Model // Create a model: MockModel model = new MockModel(); model.name = "Model"; model.value = "Value"; controller.addModel(model); // Create the agent configuration: MockValueChangerAgentConfig agentConfig = new MockValueChangerAgentConfig(); agentConfig.modelValueToChange = model.value; agentConfig.newValue = "CHANGED Value"; controller.addAgentConfig(agentConfig); //#endregion }; // Make sure the model is as expected: 
//language=JSON String expectedInputJSON = "{\n" + " \"models\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"Model\",\n" + " \"value\" : \"Value\"\n" + " }\n" + " ],\n" + " \"agentConfigs\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockValueChangerAgentConfig\",\n" + " \"modelValueToChange\" : \"Value\",\n" + " \"newValue\" : \"CHANGED Value\",\n" + " \"enabled\" : true\n" + " }\n" + " ]\n" + "}"; // Make sure that the output model is as expected: // Make sure the model is as expected: //language=JSON String expectedOutputJSON = "{\n" + " \"models\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" + " \"name\" : \"Model\",\n" + " \"value\" : \"CHANGED Value\"\n" + " }\n" + " ],\n" + " \"agentConfigs\" : [\n" + " {\n" + " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockValueChangerAgentConfig\",\n" + " \"modelValueToChange\" : \"Value\",\n" + " \"newValue\" : \"CHANGED Value\",\n" + " \"enabled\" : true\n" + " }\n" + " ]\n" + "}"; assert_InputJSON_Simulation_OutputJSON(inputModelCreator, expectedInputJSON, expectedOutputJSON); } @Test public void one_model_two_value_changer_agents_two_solutions() throws Exception { // Define the input model using code: ConsumerWithException<EnvironmentController> inputModelCreator = controller -> { //#region Input Model // Create model: MockModel model = new MockModel(); model.name = "Model"; model.value = "Value"; controller.addModel(model); // Create the agent configurations: MockValueChangerAgentConfig agentConfig1 = new MockValueChangerAgentConfig(); agentConfig1.modelValueToChange = model.value; agentConfig1.newValue = "CHANGED VALUE 1"; controller.addAgentConfig(agentConfig1); MockValueChangerAgentConfig agentConfig2 = new MockValueChangerAgentConfig(); 
        agentConfig2.modelValueToChange = model.value;
        agentConfig2.newValue = "CHANGED VALUE 2";
        controller.addAgentConfig(agentConfig2);

        //#endregion
    };

    // Make sure that the output model is as expected:
    //language=JSON
    String expectedOutputJSON = "[\n" +
        " {\n" +
        " \"solutionName\" : \"Solution 1\",\n" +
        " \"environment\" : {\n" +
        " \"models\" : [\n" +
        " {\n" +
        " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" +
        " \"name\" : \"Model\",\n" +
        " \"value\" : \"CHANGED VALUE 2\"\n" +
        " }\n" +
        " ],\n" +
        " \"agentConfigs\" : [\n" +
        " {\n" +
        " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockValueChangerAgentConfig\",\n" +
        " \"modelValueToChange\" : \"Value\",\n" +
        " \"newValue\" : \"CHANGED VALUE 1\",\n" +
        " \"enabled\" : true\n" +
        " },\n" +
        " {\n" +
        " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockValueChangerAgentConfig\",\n" +
        " \"modelValueToChange\" : \"Value\",\n" +
        " \"newValue\" : \"CHANGED VALUE 2\",\n" +
        " \"enabled\" : true\n" +
        " }\n" +
        " ]\n" +
        " }\n" +
        " },\n" +
        " {\n" +
        " \"solutionName\" : \"Solution 2\",\n" +
        " \"environment\" : {\n" +
        " \"models\" : [\n" +
        " {\n" +
        " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockModel\",\n" +
        " \"name\" : \"Model\",\n" +
        " \"value\" : \"CHANGED VALUE 1\"\n" +
        " }\n" +
        " ],\n" +
        " \"agentConfigs\" : [\n" +
        " {\n" +
        " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockValueChangerAgentConfig\",\n" +
        " \"modelValueToChange\" : \"Value\",\n" +
        " \"newValue\" : \"CHANGED VALUE 1\",\n" +
        " \"enabled\" : true\n" +
        " },\n" +
        " {\n" +
        " \"type\" : \"io.nanovc.agentsim.simulations.memory.MemorySimulationHandlerTests$MockValueChangerAgentConfig\",\n" +
        " \"modelValueToChange\" : \"Value\",\n" +
        " \"newValue\" : \"CHANGED VALUE 2\",\n" +
        " \"enabled\" : true\n" +
        " }\n" +
        " ]\n" +
        " }\n" +
        " }\n" +
        "]";
    assert_Simulation_OutputJSONSolutions(inputModelCreator, expectedOutputJSON);
}

/**
 * A model used for testing.
 */
public static class MockModel extends ModelBase {
    /**
     * The value for the model.
     */
    public String value;
}

/**
 * A config for a {@link MockRenameAgent} that renames models.
 */
public static class MockRenameAgentConfig extends AgentConfigBase {
    /**
     * This is the name of the model to rename.
     */
    public String modelToRename;

    /**
     * This is the model name to use if a model matches {@link #modelToRename}.
     */
    public String newModelName;
}

/**
 * A mock agent for the tests that renames models.
 */
public static class MockRenameAgent extends AgentBase<MockRenameAgentConfig> {
    @Override
    public void modifyEnvironment(ReadOnlyEnvironmentController input, EnvironmentController output, SimulationIterationAPI iteration, SimulationController simulation, MockRenameAgentConfig config) throws Exception {
        // Find the model with the given name:
        ModelAPI model = output.getModelByName(config.modelToRename);
        if (model != null) {
            // We found the model with the given name.

            // NOTE: Since the environment controller has internal indexes,
            // we choose to remove and add the model so that the indexes are kept up to date.
            // The alternative is to modify the model name and then re-index the whole environment model.
            // model.setName(config.newModelName);
            // outputModelControllerToModify.indexEnvironmentModel();

            // Remove the model from the output:
            output.removeModel(model);

            // Change the name of the model:
            model.setName(config.newModelName);

            // Add the model again:
            output.addModel(model);
        }
    }
}

/**
 * A config for a {@link MockValueChangerAgent} that changes model values.
 */
public static class MockValueChangerAgentConfig extends AgentConfigBase {
    /**
     * This is the value to look for in the model to change.
     * If the model has this value then it is changed.
     */
    public String modelValueToChange;

    /**
     * This is the model value to change to if a model with {@link #modelValueToChange} is found.
     */
    public String newValue;
}

/**
 * A mock agent for the tests that changes model values.
 */
public static class MockValueChangerAgent extends AgentBase<MockValueChangerAgentConfig> {
    @Override
    public void modifyEnvironment(ReadOnlyEnvironmentController input, EnvironmentController output, SimulationIterationAPI iteration, SimulationController simulation, MockValueChangerAgentConfig config) throws Exception {
        // Go through each mock model in the environment:
        output.forEachTypeOfModelExactly(
            MockModel.class,
            mockModel -> {
                // Check whether the value is as expected:
                if (config.modelValueToChange.equals(mockModel.value)) {
                    // We found a model with the given value.

                    // Change the model value:
                    mockModel.value = config.newValue;
                }
            }
        );
    }
}
}
import { getManyCards } from './handlers/cards.many'
import { getCardsToGold } from './handlers/cards.toGold'

export const routes: mb.Route[] = [
  { methods: ['get'], url: '/api/cards', handler: getManyCards },
  { methods: ['get'], url: '/api/cards/to-gold/:id', handler: getCardsToGold }
]
import _Vue from "vue";
import M from "materialize-css";

export function MessagePlugin(Vue: typeof _Vue, options?: MessagesPluginOptions) {
  Vue.prototype.$message = function (html: string) {
    M.toast({ html })
  }

  Vue.prototype.$error = function (html: string) {
    M.toast({ html: `[Ошибка]: ${html}` }) // "Ошибка" = "Error" (user-facing text kept in Russian)
  }
}

export class MessagesPluginOptions {
  // add stuff
}

declare module 'vue/types/vue' {
  interface Vue {
    $message: Function;
    $error: Function;
  }
}
// ColorIndexAt returns the palette index of the pixel at (x, y).
func (l *Layer) ColorIndexAt(x, y int) uint8 {
	if t := l.TileAt(x, y); t != nil {
		ts := l.Tileset.Size
		return t.ColorIndexAt(x%ts.X, y%ts.Y)
	}
	return 0
}
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0 and the Server Side Public License, v 1; you may not use this file except
 * in compliance with, at your election, the Elastic License 2.0 or the Server
 * Side Public License, v 1.
 */

package org.elasticsearch.legacygeo.builders;

import org.elasticsearch.legacygeo.test.RandomShapeGenerator;
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.spatial4j.shape.Rectangle;

import java.io.IOException;

public class EnvelopeBuilderTests extends AbstractShapeBuilderTestCase<EnvelopeBuilder> {

    public void testInvalidConstructorArgs() {
        NullPointerException e;
        e = expectThrows(NullPointerException.class, () -> new EnvelopeBuilder(null, new Coordinate(1.0, -1.0)));
        assertEquals("topLeft of envelope cannot be null", e.getMessage());
        e = expectThrows(NullPointerException.class, () -> new EnvelopeBuilder(new Coordinate(1.0, -1.0), null));
        assertEquals("bottomRight of envelope cannot be null", e.getMessage());
    }

    @Override
    protected EnvelopeBuilder createTestShapeBuilder() {
        return createRandomShape();
    }

    @Override
    protected EnvelopeBuilder createMutation(EnvelopeBuilder original) throws IOException {
        return mutate(original);
    }

    static EnvelopeBuilder mutate(EnvelopeBuilder original) throws IOException {
        EnvelopeBuilder mutation = copyShape(original);
        // move one corner to the middle of original
        switch (randomIntBetween(0, 3)) {
            case 0:
                mutation = new EnvelopeBuilder(
                    new Coordinate(randomDoubleBetween(-180.0, original.bottomRight().x, true), original.topLeft().y),
                    original.bottomRight()
                );
                break;
            case 1:
                mutation = new EnvelopeBuilder(
                    new Coordinate(original.topLeft().x, randomDoubleBetween(original.bottomRight().y, 90.0, true)),
                    original.bottomRight()
                );
                break;
            case 2:
                mutation = new EnvelopeBuilder(
                    original.topLeft(),
                    new Coordinate(randomDoubleBetween(original.topLeft().x, 180.0, true),
                        original.bottomRight().y)
                );
                break;
            case 3:
                mutation = new EnvelopeBuilder(
                    original.topLeft(),
                    new Coordinate(original.bottomRight().x, randomDoubleBetween(-90.0, original.topLeft().y, true))
                );
                break;
        }
        return mutation;
    }

    static EnvelopeBuilder createRandomShape() {
        Rectangle box = RandomShapeGenerator.xRandomRectangle(random(), RandomShapeGenerator.xRandomPoint(random()));
        EnvelopeBuilder envelope = new EnvelopeBuilder(
            new Coordinate(box.getMinX(), box.getMaxY()),
            new Coordinate(box.getMaxX(), box.getMinY())
        );
        return envelope;
    }
}
def confirm_order(self, update: Update, context: CallbackContext) -> int:
    context.chat_data['order_decision'] = update.message.text
    reply_keyboard = [['Yes', 'No']]

    if context.chat_data['order_decision'] == 'Cancel':
        update.message.reply_text(
            'You\'ve cancelled your order. Would you like to make another trade?',
            reply_markup=ReplyKeyboardMarkup(
                reply_keyboard,
                one_time_keyboard=True,
            )
        )
    else:
        try:
            Order().activate_order(
                context.chat_data['order_id'],
            )
        except Exception as e:
            print(e)
            update.message.reply_text(
                "There was an error, ending the conversation. If you'd like to try again, send /start.")
            return ConversationHandler.END

        update.message.reply_text(
            'Please wait while we process your order.'
        )

        while True:
            order_summary = Order().get_order(
                context.chat_data['order_id'],
            )
            if order_summary['results'].get('status') == 'executed':
                print('executed')
                break
            time.sleep(2)

        context.chat_data['average_price'] = order_summary['results'].get('executed_price')

        update.message.reply_text(
            f'Your order was executed at €{context.chat_data["average_price"]/10000:,.2f} per share. '
            'Would you like to make another trade?',
            reply_markup=ReplyKeyboardMarkup(
                reply_keyboard,
                one_time_keyboard=True
            )
        )

    print(f'chat_data {context.chat_data}')
    return TradingBot.CONFIRMATION
/*
 * problema c.cxx
 *
 * Copyright 2018 Cuenta Generica para la Escuela de simulacion molecular <[email protected]>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
 * MA 02110-1301, USA.
 */

#include <iostream>
#include <cstdio>

using namespace std;

int ret[100001];

int main(int argc, char **argv)
{
    int n, best = 1, vb = 999999;
    cin >> n;

    int sqr = 1;
    while (sqr * sqr <= n) sqr++;

    for (int i = 1; i <= n; i++) {
        int v = i + (n + i - 1) / i;
        if (v < vb) best = i, vb = v;
    }
    //cout<<"best = "<<best<<" and vb = "<<vb<<endl;

    sqr--;
    for (int i = 0; i < sqr; i++) ret[i] = sqr - i;

    int i;
    for (i = sqr; i < n - (n % sqr); i++) ret[i] = ret[i - sqr] + sqr;
    ret[i] = n;
    //cout<<"sqrt = "<<sqr<<" and vsqrt = "<<(sqr + (n+sqr-1)/sqr)<<endl;
    i++;
    while (i < n) ret[i] = ret[i - 1] - 1, i++;

    for (i = 0; i < n; i++) printf("%d ", ret[i]);
    return 0;
}
package models

import (
	"fmt"
)

type Guild struct {
	ID        int
	Emoji     string
	Name      string
	BattleID  int
	PlayerIDs []int
	Prestige  int
	Wins      int
	Coins     int
	ChatID    int64
}

func (g *Guild) Title() string {
	return fmt.Sprintf("[%s]%s", g.Emoji, g.Name)
}

var Guilds = []Guild{
	Guild{ID: 1, Emoji: RhinoEmoji, Name: "Rhino"},
	Guild{ID: 2, Emoji: ScorpionEmoji, Name: "Scorpion"},
	Guild{ID: 3, Emoji: LizardEmoji, Name: "Lizard"},
}

func GuildByTitle(title string) (Guild, error) {
	for _, guild := range Guilds {
		if guild.Title() == title {
			return guild, nil
		}
	}
	return Guild{}, fmt.Errorf("Guild %s not found", title)
}

func GuildByEmoji(emoji string) (Guild, error) {
	for _, guild := range Guilds {
		if guild.Emoji == emoji {
			return guild, nil
		}
	}
	return Guild{}, fmt.Errorf("Guild %s not found", emoji)
}

func (g *Guild) InBattle() bool {
	return g.BattleID != 0
}
// Normalized Least Mean Squares (NLMS) update
//
// NLMS is a variant of Least Mean Squares which uses a
// variable gain. The gain is computed based on the power
// of the input. The given `filter` taps are updated based
// on the input samples in `window`.
fn nlms_update(
    relaxation: f32,
    regularization: f32,
    error: f32,
    window: &[f32],
    filter: &mut [f32],
) {
    assert_eq!(window.len(), filter.len());
    let gain = nlms_gain(relaxation, regularization, window);
    for (coeff, data) in filter.iter_mut().zip(window.iter()) {
        *coeff += gain * error * data;
    }
}
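The `nlms_gain` helper called above is not shown in this snippet. A minimal sketch under the textbook NLMS definition — gain = relaxation / (regularization + input power), where power is the sum of squared window samples — might look like this (the exact formula is an assumption, not taken from the original source):

```rust
// Hypothetical sketch of the missing `nlms_gain` helper, assuming the
// usual NLMS gain mu / (eps + ||x||^2): `relaxation` plays the role of
// the step size mu, and `regularization` is the small constant eps that
// guards against division by zero when the input window is silent.
fn nlms_gain(relaxation: f32, regularization: f32, window: &[f32]) -> f32 {
    // Power of the input window: sum of squared samples.
    let power: f32 = window.iter().map(|x| x * x).sum();
    relaxation / (regularization + power)
}

fn main() {
    // With four unit samples the power is 4.0,
    // so the gain is 0.5 / (0.0 + 4.0) = 0.125.
    let gain = nlms_gain(0.5, 0.0, &[1.0, 1.0, 1.0, 1.0]);
    println!("{}", gain);
}
```

Normalizing by the window power is what makes the effective step size independent of the input signal level, which is the point of NLMS over plain LMS.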
# Long Beach Fire Department

# Glen Goodrich

Copyright © 2005 by Glen Goodrich

9781439614402

Published by Arcadia Publishing
Charleston, South Carolina

Printed in the United States of America

Library of Congress Catalog Card Number: 2005926067

For all general information contact Arcadia Publishing at:
Telephone 843-853-2070
Fax 843-853-0044
E-mail [email protected]

For customer service and orders:
Toll-Free 1-888-313-2665

Visit us on the Internet at www.arcadiapublishing.com

# Table of Contents

Title Page
Copyright Page
ACKNOWLEDGMENTS
INTRODUCTION
One - THE EARLY YEARS 1897–1933
Two - EARTHQUAKE 1933
Three - THE GROWTH YEARS 1934–1970
Four - TRAINING
Five - FIRST AID
Six - ODDS AND ENDS
Seven - FIRE CHIEFS

# ACKNOWLEDGMENTS

Herb Bramley is the reason the Long Beach Firefighter's Museum has the collection of photographs and documents used for this book. Herb had the foresight, willingness, and dedication to start a museum devoted to the Long Beach Fire Department, and then collect and store everything he could to preserve our history. From the beginning, our museum has had a staff of volunteers who are just as dedicated as Herb and take great pride in conserving our rich history. We have had tremendous cooperation from everyone, past and present, who works for our fire department, firefighter and non-firefighter. We must also thank every fire chief since the museum began, for their support and for recognizing the importance of maintaining history. Without their complete support, we would not have a place to store and show our collection. All volunteers of the museum, through their determined efforts, have made some contribution to this book. A great deal of thanks must be given to coauthors Nicole Harbour, Mike Kenney, and Mary Alger for taking on the extra responsibility of going through hundreds of pictures and documents, and examining several scrapbooks to extract our history and a small sample of photographs for this book.
Without their efforts as coauthors, this book would not have happened.

# INTRODUCTION

The City of Long Beach, formerly known as Willmore City, first tried to organize a volunteer fire department on March 16, 1897, when a group of citizens saw the need after an increase in property fire loss. Twenty-eight charter members signed up, with Brewster C. Kenyon as captain, John McPherson as first lieutenant, and William Craig as second lieutenant. The board of trustees appropriated the funds to purchase a hand-drawn ladder truck with equipment like leather buckets and axes. The ladder truck was housed in a building in an alley between Ocean Avenue and First Street. Several fund-raisers were held to purchase equipment like helmets, shirts, and belts for the members.

In the spring of 1898, Brewster Kenyon resigned to accept a commission in the U. S. Army for the Spanish-American War. This led to the resignation of several other members and thus the breakup of the volunteer fire department.

The board of trustees met again May 27, 1902, and this time decided to form a more permanent fire department. J. F. Corbet was selected chief and H. D. Wilson selected as assistant chief, but due to business commitments both had to resign. Another reorganization took place and J. E. Shrewsbury was selected chief and N. C. Lollich selected as assistant chief. G. C. Craw was named foreman of Hose Company No. 1 with J. Robertson as his assistant. J. H. Morgan was named foreman of Hose Company No. 2 with E. J. Fisher as his assistant. E. O. Dorsett was the foreman of the hook and ladder with G. Gaylord as his assistant. All of the equipment was hand-drawn at this time.

The Long Beach Pavilion was destroyed in a fire on January 6, 1905, resulting in $15,000 in fire damage and three firemen were injured. Soon after this fire, a $30,000 bond was passed to build a permanent central fire station near Third Street and Pacific Avenue.
Part of the bond was used to purchase a horse-drawn steam pumper, a hose wagon, a hook and ladder, seven horses, and alarm boxes. Chief Shrewsbury and Assistant Chief Craw moved into the new fire station in 1906 at 210 West Third Street, and the first alarm received by the newly installed alarm system was box no. 23 at the corner of Third Street and Olive Avenue. Frank Craig loaned an automobile to the department to try out in 1907. With the test a complete success, the fire department purchased two Rambler chassis. Assistant Chief Craw oversaw the installation of a 35-gallon chemical tank with 200 feet of chemical hose and an additional 300 feet of cotton hose for each Rambler. Records indicate that the Long Beach Fire Department had the distinction of having the first motorized fire apparatus on the Pacific coast, beating Los Angeles by three months. The transition to motorized equipment began in 1907 with the purchase of the two Ramblers. In 1910, the department bought a Mitchell Chemical Truck and in 1911, a Robinson pumper. By the end of 1913, the Long Beach Fire Department would be completely motorized with the purchase of a Seagrave Hose Wagon to pull the Metropolitan Steamer, a motorized hose wagon, and the largest pumper of its time—a Gorham 1,100 g.p.m. pumper. The department had three teams of horses—Major and Colonel pulled the ladder truck, Tom and Jerry (no kidding) pulled the steamer, and Prince and King pulled the hose wagon. The seventh horse was Barney, the smartest and most mischievous. Barney was the swing horse, who filled in for the others on their days off. He learned the different rules for pulling the ladder truck, the steamer, and the hose wagon, and could work with each team. But by the end of 1914, the department had sold all but one horse to the water department. Tom was kept to service the fire hydrants. Chief Shrewsbury and Mr. C. 
Shaw, superintendent of the water department, were together in the chief's car when it collided with Assistant Chief Craw and his driver G. Wright responding to a false alarm on May 2, 1916. Chief Shrewsbury was killed instantly, making his the first death in the line of duty for the Long Beach Fire Department. Assistant Chief Craw was appointed chief after he recovered from his injuries. Captain Taylor, who was acting chief, was appointed assistant chief.

In the beginning, Long Beach was a small vacation town by the ocean with an 1890 population of 550. As the city grew, so did the fire department. Tourism was the main industry, but that changed on June 23, 1921, with the discovery of oil on Signal Hill. This caused a huge growth in population and the rapid development of wooden oil derricks. The lack of regard for fire safety by the oil operators led to a series of fires that kept the department busy for years. One particular fire lasted for 3 days, and involved 11 oil derricks and 3 "gassers."

Today the city of Long Beach covers an area of approximately 52 square miles and has a population of about 461,522 (according to the 2000 federal census). The census also states that Long Beach is the most ethnically diverse city in the United States. Long Beach has the second busiest port in the nation and an active oil industry, although the oil derricks have disappeared. All of these present many challenges for the Long Beach Fire Department. The department has come a long way from the days when volunteer firefighters ran to fires, and will continue to grow to meet the needs of the citizens of the city of Long Beach.

# One

# THE EARLY YEARS 1897–1933

The board of trustees called for a citizens meeting on May 27, 1902, and asked Los Angeles fire commissioner Jacob Kuhrts to speak on the necessity of an organized and well-trained fire department. J. F. Corbet was elected chief and H. D. Wilson assistant chief, but both had to resign due to business commitments.
The fire department was again reorganized. The new chief was J. E. Shrewsbury, and N. C. Lollich became the assistant chief. G. C. Craw was elected as foreman of Hose Company No. 1, J. H. Morgan as foreman of Hose Company No. 2, and E. O. Dorsett as foreman of the Hook and Ladder. Chief J. E. Shrewsbury is standing on the left of the bottom row and G. C. Craw is on the right of the bottom row. The rest are unknown.

Chief Joseph E. Shrewsbury was elected chief in 1902, and remained in that position until his untimely death in 1916. The chief died in an automobile accident while responding to a false alarm.

Hose Company No. 1 was placed behind city hall. In this photograph, city hall is in the background and the Hose Company No. 1 is decorated for the Fourth of July celebration. The people are unidentified.

Seen here is city hall about 1909, with the fire alarm bell in the background. Before 1906, Hose Company No. 1 was stored behind city hall. Hose Company No. 2 was housed in a shed in an alley.

There is a story that when the ladder crew was running to a fire they caught up with a trolley. They hopped on board the trolley and towed the ladder behind. When they got to the fire, the citizens riding the trolley helped put out the fire. This happened between 1902, when the department purchased the hand-drawn ladder, and 1906, when the horse-drawn ladder was purchased.

A fire at the Long Beach Pavilion prompted a $30,000 bond to be passed for the building of a central fire station, pictured here about 1906, and the purchase of fire alarm boxes, a steam engine, a hose wagon, a hook and ladder, and seven horses.

Chief Shrewsbury is shown resetting an alarm box with the hose wagon and the steam engine in the background. The first alarm box pulled in the newly installed alarm system was box no. 23.

In 1907, the LBFD borrowed a Rambler from Frank Craig for testing and soon after purchased two Rambler chassis.
Assistant Chief Craw was in charge of fitting both with chemical tanks and hoses. The department also had the distinction of being the first fire department on the West Coast to operate motorized fire equipment—beating Los Angeles by three months. In this picture, Chief Shrewsbury is standing to the far left and Assistant Chief Craw is fourth from the left.

This is a c. 1909 list of firemen and their salaries.

This is the Long Beach Pier and boardwalk in 1909. Tourism was the main industry at this time, and the population was about 23,000.

The horse-drawn hook and ladder is near the corner of Pine Avenue and Ocean Boulevard. The fire department operated three teams of horses and a seventh horse as a spare to fill in when one of the others had a day off. The names of the horses were Tom and Jerry, who pulled the Metropolitan steam engine; King and Prince, who pulled the hose wagon; and Major and Colonel, who pulled the ladder truck. Barney was the seventh horse.

This is the fire fund tally from the 1909 city audit report, showing fire department expenses for that year.

The Metropolitan steam-pumper is pulled by the horse team Tom and Jerry. The driver is unidentified.

The hose wagon, led by King and Prince, pulls George Wright (left) and Louie Bruffet.

Firefighters Jewell (seated) and Folkner are in the alarm center. The alarm center building was added to the side of the central fire station.

In 1911, the fire department purchased a Robinson motorized pumper and a Seagrave air-cooled tractor to pull the hook and ladder—marking the end for the horse teams.

Pictured here is a 1913 Gorham pumper with Chief Shrewsbury standing to the left and Station No. 1 in the background. The rest of the firefighters are unidentified.

Pictured here is the hook and ladder, c. 1911, with the air-cooled Seagrave tractor attached. The alarm bell tower was eventually moved from behind city hall to behind Station No. 1.

Chief J.
Shrewsbury and Assistant Chief Craw stand with an unidentified Station No. 1 hook and ladder crew. Pictured here is the American School fire on December 27, 1911, the first school in Long Beach. The school district still has a school at this location—Renaissance High School. At the wheel of the Mitchell chief car is Chief Shrewsbury. While responding to a false alarm on May 2, 1916, the chief received fatal injuries in an automobile accident at Broadway and American Avenues. Assistant Chief Craw was also involved in the accident. This picture shows the damage to the car that Chief Shrewsbury and his passenger, Mr. C. Shaw, the superintendent of the Water Department, were riding in. Mr. Shaw recovered from his injuries. The funeral parade for Chief Shrewsbury was well attended. Chief Shrewsbury was respected by many and received flowers from around California. Unidentified firefighters pay their respects at the grave of Chief Shrewsbury. Assistant Chief Craw was injured along with his driver G. Wright in the accident. Capt. J. Taylor was appointed acting fire chief until Craw returned to duty—at which time he was appointed fire chief. Captain Taylor was then appointed assistant chief. By the end of the 1917 fiscal year, there were 36 members in the fire department's four fire stations answering 128 alarms with a total fire loss of $27,192.99. The largest single fire loss occurred at the National Potash Company. The population of the city at this time was about 44,865. In this 1917 picture taken in front of Station No. 1, Chief G. Craw is sitting to the right of the civilian in the center of the bottom row and Assistant Chief Taylor is sitting to the left. The rest are unidentified. Seen here about 1920 is the crew of Engine No. 1 and Squad No. 1. From left to right are (first row, seated on the running board) M. Cooper, Chief Craw, and alarm superintendent Clark; (second row) Ray Peterson, Captain Rieder, G. 
Hocking, Jack Thompson, Victor Herbert, Fred Peth, Stanley Ellis, Ted Alstott, Bill Minter, Capt. Harry Lucas, Bill Simms, Tiny Henning, Loyde Kinnman, Forney Milton, Bill West, Jim Moran, Joe Johnston, and Glenn Croy. During the 1920s, a crew tests the steam pumper. A steamer crew, c. 1920, takes a much-needed break from training. The Metropolitan steam pumper was heavy and had a high center of gravity. This 1920s wreck was probably the result of taking a corner too fast. In 1913, a Seagrave hose wagon was purchased to pull the Metropolitan steam engine. In the foreground, firefighters pick up hose near the steam engine, and in the background (on the left) is the rear of the Seagrave Hose Wagon. Up until 1921, the major industries in the Long Beach area were tourism and the film studios known as Balboa Studios. Balboa Studios started in 1913, and by 1917 was the city's largest employer and major tourist attraction. In 1921, that all changed with the discovery of oil in the Long Beach and Signal Hill area. With the rapid growth of the oil industry and the lack of safety, it was not long before the department was fighting oil field fires. Seen here is a gusher at Shell Martin No. 1, c. 1921. The first major oil fire was the Fisher oil fire in 1924. On June 26, 1927, the next major oil fire occurred—the Alamitos blaze. Long Beach firefighters find time to pose for a photograph after the Alamitos oil fire. William Minter was appointed fire chief on March 1, 1926, after Chief Craw retired. Chief Craw was the first fireman to retire with a pension. Signal Hill came to be known as "Porcupine Hill" because the sight of all the oil derricks in the distance looked like porcupine quills. Station No. 2 and Station No. 3 were similar in design, and in the early 1920s, for some unknown reason, the station numbers were switched: No. 2 became No. 3 and No. 3 became No. 2. This picture is the switched Station No. 
3 with the Metropolitan steam engine and a 1922 Ahrens-Fox in the driveway.

This is what would become Station No. 2, with the Gorham pumper in the apparatus bay.

Pictured here is the battalion chief's car, a Graham-Paige touring car. The fire chief drove a Cadillac.

In the northern part of the city there was a small station, Chemical No. 3, at 2926 East Sixty-fifth Street. It later became a small grocery store.

On April 1, 1930, Gomer M. Wilhite, Harold E. Maas, and Harmon B. Gearhart received their notice of appointment as firefighters, with a starting pay of $170 a month.

This is Station No. 10, which is now the Long Beach Firefighters Museum.

# Two

# EARTHQUAKE 1933

On March 10, 1933, a devastating 6.5-magnitude earthquake struck Long Beach, destroying many buildings including several fire stations. Firefighter P. T. Forker was upstairs in the dormitory. He had just made his way out the window onto the small balcony in the front when the face of the building crashed down on him. Lt. A. Stephens was downstairs in the apparatus bay when the station started to collapse. Stephens ran for the front door but as he reached the outside the wall fell on him. Both firefighter Forker and Lieutenant Stephens died at the hospital.

This clock stopped at the exact time of the earthquake—5:55 p.m.

Many of the fire stations had debris in front of the apparatus bay, so the trucks had to be dug out. To complicate matters, there were four major fires at the same time.

The Pike amusement area near the Cyclone Racer roller coaster experienced severe damage.

This rubble was once the post office on the corner of Seventh Street and Redondo Avenue.

There was extensive damage to the Wonder Bread Bakery near the corner of Anaheim Road and Redondo Avenue.

The science building at Poly High School was severely damaged.

The citizens of Long Beach pulled together to help one another after the earthquake.
With many of the local services and utilities out of service, soup lines were set up in Lincoln Park to feed the thousands who were without food. Many citizens of Long Beach had to live in tents until they could get their lives back together. Chief Minter stands in front of the damaged Station No. 1. The alarm center was in a one-story brick building attached to the station and it too suffered severe damage. The electrical power and the battery backup were out of service, and there was only contact with 2 of the 12 fire stations. A large tent was acquired from the Barnum Circus to be used until the station could be rebuilt. City hall is in the background. Here is a close-up view of the Station No. 1 tent with apparatus and an unidentified crew inside. A tent also had to be used to house the fire department's headquarters. Seen here is the damage to the rear of Station No. 5. This photograph shows what Station No. 7 looked like before the earthquake. From 1933 to 1940, this building served temporarily as Station No. 7. Station No. 1 was not the only station that was temporarily housed in a tent until a more permanent building could be built. Station No. 9 also used a tent as living quarters. Station No. 9 did all their cooking in the outdoor kitchen. Demolition started on all unsafe structures including Station No. 9. Once the demolition was complete to Station No. 9, another large tent was secured from Barnum Circus and placed on the same lot. Station No. 10 also suffered some damage to the living quarters, and the firefighters had to live in a tent. In June of 1933, there was an explosion and fire at the Richfield Oil Refinery. The cause of the fire was likely due to lingering effects of the earthquake in March. The tree pictured here shows the force of the explosion at the Richfield Oil Refinery. It was a tremendous effort by many to extinguish this fire. In 1934, Station No. 
1 was still in a tent and city hall, in the upper left corner of the picture, had been rebuilt.

The earthquake was not the last disaster that happened to Station No. 1. Later in 1934, a strong wind destroyed the tent.

# Three

# THE GROWTH YEARS 1934–1970

Instead of building a completely new station, the city decided to purchase and remodel an existing building for Station No. 1. The Donner Cafe became the ideal choice for a more permanent Station No. 1. On March 16, 1935, the remodeled Station No. 1 was ready to move into.

Station No. 12 was built in 1929, but due to the Great Depression and budget problems, the equipment and the manpower weren't moved in until July 1, 1936. Station No. 12 has remained unchanged since it was built; only some interior upgrades have been needed.

In 1910, Station No. 4 first opened at 411 Loma Avenue. This picture was taken in 1939 with a Mack 750 gallons-per-minute quad and an unidentified crew.

In 1939, "Talk Alarm" radio receivers were installed in all 14 fire stations. Here, Harmon B. Gearhart receives a call and is ready to dispatch the appropriate units.

Pictured here in the 1940s is the alarm desk with the Gamewell. The Gamewell is a ticker-tape device that punches holes in a narrow paper tape. The holes are a code that correspond to the number of a street alarm pull box. After reading the number, the dispatcher could then dispatch the equipment to a specific "pull box" location.

Harry Clayton is seen here in the 1940s manning the alarm desk.

Just after World War II broke out, the building that housed the Fire College and the alarm center was protected with sand bags.

Firefighters Wilhite and Muis, c. 1940, use a funnel device to incorporate a firefighting powder into a hose line.

In 1939, fire Station No. 9 was reopened in a different location. The Works Progress Administration (WPA) started by Franklin Delano Roosevelt in 1935 built Station No. 9 and Station No. 7. This photograph was taken in 1940.
This picture of Station No. 9 was taken in 1986 and shows how little it has changed. The crew, from left to right, is Mike O'Neil, Gene Willingham, Duaine Jackson, either Tom or Jerry Freeman, Paul Lepore, and Alan Patalano. In 1942, the City of Long Beach commissioned the building of its first fireboat, the Charles S. Windham. The Windham was built by Wilmington Boat Works and financed by the Harbor Department. In the 1909 city audit report, Charles S. Windham was listed as mayor and fire commissioner for the City of Long Beach. On May 8, 1942, the fireboat was placed into service. The Windham patrolled the harbor area from sunset to sunrise until the U.S. Coast Guard had sufficient resources to take over. From 1942 until 1945, the alarm office was moved to the 15th floor of the Villa Riviera Hotel, both for its view of the ocean and because the hotel was thought to be more secure. Mr. H. Beaver is sitting at the alarm desk. The gargoyles can be seen just outside the windows of the alarm office at the Villa Riviera Hotel. On February 25, 1942, an anti-aircraft shell was accidentally discharged and hit a bank building near the intersection of Long Beach Boulevard and Market Street in the northern part of the city. Fire Station No. 7 has changed very little since this photograph was taken in 1941. One of the more prominent features of the fire trucks was, and still is, the use of nickel and chrome plating. During the war years, we had to "black out" most of the shiny plated surfaces on all of our equipment—like this Ahrens-Fox pumper at Station No. 2 in 1943. This is a picture of a GMC panel truck that was used as a first aid and salvage wagon. Here, the crew stands behind the type of equipment that it carried. The crew, from left to right, are firefighters George Mathews, Leonard Foster, Henry Kern, and Capt. Don DiMarzo. Almost 20 percent of the firefighters in Long Beach were called to duty during World War II. Don DiMarzo, pictured in his U.S. 
Navy uniform, was assigned to the USS Intrepid. The Intrepid saw fierce action in the Pacific and was involved in several battles. During one of these fights, Don DiMarzo lost his life helping extinguish a fire onboard ship. He was the only Long Beach firefighter who joined the military and died in the line of duty during World War II. In 1902, Charles Drake started to develop the "Pike" amusement area along the coast of Long Beach. On July 14, 1943, the Looff's Hippodrome caught fire. Several of the buildings became fully engulfed, making it a very stubborn fire to put out. The saddest moment during the fire was when Looff's Carrousel, built in 1911 by Charles I. D. Looff, a famous carrousel builder, was completely destroyed. A replacement was built and remained until 1979. On April 24, 1944, the Caldwell Apartment complex caught fire and quickly grew to a three-alarm blaze. The fire lasted 11 hours and left 30 families homeless. The total fire loss exceeded $100,000. At first, it seemed that the Caldwell Apartments fire had started in the elevator shaft, but it was soon discovered that the fire was arson. The arsonist was caught less than a month later, and he confessed to the crime. There was a large military presence in the city during World War II. Here, firefighters put out a military tank fire near the intersection of Market Street and Dairy Avenue in 1944. In 1945, most of the piers and docks were built of wood, and a fire that involved the docks could become a major disaster. One of the biggest challenges the firefighters faced was directing the hose streams up under the docks. On December 5, 1945, berths 52, 53, and 54 caught fire. It took 280 men 65 hours to extinguish the fire, with mutual aid help from the U.S. Navy and Los Angeles Fire Department fireboat no. 2. Night operations at the dock fires resulted in some excellent pictures. This unidentified crew stands in front of the fire station at the Long Beach Municipal Airport in 1947. 
The Douglas Aircraft Company donated the building. The firefighters trained often in the oil fields in order to keep up with the rapidly changing oil industry. This picture was taken in 1946. In 1946, there were still wooden derricks in the oil fields, and when they were on fire there was always the danger of collapse. The Bixby family, who ran a successful sheep ranch, owned much of the land that is now the city of Long Beach. In 1947, two barns and part of the corral were destroyed by fire. The buildings were rebuilt, and now this portion of the ranch, including the main house, is a historic landmark open to visitors. Ranch hands and private citizens manned the hose lines until the fire department arrived. A large part of the damage was in the barn and corral area. In November 1947, Howard Hughes took his airplane for what was supposed to be a test taxi on the water. Everyone was surprised when he took off and flew the plane. This photograph was taken from the fireboat as it stood by for the test. Pictured here is Howard Hughes standing on the top of his plane. In 1950, a flaming barrel hit this 1929 Ahrens-Fox fire truck, causing the crew to jump off and run for cover. Pictured here, from left to right, are firefighters Shaw, Wolf, Slope, and Monroe. This was the last fire for the Ahrens-Fox; it served the Long Beach Fire Department from 1929 to 1950. The rig is housed in the Long Beach Firefighters Museum and will soon be restored. This 1950 Mack was assigned to Station No. 1 as Squad No. 1 and, while responding to a minor garage fire on April 19, 1951, was involved in an accident with another fire truck. Rookie firefighter Marc Mimms was killed on his first run as a firefighter. Six other firefighters were injured. Pictured here in 1950 is the fireboat station and boathouse in the harbor. Fireboats are often asked to do water displays. 
On January 27, 1956, the south levee of the Dominguez Channel burst 200 feet from Henry Ford Avenue, sending oily floodwaters into the Ford plant and shorting electrical circuits. Ford was in the process of moving to a new plant, and there were only a few employees present when the explosion occurred at 10:20 a.m. Firefighters and five fireboats from Long Beach, Los Angeles, and the U.S. Navy fought the blaze for five hours. The fire and water damage was estimated to be in the millions. Hancock Oil Refinery tank no. 65 foamed over on May 22, 1958, causing oil to flow through the yard and eventually find an ignition source. The Hancock Oil Refinery is located in the city of Signal Hill, bordering the city of Long Beach. The call went out for mutual aid, and responding units came from Long Beach, Los Angeles, and the city of Vernon. Equipment was sent from the U.S. Navy and the Richfield Oil Company. Pictured here is one of three crash fire trucks sent by the U.S. Air Force. The Long Beach Fire Department was on the scene for 54 hours and used 74 men to combat the fire. There was heavy damage to four low-pressure natural gas tanks belonging to the City of Long Beach. In 1959, the City of Long Beach built a public safety building to house the police and fire departments. The units that were assigned to the new Station No. 1 were a squad, rescue, engine, and a hose wagon. In 1964, Long Beach was hit by a number of arson fires. Four churches were set on fire with a total fire loss of $1,125,000. Here, the unidentified crew works to put out the church fire at Fifth Street and Cherry Avenue. Living near an airport is not always safe. Here, a small private airplane pays an unfortunate visit to a home. June 1964 saw the start of construction on a new alarm office on the north end of the same property where the old alarm center was located (on Peterson Avenue). The aged wooden training tower was nearby. By December 1964, the new alarm center was ready for use. Station No. 
8, built in 1929 in the Belmont Shore area of Long Beach, is still in service today. The left side of the building was the fire station and the right was the police station. This is how Station No. 8 looks today. Pictured, from left to right, are crewmembers Lorin Jones, Chris Rowe, Ralph Morentin (standing on the step), and Capt. John Kirby. On January 10, 1968, the move into the new Station No. 10 took place. The station was built on the site of the old Fire College and alarm office. # Four # TRAINING Up until 1924, firefighters were trained differently depending on the station to which they were assigned. In 1924, Chief Craw and Assistant Chief Minter adopted the Fresno Fire Department method of training firefighters. Fresno used one central training location and a training manual to achieve consistency. Station No. 6 was used as the first training center. Pictured here are unknown trainee firefighters. In 1930, Long Beach carpenters supervised the construction of the Fire College building and a wooden drill tower built by firefighters. Capt. E. Steiner was appointed as the first drill master. In 1940, the wooden drill tower was extended to six stories. In this 1939 picture, two unidentified firefighters practice a rescue with a breathing device that was said to last two hours. The classroom in the Fire College was small and could only accommodate 20 students. Here, instructors prepare lessons on fire ground hydraulics. After the start of World War II, the fire department began to train the Auxiliary Fire and Rescue Service. Pictured here, from left to right, are Capt. Harry Lucas, Nancy Brooks, and Battalion Chief Radcliffe, who were in charge of training the auxiliary firefighters. This is an Emergency Trailer Unit; it contained a pump and other tools for the auxiliary firefighters. In 1941, almost 600 men volunteered to be in the Auxiliary Fire and Rescue Service. Here the unidentified auxiliary firefighters train with the Emergency Trailer Units. This c. 
1945 drill class trains with an evacuation chute. These 1945 recruits are training with single beam ladders, or Pompier Ladders as they are sometimes known. This ladder was used by firefighters to go from floor to floor on the outside of a building. By placing the hooked end of the ladder on the window ledge above and climbing the narrow ladder up to that floor, the firefighter could use the same ladder up to the next floor and so on. In this 1946 photograph, firefighters receive a demonstration on a self-contained breathing apparatus. Life net training was an important part of new firefighter training. The firefighters would train with the life net during the first few days of drill school. If they refused to jump into the net, they failed drill school. This 1947 picture shows the Fire College, Station No. 10, and the wooden training tower in the background. In 1964, the new alarm office was built where the wooden tower stood, and four years later Station No. 10 was built where the Fire College was located. In 1947, the firefighters put on a demonstration of the evacuation chute at Recreation Park. In 1964, firefighter Kent Holliday died when he fell from the evacuation chute during training. One of the most difficult things to do at a dock fire was to direct the water stream up under a wooden dock or pier. In this picture, an unidentified boat crew trains with a floating nozzle that would spray water up. Here is a closer view of the floating nozzle. During the 1950s, firefighters trained with a dual air pump device. Two firefighters would put on breathing masks that were connected by a hose to a single air pump, and another would turn the hand crank to operate the air pump. This is a closer view of the dual breathing mask that the firefighters would wear. In this 1965 photograph, firefighters train with high expansion foam. It is used to help smother a fire in an enclosed space, such as a house, as seen here. 
The airport fire crews train with a prop to simulate an aircraft fire. At one time, the city of Long Beach had one of the busiest private plane airports in the nation. The citizens of Long Beach voted for $1,535,000 of improvements, and part of this went to a new training center. In this 1963 picture, you can see a six-story concrete tower being built with a drafting pit in the foreground. A drafting pit is an underground pit of water used to practice pumping and test fire truck pumps. In July 1964, the first drill class from the new training center graduated. # Five # FIRST AID In this 1920s photograph, the unidentified firefighters use oxygen to revive a man who was overcome with gas fumes. In this 1920s or early 1930s photograph, firefighters practice an early form of cardiopulmonary resuscitation (CPR). In this photograph, dated before 1933, Chief Minter stands to the left of an unidentified crew displaying the first-aid equipment carried by the Long Beach Fire Department. Before 1941, private companies provided the ambulance service for Long Beach. This is a picture of a 1934 Studebaker ambulance. Pictured here, from left to right, are firefighter Tally, Captain Davis, and firefighter Tyler as they practice first aid on the "patient"—firefighter Corrigan. During World War II, the Civil Defense Ambulance Service operated with donated ambulances. The Culinary Alliance Local No. 681 donated the ambulance seen here. Seen here in 1947 is the ambulance crew at Station No. 7. From left to right are Bob Moll, Morris McCuen, Ted Klobucher, Garret Cady, George Carver, John Olson, Dale Lowell, and Al DeFrank. At Recreation Park in 1947, unidentified firefighters demonstrate a rescue at a firefighting presentation. This is the 1948 Packard ambulance that the fire department used. In this c. 1950 picture, a patient is rescued from a trolley accident. In the late 1950s, the fire department ambulance crew transported this child to the hospital. 
In the 1950s, the Long Beach Fire Department started using Cadillacs for ambulances. Here is an interior rear view of one of the Cadillacs. In 1972, the Long Beach Fire Department sent its first class of firefighters to paramedic school. Here, John Acosta and Bob Parkins practice inserting IVs. The first paramedic graduating class is pictured here in 1972. They are, from left to right, as follows: (first row) Dr. Irv Unger from Saint Mary's Hospital, Chief Rizzo, John Acosta, Art Santavicca, Gary Olson, Pat Highfill, Dennis Weller, and John Christensen; (second row) city manager Bob Creighton, Bill "Mad Dog" Kelly, Walt Gupton, Don Aselin, Bob Shue, Craig Vestermark, Bob Parkins, Gary Robertson, Dennis Wynn, and Carl Scheu. The first paramedic vehicle was a converted plumbing truck. # Six # ODDS AND ENDS George Hocking had the duty to care for the horses; there were seven. Tom and Jerry, two dapple grays, pulled the engine. King and Prince, both coal black, were assigned to the hose wagon, and assigned to the ladder truck were two bays named Major and Colonel. Barney, the seventh horse, was used to fill in when the other horses had a day off. Barney was quite the prankster—he learned how to open all the horse stalls and let the other horses loose. When he heard someone coming, though, he would enter one of the stalls as though nothing had happened. In 1911, a local actress posed for a publicity photograph sitting on a horse next to the Metropolitan steam engine in front of Station No. 1. The names of the actress and the firefighter are unknown. Pictured here in 1941 are retired Battalion Chief J. R. Buchanan and retired engineer George Hocking standing next to the 1902 hand-drawn ladder. The first known fire department mascot was a little dog named Digger, shown here in 1917 at Station No. 2 with firefighter H. Foulke and an unknown female. Foulke left the fire department in 1918 to join the U.S. Army. Digger had a uniform cap and coat, and lived mostly at Station No. 2. 
He eventually passed away in 1924, and his service cap and coat are on display in the museum. In 1926, Chief Minter created the Long Beach Junior Fire Department, a program for teenagers to learn the fire service and first aid, much like today's fire explorers. Pictured here are the badges they wore. Practicing arm splinting in 1929, from left to right, are Wilford Woodbury, Roy Hamilton, and W. S. Minter Jr., the chief's son. This is W. S. Minter Jr.'s Long Beach Junior Fire Department membership card. Practicing the Schafer method of resuscitation in 1929 are Earl Milan and Bill Stuht (lying down). Battalion Chief Harry Lucas is demonstrating a hydrant fitting he invented for the fire department in the late 1920s and early 1930s. This fitting is unique to the Long Beach Fire Department, and has been so successful that it is still in use today. Here is a closer view of Chief Lucas's hydrant fitting. In 1931, the obsolete alarm bell was removed from the tower behind Station No. 1. In 1942, the old alarm bell was scrapped for the war effort. Firefighter Ed Rugals was a talented cartoonist who often drew cartoons of the men and goings-on of the department. This cartoon from the 1940s shows drill instructor Max Bryan teaching a recruit how to "jack a hose line." This cartoon, also drawn in the 1940s, is poking fun at a District 1 "A" shift strategy session. Another department mascot was a dog named Bimbo, seen here in 1955 sitting on a ladder truck. When the alarm sounded, Bimbo often beat the crew to the rig. Bimbo also helped train new firefighter recruits at the training center. Seen here in 1941 are firefighters R. Plumb and H. Kern pulling kitchen duty at Station No. 7. Today in the Long Beach Fire Department, the whole crew takes a turn cooking—even the "bad" ones. Here is Cully Churchfield's pet parrot. Churchfield was a nonsworn employee and, during the mid-1950s, could be seen around the maintenance shop. On December 4, 1957, Station No. 
13 was moved from Santa Fe Avenue on the west side of town to the rapidly growing east Long Beach area, and renamed Station No. 18. Engine No. 18 had to be temporarily housed at Station No. 17 until the move was complete. On December 6, 1957, the apparatus bay was moved and reassembly started. In August 1957, a new Station No. 13 was built on Adriatic Avenue. On January 8, 1958, the fire department was allowed to move into the new Station No. 18. While the traditional firehouse dog is the dalmatian, Long Beach had two other breeds before finally getting its dalmatian mascot, Duchess. Seen here in 1958 is Duchess being "paw-printed" by Chief Sandeman and an unidentified battalion chief. Duchess is at rest in her favorite chair at Station No. 7 in 1958, waiting for the next alarm. Pictured here in 1958, from left to right, are George Brown, Duchess, and Art McIntyre running up the steps to city hall. This c. 1963 photograph proves that firefighters do rescue cats from trees. The cat is near the end of the ladder where the tree bends at the top. Here is the firefighter handing the rescued cat to another firefighter. The most famous of the Long Beach Fire Department mascots was Sam the cat, who lived at Station No. 6. One day Sam was chased, and he used the fire pole to escape. From that day on, Sam would slide down the pole on his own. After the local press picked up the story, Sam quickly became an international star. Ripley's Believe It or Not even ran a story on Sam. Sam started to receive fan mail and donations for his care. Golden Books and author Virginia Parsons published a children's book about Sam and his adventures at the fire station. When Station No. 6 moved to a new location in the harbor area, Sam never adjusted. He left, never to return. This is Station No. 6, where Sam lived and where the famous slide down the fire pole from the second floor to the first floor took place. Station No. 6 was built in 1922 and equipped for $36,923. 
This is a photograph of John Makemson (far right) in his drill class in 1944. To his left is Robert Leslie. The others are unidentified. On December 10, 1976, Robert Leslie became chief. In the early 1970s, John Makemson retired from Station No. 12. He bought a house across the street from the station and often visited the crews during his retirement. John passed away in the early 1990s, but this is actually where the story begins. Mysterious things have been happening at Station No. 12 ever since John's passing. Recently, a drill school graduate with no knowledge of previous ghostly visitations was on his first night in the station when he experienced a particularly odd phenomenon. He was awakened by a shadowy presence near his bed, and when he tried to sit up, he felt a light pressure on his chest holding him down. He could see the rest of the crew sleeping in their beds when this happened. Were these just firefighter pranks? Some say yes. Some say that John visits the station. John did once say, "I'll dance on their graves." This is Station No. 12 as it looked in 1929 when it was built. It looks the same today, and some even say it is haunted. # Seven # FIRE CHIEFS Every fire chief for the Long Beach Fire Department has contributed to the rich history of the department. All the fire chiefs have risen through the ranks, with the exception of Chief Shrewsbury, who was appointed chief in 1901. Chief Shrewsbury turned down offers to go to other departments for more pay. Chief Shrewsbury was part of a sad moment in the fire department's history with his untimely passing in an automobile accident.

Joseph E. Shrewsbury: October 1, 1901–May 2, 1916
George Craw: May 2, 1916–March 1, 1926
William S. Minter: March 1, 1926–August 7, 1933
Allen C. Duree: August 7, 1933–February 1, 1946
Frank S. Sandeman: February 19, 1946–July 13, 1961
Leonard V. Foster: July 25, 1961–December 17, 1968
Tullio J. Rizzo: December 17, 1968–February 3, 1974
Virgil M. Jones: February 3, 1974–December 10, 1976
Robert E. Leslie: December 10, 1976–December 13, 1984
James B. Souders: December 13, 1984–November 2, 1988
Chris A. Hunter: November 2, 1988–December 31, 1993
Harold Omel Jr.: January 1, 1994–December 31, 1997
Anthon L. Beck: December 31, 1997–February 2, 2002
Terry L. Harbour: February 2, 2002–June 30, 2004

Chief Ellis has been chief of the Long Beach Fire Department for a relatively short period of time—June 12, 2004 to present—and has already begun to make his mark on what will be the history of the Long Beach Fire Department. Long Beach is a very diverse community with several industries; therefore, the Long Beach Fire Department must be highly trained and diverse itself. Today the Long Beach Fire Department has 502 sworn personnel and 23 fire stations, including two fireboat stations and an airport station. In 1994, the Long Beach Fire Department merged with the Long Beach Lifeguards, adding a Swift Water Rescue team and a dive team. Patrolling the harbor are two Marine Lifeguard Fire/Rescue boats, four Marine Safety Response vehicles, and two 89-foot harbor fireboats. The department has 9 paramedic units, and 10 of the engine companies are Paramedic Assessment Units. There is also a dedicated Urban Search and Rescue Unit and team. In 2003, there were 56,919 total calls for service, including 37,602 medical, 9,795 lifeguard/marine safety, and 5,434 fire. The fire department badge, whose history dates back hundreds of years, has long been a symbol of a great profession. The badge now represents a commitment to all of the people the Long Beach Fire Department serves and to our fellow firefighters who carry out their duties in a skillful, professional manner and are willing to sometimes risk the ultimate—their lives. Seen here is the latest version of the Long Beach Fire Department badge. Find more books like this at www.imagesofamerica.com.
import configparser
import os


def parse_config(self, config):
    # Apply every texture change listed in the [changes] section.
    try:
        for change_name, targets in config.items('changes'):
            change = self.find_change(change_name)
            # Each change can name several target textures, separated by whitespace.
            for target in targets.split():
                self.set_changes(
                    os.path.join('assets', 'minecraft', 'textures', 'block', target),
                    change)
    except configparser.NoSectionError:
        # The config has no [changes] section; nothing to apply.
        pass
package oci

import (
	"context"
	"fmt"

	"github.com/oracle/oci-go-sdk/v44/nosql"
	"github.com/turbot/steampipe-plugin-sdk/grpc/proto"
	"github.com/turbot/steampipe-plugin-sdk/plugin"
	"github.com/turbot/steampipe-plugin-sdk/plugin/transform"
)

//// TABLE DEFINITION

func tableOciNoSQLTableMetricWriteThrottleCountHourly(_ context.Context) *plugin.Table {
	return &plugin.Table{
		Name:        "oci_nosql_table_metric_write_throttle_count_hourly",
		Description: "OCI NoSQL Table Monitoring Metrics - Write Throttle Count (Hourly)",
		List: &plugin.ListConfig{
			ParentHydrate: listNoSQLTables,
			Hydrate:       listNoSQLTableMetricWriteThrottleCountHourly,
		},
		GetMatrixItem: BuildCompartementRegionList,
		Columns: MonitoringMetricColumns(
			[]*plugin.Column{
				{
					Name:        "name",
					Description: "The name of the NoSQL table.",
					Type:        proto.ColumnType_STRING,
					Transform:   transform.FromField("DimensionValue"),
				},
			}),
	}
}

func listNoSQLTableMetricWriteThrottleCountHourly(ctx context.Context, d *plugin.QueryData, h *plugin.HydrateData) (interface{}, error) {
	table := h.Item.(nosql.TableSummary)
	region := fmt.Sprintf("%v", ociRegionNameFromId(*table.Id))
	return listMonitoringMetricStatistics(ctx, d, "HOURLY", "oci_nosql", "WriteThrottleCount", "tableName", *table.Name, *table.CompartmentId, region)
}
package motherlode.block;

import motherlode.client.model.BlockModelDefinition;
import motherlode.client.model.ItemBlockModelDefinition;
import motherlode.client.model.ItemModelDefinition;
import motherlode.util.InitUtil;
import net.minecraft.block.Block;
import net.minecraft.block.BlockSlab;
import net.minecraft.block.material.Material;
import net.minecraft.block.properties.IProperty;
import net.minecraft.block.properties.PropertyEnum;
import net.minecraft.block.state.BlockStateContainer;
import net.minecraft.block.state.IBlockState;
import net.minecraft.item.Item;
import net.minecraft.item.ItemStack;
import net.minecraft.util.IStringSerializable;
import net.minecraft.util.math.BlockPos;
import net.minecraft.world.World;
import net.minecraftforge.fml.relauncher.ReflectionHelper;
import net.minecraftforge.fml.relauncher.Side;
import net.minecraftforge.fml.relauncher.SideOnly;

import java.util.Random;

public abstract class BlockMotherlodeSlab extends BlockSlab implements IModeledBlock {

    public final String name;
    public final String blockstate;
    public Block halfslab;

    public static final PropertyEnum<BlockMotherlodeSlab.Variant> VARIANT = PropertyEnum.create("variant", BlockMotherlodeSlab.Variant.class);

    public static final String[] HARDNESS_MAPPINGS = new String[] { "q", "field_149782_v", "blockHardness" };
    public static final String[] RESISTANCE_MAPPINGS = new String[] { "r", "field_149781_w", "blockResistance" };

    @SuppressWarnings("deprecation")
    public BlockMotherlodeSlab(String name, String blockstate, Block baseBlock) {
        super(baseBlock.getDefaultState().getMaterial());
        this.name = name;
        this.blockstate = blockstate;
        IBlockState iblockstate = this.blockState.getBaseState();
        if (!this.isDouble()) {
            iblockstate = iblockstate.withProperty(HALF, EnumBlockHalf.BOTTOM);
            InitUtil.setup(this, name + "_slab");
            halfslab = this;
        } else {
            InitUtil.setup(this, name + "_double_slab");
        }
        if (this.blockMaterial == Material.ROCK) {
            setHarvestLevel("pickaxe", 0);
        }
        setHardness(ReflectionHelper.getPrivateValue(Block.class, baseBlock, HARDNESS_MAPPINGS));
        setResistance(ReflectionHelper.getPrivateValue(Block.class, baseBlock, RESISTANCE_MAPPINGS));
        setSoundType(baseBlock.getSoundType());
        this.setDefaultState(iblockstate);
        useNeighborBrightness = true;
    }

    public Item getItemDropped(IBlockState state, Random rand, int fortune) {
        return Item.getItemFromBlock(halfslab);
    }

    public ItemStack getItem(World worldIn, BlockPos pos, IBlockState state) {
        return new ItemStack(halfslab);
    }

    public IBlockState getStateFromMeta(int meta) {
        IBlockState iblockstate = this.getDefaultState();
        if (!this.isDouble()) {
            iblockstate = iblockstate.withProperty(HALF, (meta & 8) == 0 ? EnumBlockHalf.BOTTOM : EnumBlockHalf.TOP);
        }
        return iblockstate;
    }

    public int getMetaFromState(IBlockState state) {
        int i = 0;
        if (!this.isDouble() && state.getValue(HALF) == EnumBlockHalf.TOP) {
            i |= 8;
        }
        return i;
    }

    protected BlockStateContainer createBlockState() {
        return this.isDouble() ? new BlockStateContainer(this, VARIANT) : new BlockStateContainer(this, HALF, VARIANT);
    }

    public String getUnlocalizedName(int meta) {
        return super.getUnlocalizedName();
    }

    public static class Double extends BlockMotherlodeSlab {

        public Double(String name, Block baseBlock, Block half) {
            this(name, "", baseBlock, half);
        }

        public Double(String name, String blockstate, Block baseBlock, Block half) {
            super(name, blockstate, baseBlock);
            this.halfslab = half;
        }

        public boolean isDouble() {
            return true;
        }
    }

    public static class Half extends BlockMotherlodeSlab {

        public Half(String name, Block baseBlock) {
            this(name, "", baseBlock);
        }

        public Half(String name, String blockstate, Block baseBlock) {
            super(name, blockstate, baseBlock);
        }

        public boolean isDouble() {
            return false;
        }
    }

    @Override
    public Comparable<?> getTypeForItem(ItemStack stack) {
        return Variant.DEFAULT;
    }

    @Override
    public IProperty<?> getVariantProperty() {
        return VARIANT;
    }

    public static enum Variant implements IStringSerializable {
        DEFAULT;

        public String getName() {
            return "default";
        }
    }

    @SideOnly(Side.CLIENT)
    @Override
    public BlockModelDefinition getBlockModelDefinition() {
        if (isDouble()) {
            if (blockstate.isEmpty()) {
                return new BlockModelDefinition(this, VARIANT).append("stair=ignore").setVariant("slab_half=double");
            }
            return new BlockModelDefinition(this, blockstate, VARIANT).append("stair=ignore").append("type=" + name).setVariant("slab_half=double");
        } else {
            if (blockstate.isEmpty()) {
                return new BlockModelDefinition(this, VARIANT).append("stair=ignore").setVariant(state -> "slab_half=" + state.getValue(HALF).getName());
            }
            return new BlockModelDefinition(this, blockstate, VARIANT).append("stair=ignore").append("type=" + name).setVariant(state -> "slab_half=" + state.getValue(HALF).getName());
        }
    }

    @SideOnly(Side.CLIENT)
    @Override
    public ItemModelDefinition getItemModelDefinition() {
        if (isDouble()) {
            return null;
        }
        if (blockstate.isEmpty()) {
            return new ItemBlockModelDefinition(this).setVariant("slab_half=bottom");
        }
        return new ItemBlockModelDefinition(this, blockstate).append("type=" + name).setVariant("slab_half=bottom,stair=ignore");
    }
}
def assertConvsConnectedToGammas(self, conv_names, gamma_prefixes, mapper):

    def make_set(item):
        return item if isinstance(item, set) else set([item,])

    convs = [get_op(conv_name) for conv_name in conv_names]
    gamma_sets = [make_set(mapper.get_gamma(conv)) for conv in convs]
    # Every convolution should map to the same set of gamma variables.
    if len(gamma_sets) > 1:
        for i in range(1, len(gamma_sets)):
            self.assertEqual(gamma_sets[i], gamma_sets[0])
    # Compare the sorted gamma op names against the expected name prefixes.
    actual_gamma_names = sorted([g.op.name for g in gamma_sets[0]])
    gamma_prefixes = sorted(gamma_prefixes)
    for expected, actual in zip(gamma_prefixes, actual_gamma_names):
        self.assertTrue(actual.startswith(expected))
#ifdef USE_MSVC

/**
 *	Include Header
 */
#include "Plugin.h"

/**
 *	Include Engine
 */
#include <ING/Engine/Engine.h>

/**
 *	Include Debug
 */
#include <ING/_Debug/Debug.h>

namespace ING {

	namespace MSVC {

		/**
		 *	Constructors And Destructor
		 */
		Plugin::Plugin(const WString& path) :
			IPlugin(path),
			moduleHandle(0)
		{
		}

		Plugin::~Plugin() {
		}

		/**
		 *	Release Method
		 */
		bool Plugin::Release() {
			return IPlugin::Release();
		}

		std::string GetLastErrorAsString() {
			// Get the error message ID, if any.
			DWORD errorMessageID = ::GetLastError();
			if (errorMessageID == 0) {
				return std::string(); // No error message has been recorded
			}

			LPSTR messageBuffer = nullptr;

			// Ask Win32 to give us the string version of that message ID.
			// The parameters we pass in tell Win32 to create the buffer that holds
			// the message for us (because we don't yet know how long the message string will be).
			size_t size = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
				NULL, errorMessageID, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPSTR)&messageBuffer, 0, NULL);

			// Copy the error message into a std::string.
			std::string message(messageBuffer, size);

			// Free the Win32 string buffer.
			LocalFree(messageBuffer);

			return message;
		}

		/**
		 *	Methods
		 */
		bool Plugin::Load() {
			if (moduleHandle != 0) return false;

			moduleHandle = LoadLibrary(Path::GetAbsolutePath(GetPath()).c_str());

			if (moduleHandle == 0) {
				Debug::Error(ToWString("Cant Load Plugin ") + GetPath());
				Debug::Error(GetLastErrorAsString());
				Release();
				return false;
			}

			PluginNameFunction nameFunction = (PluginNameFunction)GetProcAddress(moduleHandle, "PluginName");

			if (!nameFunction) {
				Debug::Error(ToWString("Not Found Plugin ") + GetPath());
				Release();
				return false;
			}

			name = nameFunction();

			dllDirCookie = AddDllDirectory(GetPath().c_str());

			loadFunction = (PluginLoadFunction)GetProcAddress(moduleHandle, (GetName() + ToString("_Load")).c_str());
			unloadFunction = (PluginUnloadFunction)GetProcAddress(moduleHandle, (GetName() + ToString("_Unload")).c_str());
			lateCreateFunction = (PluginLateCreateFunction)GetProcAddress(moduleHandle, (GetName() + ToString("_LateCreate")).c_str());
			preInitFunction = (PluginPreInitFunction)GetProcAddress(moduleHandle, (GetName() + ToString("_PreInit")).c_str());
			lateInitFunction = (PluginLateInitFunction)GetProcAddress(moduleHandle, (GetName() + ToString("_LateInit")).c_str());
			preRunFunction = (PluginPreRunFunction)GetProcAddress(moduleHandle, (GetName() + ToString("_PreRun")).c_str());
			preReleaseFunction = (PluginPreReleaseFunction)GetProcAddress(moduleHandle, (GetName() + ToString("_PreRelease")).c_str());

			return IPlugin::Load();
		}

		bool Plugin::Unload() {
			if (moduleHandle == 0) return false;

			RemoveDllDirectory(dllDirCookie);
			dllDirCookie = 0;

			FreeLibrary(moduleHandle);

			return IPlugin::Unload();
		}

		bool Plugin::LateCreate() {
			if (!lateCreateFunction) return false;
			return IPlugin::LateCreate();
		}

		bool Plugin::PreInit() {
			if (!preInitFunction) return false;
			return IPlugin::PreInit();
		}

		bool Plugin::LateInit() {
			if (!lateInitFunction) return false;
			return IPlugin::LateInit();
		}

		bool Plugin::PreRun() {
			if (!preRunFunction) return false;
			return IPlugin::PreRun();
		}

		bool Plugin::PreRelease() {
			if (!preReleaseFunction) return false;
			return IPlugin::PreRelease();
		}
	}
}

#endif
// Tests a known result of a homography estimation. TEST(DLT, KnownTransforms1) { vector<Vec2f> input{ {0, 0}, {0, 1}, {1, 1}, {1, 0}}; vector<Vec2f> output{ {0, 0}, {0, 2}, {2, 2}, {2, 0}}; Mat3f GT = Mat3f::Identity(); GT(0, 0) = 2; GT(1, 1) = 2; constexpr float kEpsilon = 1e-5f; Mat3f H = DLT(input.data(), output.data(), input.size()); H /= H(2, 2); for (int row = 0; row < 3; ++ row) { for (int col = 0; col < 3; ++ col) { EXPECT_NEAR(GT(row, col), H(row, col), kEpsilon) << " at (" << row << ", " << col << ")"; } } H = NormalizedDLT(input.data(), output.data(), input.size()); H /= H(2, 2); for (int row = 0; row < 3; ++ row) { for (int col = 0; col < 3; ++ col) { EXPECT_NEAR(GT(row, col), H(row, col), kEpsilon) << " at (" << row << ", " << col << ")"; } } }
// helper file to unzip the WAR file contents private static void unjar(String zipFile, String outputDirectory) throws IOException { byte[] buffer = new byte[1024]; ZipFile archive = new ZipFile(new File(zipFile)); try { Enumeration<? extends ZipEntry> entries = archive.entries(); while (entries.hasMoreElements()) { ZipEntry entry = entries.nextElement(); File file = new File(outputDirectory + File.separator + entry.getName()); if(entry.isDirectory()){ file.mkdirs(); continue; } else { new File(file.getParent()).mkdirs(); } FileOutputStream out = new FileOutputStream(file); try { InputStream in = archive.getInputStream(entry); try { int len; while ((len = in.read(buffer)) > 0) { out.write(buffer, 0, len); } } finally { in.close(); } out.flush(); } finally { out.close(); } } } finally { archive.close(); } }
We have featured one of these Camaro Pace Cars in the past, but it was in pieces. I’m not sure which is worse though. This body is full of rust and everything else is deteriorated. What is there though appears to be original and there is documentation to prove the car’s pedigree. As can be evidenced by the quantity of bids, someone wants this one badly! These were actually replicas of the actual pace cars, but the white paint with orange stripes outside and the orange houndstooth seats inside made them very striking. You can read more about what made these special here on Camaros.org. This car appears to have been ordered with the base SS350 engine and the optional 4-speed! It’s going to take a lot of work to restore, but hopefully someone with the resources and time will drag it home. If you have what it takes, this project can be found here on eBay out of South Point, Ohio.
// Checks that we index member variables. //- @C defines/binding ClassC //- @f defines/binding FieldF //- FieldF childof ClassC //- FieldF typed vname("int#builtin",_,_,_,_) //- FieldF.node/kind variable //- FieldF.subkind field class C { int f; };
#include "Scanner.h"
#include <stdio.h>
#include <string.h>

#define TYPE_BUFFER 0
#define TYPE_FILE 1

void Scanner_InitFromString(Scanner_T * scanner, const char * string)
{
   scanner->type = TYPE_BUFFER;
   scanner->file = NULL;
   scanner->index = 0;
   scanner->buffer = string;
   scanner->size = strlen(string);
   scanner->line = 1;
   scanner->col = 1;
}

void Scanner_InitFromBuffer(Scanner_T * scanner, const char * buffer, size_t size)
{
   scanner->type = TYPE_BUFFER;
   scanner->file = NULL;
   scanner->index = 0;
   scanner->buffer = buffer;
   scanner->size = size;
   scanner->line = 1;
   scanner->col = 1;
}

void Scanner_InitFromFile(Scanner_T * scanner, const char * filename)
{
   scanner->type = TYPE_FILE;
   scanner->file = fopen(filename, "r");
   scanner->index = 0;
   scanner->buffer = NULL;
   scanner->size = 0;
   scanner->line = 1;
   scanner->col = 1;
   if(scanner->file == NULL)
   {
      printf("Error: Scanner_InitFromFile: Can't Open File: \"%s\"\n", filename);
   }
}

void Scanner_Destroy(Scanner_T * scanner)
{
   if(scanner->file != NULL)
   {
      fclose(scanner->file);
      scanner->file = NULL;
   }
   scanner->buffer = NULL;
}

static void Scanner_SetChar(Scanner_T * scanner, ScannerChar_T * schar, char c)
{
   if(scanner->col == 0 && c != '\r')
   {
      scanner->col = 1;
   }
   schar->line = scanner->line;
   schar->col = scanner->col;
   schar->c = c;
   if(c == '\n')
   {
      scanner->line ++;
      scanner->col = 0;
   }
   else if(scanner->col != 0)
   {
      scanner->col ++;
   }
}

static void Scanner_SetBadChar(ScannerChar_T * schar)
{
   schar->line = 0;
   schar->col = 0;
   schar->c = '\0';
}

int Scanner_GetNextChar(Scanner_T * scanner, ScannerChar_T * schar)
{
   int result;
   int f_char;
   if(scanner->type == TYPE_FILE)
   {
      if(scanner->file == NULL)
      {
         result = 0;
         Scanner_SetBadChar(schar);
      }
      else
      {
         f_char = fgetc(scanner->file);
         if(f_char == EOF)
         {
            fclose(scanner->file);
            scanner->file = NULL;
            Scanner_SetBadChar(schar);
            result = 0;
         }
         else
         {
            Scanner_SetChar(scanner, schar, f_char);
            result = 1;
         }
      }
   }
   else if(scanner->type == TYPE_BUFFER)
   {
      if(scanner->index < scanner->size)
      {
         Scanner_SetChar(scanner, schar, scanner->buffer[scanner->index]);
         scanner->index ++;
         result = 1;
      }
      else
      {
         Scanner_SetBadChar(schar);
         result = 0;
      }
   }
   else
   {
      result = 0;
   }
   return result;
}
/** * * @author Mustafa SACLI */ public final class ConfigurationManager { public static DbConfiguration getConfiguration(String connection_name) { DbConfiguration db_conf = get_configuration("C:/JvFreeOrm/conf/config.xml", connection_name); return db_conf; } public static DbConfiguration getConfiguration(String file_name, String connection_name) { DbConfiguration db_conf = get_configuration(file_name, connection_name); return db_conf; } /** * * @param connTypeName * @return returns Driver Type Of Connection. */ public static DriverType GetDriverType(String connTypeName) { DriverType dt = DriverType.Unknown; try { String connType = connTypeName == null ? "" : connTypeName; connType = connType.trim(); connType = connType.toLowerCase(); connType = connType.replace('ı', 'i'); // || connType.matches("mssqljtds") if (connType.matches("ext") || connType.matches("external")) { dt = DriverType.External; return dt; } if (connType.matches("derby") || connType.matches("apachederby")) { dt = DriverType.Derby; return dt; } if (connType.matches("mssql") || connType.matches("ms-sql")) { dt = DriverType.MsSQL; return dt; } if (connType.matches("jtds")) { dt = DriverType.Jtds; return dt; } if (connType.matches("sunodbc") || connType.matches("sun-odbc")) { dt = DriverType.SunOdbc; return dt; } if (connType.matches("enterprisedb") || connType.matches("enterprise-db")) { dt = DriverType.EnterpriseDb; return dt; } if (connType.matches("db2")) { dt = DriverType.Db2; return dt; } if (connType.matches("oracle")) { dt = DriverType.Oracle; return dt; } if (connType.matches("mysql") || connType.matches("my-sql")) { dt = DriverType.MySQL; return dt; } if (connType.matches("sqlite")) { dt = DriverType.SQLite; return dt; } if (connType.matches("fbird") || connType.matches("firebird") || connType.matches("firebirdsql") || connType.matches("firebird-sql")) { dt = DriverType.Firebird; return dt; } if (connType.matches("access")) { dt = DriverType.Access; return dt; } if (connType.matches("hsql")) 
{ dt = DriverType.HSql; return dt; } if (connType.matches("pgsql") || connType.matches("pg-sql") || connType.matches("postgresql") || connType.matches("postgre-sql")) { dt = DriverType.PostgreSQL; return dt; } if (connType.matches("h2")) { dt = DriverType.H2; return dt; } if (connType.matches("sybase")) { dt = DriverType.Sybase; return dt; } if (connType.matches("informix")) { dt = DriverType.Informix; return dt; } if (connType.matches("u2")) { dt = DriverType.U2; return dt; } if (connType.matches("ingres")) { dt = DriverType.Ingres; return dt; } if (connType.matches("first") || connType.matches("firstsql") || connType.matches("first-sql")) { dt = DriverType.FirstSQL; return dt; } if (connType.matches("mimer") || connType.matches("mimersql") || connType.matches("mimer-sql")) { dt = DriverType.MimerSQL; return dt; } if (connType.matches("openbase") || connType.matches("open-base")) { dt = DriverType.OpenBase; return dt; } if (connType.matches("sapdb") || connType.matches("sap-db")) { dt = DriverType.SapDb; return dt; } if (connType.matches("small") || connType.matches("smallsql") || connType.matches("small-sql")) { dt = DriverType.SmallSQL; return dt; } if (connType.matches("cassandra")) { dt = DriverType.Cassandra; return dt; } if (connType.matches("cache")) { dt = DriverType.Cache; return dt; } if (connType.matches("terradata") || connType.matches("terra-data")) { dt = DriverType.TerraData; return dt; } if (connType.matches("jtdsmssql") || connType.matches("jtds-mssql") || connType.matches("mssqljtds") || connType.matches("mssql-jtds")) { dt = DriverType.Jtds_MsSql; return dt; } if (connType.matches("mssqlce") || connType.matches("ms-sqlce")) { dt = DriverType.MsSqlCe; return dt; } } catch (Exception e) { dt = DriverType.Unknown; } return dt; } private static DbConfiguration get_configuration(String file_name, String connection_name) { DbConfiguration db_conf = new DbConfiguration(); try { File f = new File(file_name); DocumentBuilder docBuilder = 
DocumentBuilderFactory.newInstance() .newDocumentBuilder(); Document doc = docBuilder.parse(f); doc.getDocumentElement().normalize(); Element nd_config = doc.getDocumentElement(); NodeList nd_list = nd_config.getElementsByTagName("config"); if (nd_list == null || nd_list.getLength() == 0) { db_conf.setError("Db Configuration File is empty."); return db_conf; } Node nd; NodeList nd_sub_list; for (int nd_counter = 0; nd_counter < nd_list.getLength(); nd_counter++) { nd=nd_list.item(nd_counter); } } catch (Exception e) { db_conf.setError(e.getMessage()); } return db_conf; } }
add = lambda a, b: a + b
print(add(1, 3))  # 4
/** \brief Sorts points in a point array. * * arr array uses NR standard indexing i.e arr[1...n] * but brr[0..n-1] * if the point array is two-way-coupled to another point array * the image pointers of that array will follow sort * if the array is not two-way-coupled to another the image * pointers in the other array will be untouched */ void double_sort_points(unsigned long n, PosType *arr, Point *brr){ unsigned long i,ir=n,j,k,l=1,*istack; long jstack=0; PosType a; Point b; istack=lvector(1,NSTACK); for (;;) { if (ir-l < M) { for (j=l+1;j<=ir;j++) { a=arr[j]; PointCopy(&b,&brr[j-1]); for (i=j-1;i>=l;i--) { if (arr[i] <= a) break; arr[i+1]=arr[i]; PointCopy(&brr[i],&brr[i-1]); } arr[i+1]=a; PointCopy(&brr[i],&b); } if (!jstack) { free_lvector(istack,1,NSTACK); return; } ir=istack[jstack]; l=istack[jstack-1]; jstack -= 2; } else { k=(l+ir) >> 1; std::swap(arr[k],arr[l+1]); assert(k < n + 1); assert(l < n); SwapPointsInArray(&brr[k-1],&brr[l]); if (arr[l] > arr[ir]) { std::swap(arr[l],arr[ir]); assert(l < n + 1); assert(ir < n + 1); SwapPointsInArray(&brr[l-1],&brr[ir-1]); } if (arr[l+1] > arr[ir]) { assert(l < n); assert(ir < n+1); std::swap(arr[l+1],arr[ir]); SwapPointsInArray(&brr[l],&brr[ir-1]); } if (arr[l] > arr[l+1]) { assert(l < n); std::swap(arr[l],arr[l+1]); SwapPointsInArray(&brr[l-1],&brr[l]); } i=l+1; j=ir; a=arr[l+1]; PointCopy(&b,&brr[l]); for (;;) { do i++; while (arr[i] < a); do j--; while (arr[j] > a); if (j < i) break; std::swap(arr[i],arr[j]); assert(l < n + 1); assert(j < n + 1); SwapPointsInArray(&brr[i-1],&brr[j-1]); } arr[l+1]=arr[j]; arr[j]=a; PointCopy(&brr[l],&brr[j-1]); PointCopy(&brr[j-1],&b); jstack += 2; if (jstack > NSTACK) nrerror("NSTACK too small in double_sort_points"); if (ir-i+1 >= j-l) { istack[jstack]=ir; istack[jstack-1]=i; ir=j-1; } else { istack[jstack]=j-1; istack[jstack-1]=l; l=i; } } } }
/** * Handle request coming from Lambda. Call Message Parser. * @param input - String input from lambda * @param context - Lambda context * @return output from message parser */ public String handleRequest(String input, Context context) { MessageParser parser = new MessageParser(); try { return parser.RunCumulusTask(input, context, new TaskLogic()); } catch(MessageAdapterException e) { return e.getMessage(); } }
from django.contrib import admin from durin.models import Client from .models import ClientSettings class ClientSettingsInlineAdmin(admin.StackedInline): """ Django's StackedInline for :class:`ClientSettings` model. """ model = ClientSettings list_select_related = True extra = 1 class ClientAdmin(admin.ModelAdmin): """ Django's ModelAdmin for :class:`Client` model. """ inlines = [ ClientSettingsInlineAdmin, ] list_display = ( "id", "name", "token_ttl", "throttle_rate", ) # Unregister default admin view admin.site.unregister(Client) admin.site.register(Client, ClientAdmin)
FM-Indexing Grammars Induced by Suffix Sorting for Long Patterns

The run-length compressed Burrows-Wheeler transform (RLBWT), used in conjunction with the backward search introduced with the FM-index, is the centerpiece of most compressed indexes working on highly repetitive data sets like biological sequences. Compared to grammar indexes, the size of the RLBWT is often much bigger, but queries like counting the occurrences of long patterns can be answered much faster than on any existing grammar index so far. In this paper, we combine the virtues of a grammar with the RLBWT by building the RLBWT on top of a special grammar based on induced suffix sorting. Our experiments reveal that our hybrid approach outperforms the classic RLBWT with respect to the index sizes, and with respect to query times on biological data sets for sufficiently long patterns.

Introduction

A text index built on a string T of length n is a data structure that can answer the following queries for a given pattern P of length m:

exists(P): does the pattern P occur in T?
count(P): how often does the pattern P occur in T?
locate(P): where does the pattern P occur in T?

The answers are a boolean, a number, and a list of starting positions in the text, respectively. locate(P) is the most powerful query because the cardinality of its returned set is the return value of count(P), whereas count(P) > 0 is a boolean statement equivalent to exists(P). One prominent example of such a text index is the FM-index. It consists of a wavelet tree built upon the BWT of the text, and can answer count(P) in time linear in the length of P multiplied by the operational cost of the wavelet tree, which ranges from logarithmic in the alphabet size down to constant. If the BWT consists of r maximal character runs, this data structure, together with two additional bit vectors of length n, can be represented in r lg σ + o(r lg σ) + O(n) bits of space.
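To make the counting mechanism concrete, the following sketch implements count(P) via the backward search on a naively built BWT. This is an illustration only: the wavelet tree and its constant-time rank structures are replaced by linear scans, and all names are illustrative, not taken from the paper's implementation.

```python
def bwt(text):
    """Burrows-Wheeler transform of text + sentinel '$' (naive O(n^2 log n))."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def count(bwt_str, pattern):
    """count(P) via backward search: one rank step per pattern character."""
    # C[c] = number of characters in the text strictly smaller than c
    C, total = {}, 0
    for c in sorted(set(bwt_str)):
        C[c] = total
        total += bwt_str.count(c)
    lo, hi = 0, len(bwt_str)                 # current suffix interval [lo, hi)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + bwt_str[:lo].count(c)    # rank_c(lo), simulated by scanning
        hi = C[c] + bwt_str[:hi].count(c)    # rank_c(hi)
        if lo >= hi:
            return 0
    return hi - lo

b = bwt("bacabacaacbcbc")
print(count(b, "aca"))  # 2
```

Each backward search step shrinks the interval of suffixes prefixed by the already-matched pattern suffix; the interval width after the last step is the number of occurrences.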
This space can be further reduced with Huffman-shaped wavelet trees by exploiting the zeroth-order empirical entropy of the string consisting of the distinct letters of the runs in the BWT. For locate, the indexes based on the BWT are augmented by a sampling of the suffix array, which needs n lg n bits in its plain form. In what follows, we do not address locate, since this augmentation is orthogonal to our proposed data structure and is left as future work. Although current approaches achieve O(m) time for count(P) with |P| = m, they involve O(m) queries to the underlying wavelet tree data structure, each performed with a constant number of random accesses. Unfortunately, these random accesses make the FM-index rather slow in practice. The BWT built on a grammar-compressed string allows us to match non-terminals in one backward search step, hence allowing us to jump over multiple characters in one step. Consequently, we spend less time on the cache-unfriendly wavelet tree, but more time on extracting the grammar symbols stored in cache-friendly arrays. Our experiments reveal that this extra work pays off: the reduced usage of the wavelet tree improves the time performance. Regarding space, the grammar captures the compressibility far better than the run-length compression of the BWT built on the plain text. Here, we leverage certain properties of the GCIS (grammar compression by induced suffix sorting) grammar, which have been discovered by Akagi et al. and Díaz-Domínguez et al., for determining non-terminals of the text matching portions of the pattern.

Our Contribution

To sum up, our contribution is that combining the BWT with a specific choice of grammar-based compression method achieves potentially better compression than the plain RLBWT, while at the same time reducing the memory accesses for count queries (heuristically).
This comes at the expense of additional computation for building the grammar of the text during construction, and of the pattern during a query.

Related Work

A lot of research effort has been invested in analyzing and improving count on the BWT and the sampling of the suffix array. Another line of research is grammar indexes, which usually enhance a grammar for locate queries. Although computing the smallest grammar is NP-complete, there are grammars with a size of O(r log(n/r)), and some grammars are empirically much smaller than the RLBWT in practice. However, most grammar indexes have a quadratic dependency on the pattern length for locate, and are unable to give improved query times independent of the number of occurrences of the pattern, when considering only count. A novel exception is the grammar index of Christiansen et al., which achieves O(m + log^(2+ε) n) time for count within O(γ log(n/γ)) space, for γ being the size of the smallest string attractor of the input text. However, this approach seems to be rather impractical, and up to now nobody has considered implementing it. Related to our work are the grammar indexes of Akagi et al. and Díaz-Domínguez et al., which are also based on the GCIS grammar, where the latter builds on results of Christiansen et al. They also use similar techniques for extracting non-terminals from the pattern grammar, for which they can be sure that these appear in the text grammar. However, they need to call locate for computing count, and thus their time complexity depends on the number of occurrences of the pattern. We are not aware of a combination of the BWT with grammar techniques, except for construction. Here, Kärkkäinen et al. studied the construction of the BWT upon a grammar-compressed input.
They applied a grammar compression merging frequent bigrams, similar to Re-Pair, and could empirically improve the computation of the BWT as well as the reconstruction of the text from the BWT. With a similar target, Díaz-Domínguez and Navarro computed the extended BWT, a BWT variant for multiple texts, from the GCIS grammar.

Preliminaries

With lg we denote the logarithm to base two (i.e., lg = log_2). Our computational model is the word RAM with machine word size Ω(lg n), where n denotes the length of a given input string T, which we call the text, whose characters are drawn from an integer alphabet Σ = {1, ..., σ} of size σ = n^O(1). We call the elements of Σ characters. A character run is a maximal substring consisting of repetitions of the same character. For a string S ∈ Σ*, we denote with S_i its i-th suffix, and with |S| its length. Given X, Y, Z ∈ Σ* with S = XYZ, then X, Y, and Z are called a prefix, substring, and suffix of S, respectively. We say that a prefix X (resp. suffix Z) is proper if X ≠ S (resp. Z ≠ S). The order < on the alphabet Σ induces a lexicographic order on Σ*, which we denote by ≺. Given a character c ∈ Σ and an integer j, the rank query T.rank_c(j) counts the occurrences of c in T[1..j], and the select query T.select_c(j) gives the position of the j-th c in T. We stipulate that rank_c(0) = select_c(0) = 0. If the alphabet is binary, i.e., when T is a bit vector, there are data structures that use o(|T|) extra bits of space and can compute rank and select in constant time, respectively. Each of those data structures can be constructed in time linear in |T|. We say that a bit vector has a rank-support and a select-support if it is endowed with data structures providing constant-time access to rank and select, respectively.

Burrows-Wheeler Transform

The BWT of T is a permutation of the characters of T$, where we appended an artificial character $ smaller than all characters appearing in T.
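The rank and select primitives just defined can be sketched on a plain bit vector as follows; the constant-time o(|T|)-bit supports mentioned above are replaced by linear scans for clarity.

```python
class BitVector:
    """Bit vector with naive rank/select (linear scans stand in for the
    constant-time rank/select supports described in the text)."""

    def __init__(self, bits):
        self.bits = list(bits)

    def rank1(self, j):
        """Number of 1-bits in bits[0:j]; rank1(0) == 0 by convention."""
        return sum(self.bits[:j])

    def select1(self, k):
        """Position (0-based) of the k-th 1-bit, for k >= 1."""
        seen = 0
        for i, b in enumerate(self.bits):
            seen += b
            if seen == k:
                return i
        raise ValueError("fewer than k set bits")

bv = BitVector([1, 0, 1, 1, 0, 1])
print(bv.rank1(4), bv.select1(3))  # 3 3
```

Note the duality: select1(rank1(j)) returns the position of the last 1-bit strictly before position j whenever such a bit exists.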
This BWT, denoted by BWT, is defined such that BWT[i] is the character preceding the i-th lexicographically smallest suffix of T$ (or $ if this suffix is T$ itself).

Grammar

A grammar built upon a string T ∈ Σ* is a tuple G_T := (Γ, π, X_T) with Γ being the set of non-terminals, a function π : Γ → (Σ ∪ Γ)^+ that applies (production) rules, and a start symbol X_T such that the iterative application of π on X_T eventually gives T. Additionally, π is injective, there is no X ∈ Γ with |π(X)| = 0, and for each X ∈ Γ \ {X_T}, there is a Y ∈ Γ such that X is contained in π(Y). Obviously, G_T has no cycle. For simplicity, we stipulate that π(c) = c for c ∈ Σ. We say that a non-terminal (∈ Γ) or a character (∈ Σ) is a symbol, and denote the set of characters and non-terminals with S := Σ ∪ Γ. We understand π also as a string morphism π : S* → S* by applying π to each symbol of the input string. This allows us to define the expansion π*(X) of a symbol X, which is the iterative application of π until obtaining a string of characters, i.e., π*(X) ∈ Σ* and π*(X_T) = T. Since π(X) is deterministically defined, we say the right hand side of X for π(X). The lexicographic order on Σ induces an ordering on Γ by saying that X ≺ Y if and only if π*(X) ≺ π*(Y).

Grammar Compression Based on Induced Suffix Sorting

SAIS is a linear-time algorithm for computing the suffix array. We briefly sketch the parts of SAIS needed for constructing the GCIS grammar. Starting with a text T, we pad it with artificial characters # and $ at its left and right ends, respectively, such that T[1] = # and T[|T|] = $. We stipulate that # < $ < c for each character c ∈ Σ. Central to SAIS is the type assignment to each suffix, which is either L or S: a suffix T_i is S if T_i ≺ T_(i+1), and L otherwise; an S suffix whose preceding suffix is L is called S*. An LMS substring is a substring T[i..j] where i and j are the positions of two neighboring S* suffixes. The LMS substrings induce a factorization of T = T_1 · · · T_t, where each factor starts with an LMS substring. We call this factorization the LMS factorization. By replacing each factor T_x by the lexicographic rank of its respective LMS substring, we obtain a string T^(1) of these ranks.
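The type assignment and the resulting LMS factorization can be sketched as follows. Run on the text bacabacaacbcbc of the upcoming example, it reproduces the factors b, ac, ab, ac, aac, bc, bc; as a simplification, a single sentinel '\0' stands in for the padding characters # and $.

```python
def lms_factorize(T):
    """LMS factorization of T, as used by GCIS (sketch)."""
    S = T + "\0"                 # sentinel smaller than any character
    n = len(S)
    stype = [False] * n          # stype[i] == True  <=>  suffix i is S-type
    stype[n - 1] = True          # sentinel suffix is S-type by convention
    for i in range(n - 2, -1, -1):
        if S[i] < S[i + 1]:
            stype[i] = True
        elif S[i] > S[i + 1]:
            stype[i] = False
        else:                    # tie: inherit the type of the next suffix
            stype[i] = stype[i + 1]
    # LMS (= S*) positions: S-type suffixes preceded by an L-type suffix
    lms = [i for i in range(1, n) if stype[i] and not stype[i - 1]]
    # factors run from one LMS position up to (excluding) the next one;
    # the prefix before the first LMS position forms the first factor
    cuts = [0] + lms + [n - 1]   # n-1 excludes the sentinel
    factors = [S[cuts[k]:cuts[k + 1]] for k in range(len(cuts) - 1)]
    return [f for f in factors if f]

factors = lms_factorize("bacabacaacbcbc")
print(factors)  # ['b', 'ac', 'ab', 'ac', 'aac', 'bc', 'bc']
rule_of = {"aac": "A", "ab": "B", "ac": "C", "b": "D", "bc": "E"}
print("".join(rule_of[f] for f in factors))  # DCBCAEE
```

The second print reproduces T^(1) = DCBCAEE of the running example, assuming the rule names A through E from Sect. 2.4.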
We recurse on T^(1) until we obtain a string T^(t_T − 1) whose rank-characters are all unique or whose LMS factorization consists of at most two factors. If we, instead of assigning ranks, assign each LMS substring a non-terminal, and recurse on a string of non-terminals, we obtain a grammar G_T that is factorizing. Specifically, the right hand side of a non-terminal is an LMS substring without its last character, and the special characters # and $ are omitted. The start symbol is defined by X_T → T^(t_T).

Lemma 2.1. The GCIS grammar G_T can be constructed in O(n) time.

G_T is reduced, meaning that we can reach all non-terminals of Γ from X_T. Since there are no two neighboring S* suffixes, an LMS substring has a length of at least three, and therefore the right-hand sides of all non-terminals have a length of at least two (except maybe for the first factor). This means that the length of T^(i) is at most half the length of T^(i−1) for i ≥ 1. Consequently, the height t_T is O(lg n).

Example for a GCIS grammar

We build GCIS on the example text T := bacabacaacbcbc. For that, we determine the types of all suffixes, which determine the LMS substrings, as shown in Fig. 1. We obtain the grammar G_T with the following rules: A → aac, B → ab, C → ac, D → b, and E → bc. The grammar has σ^(1) := 5 non-terminals on height 1. By replacing the LMS substrings with the respective non-terminals, we obtain the string T^(1) := DCBCAEE. Since there are two occurrences of E, we would recurse, but here, and in the following examples, we stop at height 1 for simplicity. In what follows, we study an approach that builds the BWT on this text, which is given by BWT^(1) := ECCBD$EA.

FM-Indexing the GCIS Grammar

The main idea of our approach is to build the GCIS grammar G_P on P and to translate the matching problem of P in T to matching P^(t_P − 1) in T^(t_P − 1), with t_P being the height of G_P.
The problem is that the LMS factorization of P and the LMS factorization of the occurrences of P in T can look different, since the occurrences of P in T are not surrounded by the artificial characters # and $, but by different contexts of T. The question is whether there is a substring of P^(h) for which we can be sure that each occurrence of P in T is represented in T^(h) by a substring containing it. We call such a maximal substring a core, and give a characterization similar to Akagi et al. that determines this core.

Cores

Given a pattern P, we pad it like the text with the artificial characters # and $, and compute its LMS factorization. Now, we study the change of the LMS factorization when prepending or appending characters to P, i.e., we change P to cP or Pc for a character c ∈ Σ, while keeping the artificial characters # and $ at the left and right ends, respectively. We claim that (a) prepending characters can only extend the leftmost factor or let a new factor emerge consisting only of the newly introduced character, and (b) appending characters can split the last factor at the beginning of the rightmost character run into two. Consequently, given that the LMS factorization of P is P = P_1 · · · P_p, fix an occurrence of P in T. Then this occurrence is contained in the LMS factors P′_1 P_2 · · · P_(p−1) P′_p P′_(p+1), where P_1 is a (not necessarily proper) suffix of P′_1, and either (a) P′_(p+1) is empty and P_p is a (not necessarily proper) prefix of P′_p, or (b) P′_p is P_p without its last character run, which is a prefix of P′_(p+1).

Prepending

Suppose we prepend a new character c to P such that we get P′ := cP with P′_1 = c (the padding character # stays in front). Then none of the types changes, i.e., the type of P′_(i+1) is the type of P_i for i ≥ 1, since the type of a suffix is independent of its preceding suffixes. It is left to determine the type of P′_1 and to update the first LMS substring of P′ (cf. Fig.
2).

Appending

Let us fix an occurrence of the pattern P in the text T, let m be the starting position of this occurrence in T, and assume that the LMS factorization of P is P = P_1 . . . P_p with p > 2. Note that the last suffix of P is always L, since its successor is $. Given that the last two factors of P are P_(p−1) and P_p, we have two cases to consider for how the LMS factors in T covering the same characters as P_(p−1) and P_p look.

Figure 2: Prepending one of the characters a or b to P = abab. The rectangular brackets demarcate the LMS substrings. The two cases are studied in Sect. 3.1.

First, if the suffix of T starting at the last character run of the occurrence is L, the text factor F covering the end of the occurrence has P_p as a (not necessarily proper) prefix, and its preceding factor is P_(p−1) (assuming that p > 2); in this case we do not introduce a new LMS substring with an extra S* suffix. However, if this suffix of T is S, then the factorization of P's occurrence in T differs: the covering factor is a prefix of P_p (the last character run splits off into the next factor), and its preceding factor is equal to P_(p−1). In total, when matching the last LMS factors of P with the occurrences of P in T, only the last character run of P can be contained in a different LMS factor. Figure 3 visualizes our observation, considering the additional case that this suffix is S*, which is covered by our first case.

Pattern Matching

For simplicity, assume that we stop the grammar construction at the first level, i.e., after computing the factorization of the plain text, such that t_T = 2. We additionally build the BWT on T^(t_T − 1) and call it BWT^(t_T − 1). It can be computed in linear time by using an (alphabet-independent) linear-time suffix array construction algorithm like SAIS. Now, given a pattern P, we compute the GCIS grammar G_P on P, where we use the same non-terminals as in G_T whenever their right hand sides match. Then there are non-terminals Y_1, . . . , Y_p such that P has the LMS factorization P = P_1 · · · P_p with P_y = π(Y_y) for each y ∈ [1..p]. According to Sect.
3.1, each occurrence of P in T is captured by an occurrence of Y_2 · · · Y_(p−1) in T^(1). So Y_2, . . . , Y_(p−1) do not only appear as non-terminals in the grammar of T, but they also appear as substrings in T^(1) (if P occurs in T). In what follows, we call Y_2, . . . , Y_(p−1) the core of P, and show how to use the core to find P via BWT^(1) and a dictionary on the right hand sides of the non-terminals of G_T. If we turn BWT^(1) into an FM-index by representing it by a wavelet tree, it can find the core of P in p − 2 backward search steps, i.e., returning an interval in the BWT that corresponds to all occurrences of Y_2 · · · Y_(p−1) in T^(1), which corresponds to all occurrences of P_2 · · · P_(p−1) in T. We can extend this interval to an interval covering all occurrences of P_1 · · · P_(p−1) with the following trick: on constructing the wavelet tree on BWT^(1), we encode the symbols of T^(1) by the colexicographic order of their right hand sides. See Table 2 for the colexicographic ranking of the non-terminals, and Fig. 5 for the wavelet tree of our running example.

Figure 3: In Cases 1 and 2, we extend the last factor P_p, while we split P_p in Case 3, moving its last character to a new factor P_(p+1).

Table 2: Colexicographic ranking of the non-terminals of Sect. 2.4. We additionally add the artificial character $ with rank 0 because it is later used in BWT^(1).

To understand our modification, we briefly review the wavelet tree under that aspect: the wavelet tree is a binary tree. The root node stores, for each text position i of BWT^(1), a bit indicating whether the colexicographic rank of BWT^(1)[i] is larger than σ^(1)/2. Its left and right children inherit the input string omitting the marked and unmarked positions, respectively, such that the left and the right children obtain strings whose symbols have colexicographic ranks in [1 .. σ^(1)/2] and (σ^(1)/2 .. σ^(1)], respectively. The construction then works recursively in that the children themselves create bit vectors to partition the symbols.
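The recursive partitioning described here (which stops once a node receives a unary string) can be sketched with a pointer-based wavelet tree. The symbol ranks below are the colexicographic ranks derived from the rules of the running example ($ = 0, D = 1, B = 2, C = 3, A = 4, E = 5); this ranking is an assumption reconstructed from Sect. 2.4, and rank queries use linear scans instead of bit-vector rank supports.

```python
RANK = {"$": 0, "D": 1, "B": 2, "C": 3, "A": 4, "E": 5}  # assumed, see Table 2

class WaveletTree:
    """Pointer-based wavelet tree over integer symbols (sketch)."""

    def __init__(self, seq, lo=None, hi=None):
        if lo is None:
            lo, hi = min(seq), max(seq)
        self.lo, self.hi = lo, hi
        if lo == hi or not seq:
            self.bits = None             # unary (or empty) string: leaf
            return
        self.mid = (lo + hi) // 2
        # one bit per position: 1 if the symbol goes to the right child
        self.bits = [1 if c > self.mid else 0 for c in seq]
        self.left = WaveletTree([c for c in seq if c <= self.mid], lo, self.mid)
        self.right = WaveletTree([c for c in seq if c > self.mid], self.mid + 1, hi)

    def rank(self, c, i):
        """Occurrences of symbol c in seq[0:i]."""
        if self.bits is None:
            return i
        ones = sum(self.bits[:i])        # linear scan instead of rank support
        if c <= self.mid:
            return self.left.rank(c, i - ones)
        return self.right.rank(c, ones)

wt = WaveletTree([RANK[x] for x in "ECCBD$EA"])   # BWT^(1) of the example
print(wt.rank(RANK["C"], 5))  # 2: two C's among the first five BWT symbols
```

Because the symbols are ranked colexicographically, each subtree of this structure groups the non-terminals sharing a suffix of their right hand sides, which is exactly what the top-down traversal in the text exploits.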
The recursion ends whenever a node receives a unary string. By having ranked the non-terminals (∈ Σ^(1)) colexicographically during the construction of the wavelet tree of the BWT, matching π(Y_1) is done by a top-down traversal of the wavelet tree, starting at the root. By doing so, we can find the lowest node whose leaves represent the positions of all non-terminals having π(Y_1) as a suffix, within the query range of π(Y_2) · · · π(Y_(p−1)).

Finally, it is left to find the missing suffix. Let R := {R_k}_k ⊂ Σ^(1) be the set of all rules R_k with P_p being a (not necessarily proper) prefix of π(R_k). Since each R_k ∈ R received a rank according to the lexicographic order of its right hand side, the elements of R form a consecutive interval in BWT, and this interval corresponds to occurrences of P_p. So starting with this interval, the aforementioned backward search gives us occurrences of P. However, the final range may not contain all occurrences. That is because, according to Sect. 3.1, the rightmost non-terminal may not cover P_p completely, but only P_p without its last character run c^ℓ, where c^ℓ is the longest character run that is a suffix of P_p, for ℓ ≥ 0. Now, suppose that the rule X_p → P_p without c^ℓ exists; then we need to check, for all non-terminals in the set U = {U_j}_j with c^ℓ being a prefix of π(U_j), whether X_p U_j is a substring of T. With analogous reasoning, the occurrences of all elements of U ⊂ Σ^(1) form a consecutive range in BWT, and with a backward search for X_p we obtain another range corresponding to P_p. This range combined with the range for R gives all occurrences of P_p. Consequently, if X_p exists, we need to perform the backward search not only for the range of R, but also for X_p U.

Example for Pattern Matching

Continuing with Sect. 2.4, let P := cabaca be a given pattern. We obtain the factorization of P with its core BC as shown in Fig. 4 on the left.
The pattern is divided into four factors P 1 , P 2 , P 3 , and P 4 , where we know that P 2 and P 3 are the right hand sides of B and C, respectively; cf. Fig. 1 for the text. While we can determine the non-terminals corresponding to P 2 , P 3 , and P 4 , we have several candidate non-terminals that have P 1 and P 4 as a suffix or prefix, respectively, which we list below the brackets demarcating the LMS substrings of P .

Right: Matching P 2 P 3 P 4 in BWT (1) with the backward search.

We find that only the non-terminals A, B, and C have P 4 = a as a prefix of their right hand sides. These form a consecutive interval in BWT (1) . With the backward search, we can find the interval of P 2 P 3 P 4 from , as shown in the right of Fig. 4: From , we match P 3 corresponding to C, which gives the first and the second C in F , represented by the interval . From there, we match P 2 corresponding to B, which gives the first B at position 3. To match further, we look at the wavelet tree given in Fig. 5. There, we can use the edges to match the non-terminals with a pattern backwards. For instance, all non-terminals having P 1 = c as a suffix are found in the right subtree of the root. However, we are interested in completing the range of P from the range of P 2 P 3 P 4 , which consists of the single position 2. Hence, we look for all non-terminals having P 1 as a suffix within this range, which gives us the second C.

Finally, we explain our dictionary used for finding the non-terminals based on their right hand sides. This dictionary is represented by a trie, and implemented by the extended Burrows-Wheeler Transform (XBWT) . We use the XBWT because it supports substring queries , which allow us to extend a substring match by appending or prepending characters to the query.

XBWT

The grammar trie of G T on height h stores the reversed right hand sides of each non-terminal in Σ (h) for h ≥ 1, appended with an additional delimiter $ ∈ Σ smaller than all symbols.
Each leaf of the trie corresponds to a non-terminal. The trie for our running example is depicted on the left side of Fig. 6. There, we additionally added an imaginary node as the parent of the root, connected with an artificial character < $, which is needed for the XBWT construction. The XBWT of this trie is shown on the right of Fig. 6.

Figure 5: The wavelet tree of BWT (1) on our running example. The wavelet tree ranks each non-terminal by the colexicographic rank of its right hand side. Each row of the wavelet tree is depicted as a small matrix, where the actual data is the last row. The first row of each matrix consists of the ranks and the second row consists of the corresponding characters (∈ Σ (1) ). An edge on the i-th level lists all possible starting characters of the i-th suffix of the right hand sides of all non-terminals below this edge.

It consists of the arrays F , Last, and L; the other columns in the figure like Π are only for didactic reasons: L and Π represent the labels of the paths from each trie node up to the root, where L stores the first symbol, Π stores the remaining part, and F stores the first symbols of each string stored in Π. Consequently, concatenating L and Π gives the path from a node to the root in the trie. Each pair (L , Π ) is permuted such that Π is sorted lexicographically. The last element with the same string in Π is marked with a '1' in the bit vector Last. L is represented with a wavelet tree, and Last is equipped with a rank/select support. We represent F with an array C of size σ (h−1) lg n bits such that, given a c ∈ Σ (h−1) with its rank r c , C[r c ] is the number of symbols in F whose rank is at most r c . Each $ in the array L corresponds to a leaf, and hence to a non-terminal. We use this operation for finding the interval in BWT (1) of P p by searching $P p . The returned range is the range of lexicographic ranks of the non-terminals whose right hand sides have P p as a prefix.
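The (L, Π, Last) layout just described can be reproduced with a small Python sketch built directly from the path-to-root strings of the trie. The toy grammar below (A → a, B → ba, C → ca) is made up and is not the paper's running example.

```python
# Toy reconstruction of the (L, Pi, Last) columns: insert the reversed
# right-hand sides (with '$' appended) into a trie, identify each node by
# its root-to-node string, and sort the rows by the upward path remainder.
rules = {"A": "a", "B": "ba", "C": "ca"}   # hypothetical rules

nodes = set()
for rhs in rules.values():
    s = rhs[::-1] + "$"
    for i in range(1, len(s) + 1):
        nodes.add(s[:i])                   # every prefix is a trie node

rows = []
for node in nodes:
    up = node[::-1]                        # path from the node up to the root
    rows.append((up[1:], up[0]))           # (Pi, L): remaining part, first symbol

rows.sort(key=lambda r: r[0])              # permute so that Pi is sorted
Pi = [r[0] for r in rows]
L = [r[1] for r in rows]
Last = [i + 1 == len(Pi) or Pi[i] != Pi[i + 1] for i in range(len(Pi))]

assert L.count("$") == len(rules)          # each $ in L corresponds to a leaf
```

Grouping the sorted rows by Π and marking each group's last row reproduces the bit vector Last; a real implementation stores L as a wavelet tree instead of a list.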
We conclude that we can find P p in |P p | backward search steps on the XBWT. For our running example, where P p = a, we take the interval of all A's in F , and then select all $'s in F within that range. The ranks of these $'s correspond to the non-terminals A, B, and C. Finally, we need the colexicographic order of the non-terminals for matching P 1 (and for building the wavelet tree on the colexicographically ranked non-terminals of BWT (h) ). For that we have two options: (a) we create an additional XBWT on the blind tree of the lexicographically sorted right-hand side strings of the non-terminals on height h, or (b) a simple permutation with σ (h) lg σ (h) bits. The former approach is depicted in Fig. 8 in the appendix, the latter is given by Table 2.

(1) gives the non-terminal associated with a leaf. Reading the leaves representing the non-terminals from left to right gives their colexicographic ranking, cf. Table 2. Each node is represented by as many rows as it has children.

Complexity Analysis

Up to now, we have studied the case in which we stop the construction of the grammar at height 1. However, we can build the grammar up to a height t T = O(lg n), and then build BWT (t T ) on T (t T ) . We then store for each height h a separate XBWT equipped with the wavelet tree of Barbay et al. supporting a query in O(lg lg σ (h) ) time. The final BWT can be represented by a data structure supporting partial rank queries in constant time such that we can find a core in

For the interval in BWT containing the occurrences of P p , there are now not two, but 2 t T possibilities: This is because, for each recursive application of the GCIS grammar, we have the possibility to include the last run of symbols of the last LMS factor. Note that large values of t T make it infeasible to find short patterns that exhibit cores only at lower heights; this shortcoming is addressed in the next section.
Unfortunately, for a meaningful worst-case query time analysis, we need to bound the lengths of the LMS factors of P . We can do so if we enhance the grammar to be run-length compressed, i.e., reducing character runs to single characters annotated with their lengths. Then a run-length compressed LMS substring on height h has a length of at most 2σ (h) , and therefore, we can find a range of non-terminals containing such a string in σ (h) lg lg σ (h) time. This gives O(∑_{h=0}^{t T −1} σ (h) lg lg σ (h) ) time for finding the 2 t T initial backward search intervals, and O(|P |) time for conducting the backward search on all possible intervals. Although the worst-case time is never better than that of the FM-index built directly on BWT (0) , it can be improved by leveraging parallel execution. In fact, conducting the backward search on the 2 t T possible intervals is embarrassingly parallel. Given that we have ρ processors, we set t T to O(lg ρ). Then each backward search can be handled by one processor individually in O(|P |/ρ) time. Finally, we merge the results in a tournament tree in O(lg ρ) time. The wavelet tree on BWT (t T ) uses nH k /2 t T + o(nH k ) + O(n/2 t T ) bits with the representation of Belazzougui and Navarro , and the XBWT on height h takes

Practical Improvements

For practical reasons, we follow the aforementioned examples in that we stop the grammar construction at height 1. That is because we observed that the grammar at height 1 already compresses well, while higher levels introduce many more non-terminals, outweighing the compression gains. In addition, we introduce a chunking parameter λ ∈ O(log σ n). This parameter chops each LMS factor into factors of length λ with a possibly smaller last factor, such that each non-terminal has a length of at most λ. The idea behind such a small λ is that we can interpret the right hand side of each non-terminal as an integer fitting into a constant number of machine words.
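The chunking step just described can be sketched as follows; the factor string and λ = 3 are arbitrary examples, not values from the paper.

```python
# Cut a factor into pieces of length lam; only the last piece may be shorter,
# so every resulting non-terminal has a right-hand side of length <= lam.
lam = 3

def chunk(factor):
    return [factor[i:i + lam] for i in range(0, len(factor), lam)]

assert chunk("abacabad") == ["aba", "cab", "ad"]
```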
For the dictionary on the right hand sides of the non-terminals, we drop the idea of the XBWT, and instead use compressed bit vectors B F and B R , each of length σ λ . We represent π(X) for each non-terminal X as an integer v ∈ and store it by setting B F [v] = 1. Similarly, we represent the reversed string π(X) as such an integer v and set B R [v] = 1. We endow B F and B R with rank/select-support data structures. We additionally store a permutation to convert a value of B R . rank 1 to B F . select 1 .

Pattern Matching

Unfortunately, by limiting the right hand sides of the non-terminals to length λ, the property that only the first and last non-terminal of the parsed pattern is not in the core no longer holds in general. Let again P = P 1 · · · P p be the LMS factorization of our pattern. We assume that p ≥ 2 and |P | > λ; the other cases are analyzed afterwards. For x ∈ , we define the chunks P x,1 · · · P x,cx = P x with |P x,j | = λ for each j ∈ and |P x,cx | ∈ . Then, due to the construction of our chunks, there are non-terminals Y x,j ∈ Σ (1) with π(Y x,j ) = P x,j for all x ∈ and j ∈ . Hence, Y 2,1 · · · Y 2,c2 Y 3,1 · · · Y p−1,c p−1 is the core of P on height 1. The core can be found as a BWT range analogously to Sect. 3.2. But before searching the core, we first find P p . We only analyze the case of an occurrence where the last character run in P p has not been transferred to a new factor. In that case, we find a range of non-terminals whose right hand sides start with P p,cp . In detail, we interpret P p,cp as a binary integer v having |P p,cp | lg σ bits. Then we create two integers v 1 , v 2 by padding v with '0' and '1' bits at v's right end (interpreting the right end as the bits encoding the end of the string P p,cp ), respectively, such that v 1 and v 2 have λ lg σ bits with v 1 ≤ v 2 . This gives us the ranks of all non-terminals whose right hand sides start with P p,cp , and this interval of ranks translates to a range in BWT (1) .
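The prefix-range trick behind B F can be sketched in Python. Here a sorted list plus bisect stands in for the compressed bit vector with rank support, padding with the smallest/largest digit mirrors the '0'/'1' bit padding above, and sigma, lam, and the rule set are made-up values.

```python
import bisect

sigma, lam = 4, 3                       # assumed alphabet size and chunk length

def encode(symbols):
    v = 0
    for c in symbols:                   # symbols is a tuple of values in range(sigma)
        v = v * sigma + c
    return v

def pad_full(symbols):                  # a short right-hand side occupies the high digits
    return encode(symbols) * sigma ** (lam - len(symbols))

def prefix_range(p):
    # Pad with the smallest/largest digit, mirroring the '0'/'1' bit padding.
    pad = sigma ** (lam - len(p))
    return encode(p) * pad, encode(p) * pad + pad - 1

rules = [(0,), (0, 1), (0, 2, 3), (1, 1, 0), (2,)]   # hypothetical rules
bf = sorted(pad_full(r) for r in rules)

def count_prefixed(p):                  # how many rules have p as a prefix (a rank query)
    lo, hi = prefix_range(p)
    return bisect.bisect_right(bf, hi) - bisect.bisect_left(bf, lo)
```

Note that this sketch glosses over one detail: padding a short right-hand side with smallest digits makes it collide with a longer side consisting of all-smallest symbols; a real implementation must disambiguate lengths.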
Because we know that P p was always a prefix of a non-terminal in Sect. 3.2, we can apply the backward search to extend this range to the range of P p,1 · · · P p,cp , and then continue with searching the core. Finally, to extend this range to the full pattern, we recall that an occurrence of P 1 in T was always a suffix of the right hand side of a non-terminal. Thus, if |P 1 | < λ, then we can proceed analogously. If not, then such a former right-hand side has been chunked into strings of length λ, where the last string has a length in . Because we want to match a suffix, we therefore have λ different ways to chunk P 1 = P 1,1 · · · P 1,c1 in the same way, with |P 1,c1 | ∈ . Let us fix one of these chunkings. We try to extend the range of the core by P 1,2 · · · P 1,c1 with backward search steps as before. If we successfully obtain a range, then we can proceed with P 1,1 as with P 1 in Sect. 3.2 with a top-down traversal of the wavelet tree. However, here we use the bit vector B R and interpret the reverse of P 1,1 , like P p,cp above, as an integer to obtain an interval I of colexicographic ranks for all non-terminals whose reversed right-hand sides have the reverse of P 1,1 as a prefix (i.e., whose right hand sides have P 1,1 as a suffix). Unfortunately, we found empirically that the top-down traversal of the wavelet tree built on the colexicographically ordered non-terminals is not space-economic in conjunction with the run-length compression of BWT (1) . Instead, we build the wavelet tree with the non-terminals in (standard) lexicographic order, use the permutation from B R to B F for each element of the interval I, and locate it in the wavelet tree individually. For |P | < λ, we need a different data structure: We create a generalized suffix tree on the right hand sides of all non-terminals. The string label of a node v is the concatenation of the edge labels read from the root to v.
We augment each node with the number of occurrences of its string label in T . For a given pattern P , we find the highest node v whose string label has P as a prefix. Then the answer to count(P ) is the number of occurrences stored in v. For the implementation, we represent the generalized suffix tree in LOUDS , and store the occurrences in a plain array in the level order induced by LOUDS.

Implementation and Evaluation

Our implementation is written in C++17 using the sdsl-lite library . The code is available at https://github.com/jamie-jjd/figiss. Central to our implementation is the wavelet tree built upon the run-length compressed BWT (1) , for which we used the class sdsl::wt_rlmn. This class is a wrapper around the actual wavelet tree to make it usable for the RLBWT. Therefore, it is parameterized by a wavelet tree implementation, which we set to sdsl::wt_ap, an implementation of the alphabet-partitioned wavelet tree of Barbay et al. . Since we only care about answering count, we sample neither the suffix array nor its inverse. The bit vectors B F and B R are realized by the class sdsl::sd_vector<> leveraging Elias-Fano compression.

Evaluation Environment

We evaluated all our experiments on a machine with an Intel Xeon E3-1231v3 clocked at 3.4GHz running Ubuntu 20.04.2 LTS. The compiler was g++ 9.3.0 with compile options -std=c++17 -O3.

Datasets

We set our focus on DNA sequences, for which we included the datasets cere, Escherichia Coli (abbreviated to e.coli), and para from the repetitive corpus of Pizza&Chili 2 . We additionally stored 15 of 1000 sequences of the human chromosome 19 3 in the dataset chr19.15, and created a dataset artificial.x for x ∈ {1, 2, 4, 8}, consisting of a uniformly random string S of length 5 · 2 10 on the alphabet {A, C, G, T} and 100 copies of S, where each character in each copy has been modified with probability x%, meaning it is changed to a different character or deleted.
For the experiments, we assume that all texts use the byte alphabet. In a preprocessing step, after reading an input text T , we reduce the byte alphabet to an alphabet Σ such that each character of Σ appears in T . We further renumber the characters such that Σ = {1, . . . , σ} by using a simplified version of sdsl::byte_alphabet. For technical reasons, we further assume that the texts end with a null byte (at least the used classes in the sdsl need this assumption), which is included in the alphabet sizes σ of our datasets. We present the characteristics of our datasets in the first three columns of Table 3.

Experiments

In the following experiments, we call our solution RLFM (1) , evaluate it for each chunking parameter λ ∈ (cf. Sect. 4), and compare it with the FM-index RLFM (0) built on the run-length compressed BWT (0) , again without any sampling. 4 Note that the sampling is only useful for locate queries, and therefore would only be a memory burden in our setting. While RLFM (1) uses sdsl::wt_ap, suitable for larger alphabet sizes, RLFM (0) uses sdsl::wt_huff, a wavelet tree implementation optimized for byte alphabets. Table 3 shows the space requirements of RLFM (0) and RLFM (1) , measured by the serialization framework of sdsl. There, we observe that the larger λ gets, the better RLFM (1) compresses. However, we are pessimistic that this will strictly remain the case for λ > 8, since the number of introduced symbols increases exponentially while the number of runs r (1) approaches saturation. The case λ = 1 can be understood as a baseline: Here, the right-hand sides of all non-terminals are single characters. Hence, this approach does not profit from our proposed techniques, and is provided to measure the overhead of our additional computation (e.g., the dictionary lookups). Good parameters seem to be λ = 4 and λ = 7, where λ = 4 is faster but uses more space than the solution with λ = 7.
Compared to RLFM (0) , RLFM (1) always uses less space, and for the majority of values of λ, answering count(P ) is faster for sufficiently long patterns P , which can be observed in the plots of Fig. 7. There, we measure the time for count(P ) with |P | = 2 x for each x ∈ . For each data point and each dataset T , we extract 2 12 random samples of equal length from T , perform the query for each sample, and measure the average time per character. 5 From Fig. 7, we can empirically assess that the larger λ is, the steeper the falling slope of the average query time per character is for short patterns. That is because of the split of P 1 into λ different chunkings. Our solution with λ = 1 works like RLFM (0) with some additional overhead and therefore can never be faster than RLFM (0) . Interestingly, the wavelet tree variant sdsl::wt_ap (used for every λ, in particular for λ = 1) seems to be smaller than the sdsl::wt_huff used for RLFM (0) , judging from the space comparison of RLFM (1) with λ = 1 and RLFM (0) in Table 3. The solution with λ = 2 is only interesting for artificial.x; for the other datasets it is always slower than RLFM (0) .

Future Work

The chunking into substrings of length λ is rather naive. Running a locality-sensitive grammar compressor like ESP on the LMS substrings would produce factors of length three with the property that substrings are factorized in the same way, except maybe at their borders. Thus, we expect that employing a locality-sensitive grammar will reduce the number of symbols and therefore improve r (1) . We further want to parallelize our implementation, and strive to beat RLFM (0) for smaller pattern lengths. Also, we would like to conduct our experiments on larger datasets, like the sequences usually maintained by large-scale pangenome indexes.

(10 6 ). r (0) and r (1) are the number of character runs in BWT (0) and BWT (1) , respectively, and σ (0) and σ (1) are, respectively, the number of their different symbols.
The column lg |P | is the logarithmic pattern length at which RLFM (1) starts to become faster than RLFM (0) on answering count(P ).

Figure 8: Trie on the right hand sides of all non-terminals of our running example with its XBWT representation, cf. Fig. 6 for the trie on the reversed right hand sides. The ranks of the $ in L correspond to the colexicographic ranking of the non-terminals, cf. Table 2.

A Consistent Grammars

Our approach is not limited to the GCIS grammar. We can also make use of a wider range of grammars. For that purpose, we would like to introduce τ-consistent grammars, and then show how we can use them. Given an integer τ < n and a run-length compressed string T of length n, a set of positions ) . . . π(X i |T (h+1) | (h) ) and the starting positions of the substrings π(X ij (h) ) for all j ∈ form a τ-consistent set. Examples of τ-consistent grammars are signature encoding with τ = O(lg * n), the Rsync parse with a probabilistically selectable τ , AlgBcp with τ = O(1), grammars based on string τ-synchronizing sets , and a run-length compressed variant of GCIS with τ = 2σ , where σ is the number of different characters in the run-length encoded text. Now assume that P factorizes into P = P 1 · · · P p . If |P 1 |, |P p | > τ , then we can directly apply our approach since P 2 · · · P p−1 can be interpreted as the right-hand sides of non-terminals belonging to the core of P . Otherwise, let f and be the smallest and largest numbers, respectively, such that |P 1 · · · P f | ≥ τ and |P · · · P p | ≥ τ . Then again P f +1 · · · P −1 can be found via the core of P . For the other factors, we can proceed analogously as for the chunking into λ-length substrings described in Sect. 4.

B Full Experiments

Finally, we provide the full experiments (Tables 4 and 5) and plots (Fig. 9) with higher resolution that did not make it into the main text due to space limitations.
We additionally evaluated in Tables 6 and 7 the construction times for RLFM (0) , RLFM (1) , and the FM-index on the plain BWT (0) . There, we used the same wavelet tree implementation sdsl::wt_huff for the FM-index as for RLFM (0) . We observe that the best construction times of RLFM (1) are roughly 2–3 times slower than those of RLFM (0) and the FM-index. The construction is slowest for λ = 1 (up to 10 times slower), and fastest for a λ ∈ (the exact value differs for each dataset).
def init_plate_locations(self): if self.fms_scale == 'R': self.scale_init_waypoint = reflect_2d_y(self.SCALE_INIT_WAYPOINT) self.scale_deposit = reflect_2d_y(self.SCALE_DEPOSIT) self.scale_deposit_waypoint = reflect_2d_y(self.SCALE_DEPOSIT_WAYPOINT) self.scale_deposit_orientation = -self.SCALE_DEPOSIT_ORIENTATION else: self.scale_init_waypoint = self.SCALE_INIT_WAYPOINT self.scale_deposit = self.SCALE_DEPOSIT self.scale_deposit_waypoint = self.SCALE_DEPOSIT_WAYPOINT self.scale_deposit_orientation = self.SCALE_DEPOSIT_ORIENTATION if self.fms_switch == 'R': self.switch_deposit = reflect_2d_y(self.SWITCH_DEPOSIT) self.switch_deposit_orientation = -self.SWITCH_DEPOSIT_ORIENTATION self.switch_to_cube_point = reflect_2d_y(self.SWITCH_TO_CUBE_POINT) self.drive_by_switch_point = reflect_2d_y(self.DRIVE_BY_SWITCH_POINT) self.drive_by_orientation = -self.DRIVE_BY_ORIENTATION else: self.switch_deposit = self.SWITCH_DEPOSIT self.switch_deposit_orientation = self.SWITCH_DEPOSIT_ORIENTATION self.switch_to_cube_point = self.SWITCH_TO_CUBE_POINT self.drive_by_switch_point = self.DRIVE_BY_SWITCH_POINT self.drive_by_orientation = self.DRIVE_BY_ORIENTATION
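The helper reflect_2d_y used above is not defined in this snippet. A plausible sketch of its contract, guessing that waypoints are (x, y) pairs mirrored across the field's long axis by negating y (consistent with the negated orientations above):

```python
# Assumed contract: mirror a 2-D waypoint for the right-hand field side
# by negating its y-coordinate; this is a guess, not the original helper.
def reflect_2d_y(point):
    x, y = point
    return (x, -y)
```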
Adoption of care management activities by primary care nurses for people with common mental disorders and physical conditions: a multiple case study. INTRODUCTION Few studies assessed current nursing practices before implementing the collaborative care model and the role of care manager for people with common mental disorders (CMDs) and physical conditions in primary care settings. AIM Evaluate the main determinants of practice that influence the adoption of care management activities by primary care nurses for people with CMDs and physical conditions. METHODS A qualitative multiple case study was conducted in three primary care clinics. A total of 33 participants were recruited. Various data sources were combined: interviews (n=32), nurse-patient encounters' observations (n=7), documents, and summaries of meetings with stakeholders (n=8). RESULTS Seven determinants were identified: (1) access to external mental health resources; (2) clarification of local CMD care trajectory; (3) compatibility between the coordination of nursing work and the role of care manager; (4) availability of mental health resources within the primary care clinic; (5) competency in care management and competency building; (6) responsibility sharing between the general practitioner and the primary care nurse; and (7) common understanding of the patient treatment plan. IMPLICATIONS FOR PRACTICE To build their competency in care management for people with CMDs, primary care nurses would benefit from having clinical support from mental health nurse practitioners.
import numpy as np

def default_hashing(positions, res):
    # Flatten (row, col) grid coordinates into single indices for a res-wide grid.
    return positions[:, 0] * res + positions[:, 1]
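A pure-Python restatement of the grid-hashing helper above shows the intended row-major flattening; res = 8 is an arbitrary grid width chosen for illustration.

```python
# Same row-major flattening as the NumPy one-liner above, restated without
# NumPy so the snippet runs standalone.
def default_hashing(positions, res):
    return [r * res + c for r, c in positions]

assert default_hashing([(0, 0), (1, 2), (3, 7)], 8) == [0, 10, 31]
```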
//---------------------------------------------------------------------------- // Default XML channel file name. //---------------------------------------------------------------------------- ts::UString ts::ChannelFile::DefaultFileName() { #if defined(TS_WINDOWS) static const UChar env[] = u"APPDATA"; static const UChar name[] = u"\\tsduck\\channels.xml"; #else static const UChar env[] = u"HOME"; static const UChar name[] = u"/.tsduck.channels.xml"; #endif const UString root(GetEnvironment(env)); return root.empty() ? UString() : UString(root) + UString(name); }
// String returns a formatted Metrics string scaled // to a width of s. func (h *Histogram) String(s int) string { if h == nil { return "" } var min, max uint64 = math.MaxUint64, 0 for _, bin := range *h { for _, v := range bin { if v > max { max = v } if v < min { min = v } } } switch len(*h) { case 0: return "" case 1: min = 0 } var b bytes.Buffer for _, bin := range *h { for k, v := range bin { blen := scale(float64(v), float64(min), float64(max), 1, float64(s)) line := fmt.Sprintf("%20s %s\n", k, strings.Repeat("-", int(blen))) b.WriteString(line) } } return b.String() }
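The Go `scale` helper used above is not shown in this snippet; a Python sketch of the linear interpolation it presumably performs (the name, parameter order, and degenerate-range behaviour are guesses):

```python
# Linearly map v from [vmin, vmax] to [out_min, out_max], as the bar-width
# computation above appears to require; behaviour for vmin == vmax is assumed.
def scale(v, vmin, vmax, out_min, out_max):
    if vmax == vmin:                 # degenerate range: every bar gets full width
        return out_max
    return out_min + (v - vmin) * (out_max - out_min) / (vmax - vmin)
```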
<filename>src/unifrac.cpp<gh_stars>10-100 /* * BSD 3-Clause License * * Copyright (c) 2016-2021, UniFrac development team. * All rights reserved. * * See LICENSE file for more details */ #include "tree.hpp" #include "biom_interface.hpp" #include "unifrac.hpp" #include "affinity.hpp" #include <unordered_map> #include <cstdlib> #include <thread> #include <signal.h> #include <stdarg.h> #include <algorithm> #include <pthread.h> #include <unistd.h> #include "unifrac_internal.hpp" // We will always have the CPU version #define SUCMP_NM su_cpu #include "unifrac_cmp.hpp" #undef SUCMP_NM #ifdef UNIFRAC_ENABLE_ACC #define SUCMP_NM su_acc #include "unifrac_cmp.hpp" #undef SUCMP_NM #endif using namespace su; std::string su::test_table_ids_are_subset_of_tree(su::biom_interface &table, su::BPTree &tree) { std::unordered_set<std::string> tip_names = tree.get_tip_names(); std::unordered_set<std::string>::const_iterator hit; std::string a_missing_name = ""; for(auto i : table.obs_ids) { hit = tip_names.find(i); if(hit == tip_names.end()) { a_missing_name = i; break; } } return a_missing_name; } double** su::deconvolute_stripes(std::vector<double*> &stripes, uint32_t n) { // would be better to just do striped_to_condensed_form double **dm; dm = (double**)malloc(sizeof(double*) * n); if(dm == NULL) { fprintf(stderr, "Failed to allocate %zd bytes; [%s]:%d\n", sizeof(double*) * n, __FILE__, __LINE__); exit(EXIT_FAILURE); } for(unsigned int i = 0; i < n; i++) { dm[i] = (double*)malloc(sizeof(double) * n); if(dm[i] == NULL) { fprintf(stderr, "Failed to allocate %zd bytes; [%s]:%d\n", sizeof(double) * n, __FILE__, __LINE__); exit(EXIT_FAILURE); } dm[i][i] = 0; } for(unsigned int i = 0; i < stripes.size(); i++) { double *vec = stripes[i]; unsigned int k = 0; for(unsigned int row = 0, col = i + 1; row < n; row++, col++) { if(col < n) { dm[row][col] = vec[k]; dm[col][row] = vec[k]; } else { dm[col % n][row] = vec[k]; dm[row][col % n] = vec[k]; } k++; } } return dm; } void 
su::stripes_to_condensed_form(std::vector<double*> &stripes, uint32_t n, double* cf, unsigned int start, unsigned int stop) { // n must be >= 2, but that should be enforced upstream as that would imply // computing unifrac on a single sample. uint64_t comb_N = comb_2(n); for(unsigned int stripe = start; stripe < stop; stripe++) { // compute the (i, j) position of each element in each stripe uint64_t i = 0; uint64_t j = stripe + 1; for(uint64_t k = 0; k < n; k++, i++, j++) { if(j == n) { i = 0; j = n - (stripe + 1); } // determine the position in the condensed form vector for a given (i, j) // based off of // https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.squareform.html uint64_t comb_N_minus_i = comb_2(n - i); cf[comb_N - comb_N_minus_i + (j - i - 1)] = stripes[stripe][k]; } } } // write in a 2D matrix // also suitable for writing to disk template<class TReal> void su::condensed_form_to_matrix_T(const double* __restrict__ cf, const uint32_t n, TReal* __restrict__ buf2d) { const uint64_t comb_N = su::comb_2(n); for(uint64_t i = 0; i < n; i++) { for(uint64_t j = 0; j < n; j++) { TReal v; if(i < j) { // upper triangle const uint64_t comb_N_minus = su::comb_2(n - i); v = cf[comb_N - comb_N_minus + (j - i - 1)]; } else if (i > j) { // lower triangle const uint64_t comb_N_minus = su::comb_2(n - j); v = cf[comb_N - comb_N_minus + (i - j - 1)]; } else { v = 0.0; } buf2d[i*n+j] = v; } } } // make sure it is instantiated template void su::condensed_form_to_matrix_T<double>(const double* __restrict__ cf, const uint32_t n, double* __restrict__ buf2d); template void su::condensed_form_to_matrix_T<float>(const double* __restrict__ cf, const uint32_t n, float* __restrict__ buf2d); void su::condensed_form_to_matrix(const double* __restrict__ cf, const uint32_t n, double* __restrict__ buf2d) { su::condensed_form_to_matrix_T<double>(cf,n,buf2d); } void su::condensed_form_to_matrix_fp32(const double* __restrict__ cf, const uint32_t n, float* __restrict__ 
buf2d) { su::condensed_form_to_matrix_T<float>(cf,n,buf2d); } /* * The stripes end up computing the following positions in the distance * matrix. * * x A B C x x * x x A B C x * x x x A B C * C x x x A B * B C x x x A * A B C x x x * * However, we store those stripes as vectors, ie * [ A A A A A A ] */ // Helper class // Will cache pointers and automatically release stripes when all elements are used class OnceManagedStripes { private: const uint32_t n_samples; const uint32_t n_stripes; const ManagedStripes &stripes; std::vector<const double *> stripe_ptr; std::vector<uint32_t> stripe_accessed; const double *get_stripe(const uint32_t stripe) { if (stripe_ptr[stripe]==0) stripe_ptr[stripe]=stripes.get_stripe(stripe); return stripe_ptr[stripe]; } void release_stripe(const uint32_t stripe) { stripes.release_stripe(stripe); stripe_ptr[stripe]=0; } public: OnceManagedStripes(const ManagedStripes &_stripes, const uint32_t _n_samples, const uint32_t _n_stripes) : n_samples(_n_samples), n_stripes(_n_stripes) , stripes(_stripes) , stripe_ptr(n_stripes) , stripe_accessed(n_stripes) {} ~OnceManagedStripes() { for(uint32_t i = 0; i < n_stripes; i++) { if (stripe_ptr[i]!=0) { release_stripe(i); } } } double get_val(const uint32_t stripe, const uint32_t el) { if (stripe_ptr[stripe]==0) stripe_ptr[stripe]=stripes.get_stripe(stripe); const double *mystripe = stripe_ptr[stripe]; double val = mystripe[el]; stripe_accessed[stripe]++; if (stripe_accessed[stripe]==n_samples) release_stripe(stripe); // we will not use this stripe anymore return val; } }; // write in a 2D matrix // also suitable for writing to disk template<class TReal> void su::stripes_to_matrix_T(const ManagedStripes &_stripes, const uint32_t n_samples, const uint32_t n_stripes, TReal* __restrict__ buf2d, uint32_t tile_size) { // n_samples must be >= 2, but that should be enforced upstream as that would imply // computing unifrac on a single sample. 
// tile for for better memory access pattern const uint32_t TILE = (tile_size>0) ? tile_size : (128/sizeof(TReal)); const uint32_t n_samples_tup = (n_samples+(TILE-1))/TILE; // round up OnceManagedStripes stripes(_stripes, n_samples, n_stripes); for(uint32_t oi = 0; oi < n_samples_tup; oi++) { // off diagonal // alternate between inner and outer off-diagonal, due to wrap around in stripes const uint32_t o = ((oi%2)==0) ? \ (oi/2)*TILE : /* close to diagonal */ \ (n_samples_tup-(oi/2)-1)*TILE; /* far from diagonal */ for(uint32_t d = 0; d < (n_samples-o); d+=TILE) { // diagonal uint32_t iOut = d; uint32_t jOut = d+o; uint32_t iMax = std::min(iOut+TILE,n_samples); uint32_t jMax = std::min(jOut+TILE,n_samples); if (iOut==jOut) { // on diagonal for(uint64_t i = iOut; i < iMax; i++) { buf2d[i*n_samples+i] = 0.0; int64_t stripe=0; uint64_t j = i+1; for(; (stripe<n_stripes) && (j<jMax); stripe++, j++) { TReal val = stripes.get_val(stripe, i); buf2d[i*n_samples+j] = val; } if (j<n_samples) { // implies strip==n_stripes, we are really looking at the mirror stripe=n_samples-n_stripes-1; for(; j < jMax; j++) { --stripe; TReal val = stripes.get_val(stripe, j); buf2d[i*n_samples+j] = val; } } } // lower triangle for(uint64_t i = iOut+1; i < iMax; i++) { for(uint64_t j = jOut; j < i; j++) { buf2d[i*n_samples+j] = buf2d[j*n_samples+i]; } } } else if (iOut<jOut) { // off diagonal for(uint64_t i = iOut; i < iMax; i++) { unsigned int stripe=0; uint64_t j = i+1; // we are off diagonal, so adjust stripe += (jOut-j); j=jOut; if (stripe>n_stripes) { // ops, we overshoot... 
roll back j-=(stripe-n_stripes); stripe=n_stripes; } for(; (stripe<n_stripes) && (j<jMax); stripe++, j++) { TReal val = stripes.get_val(stripe, i); buf2d[i*n_samples+j] = val; } if (j<jMax) { // implies strip==n_stripes, we are really looking at the mirror stripe=n_samples-n_stripes-1; if (j<jOut) { stripe -= (jOut-j); // note: should not be able to overshoot j=jOut; } for(; j < jMax; j++) { --stripe; TReal val = stripes.get_val(stripe, j); buf2d[i*n_samples+j] = val; } } } // do the other off-diagonal immediately, so it is still in cache for(uint64_t j = jOut; j < jMax; j++) { for(uint64_t i = iOut; i < iMax; i++) { buf2d[j*n_samples+i] = buf2d[i*n_samples+j]; } } } } //for jOut } // for iOut } // Make sure it gets instantiated template void su::stripes_to_matrix_T<double>(const ManagedStripes &stripes, const uint32_t n_samples, const uint32_t n_stripes, double* __restrict__ buf2d, uint32_t tile_size); template void su::stripes_to_matrix_T<float>(const ManagedStripes &stripes, const uint32_t n_samples, const uint32_t n_stripes, float* __restrict__ buf2d, uint32_t tile_size); void su::stripes_to_matrix(const ManagedStripes &stripes, const uint32_t n_samples, const uint32_t n_stripes, double* __restrict__ buf2d, uint32_t tile_size) { return su::stripes_to_matrix_T<double>(stripes, n_samples, n_stripes, buf2d, tile_size); } void su::stripes_to_matrix_fp32(const ManagedStripes &stripes, const uint32_t n_samples, const uint32_t n_stripes, float* __restrict__ buf2d, uint32_t tile_size) { return su::stripes_to_matrix_T<float>(stripes, n_samples, n_stripes, buf2d, tile_size); } void progressbar(float progress) { // from http://stackoverflow.com/a/14539953 // // could encapsulate into a classs for displaying time elapsed etc int barWidth = 70; std::cout << "["; int pos = barWidth * progress; for (int i = 0; i < barWidth; ++i) { if (i < pos) std::cout << "="; else if (i == pos) std::cout << ">"; else std::cout << " "; } std::cout << "] " << int(progress * 100.0) << " %\r"; 
    std::cout.flush();
}

// Computes Faith's PD for the samples in `table` over the phylogenetic
// tree given by `tree`.
// Ensure that the tree does not contain ids that are not in the table.
void su::faith_pd(biom_interface &table, BPTree &tree, double* result) {
    PropStack<double> propstack(table.n_samples);

    uint32_t node;
    double *node_proportions;
    double length;

    // for node in postorderselect
    for (unsigned int k = 0; k < (tree.nparens / 2) - 1; k++) {
        node = tree.postorderselect(k);

        // get branch length
        length = tree.lengths[node];

        // get node proportions and set intermediate scores
        node_proportions = propstack.pop(node);
        set_proportions(node_proportions, tree, node, table, propstack);

        for (unsigned int sample = 0; sample < table.n_samples; sample++) {
            // calculate contribution of node to score
            result[sample] += (node_proportions[sample] > 0) * length;
        }
    }
}

#ifdef UNIFRAC_ENABLE_ACC
// test only once, then use persistent value
static int proc_use_acc = -1;

inline bool use_acc() {
    if (proc_use_acc != -1) return (proc_use_acc != 0);
    int has_nvidia_gpu_rc = access("/proc/driver/nvidia/gpus", F_OK);

    bool print_info = false;
    if (const char* env_p = std::getenv("UNIFRAC_GPU_INFO")) {
        print_info = true;
        std::string env_s(env_p);
        if ((env_s=="NO") || (env_s=="N") || (env_s=="no") || (env_s=="n") ||
            (env_s=="NEVER") || (env_s=="never")) {
            print_info = false;
        }
    }

    if (has_nvidia_gpu_rc != 0) {
        if (print_info) printf("INFO (unifrac): GPU not found, using CPU\n");
        proc_use_acc = 0;
        return false;
    }

    if (const char* env_p = std::getenv("UNIFRAC_USE_GPU")) {
        std::string env_s(env_p);
        if ((env_s=="NO") || (env_s=="N") || (env_s=="no") || (env_s=="n") ||
            (env_s=="NEVER") || (env_s=="never")) {
            if (print_info) printf("INFO (unifrac): Use of GPU explicitly disabled, using CPU\n");
            proc_use_acc = 0;
            return false;
        }
    }

    if (print_info) printf("INFO (unifrac): Using GPU\n");
    proc_use_acc = 1;
    return true;
}
#endif

void su::unifrac(biom_interface &table, BPTree &tree, Method unifrac_method, std::vector<double*>
&dm_stripes, std::vector<double*> &dm_stripes_total, const su::task_parameters* task_p) {
#ifdef UNIFRAC_ENABLE_ACC
    if (use_acc()) {
        su_acc::unifrac(table, tree, unifrac_method, dm_stripes, dm_stripes_total, task_p);
    } else {
#else
    if (true) {
#endif
        su_cpu::unifrac(table, tree, unifrac_method, dm_stripes, dm_stripes_total, task_p);
    }
}

void su::unifrac_vaw(biom_interface &table, BPTree &tree, Method unifrac_method,
                     std::vector<double*> &dm_stripes, std::vector<double*> &dm_stripes_total,
                     const su::task_parameters* task_p) {
#ifdef UNIFRAC_ENABLE_ACC
    if (use_acc()) {
        su_acc::unifrac_vaw(table, tree, unifrac_method, dm_stripes, dm_stripes_total, task_p);
    } else {
#else
    if (true) {
#endif
        su_cpu::unifrac_vaw(table, tree, unifrac_method, dm_stripes, dm_stripes_total, task_p);
    }
}

void su::process_stripes(biom_interface &table,
                         BPTree &tree_sheared,
                         Method method,
                         bool variance_adjust,
                         std::vector<double*> &dm_stripes,
                         std::vector<double*> &dm_stripes_total,
                         std::vector<std::thread> &threads,
                         std::vector<su::task_parameters> &tasks) {
    // register a signal handler so we can ask the master thread for its
    // progress
    register_report_status();

    // cannot use threading with openacc or openmp
    for (unsigned int tid = 0; tid < threads.size(); tid++) {
        if (variance_adjust)
            su::unifrac_vaw(
                std::ref(table),
                std::ref(tree_sheared),
                method,
                std::ref(dm_stripes),
                std::ref(dm_stripes_total),
                &tasks[tid]);
        else
            su::unifrac(
                std::ref(table),
                std::ref(tree_sheared),
                method,
                std::ref(dm_stripes),
                std::ref(dm_stripes_total),
                &tasks[tid]);
    }

    remove_report_status();
}
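The per-sample accumulation in `faith_pd` above reduces to one rule: a node's branch length is added to a sample's score whenever that sample has a nonzero proportion under the node. A minimal self-contained sketch of that contribution rule, using toy vectors in place of the real `BPTree`/`biom_interface` structures (the names and data here are illustrative only):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy stand-ins: one branch length and one per-sample proportion row per
// tree node, visited in the same order as the postorder loop above.
double faith_pd_toy(const std::vector<double>& lengths,
                    const std::vector<std::vector<double>>& proportions,
                    unsigned sample) {
    double score = 0.0;
    for (std::size_t node = 0; node < lengths.size(); node++) {
        // a branch contributes iff the sample is present under this node
        score += (proportions[node][sample] > 0) * lengths[node];
    }
    return score;
}
```

For a three-node toy tree with lengths {1.0, 2.0, 0.5}, a sample present under nodes 0 and 2 scores 1.5, while a sample present under nodes 1 and 2 scores 2.5.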
from datetime import datetime
from typing import Optional, List

import pandas as pd
from bs4 import BeautifulSoup

from formulacli.html_handlers import get_response, parse


def get_result_table(soup: BeautifulSoup) -> Optional[BeautifulSoup]:
    try:
        table: BeautifulSoup = soup.select("table.resultsarchive-table")[0]
        return table
    except IndexError:
        return None


def get_cols(table: BeautifulSoup) -> List[str]:
    cols: List[str] = []
    for th in table.thead.find_all("th"):
        col: Optional[str] = th.text
        if col:
            cols.append(col.upper())
    return cols


def get_values(table: BeautifulSoup) -> List[List[str]]:
    entries: List[List[str]] = []
    for tr in table.tbody.find_all("tr"):
        entry = []
        for td in tr.find_all("td"):
            if td.text:
                entry.append(td.text.strip().replace("\n", " "))
        entries.append(entry)
    return entries


def fetch_results(_for: str = "drivers", year: Optional[int] = None) -> pd.DataFrame:
    if not year:
        year = datetime.now().year
    url: str = f"https://www.formula1.com/en/results.html/{year}/{_for}.html"
    table: Optional[BeautifulSoup] = get_result_table(parse(get_response(url)))
    if table is None:
        raise ValueError("Invalid Season Year")
    cols: List[str] = get_cols(table)
    entries: List[List[str]] = get_values(table)
    return pd.DataFrame(entries, columns=cols)
package com.lingkj.project.user.service;

import com.baomidou.mybatisplus.extension.service.IService;
import com.lingkj.common.utils.PageUtils;
import com.lingkj.project.api.user.dto.AccountBindingDto;
import com.lingkj.project.user.entity.UserToken;
import com.lingkj.project.user.entity.UserWorthMentioning;

import java.util.Map;

/**
 * User token
 *
 * @author chenyongsong
 * @date 2019-07-05 10:57:20
 */
public interface UserWorthMentioningService extends IService<UserWorthMentioning> {

    Map<String, Object> accountBinding(AccountBindingDto accountBindingDto);

    Map<String, Object> loginCheck(String type, String tripartiteId);
}
The same exemplary precision found on the high-end iPhones is also found on the iPhone SE. At last, you don't have to feel that you're settling just to get a phone that's easier to handle.

Handling the iPhone SE is a lovely affair, particularly if you're coming from the iPhone 6S Plus. It's dainty - cute, even - and although, obviously, it feels exactly as if you're handling the iPhone 5S, there's something extra novel about it now. It's no longer the norm; the 4-inch form factor is now the exception.

Mostly, I'd guess, it will appeal to people who currently have a four-inch display phone. Going back to it from the iPhone 6s Plus with its 5.5-inch screen took some doing - why, the entire dear little iPhone SE fits within the 6s Plus screen. Using the keyboard on the SE was a learning curve after the expansiveness of the 6s Plus.

The front FaceTime camera is still the same sensor as the iPhone 5s but benefits from the new ISP and from a Retina Flash. I'm not sure why it didn't get a bump to an iPhone 6s-level 5 megapixels, because selfies really are a thing and really do need the better camera.

The standout news is battery life. Unlike many other recent Apple products, the iPhone SE's is a significant improvement over its predecessors'. In my lab stress test, which cycles through websites with uniform screen brightness, the SE lasted 10 hours--more than two hours longer than both the iPhone 6s and iPhone 5s, and nearly three hours longer than the Galaxy S7. [...] The iPhone SE is a win for ergonomic choice, but Apple doesn't score any points for originality. The new phone is nearly indistinguishable from the three-year-old iPhone 5s, which is a hair thicker and less pleasantly rounded than Apple's more recent designs. (The SE even fits in most existing 5s cases.)

Although we've only been using the phone for a few days, one thing is clear - it's blazingly fast. Playing several high-intensity games shows that this really is as powerful as the current flagship, the 6s.
It's powered by the A9, the same chip found in the iPhone 6s, and Apple says the iPhone SE has 2x faster CPU and 3x faster GPU performance compared to the older iPhone 5s - and this is something you notice right away, with a far snappier feel to the handset even when it's not playing games.

The best thing about the iPhone SE might just be its price. Selling for just $399 for a 16GB version and $499 for a 64GB version, this is a tremendously competitive phone. Most $400 phones are not going to give you the latest-generation processor and camera technologies. I really can't underscore how well I think this product will do, simply based on its price. Consider that the iPhone 6S starts at $649 for a 16GB version. Yes, it has more features -- including 3D Touch, a better front-facing camera and a larger display -- but the price point Apple has set will be very compelling.

Apple let press go hands-on with the iPhone SE at its launch event and provided several publications with iPhone SE review units ahead of the device's debut, and we've gathered excerpts from each site to highlight the general reaction to Apple's new 4-inch iPhone.

Reviews and first impressions have been largely positive, with reviewers praising the device's powerful internals. The general consensus is that the iPhone SE is the perfect phone for people who want the power of Apple's flagship iPhone lineup in a small form factor.

TechRadar called the exterior "svelte and sleek," and said it's just like handling an iPhone 5s, the phone the SE is modeled after.

The Independent speculates that the iPhone SE will appeal most to those who currently use a 4-inch iPhone, as it can be difficult to adjust to a smaller 4-inch screen after using Apple's larger 4.7 or 5.5-inch iPhones.

iMore pointed out that while the iPhone SE got the 12-megapixel rear camera from the iPhone 6s, the front-facing camera didn't get much of an upgrade. It's still 1.2 megapixels.
The Wall Street Journal points out the impressive battery life in the iPhone SE, which beats out the iPhone 5s and the iPhone 6s, but criticizes the unoriginal design.

The Daily Mail says the iPhone SE, with its A9 processor - the same processor in the iPhone 6s - is "blazingly fast."

Mashable highlights the $399 price tag, calling it "tremendously competitive" for a device with current-generation technology.

Pre-orders for the iPhone SE started at 12:01 a.m. on March 24. The device can be purchased from the Apple online store, with deliveries and in-store availability set to begin on March 31. While the 16GB iPhone SE models are still in stock and will deliver on that date, the 64GB iPhone SE models have proven more popular and shipping estimates have slipped to five to seven days.

Pricing on the iPhone SE, which is Apple's most affordable iPhone, starts at $399.
package com.github.alexivchenko.filefinder.core;

import java.io.File;
import java.io.InputStream;
import java.util.Collections;
import java.util.List;

/**
 * @author <NAME>
 */
public class RobustXmlCrawler implements XmlCrawler {
    private final XmlCrawler delegate;

    public RobustXmlCrawler(XmlCrawler delegate) {
        this.delegate = delegate;
    }

    @Override
    public List<DetectedString> crawl(File xml) {
        try {
            return delegate.crawl(xml);
        } catch (ParseException e) {
            return Collections.emptyList();
        }
    }

    @Override
    public List<DetectedString.FileStageBuilder> crawl(InputStream is) {
        try {
            return delegate.crawl(is);
        } catch (ParseException e) {
            return Collections.emptyList();
        }
    }
}
def _radar(
    df: pd.DataFrame,
    ax: plt.Axes,
    label: str,
    all_tags: Sequence[str],
    color: str,
    alpha: float = 0.2,
    edge_alpha: float = 0.85,
    zorder: int = 2,
    edge_style: str = '-'):
  tmp = df.groupby('tag').mean().reset_index()

  values = []
  for curr_tag in all_tags:
    score = 0.
    selected = tmp[tmp['tag'] == curr_tag]
    if len(selected) == 1:
      score = float(selected['score'])
    else:
      print('{} bsuite scores found for tag {!r} with setting {!r}. '
            'Replacing with zero.'.format(len(selected), curr_tag, label))
    values.append(score)
  values = np.maximum(values, 0.05)
  values = np.concatenate((values, [values[0]]))

  angles = np.linspace(0, 2*np.pi, len(all_tags), endpoint=False)
  angles = np.concatenate((angles, [angles[0]]))

  ax.plot(angles, values, '-', linewidth=5, label=label,
          c=color, alpha=edge_alpha, zorder=zorder, linestyle=edge_style)
  ax.fill(angles, values, alpha=alpha, color=color, zorder=zorder)

  axis_angles = angles[:-1] * 180/np.pi
  ax.set_thetagrids(
      axis_angles, map(_tag_pretify, all_tags), fontsize=18)

  text_angles = np.rad2deg(angles)
  for label, angle in zip(ax.get_xticklabels()[:-1], text_angles[:-1]):
    if 90 <= angle <= 270:
      label.set_horizontalalignment('right')
    else:
      label.set_horizontalalignment('left')
import { ReactElement } from 'react';
import { useTranslation } from 'react-i18next';
import clsx from 'clsx';

import VaultTable from '../../../common/components/vault-table/vault-table';
import ActiveVaults from '../components/active-vaults';
import CollateralLocked from '../components/collateral-locked';
import Collateralization from '../components/collateralization';
import TimerIncrement from 'parts/TimerIncrement';
import MainContainer from 'parts/MainContainer';
import PageTitle from 'parts/PageTitle';

export default function VaultsDashboard(): ReactElement {
  const { t } = useTranslation();

  return (
    <MainContainer
      className={clsx(
        'flex',
        'justify-center',
        'fade-in-animation'
      )}>
      <div className='w-3/4'>
        <div>
          <PageTitle
            mainTitle={t('dashboard.vault.vaults')}
            subTitle={<TimerIncrement />} />
          <hr className='border-interlayDodgerBlue' />
          <div className='vaults-graphs-container dashboard-graphs-container'>
            <ActiveVaults />
            <CollateralLocked />
            <Collateralization />
          </div>
          <VaultTable></VaultTable>
        </div>
      </div>
    </MainContainer>
  );
}
import {BaseEntity,Column,Entity,Index,JoinColumn,JoinTable,ManyToMany,ManyToOne,OneToMany,OneToOne,PrimaryColumn,PrimaryGeneratedColumn,RelationId} from "typeorm"; import {form_contrib} from "./form_contrib"; @Entity("datos_formularios",{schema:"redcobrosjp" } ) export class datos_formularios extends BaseEntity { @OneToOne(type=>form_contrib, form_contrib=>form_contrib.datosFormularios,{ primary:true, nullable:false, }) @JoinColumn({ name:'form_contrib'}) formContrib:Promise<form_contrib | null>; @RelationId((datos_formularios: datos_formularios) => datos_formularios.formContrib) formContribId: Promise<string>; @Column("character varying",{ nullable:true, length:255, name:"suscribe" }) suscribe:string | null; @Column("character varying",{ nullable:true, length:255, name:"caracter" }) caracter:string | null; @Column("character varying",{ nullable:true, length:255, name:"documento" }) documento:string | null; @Column("character varying",{ nullable:true, length:255, name:"lugar" }) lugar:string | null; @Column("character varying",{ nullable:true, length:200, name:"cabecera" }) cabecera:string | null; }
//
//    FILE: MS5611.h
//  AUTHOR: <NAME>
//          Erni - testing/fixes
// VERSION: 0.1.8
// PURPOSE: MS5611 Temperature & Pressure library for Arduino
//     URL:
//
// HISTORY:
// see MS5611.cpp file
//

#ifndef MS5611_h
#define MS5611_h

#if ARDUINO < 100
#error "VERSION NOT SUPPORTED"
#else
#include <Arduino.h>
#endif

#define MS5611_LIB_VERSION (F("0.1.8"))

#define MS5611_READ_OK 0

class MS5611
{
public:
    explicit MS5611(uint8_t deviceAddress);
    void init();
    int read(uint8_t bits = 8);
    inline int32_t getTemperature() const { return _temperature; };
    inline int32_t getPressure() const { return _pressure; };
    inline int getLastResult() const { return _result; };

private:
    void reset();
    void convert(const uint8_t addr, uint8_t bits);
    uint32_t readADC();
    uint16_t readProm(uint8_t reg);
    void command(const uint8_t command);

    uint8_t _address;
    int32_t _temperature;
    int32_t _pressure;
    int _result;
    float C[7];
};

#endif

// END OF FILE
/*
Copyright 2016 Medcl (m AT medcl.net)

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package public

import (
	"github.com/emirpasic/gods/sets/hashset"
	"github.com/infinitbyte/framework/core/api"
	"github.com/infinitbyte/framework/core/ui"
	"golang.org/x/oauth2"
)

const (
	githubAuthorizeUrl = "https://github.com/login/oauth/authorize"
	githubTokenUrl     = "https://github.com/login/oauth/access_token"
	redirectUrl        = ""
)

var (
	oauthCfg *oauth2.Config

	// scopes
	scopes = []string{"repo"}
)

func InitUI(cfg ui.AuthConfig) {

	public := PublicUI{}
	ui.HandleUIMethod(api.GET, "/redirect/", public.RedirectHandler)

	if !cfg.Enabled {
		return
	}

	oauthCfg = &oauth2.Config{
		ClientID:     cfg.ClientID,
		ClientSecret: cfg.ClientSecret,
		Endpoint: oauth2.Endpoint{
			AuthURL:  githubAuthorizeUrl,
			TokenURL: githubTokenUrl,
		},
		RedirectURL: redirectUrl,
		Scopes:      scopes,
	}

	public.admins = hashset.New()
	for _, v := range cfg.AuthorizedAdmins {
		if v != "" {
			public.admins.Add(v)
		}
	}

	ui.HandleUIMethod(api.GET, "/auth/github/", public.AuthHandler)
	ui.HandleUIMethod(api.GET, "/auth/callback/", public.CallbackHandler)
	ui.HandleUIMethod(api.GET, "/auth/login/", public.Login)
	ui.HandleUIMethod(api.GET, "/auth/logout/", public.Logout)
	ui.HandleUIMethod(api.GET, "/auth/success/", public.LoginSuccess)
	ui.HandleUIMethod(api.GET, "/auth/fail/", public.LoginFail)
}
/** Returns whether the queue is empty. */ bool myQueueEmpty(MyQueue* obj) { if (obj == NULL) return true; if (obj->tail == NULL) { return true; } return false; }
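`MyQueue` itself is not defined in this excerpt; the function only reveals that emptiness is judged by a null queue object or a null tail pointer. As a sketch, assuming a hypothetical minimal tail-pointer definition, the three branches above collapse to a single null-check expression with identical behavior:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical minimal queue shape; the real MyQueue is defined elsewhere.
struct Node { int data; Node* next; };
struct MyQueue { Node* tail; };

// Same rule as above: a NULL queue, or a queue with no tail node, is empty.
bool myQueueEmpty(MyQueue* obj) {
    return obj == NULL || obj->tail == NULL;
}
```

`myQueueEmpty(NULL)` and a queue with a null tail both report true; attaching any node to `tail` flips the result to false.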
import { Component, OnInit, Input, Output, EventEmitter, OnDestroy } from '@angular/core';
import axios from 'axios';
import { Subscription } from 'rxjs';
import { CommonService, AxiosService, I18nService, MessageService } from '../../../../service';

@Component({
  selector: 'app-package-progress',
  templateUrl: './package-progress.component.html',
  styleUrls: ['./package-progress.component.scss']
})
export class PackageProgressComponent implements OnInit, OnDestroy {
  public i18n: any;
  constructor(
    private i18nService: I18nService,
    private Axios: AxiosService,
    private msgService: MessageService,
    private commonService: CommonService
  ) {
    this.i18n = this.i18nService.I18n();
  }

  @Input() reportId: string;
  @Output() createSucc = new EventEmitter();

  private maxCount = 3; // maximum number of polls after an error
  private curCount = 0; // current number of polls after an error
  private getStatusTimer: any = null;
  private currLang = 'zh-cn';
  public situation = -1;
  public msgInfo = '';
  public packResult = '';
  private totalProgress = 100; // total data size
  private totalBar = 440; // total width of the progress bar
  public progessValue = '';
  public barWidth = 0;
  private cancels: any = [];
  private closeTaskSub: Subscription;

  ngOnInit() {
    this.currLang = sessionStorage.getItem('language');
    this.getStatus();
    this.closeTaskSub = this.msgService.getMessage().subscribe(message => {
      if (message.type === 'closeTaskMsg' && message.data.result.subType === 'SoftwarePackage') {
        this.Axios.axios.delete(`/portadv/autopack/${encodeURIComponent(this.reportId)}/`)
          .then((resp: any) => {
            const msg = this.currLang === 'zh-cn' ?
resp.infochinese : resp.info;
            this.clearTimer();
            while (this.cancels.length > 0) {
              this.cancels.pop()();
            }
            if (this.commonService.handleStatus(resp) === 0) {
              this.createSucc.emit({ id: this.reportId, type: 'SoftwarePackage', situation: 2, state: 'stop_success', msg });
              return;
            }
            this.createSucc.emit({ id: this.reportId, type: 'SoftwarePackage', situation: 2, state: 'stop_failed', msg });
          });
      }
    });
  }

  ngOnDestroy(): void {
    if (this.closeTaskSub) {
      this.closeTaskSub.unsubscribe();
    }
  }

  public getStatus() {
    let url = '/task/progress/?task_type=1';
    url += this.reportId ? `&task_id=${this.reportId}` : '';
    const CancelToken = axios.CancelToken;
    this.Axios.axios.get(url, {
      cancelToken: new CancelToken(c1 => (this.cancels.push(c1)))
    }).then((resp: any) => {
      const data = resp.data; // used by the shared progress bar
      if (Object.keys(data).length === 0) {
        if (this.commonService.handleStatus(resp) !== 0) {
          this.msgInfo = this.getRespInfo(resp);
          this.createSucc.emit({ id: this.reportId, type: 'SoftwarePackage', state: 'failed', msg: this.msgInfo });
          this.msgService.sendMessage({ type: 'creatingPackageProgress', data: true });
        }
        return;
      }
      if (/deb/.test(resp.infochinese) || localStorage.getItem('filename') === 'deb') {
        localStorage.setItem('filename', 'deb');
      }
      if (/rpm/.test(resp.infochinese) || localStorage.getItem('filename') === 'rpm') {
        localStorage.setItem('filename', 'rpm');
      }
      if (data.status === -1) { // no task yet
        this.curCount++;
        if (this.curCount <= this.maxCount) {
          this.getStatus();
        }
      } else if (data.status === 0) { // packaging succeeded
        this.situation = 2;
        this.curCount = 0;
        this.packResult = data.result;
        this.msgInfo = this.getRespInfo(resp);
        this.createSucc.emit({
          id: this.reportId, type: 'SoftwarePackage', state: 'success', situation: 2,
          msg: this.msgInfo, data: { packResult: this.packResult }
        });
        this.msgService.sendMessage({ type: 'creatingPackageProgress', data: true });
        this.clearTimer();
      } else if (data.status === 1) { // packaging in progress
        this.progessValue = data.progress + '%';
        this.barWidth =
Math.floor((data.progress / this.totalProgress) * this.totalBar);
        this.situation = 1;
        this.curCount = 0;
        this.msgInfo = this.getRespInfo(resp);
        this.clearTimer();
        this.msgService.sendMessage({ type: 'creatingPackageProgress', data: true });
        this.getStatusTimer = setTimeout(() => {
          this.getStatus();
        }, 3000);
      } else if (data.status === 2) { // packaging failed
        this.clearTimer();
        this.curCount = 0;
        this.situation = 3;
        this.packResult = data.result;
        this.msgInfo = this.getRespInfo(resp);
        this.createSucc.emit({
          id: this.reportId, type: 'SoftwarePackage', state: 'failed', situation: 3,
          msg: this.msgInfo, data: { packResult: this.packResult }
        });
        this.msgService.sendMessage({ type: 'creatingPackageProgress', data: true });
      } else if (data.status === 3) {
        this.situation = 4;
        this.msgInfo = this.getRespInfo(resp);
        this.createSucc.emit({ id: this.reportId, type: 'SoftwarePackage', state: 'success', situation: 4, msg: this.msgInfo });
        this.msgService.sendMessage({ type: 'creatingPackageProgress', data: true });
      }
    }, (error: any) => {
      this.msgService.sendMessage({ type: 'creatingPackageProgress', data: false });
    });
  }

  public closeTask() {
    const resultMsg = {
      id: this.reportId,
      type: 'stopConfirm',
      subType: 'SoftwarePackage',
      state: 'prompt',
    };
    this.msgService.sendMessage({ type: 'creatingResultMsg', data: resultMsg });
  }

  private showLoding() {
    document.getElementById('loading-box').style.display = 'flex';
  }

  private closeLoding() {
    document.getElementById('loading-box').style.display = 'none';
  }

  private clearTimer() {
    if (this.getStatusTimer) {
      clearTimeout(this.getStatusTimer);
      this.getStatusTimer = null;
    }
  }

  private getRespInfo(data: any) {
    const info = this.currLang === 'zh-cn' ? data.infochinese : data.info;
    return info;
  }
}
package org.schmivits.airball.airdata;

import android.content.Context;

/**
 * A N42PEUARTFlightData assumes the Android device is connected to a Prolific USB-to-serial
 * interface that is in turn connected to the RS-232 output port of a Dynon D10A.
 */
public class N42PEUARTFlightData extends DynonSerialFlightData {

    DynonUARTDataSource mDataSource;

    public N42PEUARTFlightData(Context context) {
        super(new N42PEAircraft(), N42PEAircraft.BETA_MODEL_CONFIG);
        mDataSource = new DynonUARTDataSource(
                context,
                N42PEAircraft.D10A_SERIAL_PARAMETERS,
                new HaveData() {
                    @Override
                    public void line(String line) {
                        N42PEUARTFlightData.this.addDataLine(line);
                    }

                    @Override
                    public void status(String status) {
                        // TODO(ihab): unimplemented
                    }
                });
    }

    @Override
    public void destroy() {
        mDataSource.destroy();
    }
}
package core; import static GUI.GUIHelper.imgPath; import java.awt.Image; import java.io.File; import java.io.IOException; import javax.imageio.ImageIO; import javax.swing.ImageIcon; import javax.swing.JLabel; /** * Provides static methods and variables for MedicineUI * * @author jMedicine * @version 0.7.13 * @since 0.3.0 */ public class MedicineUtil { private static String[] medType = {"ยาเม็ด", "ยาแคปซูล", "ยาน้ำ", "สเปรย์"}; private static String[] tabletColor = {"white", "blue", "green", "yellow", "red", "pink", "purple", "orange", "brown"}; private static String[] liquidColor = {"transparent", "white", "blue", "green", "yellow", "red", "pink", "purple", "orange", "brown", "black"}; private static String[] medTime = {"เช้า", "กลางวัน", "เย็น", "ก่อนนอน"}; private static String[] medDoseStr = {"ก่อนอาหาร", "หลังอาหาร", "พร้อมอาหาร/หลังอาหารทันที"}; public static String[] getMedType() { return medType; } public static String[] getTabletColor() { return tabletColor; } public static int getTabletColorIndex(String color) { for (int i = 0; i < tabletColor.length; i++) { if (tabletColor[i].equals(color)) { return i; } } return -1; } public static String[] getLiquidColor() { return liquidColor; } public static int getLiquidColorIndex(String color) { for (int i = 0; i < liquidColor.length; i++) { if (liquidColor[i].equals(color)) { return i; } } return -1; } public static String[] getMedTime() { return medTime; } public static String[] getMedDoseStr() { return medDoseStr; } public static JLabel getMedIcon(Medicine medicine) { String imgURL = ""; JLabel labelPic = new JLabel(); boolean urlFinished = false; switch (medicine.getMedType()) { case "tablet": imgURL += "/tablets/tablet-"; break; case "capsule": imgURL += "/capsules/capsule-"; break; case "liquid": imgURL += "/liquids/liquid-"; break; case "spray": imgURL += "/spray.png"; urlFinished = true; break; } if (!urlFinished) { imgURL += medicine.getMedColor(); imgURL += ".png"; } try { Image img = ImageIO.read(new 
File(imgPath + imgURL)); labelPic.setIcon(new ImageIcon(img)); } catch (Exception ex) { try { Image img = ImageIO.read(new File(imgPath + "/system/med-not-found.png")); labelPic.setIcon(new ImageIcon(img)); } catch (IOException ignored) { } } return labelPic; } public static int tableSpoonCalc(int millilitres) { return millilitres / 15; } public static int teaSpoonCalc(int millilitres) { return (millilitres % 15) / 5; } }
/**
 * Pulls the current clock from this historical event recording.
 *
 * @return a GamePhaseCurrentTime populated from the given GameClockEvent
 */
public static GamePhaseCurrentTime generateGamePhaseCurrentTime(
        GameClockEvent gce) {
    GamePhaseCurrentTime gpct = new GamePhaseCurrentTime();
    gpct.setCurrentInterval(gce.currentInterval);
    gpct.setCurrentIntervalStartTime(gce.currentIntervalStartTime);
    gpct.setCurrentIntervalStartTimeLong(gce.currentIntervalStartTimeLong);
    gpct.setDateTripPoint(gce.dateTripPoint);
    gpct.setInitialized(gce.initialized);
    gpct.setIntervalName(gce.intervalName);
    gpct.setLongTripPoint(gce.longTripPoint);
    gpct.setPhaseId(gce.phaseId);
    gpct.setRsId(gce.rsId);
    gpct.setSdf(gce.sdf);
    gpct.setSimId(gce.simId);
    gpct.setTimeInterval(gce.timeInterval);
    gpct.setTimeOffset(gce.timeOffset);
    gpct.setTimerType(gce.timerType);
    return gpct;
}
/** * Generates token and creates asynchronous operation. * Sets token to session and response. * @param message { * "sessionId": "session identifier for save as async data", * "expiredTime": "TTL for async operation" * } * @throws CreateAsyncOperationActorException for creation error */ public void create(final CreateAsyncOperationMessage message) throws CreateAsyncOperationActorException { try { String token = IOC.resolve(Keys.getOrAdd("db.collection.nextid")); Integer amountOfHoursToExpireFromNow = message.getExpiredTime(); String expiredTime = LocalDateTime.now().plusHours(amountOfHoursToExpireFromNow).format(formatter); message.setSessionIdInData(message.getSessionId()); IObject authOperationData = message.getOperationData(); collection.createAsyncOperation(authOperationData, token, expiredTime); message.setAsyncOperationToken(token); List<String> availableTokens = message.getOperationTokens(); if (availableTokens == null) { message.setOperationTokens(Arrays.asList(token)); } else { availableTokens.add(token); message.setOperationTokens(availableTokens); } } catch (ResolutionException | ReadValueException | ChangeValueException | CreateAsyncOperationException e) { throw new CreateAsyncOperationActorException("Can't create async operation.", e); } }
#include <bits/stdc++.h>
using namespace std;

struct Node {
    int data;
    Node *next, *prev;
};

Node* merge(Node* a, Node* b) {
    // Base cases
    if (!a) return b;
    if (!b) return a;

    if (a->data <= b->data) {
        a->next = merge(a->next, b);
        a->next->prev = a;
        a->prev = NULL;
        return a;
    } else {
        b->next = merge(a, b->next);
        b->next->prev = b;
        b->prev = NULL;
        return b;
    }
}
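To sanity-check the recursive merge above, the self-contained sketch below rebuilds the same `Node`/`merge` pair and verifies both the sorted order and the `prev`-pointer wiring on two small sorted lists. The `push` helper is demo-only and not part of the original snippet:

```cpp
#include <cassert>
#include <cstddef>

struct Node {
    int data;
    Node *next, *prev;
};

// merge exactly as in the snippet above
Node* merge(Node* a, Node* b) {
    if (!a) return b;
    if (!b) return a;
    if (a->data <= b->data) {
        a->next = merge(a->next, b);
        a->next->prev = a;
        a->prev = NULL;
        return a;
    } else {
        b->next = merge(a, b->next);
        b->next->prev = b;
        b->prev = NULL;
        return b;
    }
}

// demo-only helper: push a value onto the front of a list
Node* push(Node* head, int v) {
    Node* n = new Node{v, head, NULL};
    if (head) head->prev = n;
    return n;
}
```

Merging the sorted lists 1→5 and 2→4 yields 1→2→4→5, with each node's `prev` pointing back at its predecessor and the head's `prev` left null.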
package main

import (
	"context"
	"encoding/base64"
	"github.com/Venafi/aws-private-ca-policy-venafi/common"
	"github.com/Venafi/vcert"
	"github.com/Venafi/vcert/pkg/endpoint"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/kms"
	"log"
	"os"
	"strings"
)

var vcertConnector endpoint.Connector

func HandleRequest() error {
	log.Println("Getting policies")
	names, err := common.GetAllPoliciesNames()
	if err != nil {
		log.Println("getting policies names error:", err)
		return err
	}
	for _, name := range names {
		log.Printf("Getting policy %s", name)
		vcertConnector.SetZone(name)
		p, err := vcertConnector.ReadPolicyConfiguration()
		if err == endpoint.VenafiErrorZoneNotFound {
			log.Printf("Policy %s not found. Deleting.", name)
			err = common.DeletePolicy(name)
			if err != nil {
				log.Println("delete policy error:", err)
			}
			continue
		} else if err != nil {
			log.Println(err)
			return err
		}
		log.Printf("Saving policy %s", name)
		err = common.SavePolicy(name, *p)
		if err != nil {
			log.Println("save policy error:", err)
		}
	}
	log.Println("success policies processing")
	return nil
}

func kmsDecrypt(encrypted string) (string, error) {
	log.Printf("Decrypting encrypted variable")
	if encrypted == "" {
		return "", nil
	}
	decodedBytes, err := base64.StdEncoding.DecodeString(encrypted)
	if err != nil {
		return "", err
	}
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Println("can`t load aws config", err)
		return "", err
	}
	svc := kms.New(cfg)
	input := &kms.DecryptInput{
		CiphertextBlob: decodedBytes,
	}
	req := svc.DecryptRequest(input)
	result, err := req.Send(context.Background())
	if err != nil {
		log.Println("can`t decrypt", encrypted, ":", err)
		return "", err
	}
	return string(result.Plaintext[:]), nil
}

func main() {
	log.Println("Starting policy lambda.")
	var err error
	apiKey := os.Getenv("CLOUDAPIKEY")
	password := os.Getenv("<PASSWORD>")
	plainTextCreds := strings.HasPrefix(strings.ToLower(os.Getenv("ENCRYPTED_CREDENTIALS")), "f")
	if
!plainTextCreds {
		var err error
		apiKey, err = kmsDecrypt(apiKey)
		if err != nil {
			log.Println(err)
			os.Exit(1)
		}
		password, err = kmsDecrypt(password)
		if err != nil {
			log.Println(err)
			os.Exit(1)
		}
	}
	vcertConnector, err = getConnection(
		os.Getenv("TPPURL"),
		os.Getenv("TPPUSER"),
		password,
		os.Getenv("CLOUDURL"),
		apiKey,
		os.Getenv("TRUST_BUNDLE"),
	)
	if err != nil {
		log.Println(err)
		os.Exit(1)
	}
	lambda.Start(HandleRequest)
}

func getConnection(tppUrl, tppUser, tppPassword, cloudUrl, cloudKey, trustBundle string) (endpoint.Connector, error) {
	log.Println("Getting Venafi connection")
	var config vcert.Config
	if tppUrl != "" && tppUser != "" && tppPassword != "" {
		config = vcert.Config{
			ConnectorType: endpoint.ConnectorTypeTPP,
			BaseUrl:       tppUrl,
			Credentials:   &endpoint.Authentication{User: tppUser, Password: <PASSWORD>},
		}
		if trustBundle != "" {
			buf, err := base64.StdEncoding.DecodeString(trustBundle)
			if err != nil {
				log.Printf("Can`t read trust bundle from file %s: %v\n", trustBundle, err)
				return nil, err
			}
			config.ConnectionTrust = string(buf)
		}
	} else if cloudKey != "" {
		config = vcert.Config{
			ConnectorType: endpoint.ConnectorTypeCloud,
			Credentials:   &endpoint.Authentication{APIKey: cloudKey},
			BaseUrl:       cloudUrl,
		}
	} else {
		panic("bad credentials for connection") // todo: replace with something more beautiful
	}
	return vcert.NewClient(&config)
}
/**********************************************************************************************************************
 *
 * Copyright (c) 2017-2018 <NAME>
 *
 * License: MIT
 * Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
 * documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all copies or substantial portions of
 * the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
 * WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
 * OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
* *********************************************************************************************************************/ #ifndef SSYBC_SRC_BLOCK_BLOCK_CONTENT_BLOCK_CONTENT_IMPL_HPP_ #define SSYBC_SRC_BLOCK_BLOCK_CONTENT_BLOCK_CONTENT_IMPL_HPP_ #include "include/ssybc/block/block_content/block_content.hpp" #include "include/ssybc/utility/utility.hpp" #include <exception> // --------------------------------------------- Constructor & Destructor --------------------------------------------- template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::BlockContent(DataType const & data): data_ptr_{ std::make_unique<DataType const>(data) } { if (SizeOfBinary() <= 0) { throw std::logic_error("Cannot construct BlockContent from data with size 0 as binary."); } } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::BlockContent(DataType &&data): data_ptr_{ std::make_unique<DataType const>(data) } { if (SizeOfBinary() <= 0) { throw std::logic_error("Cannot construct BlockContent from data with size 0 as binary."); } } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline ssybc::BlockContent< DataT, BinaryConverterTemplateT, HashCalculatorT>::BlockContent(BlockContent const & content): BlockContent(content.Data()) { EMPTY_BLOCK } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline ssybc::BlockContent< DataT, BinaryConverterTemplateT, HashCalculatorT>::BlockContent(BlockContent && content) : BlockContent(content.Data()) { EMPTY_BLOCK } // --------------------------------------------------- Public Method -------------------------------------------------- template<typename DataT, template<typename> class 
BinaryConverterTemplateT, typename HashCalculatorT> inline DataT ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::Data() const { return *data_ptr_; } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline ssybc::BinaryData ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::Binary() const { auto result = BinaryConverterType().BinaryDataFromData(*data_ptr_); CacheSizeOfBinary_(result.size()); return result; } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline ssybc::SizeT ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::SizeOfBinary() const { if (did_cache_size_of_binary_) { return size_of_binary_; } CacheSizeOfBinary_(Binary().size()); return size_of_binary_; } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline ssybc::BlockHash ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::Hash() const { return HashCalculatorT().Hash(Binary()); } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline std::string ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::HashAsString() const { return util::HexStringFromBytes(Hash(), " "); } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::operator std::string() const { return Description(); } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline std::string ssybc::BlockContent< DataT, BinaryConverterTemplateT, HashCalculatorT>::Description(std::string const &lead_padding) const { auto const size_str = util::ToString(SizeOfBinary()); std::string data_str{}; try { data_str = util::ToString(*data_ptr_); } catch (const std::exception& 
e) { data_str = util::HexStringFromBytes(Binary(), " "); } std::string result{lead_padding + "size: "}; result += (size_str + ",\n"); result += (lead_padding + "data: "); result += (data_str); return result; } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline std::string ssybc::BlockContent<DataT, BinaryConverterTemplateT, HashCalculatorT>::Description() const { return Description(""); } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline bool ssybc::BlockContent< DataT, BinaryConverterTemplateT, HashCalculatorT>::operator==(BlockContent const & block) const { return *data_ptr_ == block.Data(); } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline bool ssybc::BlockContent< DataT, BinaryConverterTemplateT, HashCalculatorT>::operator!=(BlockContent const & block) const { return *data_ptr_ != block.Data(); } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline auto ssybc::BlockContent< DataT, BinaryConverterTemplateT, HashCalculatorT>::ContentFromBinary(BinaryData const &binary_data) -> BlockContent { return BlockContent(BinaryConverterType().DataFromBinaryData(binary_data)); } template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline auto ssybc::BlockContent< DataT, BinaryConverterTemplateT, HashCalculatorT>::ContentFromBinary(BinaryData &&binary_data) -> BlockContent { return BlockContent(BinaryConverterType().DataFromBinaryData(binary_data)); } // -------------------------------------------------- Private Method -------------------------------------------------- template<typename DataT, template<typename> class BinaryConverterTemplateT, typename HashCalculatorT> inline void ssybc::BlockContent< DataT, BinaryConverterTemplateT, HashCalculatorT>::CacheSizeOfBinary_(SizeT const size) const { 
if (did_cache_size_of_binary_) { return; } size_of_binary_ = size; did_cache_size_of_binary_ = true; } #endif // SSYBC_SRC_BLOCK_BLOCK_CONTENT_BLOCK_CONTENT_IMPL_HPP_
I rather imagine that we are going to be repeating this question a lot this season: What did we learn today? To start with, we learned that yes, an opposing player can be MOTM. “Tono” Rodriguez, a 32-year-old keeper formerly of Racing Santander, was spectacular today. No. He was absurd, that kind of head-smacking good where you make yourself wonder what just happened, that forces players to change their performance expectations. We hit shots against that dude that go in against 99.9% of keepers in the world, on any given day, but not against him. It wasn’t that the shots weren’t good, it was that he was better. Fabregas missed a wide-open net, we thought. But the slightest of touches pushed the ball wide. Granada’s keeper was astounding today, and is easily my MOTM. That’s one thing. We can not be great, yet have great players The attack wasn’t good today for many reasons, not least of which was its static quality. Villa isn’t part of the offense the way that Pedro is. He stands at the shoulder of the defense, waiting for the killer ball. If he doesn’t get a pass his way, he isn’t part of the offense. Pedro runs, jumps, passes, receives, defends, is always around the ball in a way that doesn’t make it at all remarkable when …. well …. he finds himself around the ball. So the lack of movement from the forward line meant fewer real opportunities. Let’s look at the starting lineup for some answers: Valdes, Alves, Song, Mascherano, Adriano, Busquets, Thiago, Fabregas, Sanchez, Villa, Messi. First question is “Who runs the offense?” In our possession-based style, there always has to be a home base, so to speak. That is Xavi, when he is in there. Today, he didn’t start, so Fabregas and Thiago were sharing those duties, even while performing them in a way that made offensive continuity impossible. Because both would pass and then run somewhere. In an ideal world, another player slides into the spot they have vacated, for that series of triangles. 
In the real world, with Fabregas and Thiago running around, Busquets becomes that home base, but triangles don’t form because the players are too scattered, and we have difficulties keeping meaningful possession. But as usual, Vilanova got the subs right, bringing in Xavi, Pedro and Tello. Instantly, the offense got more dangerous because the attackers saw more of the ball and thus were more involved in the offense, which had a home base: Xavi. Great players can elevate The excellence of Tono meant that ordinary shots weren’t going to beat him. That Xavi goal, the one that broke the deadlock, was an extraordinary, extraordinary thing. He aimed at the crossbar, knowing that the only way to beat that keeper was going to be with a shot that there was no way in hell he was going to reach. So Xavi hit a howitzer that spanked off the underside of the crossbar (that’s how fine he cut it) and bounded into the net. The goal was an example of a great player readjusting his reality to compensate for another player having a great day. It was remarkable, when you think about it. Further, these great players are ultimately what make it so difficult to play against us. I am going to blaspheme here, and say that top to bottom, talent-wise, That Other Spanish Team has a better team than we do. But we have more great players, and in a 90-minute match, in which one great play can make all the difference, such a thing can be huge. Granada played a perfect match for 85 minutes, then that one great player did that one great thing, and that was that. Messi is human Now we already knew this. He is flesh and blood, born of mortal beings. But the way so many folks act, you’d think he was divine. So when he showed a normal moment of petulance, born of frustration coupled with intense desire, it became something much more than it was. David Villa didn’t play a ball the exact way that Messi would have liked, and Messi let him have it. 
And a shouting match erupted on the pitch, with the match in the balance. And much was made of it. At the time, on Twitter, I said “I don’t like bitchy Messi.” That is still true. Vilanova said that he also didn’t like it. Duh. Messi said those sorts of things go on all the time in practice as well as in matches, and it was no big deal. We’ll have to take his word for it in practices, but it is a rarity, from what I have seen, in matches. Be that as it may, it’s understandable for many reasons. Last week, the Chicago Bears were getting pasted by their most-hated rivals, the Green Bay Packers and the quarterback, Jay Cutler, wasn’t having a good game. Nor was his offensive line. So he laid into his left tackle not only verbally, but amplified that commentary with a shove. There was consternation, media hoopla and other sorts of hand wringing. The involved players squashed that beef, publicly. But one player came out and said, essentially, that Cutler shouldn’t have laid into a teammate when he was having as bad a game as everybody else. Keep that frustration to yourself. This is an interesting point, because Cutler’s reaction, just as Messi’s reaction, was crap. Alves dropped a pass at Messi’s feet that should have been a goal. Blocked by the defense because of one touch too many. Alves didn’t scream at Messi. Villa made an exquisite run, Messi’s pass was too hard, and bent too much. Villa didn’t yell at Messi. Didn’t do anything except run back and reset the offense. It happens. Nobody’s perfect. So no yelling. Period. Some might lay such things at the feet of Messi’s efforts seemingly being worth more than everyone else’s. After all, Guardiola went from “Run you bastards, RUN!” to “Run you bastards, except for you. Save your energy for important stuff.” Messi’s only playing half the pitch, usually. In wider shots you can see him, as the rest of the team is in our end if an opponent is attacking, standing around midfield like he’s waiting for the bus. 
Pressure, pressure, pressure. You have the ball, No. 10, now be brilliant. Every. Last. Time. Madness. Messi has been scoring goals, but he hasn’t been glittering Messi, just as his club has been getting it done on the pitch, even while not sparkling in the way that everyone on the planet is expecting. And personal pique can make us angry, as much at ourselves as others, and human nature makes us lash out. I hate playing tennis doubles because if I win or lose a point, I want it to be because I played well or screwed up. Then I know who to get mad at. And I wasn’t even that good a tennis player. Now imagine being Messi, who has Messi Moments of such otherworldly grace that even he can’t live up to his own expectations. Like that crazy thing that resulted in the Granada own goal today. That was impossible. Yet it happened. However do you deal with being that good? If you have a high personal standard, it can make you impatient and yes, bitchy. At the wrong times. Messi and Villa made up quickly, and Villa waited for Messi in the tunnel after the match to share a hug with Messi. All’s good, even as somewhere, Wrongaldo allowed himself a little smile, as Mr. Humility became, for a few seconds, a Spoiled Superstar who wasn’t getting his way. So we know that Messi is human because he is subject to emotional frailty. But he also didn’t have a very good overall match, losing something like a dozen balls and recovering none, and leaving a few excellent scoring opportunities begging. And that’s frustrating. Couple that with the immense pressure of a season in which this club HAS to win big silver, and it must be making him crazy. Should we be a bit worried about the pressure that is on Messi? Yes. None of us can imagine how it must be, but he is in effect the absolute everything for two football-crazed nations of supporters. 
None of which excuses what he did Strikers have that elegant, eloquent two-armed “put the ball right here” gesture, that invariably generates an acknowledgement from the teammate in question. Voila. Shouting means ears close, and the teammate shouts back, because he’s a man too, dammit. And there’s an edge, that can sometimes develop into a fissure. Vilanova will ensure that such a thing doesn’t happen, irrespective of what he says in public, which is basically “I don’t like it, but I understand it.” We should justly emphasize the first part of his comment. The press is silly So MARCA has turned the substitutions of Villa and Thiago into a repercussion of the Messi petulance. In fact, they were subbed because both were coming back from long injury spells and aren’t yet fully match fit, but they were also suffering from varying degrees of ineffectiveness, as mentioned above. We weren’t going to win this match by staying pat, and Vilanova knew that, even if the two players were daisy fresh and match fit. Different skill sets were required. So the substitutions had nothing to do with Messi. Period. Anything you read to the contrary, you can feel free to giggle at, then dismiss. We should be striving for Mascherano II, not Txigrinski II Let’s try some numbers on for size: 3 matches, 31 balls won, none lost, 17 won today, including one that led to the winning goal. This has been Alex Song’s contribution to the cause. Not bad, right? Yes, he has made errors, including a ridiculous whiff on a ball that fell to Alves for the Spartak own goal. But today, he had as good a defensive match as anybody on that back line, with key interventions, physical play at the right times, snuffing out two attacks, playing out from the back with Barca quality and always making himself available for the return pass. Yet, there are those who saw the same match that I watched and said “Dude isn’t good.” I don’t begin to know how such things happen, even as I have seen them time and again. 
Henry wasn’t as bad as so many said he was. Neither were Txigrinski or Ibrahimovic. But once some cules decide they don’t like a player, that’s it. The player can stop attacks, contribute goals, and it won’t be good enough. I have been a lifelong Chicago Bears fan. The Bears used to have a fullback, Matt Suhey, who I didn’t like. He would score a touchdown, and I would say “Anybody could have done that. He sucks.” But if he missed a block that led to a negative play, it was as if a plague had been unleashed upon the Earth and it justified every last bad thought that I had about him. Fact of the matter was that Suhey was an excellent blocking back for Walter Payton, a reliable third-down receiver and a Pro Bowl-quality player. Not to me, dammit. I will confess that it was silly. Very silly. When Javier Mascherano came, he was a bad passer and card magnet, who was going to leave us a man down every time that we had the temerity to play with him on the pitch. Reality has proven to be something very different. But because he came from Liverpool and was a proven player, he got patience and the benefit of the doubt. Now he is an excellent center back, having adapted to what isn’t his natural or learned position. Alex Song isn’t the player that anybody wanted, except for our coaching staff. “We need a real defender. That dude sucks. What the hell did we buy him for?” The cries for Javi Martinez, etc, etc, rose far and wide, amid mutterings about a crappy summer window. He took the pitch as a sub in the second leg of the SuperCopa, and owned. He would have provided the winning assist had Messi been able to finish a shot that he usually makes, and there wasn’t much that anyone had to say. But with the error in the Spartak match, nothing good that came before mattered. Song sucks. And for me, such a worldview is wrong and unfair. People can think as they like, but if Puyol had the match that Song had today, the chatter would be about how awesome our Capita was. 
I don’t know how it happens, even as I know that it happens, because I did it. But having watched the match today twice, and the Spartak match three times, Song isn’t anywhere near as bad as too many cules believe, not that this reality will stop them from believing that he is a disaster. Tito Vilanova is an excellent coach This will inspire a clamor of “Duh,” but recall the reaction when he was named. It was as though so many forgot that Vilanova was an essential part of the success that has graced this club since Pep Guardiola took over, and now that Guardiola left, the players would suddenly start to suck and the tactics and approaches that Vilanova had a part in shaping weren’t going to work any longer. And yet, Vilanova’s club has shown amazing resilience in winning matches that should have been draws or losses, and doing so at times because the coach has made the exact right substitutions, or tactical alterations. Is the club playing as well this season? Nope. But recall that things weren’t all that hot last season, except in spurts. And this season, opponents have found new ways of attack that successfully neutralize tactics that have heretofore made us successful. All of the long balls this season are no accident, as opponents know that even when our back line is whole, we don’t have pure defenders in there. If you can get directly at our back line, bypassing that pesky, ball-hawking midfield, you can do some damage. If you can get a defender one on one or in a pace situation, that’s even better. The Spartak attacker ran past Mascherano like he was in quicksand. The RM long balls in the SuperCopa were no accident. This season, we are being played more physically and much more in the air. Yet we are still winning matches, and Vilanova is a big part of that. Will that stop the “Oh, Lawd!” attendant to every different lineup that he offers? Nope. And that’s a pity, because even if he will never get it, he deserves the same support that Guardiola received. 
Here’s hoping that results will mean that Vilanova, too, will earn it. Fabregas isn’t going to get a whole lot better Cesc Fabregas’ last few matches have been, for the most part, very good. If a few shots go the right way, people are saying “He’s back!” rather than “Sigh, is he ever going to get good?” But in our system, we are pretty darned close to seeing a Fabregas that is as good as he’s going to get. He passes, helps in possession, defends, including tracking back that belies his sluggardly pace, and creates chaos on the offensive end by popping up in places where midfielders aren’t supposed to. Because in our system, that’s what he’s supposed to do. At Arsenal, he ran the offense. He was The Man, making key passes, taking key shots. It was his team. And we kicked the crap out of Arsenal, let’s not forget. So even if we sign their best player (at that time), whatever did we expect? Xavi is our Fabregas. So is Iniesta, depending on which hat Fabregas would wear for Arsenal. Nor was he playing against packed midfields that were bound and determined to prevent tika-taka. He had space to be creative, and he used it to shine. It’s different in Barcelona, and Fabregas is adapting. But there is a standard, or more correctly a price tag-based standard being applied to him that, if it persists, he is never, ever going to meet. It’s going to be a long season …. Today, Granada came at us with a very intelligent match plan that almost worked. On defense, they didn’t bother with marking players, deciding instead that it’s easier to just put a leg or body in front of the ball. I can’t recall a match in which we have had more shots blocked at the defense, never mind a keeper having the match of his life. And long balls over the top are getting at our back line, directly. And people are going to piss and moan when we concede goals. But you know what? 
If we had a back line like Manchester City’s, big, strong traditional defenders who can deal with attacks, hoof the ball out and are physical, cules would piss and moan about THAT. Further, our play would suffer, because attackers defend and defenders attack. So Song is in the box, feeding Messi while Sanchez transforms into a right back and helps to break up an attack. We can’t have it both ways. Our best center back, Carles Puyol, is a converted forward. The ball skills and touch required to play in our back line mean that ordinary defenders need not apply. This is a blessing much, much more than a curse. But at times this season, it will be a curse. It isn’t 2009, and won’t be ever again Guardiola’s first year featured an absurd team, the likes of which we will never, ever see again. Of the Top 10 attackers in the world, we had 3 of them: Eto’o, Henry and Messi. Any one of them could kill you and at times, they all did, with key goals, assists and passes that led to goals or assists. As our offense has evolved (in ways that I am not the biggest fan of, frankly), it has become more delicate, more capable of being derailed. In 2009, we could just bang a ball up the pitch to Eto’o who would take it, overpower somebody and bang a laser beam into the net. Or Iniesta would slide a ball up the pitch to a streaking Henry, who would outrun everybody and cut in toward goal. If you survived those two, there was Messi. Those days are gone. Today, we have a highly pressurized Best Player Alive, an aging striker coming back from a broken leg, a Serie A bit of brilliance who is finding that he can’t play that same way with us, a player just finding his form but who will always have to rely on work rate rather than sheer excellence to make a difference, and a couple of recent Masia graduates. If Vilanova wins anything with this club, it will be a remarkable accomplishment, just as it was last season, when Guardiola came sooo close. But it isn’t 2009 any longer. 
Cules need to stop applying that standard to subsequent clubs, and celebrate the hell out of what we have which, on its day, is still The Best Club in the World. And yet, we’re perfect so far this season, 5-0-0 Our back line is a shambles, our attack is a mess, we’re too dependent upon one player and yet, we haven’t lost a match this season, losing the SuperCopa on the away goals rule. And even then, we weren’t beaten, we lost thanks to two defensive errors that were almost comical. Vilanova says that the team is still improving. Hell, it has to, right? It can’t keep pulling out victories from what in another time would be draws or defeats, right? That’s just absurd. Which is pretty much what cules were saying about last season’s Liga winner, a team that didn’t play great all the time, but just kept on winning through key goals, well-timed substitutions and having the depth that meant play didn’t suffer too much when rotations had to occur. Yes, we still have the Great Player problem that they don’t have. It’s easier to replace a Xabi Alonso than a Xavi, even as we acknowledge the quality of both players. They can pull Jughead for Ozil or Modric. If Iniesta comes off, nobody is as good. Without Ronaldo, a tag team of Benzema and Higuain ain’t too bad, right? If Messi is off for us, our only proven goal scorer in the kinds of situations necessary to create goals is a 30-year-old dude coming back from a broken leg. All of which means that wins are going to be precious this season, not in their scarcity, but in the effort required to bring them off. They should be cherished and enjoyed, rather than expected. The scream that I uttered when Xavi scored that goal is still echoing in my TV room, because it was so glorious and so unexpected, like the legendary Iniestazo. It didn’t mean as much …. maybe. Because what it means right now is that this club has kept on winning, which is all that it has to do to pull off a remarkable accomplishment this season. 
So fasten those seat belts, folks. It’s going to be a hoot. Visca!
# Read the five antenna coordinates followed by the threshold K
N = []
for i in range(6):
    N.append(int(input()))

ans = "Yay!"
# Check every pair of antennas; any pair farther apart than K fails
for i in range(5):
    for j in range(i + 1, 5):
        dist = abs(N[j] - N[i])
        if dist > N[5]:
            ans = ":("
print(ans)
##5m
# Module-level imports required: socket, struct, logging, traceback (Python 2)
def run(self):
    try:
        server_socket = socket.socket()
        server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server_socket.bind(('0.0.0.0', 8000))
        server_socket.listen(0)
        # Wrap the accepted connection in a file-like object for framed reads
        connection = server_socket.accept()[0].makefile('rb')
        try:
            while True:
                print 'start read'
                # Each frame is prefixed with its length as a little-endian uint32
                image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
                if not image_len:
                    print 'invalid read'
                    break
                image_stream = connection.read(image_len)
                print 'end read'
                self.clientsLock.acquire()
                try:
                    for client in self.clients:
                        client.set_frame(image_stream)
                finally:
                    self.clientsLock.release()
        finally:
            connection.close()
            server_socket.close()
    except:
        logging.error(traceback.format_exc())
Functions satisfying holonomic q-differential equations Abstract. In a manner similar to the papers and , where explicit algorithms for finding the differential equations satisfied by holonomic functions were given, in this paper we deal with the space of q-holonomic functions, which are the solutions of linear q-differential equations with polynomial coefficients. The sum, the product and the composition with power functions of q-holonomic functions are also q-holonomic, and the resulting q-differential equations can be computed algorithmically.
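As background for readers new to the subject, the following standard definitions (not part of the abstract itself) make the setting precise. A function f is q-holonomic when it is annihilated by a linear operator, with polynomial coefficients, in the Jackson q-derivative:

```latex
% Jackson q-derivative (standard definition)
D_q f(x) = \frac{f(qx) - f(x)}{(q - 1)\, x}, \qquad q \neq 0,\ 1 ,
% f is q-holonomic if there exist polynomials p_0(x), \dots, p_r(x),
% with p_r \neq 0, such that
p_r(x)\, D_q^{\, r} f(x) + \cdots + p_1(x)\, D_q f(x) + p_0(x)\, f(x) = 0 .
```

Letting q tend to 1 recovers the ordinary derivative, so the closure properties stated in the abstract (sum, product, composition with power functions) parallel the classical holonomic ones.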
"My child has asthma": some answers to parents' questions. The incidence of asthma appears to have doubled since the 1970s. 15% of children in the UK are now estimated to have the condition. Children's asthma needs to be taken seriously by parents, health professionals and teachers. Its severity varies and it is a potentially life-threatening condition. The reason why children develop asthma is not known. Theories include a family tendency to asthma and allergy, house-dust mite and parental smoking. There is no clear evidence of a link between asthma and traffic fumes and air pollution. The vast majority of people with asthma take inhaled steroids, and these produce little or no side-effects. Tablet forms of the drug also have few side-effects if used for short courses of a few days. Only a minority need to take steroid tablets regularly, and need to discuss the issue of side-effects versus improvements in symptoms with their doctor. Most children use a preventer and a reliever medication. It is important that parents and child understand how and when to use each. Research suggests that two-thirds of children may grow out of their asthma symptoms, though asthma that is triggered by an allergic response may recur in later life. The prognosis for pre-school children is good.
def video_set_subtitle_file(self, psz_subtitle):
    e = VLCException()
    return libvlc_video_set_subtitle_file(self, psz_subtitle, e)
/**
 * Returns the type identifier for the specified array of values (integers), interning the array
 * in the {@code types} cache: a linear scan returns the index of an equal array if one exists;
 * otherwise the array is appended and its new index is returned.
 *
 * @param values
 *          an array of values (integers)
 * @return a type identifier
 */
static int typeIdentifierFor(int... values) {
	int j = IntStream.range(0, types.size()).filter(i -> Arrays.equals(values, types.get(i))).findFirst().orElse(-1);
	if (j != -1)
		return j;
	types.add(values);
	return types.size() - 1;
}
// MergeValidationResults fans in the audit results from all input channels into a single output channel.
func MergeValidationResults(ctx context.Context, k8sResources kube.K8SResource, channels ...<-chan v1alpha1.AuditResult) (kube.K8SResource, <-chan v1alpha1.AuditResult) {
	result := make(chan v1alpha1.AuditResult)
	var wg sync.WaitGroup
	wg.Add(len(channels))
	mergeResult := func(ctx context.Context, ch <-chan v1alpha1.AuditResult) {
		defer wg.Done()
		for c := range ch {
			result <- c
		}
	}
	for _, c := range channels {
		go mergeResult(ctx, c)
	}
	// Close the merged channel once every input channel has drained.
	go func() {
		defer close(result)
		wg.Wait()
	}()
	return k8sResources, result
}
// tgrid makes a clickable grid of tweets from the Atom feed func tgrid(t Feed, x, y, w, h, nc int) { var slink, imlink string xp := x for i, entry := range t.Entry { for _, link := range entry.Link { switch link.Rel { case "alternate": slink = link.Href case "image": imlink = link.Href } } if i%nc == 0 && i > 0 { xp = x y += h } canvas.Link(slink, slink) canvas.Image(xp, y, imw, imh, imlink) canvas.LinkEnd() xp += w } }
#include <bits/stdc++.h>
using namespace std;

typedef long long ll;
typedef pair<int, int> ii;
typedef pair<int, int> pii;
typedef pair<ll, ll> pll;
typedef pair<double, double> pdd;
typedef vector<int> vi;
typedef vector<vi> vvi;
typedef vector<pii> vii;

#define pb push_back
#define mp make_pair
#define fi first
#define se second
#define FAST_IO ios_base::sync_with_stdio(false)

const double PI = acos(-1.0);  // was declared int, which truncated pi to 3
const int MOD = 1e9 + 7;

// Plain integer power by squaring; despite the name, no modulus is applied,
// so it is only safe while a^b fits in a long long.
ll mod_exp(ll a, ll b) {
    if (b == 0) return 1;
    if (b == 1) return a;
    ll ret = mod_exp(a, b / 2);
    ret *= ret;
    if (b % 2 == 1) ret *= a;
    return ret;
}

// Reverses the decimal digits of n.
ll reverse_number(ll n) {
    ll ans = 0;
    while (n > 0) {
        ans = (ans * 10) + n % 10;
        n /= 10;
    }
    return ans;
}

int main() {
    ll k, p;
    cin >> k >> p;
    ll ans = 0;
    ll num = 1;
    ll exponent = 1;  // number of digits in num
    // The num-th even-length palindrome is num concatenated with its digit reversal.
    while (num <= k) {
        ans = (ans + num * mod_exp(10, exponent) + reverse_number(num)) % p;
        ++num;
        if (num == mod_exp(10, exponent)) {
            ++exponent;
        }
    }
    cout << ans << '\n';
    return 0;
}
Requirement for Knowledge Management System Knowledge Management (KM), defined as the creation and application of new knowledge, is becoming an important source of competitive advantage: employee innovation increases when knowledge is developed and shared. Failure to implement a knowledge management system is therefore a major concern for the management information systems community. With the aim of drawing the necessary attention to this issue, this paper provides a means of implementing a knowledge management system, whose success would be measured by reduced loss of critical information and improved data retrieval. The paper also sets out implementation principles and reviews the implementation process in a step-by-step approach. It investigates the requirements of a knowledge management implementation system from several angles, covering the definition of sources, the requirements themselves and the implementation, as well as the source definition and data retrieval processes. The cooperation of knowledge workers is considered one of the most important aspects of implementation, as is the centralization of the storage and retrieval of necessary information. Ways of eliminating the loss of knowledge when skilled workers leave or retire are also addressed. The contribution of this paper is a clearer account of knowledge management system implementation.
House Intelligence Committee member Rep. Trey Gowdy (R-SC) said Thursday that “classified” information about conversations by members of President Donald Trump’s administration “should never have made it to the public domain.” “We cannot overlook the fact that the methodology of the collection and the content of that transcript never should have made into the public domain,” Gowdy said in an appearance on MSNBC, apparently referring to details of ousted National Security Adviser Michael Flynn’s conversations with Russia’s ambassador, according to Gowdy’s office. “And people may like that it did today, because it hurts Republicans, but what it really does is it hurts our country because you are leaking classified information.” He said that “all facets” of contacts between members of Trump’s administration and Russian officials are important, but leaks particularly so. “I’m not focused on the leak, but the leak is really, really, important,” Gowdy said. Gowdy also said that “Congress is not equipped to investigate crime.” “We don’t have the tools to do it,” he said. “We are welcome to investigate allegations of constitutional injury but allegations of criminal activity we are not equipped to do.” He called for an investigation of “every facet” of Russia’s attempts to interfere with anything “from elections to infrastructure.” “Russia is not our friend, so I want to be crystal clear about that,” Gowdy said. “Investigate it all.” Attorney General Jeff Sessions recused himself on Thursday from an investigation into ties between Trump’s campaign and Russia, amid revelations that Sessions met twice with Russian Ambassador Sergey Kislyak before the election. In January, Sessions denied to the Senate Judiciary Committee that he had any “communications with Russians.” On Thursday, however, Sessions’ spokeswoman confirmed that Sessions met with Kislyak twice before the election. 
Correction: This post has been updated with additional information from Gowdy’s office that the congressman was also referring to leaks about Flynn’s conversations with the Russian ambassador.
/* src/linux-3.7.10/drivers/staging/iio/adc/mxs-lradc.c */
/*
 * Freescale i.MX28 LRADC driver
 *
 * Copyright (c) 2012 DENX Software Engineering, GmbH.
 * <NAME> <<EMAIL>>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <linux/interrupt.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/sysfs.h>
#include <linux/list.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/spinlock.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/stmp_device.h>
#include <linux/bitops.h>
#include <linux/completion.h>

#include <mach/mxs.h>
#include <mach/common.h>

#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>
#include <linux/iio/trigger.h>
#include <linux/iio/trigger_consumer.h>
#include <linux/iio/triggered_buffer.h>

#define DRIVER_NAME		"mxs-lradc"

#define LRADC_MAX_DELAY_CHANS	4
#define LRADC_MAX_MAPPED_CHANS	8
#define LRADC_MAX_TOTAL_CHANS	16

#define LRADC_DELAY_TIMER_HZ	2000

/*
 * Make this runtime configurable if necessary. Currently, if the buffered mode
 * is enabled, the LRADC takes LRADC_DELAY_TIMER_LOOP samples of data before
 * triggering IRQ. The sampling happens every (LRADC_DELAY_TIMER_PER / 2000)
 * seconds. The result is that the samples arrive every 500 ms.
 */
#define LRADC_DELAY_TIMER_PER	200
#define LRADC_DELAY_TIMER_LOOP	5

static const char * const mxs_lradc_irq_name[] = {
	"mxs-lradc-touchscreen",
	"mxs-lradc-thresh0",
	"mxs-lradc-thresh1",
	"mxs-lradc-channel0",
	"mxs-lradc-channel1",
	"mxs-lradc-channel2",
	"mxs-lradc-channel3",
	"mxs-lradc-channel4",
	"mxs-lradc-channel5",
	"mxs-lradc-channel6",
	"mxs-lradc-channel7",
	"mxs-lradc-button0",
	"mxs-lradc-button1",
};

struct mxs_lradc_chan {
	uint8_t			slot;
	uint8_t			flags;
};

struct mxs_lradc {
	struct device		*dev;
	void __iomem		*base;
	int			irq[13];

	uint32_t		*buffer;
	struct iio_trigger	*trig;

	struct mutex		lock;

	uint8_t			enable;

	struct completion	completion;
};

#define	LRADC_CTRL0				0x00
#define	LRADC_CTRL0_TOUCH_DETECT_ENABLE		(1 << 23)
#define	LRADC_CTRL0_TOUCH_SCREEN_TYPE		(1 << 22)

#define	LRADC_CTRL1				0x10
#define	LRADC_CTRL1_LRADC_IRQ(n)		(1 << (n))
#define	LRADC_CTRL1_LRADC_IRQ_MASK		0x1fff
#define	LRADC_CTRL1_LRADC_IRQ_EN(n)		(1 << ((n) + 16))
#define	LRADC_CTRL1_LRADC_IRQ_EN_MASK		(0x1fff << 16)

#define	LRADC_CTRL2				0x20
#define	LRADC_CTRL2_TEMPSENSE_PWD		(1 << 15)

#define	LRADC_CH(n)				(0x50 + (0x10 * (n)))
#define	LRADC_CH_ACCUMULATE			(1 << 29)
#define	LRADC_CH_NUM_SAMPLES_MASK		(0x1f << 24)
#define	LRADC_CH_NUM_SAMPLES_OFFSET		24
#define	LRADC_CH_VALUE_MASK			0x3ffff
#define	LRADC_CH_VALUE_OFFSET			0

#define	LRADC_DELAY(n)				(0xd0 + (0x10 * (n)))
#define	LRADC_DELAY_TRIGGER_LRADCS_MASK		(0xff << 24)
#define	LRADC_DELAY_TRIGGER_LRADCS_OFFSET	24
#define	LRADC_DELAY_KICK			(1 << 20)
#define	LRADC_DELAY_TRIGGER_DELAYS_MASK		(0xf << 16)
#define	LRADC_DELAY_TRIGGER_DELAYS_OFFSET	16
#define	LRADC_DELAY_LOOP_COUNT_MASK		(0x1f << 11)
#define	LRADC_DELAY_LOOP_COUNT_OFFSET		11
#define	LRADC_DELAY_DELAY_MASK			0x7ff
#define	LRADC_DELAY_DELAY_OFFSET		0

#define	LRADC_CTRL4				0x140
#define	LRADC_CTRL4_LRADCSELECT_MASK(n)		(0xf << ((n) * 4))
#define	LRADC_CTRL4_LRADCSELECT_OFFSET(n)	((n) * 4)

/*
 * Raw I/O operations
 */
static int mxs_lradc_read_raw(struct iio_dev *iio_dev,
			const struct iio_chan_spec *chan,
			int *val, int *val2, long m)
{
	struct mxs_lradc *lradc = iio_priv(iio_dev);
	int ret;

	if (m != IIO_CHAN_INFO_RAW)
		return -EINVAL;

	/* Check for invalid channel */
	if (chan->channel >= LRADC_MAX_TOTAL_CHANS)
		return -EINVAL;

	/*
	 * See if there is no buffered operation in progress. If there is,
	 * simply bail out. This can be improved to support both buffered and
	 * raw IO at the same time, yet the code becomes horribly complicated.
	 * Therefore I applied KISS principle here.
	 */
	ret = mutex_trylock(&lradc->lock);
	if (!ret)
		return -EBUSY;

	INIT_COMPLETION(lradc->completion);

	/*
	 * No buffered operation in progress, map the channel and trigger it.
	 * Virtual channel 0 is always used here as the others are always not
	 * used if doing raw sampling.
	 */
	writel(LRADC_CTRL1_LRADC_IRQ_EN_MASK,
		lradc->base + LRADC_CTRL1 + STMP_OFFSET_REG_CLR);
	writel(0xff, lradc->base + LRADC_CTRL0 + STMP_OFFSET_REG_CLR);

	writel(chan->channel, lradc->base + LRADC_CTRL4);
	writel(0, lradc->base + LRADC_CH(0));

	/* Enable the IRQ and start sampling the channel. */
	writel(LRADC_CTRL1_LRADC_IRQ_EN(0),
		lradc->base + LRADC_CTRL1 + STMP_OFFSET_REG_SET);
	writel(1 << 0, lradc->base + LRADC_CTRL0 + STMP_OFFSET_REG_SET);

	/* Wait for completion on the channel, 1 second max. */
	ret = wait_for_completion_killable_timeout(&lradc->completion, HZ);
	if (!ret)
		ret = -ETIMEDOUT;
	if (ret < 0)
		goto err;

	/* Read the data. */
	*val = readl(lradc->base + LRADC_CH(0)) & LRADC_CH_VALUE_MASK;
	ret = IIO_VAL_INT;

err:
	writel(LRADC_CTRL1_LRADC_IRQ_EN(0),
		lradc->base + LRADC_CTRL1 + STMP_OFFSET_REG_CLR);

	mutex_unlock(&lradc->lock);

	return ret;
}

static const struct iio_info mxs_lradc_iio_info = {
	.driver_module		= THIS_MODULE,
	.read_raw		= mxs_lradc_read_raw,
};

/*
 * IRQ Handling
 */
static irqreturn_t mxs_lradc_handle_irq(int irq, void *data)
{
	struct iio_dev *iio = data;
	struct mxs_lradc *lradc = iio_priv(iio);
	unsigned long reg = readl(lradc->base + LRADC_CTRL1);

	if (!(reg & LRADC_CTRL1_LRADC_IRQ_MASK))
		return IRQ_NONE;

	/*
	 * Touchscreen IRQ handling code shall probably have priority
	 * and therefore shall be placed here.
	 */

	if (iio_buffer_enabled(iio))
		iio_trigger_poll(iio->trig, iio_get_time_ns());
	else if (reg & LRADC_CTRL1_LRADC_IRQ(0))
		complete(&lradc->completion);

	writel(reg & LRADC_CTRL1_LRADC_IRQ_MASK,
		lradc->base + LRADC_CTRL1 + STMP_OFFSET_REG_CLR);

	return IRQ_HANDLED;
}

/*
 * Trigger handling
 */
static irqreturn_t mxs_lradc_trigger_handler(int irq, void *p)
{
	struct iio_poll_func *pf = p;
	struct iio_dev *iio = pf->indio_dev;
	struct mxs_lradc *lradc = iio_priv(iio);
	struct iio_buffer *buffer = iio->buffer;
	const uint32_t chan_value = LRADC_CH_ACCUMULATE |
		((LRADC_DELAY_TIMER_LOOP - 1) << LRADC_CH_NUM_SAMPLES_OFFSET);
	int i, j = 0;

	for_each_set_bit(i, iio->active_scan_mask, iio->masklength) {
		lradc->buffer[j] = readl(lradc->base + LRADC_CH(j));
		writel(chan_value, lradc->base + LRADC_CH(j));
		lradc->buffer[j] &= LRADC_CH_VALUE_MASK;
		lradc->buffer[j] /= LRADC_DELAY_TIMER_LOOP;
		j++;
	}

	if (iio->scan_timestamp) {
		s64 *timestamp = (s64 *)((u8 *)lradc->buffer +
					ALIGN(j, sizeof(s64)));
		*timestamp = pf->timestamp;
	}

	iio_push_to_buffer(buffer, (u8 *)lradc->buffer);

	iio_trigger_notify_done(iio->trig);

	return IRQ_HANDLED;
}

static int mxs_lradc_configure_trigger(struct iio_trigger *trig, bool state)
{
	struct iio_dev *iio = trig->private_data;
	struct mxs_lradc *lradc = iio_priv(iio);
	const uint32_t st = state ? STMP_OFFSET_REG_SET : STMP_OFFSET_REG_CLR;

	writel(LRADC_DELAY_KICK, lradc->base + LRADC_DELAY(0) + st);

	return 0;
}

static const struct iio_trigger_ops mxs_lradc_trigger_ops = {
	.owner = THIS_MODULE,
	.set_trigger_state = &mxs_lradc_configure_trigger,
};

static int mxs_lradc_trigger_init(struct iio_dev *iio)
{
	int ret;
	struct iio_trigger *trig;

	trig = iio_trigger_alloc("%s-dev%i", iio->name, iio->id);
	if (trig == NULL)
		return -ENOMEM;

	trig->dev.parent = iio->dev.parent;
	trig->private_data = iio;
	trig->ops = &mxs_lradc_trigger_ops;

	ret = iio_trigger_register(trig);
	if (ret) {
		iio_trigger_free(trig);
		return ret;
	}

	iio->trig = trig;

	return 0;
}

static void mxs_lradc_trigger_remove(struct iio_dev *iio)
{
	iio_trigger_unregister(iio->trig);
	iio_trigger_free(iio->trig);
}

static int mxs_lradc_buffer_preenable(struct iio_dev *iio)
{
	struct mxs_lradc *lradc = iio_priv(iio);
	struct iio_buffer *buffer = iio->buffer;
	int ret = 0, chan, ofs = 0, enable = 0;
	uint32_t ctrl4 = 0;
	uint32_t ctrl1_irq = 0;
	const uint32_t chan_value = LRADC_CH_ACCUMULATE |
		((LRADC_DELAY_TIMER_LOOP - 1) << LRADC_CH_NUM_SAMPLES_OFFSET);
	const int len = bitmap_weight(buffer->scan_mask,
			LRADC_MAX_TOTAL_CHANS);

	if (!len)
		return -EINVAL;

	/*
	 * Lock the driver so raw access can not be done during buffered
	 * operation. This simplifies the code a lot.
	 */
	ret = mutex_trylock(&lradc->lock);
	if (!ret)
		return -EBUSY;

	lradc->buffer = kmalloc(len * sizeof(*lradc->buffer), GFP_KERNEL);
	if (!lradc->buffer) {
		ret = -ENOMEM;
		goto err_mem;
	}

	ret = iio_sw_buffer_preenable(iio);
	if (ret < 0)
		goto err_buf;

	writel(LRADC_CTRL1_LRADC_IRQ_EN_MASK,
		lradc->base + LRADC_CTRL1 + STMP_OFFSET_REG_CLR);
	writel(0xff, lradc->base + LRADC_CTRL0 + STMP_OFFSET_REG_CLR);

	for_each_set_bit(chan, buffer->scan_mask, LRADC_MAX_TOTAL_CHANS) {
		ctrl4 |= chan << LRADC_CTRL4_LRADCSELECT_OFFSET(ofs);
		ctrl1_irq |= LRADC_CTRL1_LRADC_IRQ_EN(ofs);
		writel(chan_value, lradc->base + LRADC_CH(ofs));
		enable |= 1 << ofs;
		ofs++;
	}

	writel(LRADC_DELAY_TRIGGER_LRADCS_MASK | LRADC_DELAY_KICK,
		lradc->base + LRADC_DELAY(0) + STMP_OFFSET_REG_CLR);

	writel(ctrl4, lradc->base + LRADC_CTRL4);
	writel(ctrl1_irq, lradc->base + LRADC_CTRL1 + STMP_OFFSET_REG_SET);

	writel(enable << LRADC_DELAY_TRIGGER_LRADCS_OFFSET,
		lradc->base + LRADC_DELAY(0) + STMP_OFFSET_REG_SET);

	return 0;

err_buf:
	kfree(lradc->buffer);
err_mem:
	mutex_unlock(&lradc->lock);
	return ret;
}

static int mxs_lradc_buffer_postdisable(struct iio_dev *iio)
{
	struct mxs_lradc *lradc = iio_priv(iio);

	writel(LRADC_DELAY_TRIGGER_LRADCS_MASK | LRADC_DELAY_KICK,
		lradc->base + LRADC_DELAY(0) + STMP_OFFSET_REG_CLR);

	writel(0xff, lradc->base + LRADC_CTRL0 + STMP_OFFSET_REG_CLR);
	writel(LRADC_CTRL1_LRADC_IRQ_EN_MASK,
		lradc->base + LRADC_CTRL1 + STMP_OFFSET_REG_CLR);

	kfree(lradc->buffer);

	mutex_unlock(&lradc->lock);

	return 0;
}

static bool mxs_lradc_validate_scan_mask(struct iio_dev *iio,
					const unsigned long *mask)
{
	const int mw = bitmap_weight(mask, iio->masklength);

	return mw <= LRADC_MAX_MAPPED_CHANS;
}

static const struct iio_buffer_setup_ops mxs_lradc_buffer_ops = {
	.preenable = &mxs_lradc_buffer_preenable,
	.postenable = &iio_triggered_buffer_postenable,
	.predisable = &iio_triggered_buffer_predisable,
	.postdisable = &mxs_lradc_buffer_postdisable,
	.validate_scan_mask = &mxs_lradc_validate_scan_mask,
};

/*
 * Driver initialization
 */
#define MXS_ADC_CHAN(idx, chan_type)				\
{								\
	.type = (chan_type),					\
	.indexed = 1,						\
	.scan_index = (idx),					\
	.info_mask = IIO_CHAN_INFO_RAW_SEPARATE_BIT,		\
	.channel = (idx),					\
	.scan_type = {						\
		.sign = 'u',					\
		.realbits = 18,					\
		.storagebits = 32,				\
	},							\
}

static const struct iio_chan_spec mxs_lradc_chan_spec[] = {
	MXS_ADC_CHAN(0, IIO_VOLTAGE),
	MXS_ADC_CHAN(1, IIO_VOLTAGE),
	MXS_ADC_CHAN(2, IIO_VOLTAGE),
	MXS_ADC_CHAN(3, IIO_VOLTAGE),
	MXS_ADC_CHAN(4, IIO_VOLTAGE),
	MXS_ADC_CHAN(5, IIO_VOLTAGE),
	MXS_ADC_CHAN(6, IIO_VOLTAGE),
	MXS_ADC_CHAN(7, IIO_VOLTAGE),	/* VBATT */
	MXS_ADC_CHAN(8, IIO_TEMP),	/* Temp sense 0 */
	MXS_ADC_CHAN(9, IIO_TEMP),	/* Temp sense 1 */
	MXS_ADC_CHAN(10, IIO_VOLTAGE),	/* VDDIO */
	MXS_ADC_CHAN(11, IIO_VOLTAGE),	/* VTH */
	MXS_ADC_CHAN(12, IIO_VOLTAGE),	/* VDDA */
	MXS_ADC_CHAN(13, IIO_VOLTAGE),	/* VDDD */
	MXS_ADC_CHAN(14, IIO_VOLTAGE),	/* VBG */
	MXS_ADC_CHAN(15, IIO_VOLTAGE),	/* VDD5V */
};

static void mxs_lradc_hw_init(struct mxs_lradc *lradc)
{
	int i;
	const uint32_t cfg =
		(LRADC_DELAY_TIMER_PER << LRADC_DELAY_DELAY_OFFSET);

	stmp_reset_block(lradc->base);

	for (i = 0; i < LRADC_MAX_DELAY_CHANS; i++)
		writel(cfg | (1 << (LRADC_DELAY_TRIGGER_DELAYS_OFFSET + i)),
			lradc->base + LRADC_DELAY(i));

	/* Start internal temperature sensing. */
	writel(0, lradc->base + LRADC_CTRL2);
}

static void mxs_lradc_hw_stop(struct mxs_lradc *lradc)
{
	int i;

	writel(LRADC_CTRL1_LRADC_IRQ_EN_MASK,
		lradc->base + LRADC_CTRL1 + STMP_OFFSET_REG_CLR);

	for (i = 0; i < LRADC_MAX_DELAY_CHANS; i++)
		writel(0, lradc->base + LRADC_DELAY(i));
}

static int __devinit mxs_lradc_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct mxs_lradc *lradc;
	struct iio_dev *iio;
	struct resource *iores;
	int ret = 0;
	int i;

	/* Allocate the IIO device. */
	iio = iio_device_alloc(sizeof(*lradc));
	if (!iio) {
		dev_err(dev, "Failed to allocate IIO device\n");
		return -ENOMEM;
	}

	lradc = iio_priv(iio);

	/* Grab the memory area */
	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	lradc->dev = &pdev->dev;
	lradc->base = devm_request_and_ioremap(dev, iores);
	if (!lradc->base) {
		ret = -EADDRNOTAVAIL;
		goto err_addr;
	}

	/* Grab all IRQ sources */
	for (i = 0; i < 13; i++) {
		lradc->irq[i] = platform_get_irq(pdev, i);
		if (lradc->irq[i] < 0) {
			ret = -EINVAL;
			goto err_addr;
		}

		ret = devm_request_irq(dev, lradc->irq[i],
					mxs_lradc_handle_irq, 0,
					mxs_lradc_irq_name[i], iio);
		if (ret)
			goto err_addr;
	}

	platform_set_drvdata(pdev, iio);

	init_completion(&lradc->completion);
	mutex_init(&lradc->lock);

	iio->name = pdev->name;
	iio->dev.parent = &pdev->dev;
	iio->info = &mxs_lradc_iio_info;
	iio->modes = INDIO_DIRECT_MODE;
	iio->channels = mxs_lradc_chan_spec;
	iio->num_channels = ARRAY_SIZE(mxs_lradc_chan_spec);

	ret = iio_triggered_buffer_setup(iio, &iio_pollfunc_store_time,
				&mxs_lradc_trigger_handler,
				&mxs_lradc_buffer_ops);
	if (ret)
		goto err_addr;

	ret = mxs_lradc_trigger_init(iio);
	if (ret)
		goto err_trig;

	/* Register IIO device. */
	ret = iio_device_register(iio);
	if (ret) {
		dev_err(dev, "Failed to register IIO device\n");
		goto err_dev;
	}

	/* Configure the hardware. */
	mxs_lradc_hw_init(lradc);

	return 0;

err_dev:
	mxs_lradc_trigger_remove(iio);
err_trig:
	iio_triggered_buffer_cleanup(iio);
err_addr:
	iio_device_free(iio);
	return ret;
}

static int __devexit mxs_lradc_remove(struct platform_device *pdev)
{
	struct iio_dev *iio = platform_get_drvdata(pdev);
	struct mxs_lradc *lradc = iio_priv(iio);

	mxs_lradc_hw_stop(lradc);

	iio_device_unregister(iio);
	iio_triggered_buffer_cleanup(iio);
	mxs_lradc_trigger_remove(iio);
	iio_device_free(iio);

	return 0;
}

static const struct of_device_id mxs_lradc_dt_ids[] = {
	{ .compatible = "fsl,imx28-lradc", },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, mxs_lradc_dt_ids);

static struct platform_driver mxs_lradc_driver = {
	.driver	= {
		.name	= DRIVER_NAME,
		.owner	= THIS_MODULE,
		.of_match_table = mxs_lradc_dt_ids,
	},
	.probe	= mxs_lradc_probe,
	.remove	= __devexit_p(mxs_lradc_remove),
};

module_platform_driver(mxs_lradc_driver);

MODULE_AUTHOR("<NAME> <<EMAIL>>");
MODULE_DESCRIPTION("Freescale i.MX28 LRADC driver");
MODULE_LICENSE("GPL v2");
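The "samples arrive every 500 ms" figure in the driver's delay-timer comment follows directly from its constants: the delay unit ticks at LRADC_DELAY_TIMER_HZ = 2000 Hz, LRADC_DELAY_TIMER_PER = 200 ticks between samples gives 0.1 s per sample, and accumulating LRADC_DELAY_TIMER_LOOP = 5 samples yields one averaged result per IRQ every 0.5 s. A throwaway Python check of the arithmetic:

```python
LRADC_DELAY_TIMER_HZ = 2000   # delay-timer tick rate, Hz
LRADC_DELAY_TIMER_PER = 200   # ticks between successive samples
LRADC_DELAY_TIMER_LOOP = 5    # samples accumulated before each IRQ

# Time between raw samples, and time between averaged results (IRQs).
sample_period_s = LRADC_DELAY_TIMER_PER / LRADC_DELAY_TIMER_HZ
irq_period_s = sample_period_s * LRADC_DELAY_TIMER_LOOP
print(sample_period_s, irq_period_s)  # 0.1 0.5
```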
// dist/lib/ann/index.d.ts
import * as T from './types';
export { T };
export { ANN } from './ANN';
export { default } from './create';
from msdsl import *

r0, r1, c = 1234, 2345, 1e-9

m = MixedSignalModel('rc', dt=0.1e-6)
u = m.add_analog_input('u')
k = m.add_digital_input('k')
x = m.add_analog_output('x')

g = eqn_case([1/r0, 1/r1], [k])
m.add_eqn_sys([c*Deriv(x) == (u-x)*g])

m.compile_and_print(VerilogGenerator())
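The equation system passed to the model is c*Deriv(x) == (u-x)*g, with the conductance g switched between 1/r0 and 1/r1 by the digital input k. A plain forward-Euler sketch (my own code, independent of msdsl) shows the continuous dynamics that the generated Verilog is meant to emulate:

```python
def simulate_rc(u, k_sel, r0=1234.0, r1=2345.0, c=1e-9, dt=0.1e-6, steps=100):
    # Forward-Euler integration of c * dx/dt = (u - x) * g, where the
    # conductance g is selected by the digital input k (k_sel here).
    x = 0.0
    for _ in range(steps):
        g = (1.0 / r1) if k_sel else (1.0 / r0)
        x += dt * (u - x) * g / c
    return x
```

With these defaults the per-step gain dt*g/c is about 0.08, so the integration is stable and x charges toward u; selecting the larger resistance r1 gives a slower charge.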
#include <bits/stdc++.h>
using namespace std;

typedef unsigned long long ull;
typedef long long ll;
typedef unsigned int ui;
typedef vector<int> vi;
typedef vector<ui> vui;
typedef vector<ll> vll;
typedef vector<ull> vull;
typedef vector<string> vs;
typedef vector<vi> vvi;
typedef vector<vui> vvui;
typedef vector<vll> vvll;
typedef vector<vull> vvull;
typedef vector<vs> vvs;
typedef string ss;

#define forr(i,b) for(int i = 0; i < b; i++)
#define ford(i,b) for(int i = b; i >= 0; i--)
#define fore(v,a) for(auto& v:a)
#define ff for(;;)
#define pb push_back
#define bb(a,b,c) max(min(b,c),a)

template<typename T> int sgn(T val) {
    return (T(0) < val) - (val < T(0));
}

int gcd(int a, int b) {
    if (a == 0) return b;
    return gcd(b % a, a);
}

int ncr(int n, int k) {
    int r = 1;
    if (k > n - k) k = n - k;
    for (int i = 0; i < k; i++) {
        r *= n - i;
        r /= i + 1;
    }
    return r;
}

template<typename T> void printa(vector<T> arr) {
    for (ui i = 0; i < arr.size(); i++)
        if (i != arr.size() - 1) cout << arr[i] << ' ';
        else cout << arr[i] << '\n';
}

bool fn(ss s, char c) {
    return s.find(c) != ss::npos;
}

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    int k, r;
    cin >> k >> r;
    // Effectively O(1): worst case i reaches k * 9, since it can be
    // shown that ans never surpasses 9.
    int ans = 1;
    for (int i = k * ans; i % 10 && i % 10 != r; i = k * ++ans);
    cout << ans << '\n';
    return 0;
}
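The search loop in main can be mirrored in a few lines of Python (naming is mine): it returns the smallest n such that n*k ends in the digit 0 or in the digit r, which is why the loop terminates within the first ten multiples of k.

```python
def min_count(k: int, r: int) -> int:
    # Smallest n such that n*k mod 10 is 0 or r, matching the
    # condition `i % 10 && i % 10 != r` in the C++ loop.
    n = 1
    while (n * k) % 10 != 0 and (n * k) % 10 != r:
        n += 1
    return n
```

For example, with k = 117 and r = 3 the last digits of the multiples run 7, 4, 1, 8, 5, 2, 9, 6, 3, so the answer is 9.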
/*++
Copyright (c) 2006 Microsoft Corporation

Module Name:

    expr_stat.cpp

Abstract:

    Expression statistics (symbol count, var count, depth, ...)

    All functions in this module assume expressions do not contain
    nested quantifiers.

Author:

    Leonardo de Moura (leonardo) 2008-02-05.

Revision History:

--*/
#include "ast/for_each_expr.h"
#include "ast/expr_stat.h"

void get_expr_stat(expr * n, expr_stat & r) {
    typedef std::pair<expr *, unsigned> pair;
    buffer<pair> todo;
    todo.push_back(pair(n, 0));
    while (!todo.empty()) {
        pair & p       = todo.back();
        n              = p.first;
        unsigned depth = p.second;
        unsigned j;
        todo.pop_back();
        r.m_sym_count++;
        if (depth > r.m_depth)
            r.m_depth = depth;
        switch (n->get_kind()) {
        case AST_APP:
            j = to_app(n)->get_num_args();
            if (j == 0)
                r.m_const_count++;
            while (j > 0) {
                --j;
                todo.push_back(pair(to_app(n)->get_arg(j), depth + 1));
            }
            break;
        case AST_VAR:
            if (to_var(n)->get_idx() > r.m_max_var_idx)
                r.m_max_var_idx = to_var(n)->get_idx();
            r.m_ground = false;
            break;
        case AST_QUANTIFIER:
            todo.push_back(pair(to_quantifier(n)->get_expr(), depth + 1));
            break;
        default:
            UNREACHABLE();
        }
    }
}

unsigned get_symbol_count(expr * n) {
    unsigned r = 0;
    ptr_buffer<expr> todo;
    todo.push_back(n);
    while (!todo.empty()) {
        n = todo.back();
        unsigned j;
        todo.pop_back();
        r++;
        switch (n->get_kind()) {
        case AST_APP:
            j = to_app(n)->get_num_args();
            while (j > 0) {
                --j;
                todo.push_back(to_app(n)->get_arg(j));
            }
            break;
        case AST_QUANTIFIER:
            todo.push_back(to_quantifier(n)->get_expr());
            break;
        default:
            break;
        }
    }
    return r;
}
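The explicit-stack traversal above avoids deep recursion on large expressions while tracking symbol count, depth, and the maximum variable index. The same pattern can be sketched in Python over a toy tuple encoding of terms (the encoding and names below are mine, not z3's):

```python
def expr_stats(term):
    # Toy terms: ("app", name, [args]) or ("var", idx).
    # Returns (symbol_count, max_depth, max_var_idx, is_ground),
    # mirroring the explicit-stack walk in get_expr_stat.
    sym_count, max_depth = 0, 0
    max_var_idx, ground = -1, True
    todo = [(term, 0)]
    while todo:
        node, depth = todo.pop()
        sym_count += 1
        max_depth = max(max_depth, depth)
        if node[0] == "app":
            for arg in node[2]:
                todo.append((arg, depth + 1))
        elif node[0] == "var":
            max_var_idx = max(max_var_idx, node[1])
            ground = False
    return sym_count, max_depth, max_var_idx, ground
```

For f(g(x2), c) this counts 4 symbols, reaches depth 2 at the variable, records its index 2, and reports the term as non-ground.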
import json
import urllib3
from botocore.config import Config
import botocore
import boto3
import enum
import traceback
import time

CREATE_SUCCEEDED = "CREATE_COMPLETE"
CREATE_FAILED = ["CREATE_FAILED", "ROLLBACK_COMPLETE", "ROLLBACK_FAILED"]
DELETE_SUCCEEDED = "DELETE_COMPLETE"
DELETE_FAILED = ["DELETE_FAILED", "ROLLBACK_COMPLETE", "ROLLBACK_FAILED"]
UPDATE_SUCCEEDED = "UPDATE_COMPLETE"
UPDATE_FAILED = ["UPDATE_FAILED", "ROLLBACK_COMPLETE", "ROLLBACK_FAILED",
                 "UPDATE_ROLLBACK_COMPLETE"]

boto3_config = Config(
    retries={
        'max_attempts': 10,
        'mode': 'standard'
    }
)

cfn = boto3.client("cloudformation", config=boto3_config)
http = urllib3.PoolManager()


class Status(enum.Enum):
    SUCCESS = "SUCCESS"
    FAILED = "FAILED"


def lambda_handler(event, context):
    try:
        msg = json.loads(event["Records"][0]["Sns"]["Message"])
        print(json.dumps(msg))
        properties = msg['ResourceProperties']
        stack_name = properties["StackName"]
        tmpl = properties["Template"]
        parameters_list = properties["Parameters"]
        if msg["RequestType"] == "Create":
            create_handler(stack_name, tmpl, parameters_list, msg, context)
            return
        if msg["RequestType"] == "Update":
            update_handler(stack_name, tmpl, parameters_list, msg, context)
            return
        if msg["RequestType"] == "Delete":
            delete_handler(stack_name, tmpl, parameters_list, msg, context)
            return
        raise Exception("wrong request type")
    except Exception as e:
        print(traceback.format_exc())
        send(msg, context, Status.FAILED, {}, reason=str(e))


def create_handler(stack_name, tmpl, parameters, msg, ctx):
    stack = cfn.create_stack(
        StackName=stack_name,
        TemplateBody=tmpl,
        Parameters=parameters
    )['StackId']
    if wait_for_complete(stack, CREATE_SUCCEEDED, CREATE_FAILED, "Create"):
        send(msg, ctx, Status.SUCCESS, {}, physical_resource_id=stack)
    else:
        raise Exception("create failed")


def update_handler(stack_name, tmpl, parameters, msg, ctx):
    stack = cfn.update_stack(
        StackName=stack_name,
        TemplateBody=tmpl,
        Parameters=parameters
    )['StackId']
    if wait_for_complete(stack, UPDATE_SUCCEEDED, UPDATE_FAILED, "Update"):
        send(msg, ctx, Status.SUCCESS, {}, physical_resource_id=stack)
    else:
        raise Exception("update failed")


def delete_handler(stack_name, tmpl, parameters, msg, ctx):
    cfn.delete_stack(
        StackName=stack_name
    )
    if wait_for_complete(stack_name, DELETE_SUCCEEDED, DELETE_FAILED, "Delete"):
        send(msg, ctx, Status.SUCCESS, {})
    else:
        raise Exception("delete failed")


def wait_for_complete(stack_name, success_status, failed_status, request_type):
    while True:
        try:
            descr = cfn.describe_stacks(StackName=stack_name)['Stacks'][0]
        except botocore.exceptions.ClientError as e:
            if request_type == 'Delete' and "does not exist" in str(e):
                return True
            else:
                return False
        status = descr['StackStatus']
        if status == success_status:
            return True
        if status in failed_status:
            return False
        time.sleep(5)


def send(event, context, response_status, response_data,
         physical_resource_id=None, no_echo=False, reason=None):
    response_url = event['ResponseURL']
    print(response_url)
    response_body = {
        'Status': response_status.name,
        'Reason': reason or "See the details in CloudWatch Log Stream: {}".format(
            context.log_stream_name),
        'PhysicalResourceId': physical_resource_id or context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'NoEcho': no_echo,
        'Data': response_data
    }
    json_response_body = json.dumps(response_body)
    print("Response body:")
    print(json_response_body)
    headers = {
        'content-type': '',
        'content-length': str(len(json_response_body))
    }
    try:
        response = http.request('PUT', response_url,
                                headers=headers, body=json_response_body)
        print("Status code:", response.status)
    except Exception as e:
        print("send(..) failed executing http.request(..):", e)
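The status-polling pattern in wait_for_complete can be exercised without AWS by injecting a stand-in for the describe call; the stub below is hypothetical and stdlib-only, isolating just the loop's terminal-state logic:

```python
import time

CREATE_SUCCEEDED = "CREATE_COMPLETE"
CREATE_FAILED = ["CREATE_FAILED", "ROLLBACK_COMPLETE", "ROLLBACK_FAILED"]

def wait_for_status(describe, success_status, failed_status, poll_s=0.0):
    # Generic polling loop: `describe` returns the current stack status
    # string; loop until it lands on a terminal value.
    while True:
        status = describe()
        if status == success_status:
            return True
        if status in failed_status:
            return False
        time.sleep(poll_s)

# Stub that steps IN_PROGRESS -> IN_PROGRESS -> CREATE_COMPLETE.
states = iter(["CREATE_IN_PROGRESS", "CREATE_IN_PROGRESS", "CREATE_COMPLETE"])
ok = wait_for_status(lambda: next(states), CREATE_SUCCEEDED, CREATE_FAILED)
```

Injecting the describe function rather than calling boto3 directly makes the terminal-state handling testable in isolation.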
Foundations of cumulative culture in apes: improved foraging efficiency through relinquishing and combining witnessed behaviours in chimpanzees (Pan troglodytes) A vital prerequisite for cumulative culture, a phenomenon often asserted to be unique to humans, is the ability to modify behaviour and flexibly switch to more productive or efficient alternatives. Here, we first established an inefficient solution to a foraging task in five captive chimpanzee groups (N = 19). Three groups subsequently witnessed a conspecific using an alternative, more efficient, solution. When participants could successfully forage with their established behaviours, most individuals did not switch to this more efficient technique; however, when their foraging method became substantially less efficient, nine chimpanzees with socially-acquired information (four of whom witnessed additional human demonstrations) relinquished their old behaviour in favour of the more efficient one. Only a single chimpanzee in control groups, who had not witnessed a knowledgeable model, discovered this. Individuals who switched were later able to combine components of their two learned techniques to produce a more efficient solution than their extensively used, original foraging method. These results suggest that, although chimpanzees show a considerable degree of conservatism, they also have an ability to combine independent behaviours to produce efficient compound action sequences; one of the foundational abilities (or candidate mechanisms) for human cumulative culture. either innovation or social learning, after already having mastered a previous solution" (p.447 17 ), a lack of such flexibility has been found in several experiments with chimpanzees. Marshall-Pescini and Whiten 16 found that young chimpanzees failed to cumulatively modify their foraging efforts by building on their exisiting behaviours despite witnessing a more productive solution. 
Yet the more complex behaviour could be acquired if participants had no prior knowledge of the less lucrative foraging technique. This led the authors to suggest that chimpanzees are behaviourally conservative, a conclusion since reported in several further studies 13 (see also ref. 22); in simple terms, chimpanzees tend to become 'stuck' on known behaviours despite the availability of superior alternatives. These results appear inconsistent with other findings, such as that of Horner and Whiten 23, where chimpanzees 'streamlined' their behaviours after witnessing inefficient options used by others. However, this involved omitting elements 24,25, as opposed to the additive, ratchet effect required for cumulative culture 2. Similarly, following social demonstrations in a juice-acquiring task, Yamamoto, Humle and Tanaka 26 found that chimpanzees switched from using a straw as a dipping tool to exploiting a more efficient sucking function, but this also did not involve additive ratcheting. Such findings are in line with records of behavioural modification in the wild (see also ref. 32), as well as more recent experiments demonstrating payoff-related variation in simple behaviour, such as depositing 'tokens' in novel locations to increase food reward value 33,34. From studies examining behavioural change in humans, we might expect at least two factors to have differential effects on behavioural flexibility: the extent to which the behaviour has been practiced, and the complexity of the behaviour involved. As cultural traditions are often well-established and long-held behaviours, and are also sufficiently complex to necessitate social learning to acquire them, it may be important to consider how well-ingrained the behaviour to be modified is when extrapolating results to chimpanzees' potential for cumulative culture. Evidence now exists that chimpanzees can recognise and adopt superior variants of behaviours which are simple and conceptually similar to existing routines 33,34.
Chimpanzees can also relinquish old solutions and build on very simple behaviours to form action sequences when these sequences are within most chimpanzees' repertoires 38, as well as relinquish behaviours that have been performed but not yet adopted as a reliable foraging strategy 23,26. However, the extent to which chimpanzees can modify, relinquish or build upon well-established, cognitively more complex behaviours, those that perhaps mirror cultural behaviours more closely, remains to be established 13,16.

In the present studies, we investigated chimpanzees' ability to build upon socially acquired, complex behaviour in the context of improving efficiency. Of particular interest is whether a chimpanzee can benefit by witnessing a more efficient behaviour used by a conspecific compared to one they currently reliably employ to achieve the same goal, and flexibly switch to using this more efficient behaviour. A transparent puzzle box (Fig. 1; hereafter 'Serialbox') was used, from which a valued token could be extracted (later exchanged for a food reward) via either of two alternative operations differing in efficiency, with the inefficient method (Supplementary Video 1) more labour intensive and taking longer to complete. The efficient method (Supplementary Video 2) involved partial use of behaviours common to the inefficient method, along with the addition of a novel behaviour.

Figure 1. Under each lid were four finger holes that permitted an object (depicted as a purple cylinder), initially provisioned in the left-most compartment, to be pushed the length of the apparatus. This object could then be extracted through an opening at the other end ('Extraction point A'). This was the inefficient method in Experiments 1 and 3. A small door spanning two thirds of the first compartment (coloured here in red for clarity) was fitted on the chimpanzee side of the apparatus and could be pulled open using a handle protruding from the outside of the box to give alternative and quicker access to the left-most compartment ('Extraction point B'), where the token was initially positioned. This, in combination with lifting the lid of the left-most compartment and using the underlying holes to manoeuvre the token to extraction point B, was the efficient method in Experiments 1 and 3. The blue square shown in the left-most compartment depicts the indent in the floor in which the token was placed throughout Experiment 2.

Scientific Reports | 6:35953 | DOI: 10.1038/srep35953

The efficient method therefore involved not only streamlining the inefficient method by a subtractive process (noted in some studies of cumulative culture) 24,25, but also the addition of a novel behavioural element to an established sequence, that is, a ratcheting up of behaviour 2. Participants across five groups were initially trained to extract a valued token from the transparent Serialbox via a multi-stepped, repetitive, inefficient process (Fig. 1). To strengthen ecological validity when assessing chimpanzees' cumulative cultural capabilities, this extraction process was completed a minimum of 20 times over several sessions until it became a reliable and ingrained response. Three groups ('social information' groups) subsequently witnessed a conspecific model using the more efficient solution described in Fig. 1 and more fully in Methods below. Following repeated social demonstrations, the behaviour of participants was examined over ten hours of open diffusion, monitoring any spread of the more efficient technique, to better simulate the diffusion of behaviours in a culturally relevant context 39. We hypothesised that if chimpanzees could recognise a solution more efficient than the one they were currently employing and were able to switch to it, they should do so once they witnessed the actions of the model, regarded as a simulated 'innovator' 40.
To assess how readily chimpanzees could themselves innovate and switch to the efficient method without the need for social information, we trained two control groups to use the inefficient method but did not expose them to the efficient method through a trained conspecific ('non-seeded' groups). To investigate how naïve chimpanzees might solve this extractive problem when they did not have an established solution to the puzzle, the Serialbox was introduced to one additional control group who were not initially trained to extract via the inefficient method ('naïve' group). For this group, the problem could be solved by using either the efficient or inefficient strategy.

Experiment 1: Results

Due to limited sample sizes, data were analysed using non-parametric methods, with exact P values reported. Effect sizes were calculated using the Z score of the test statistic such that r = Z/√N, where N was the total number of observations included in the analysis. An analysis of interrater reliability using Cohen's kappa found excellent agreement (κ = 1) between two coders' judgements of whether the participant was extracting via the inefficient or the efficient method.

Participant inclusion and extractions across training and test phase. Eleven individuals in the 'social information' groups and eight in the 'non-seeded' control groups met the criterion for inclusion in the study (a minimum of 20 inefficient extractions; see Table 1 for participant demographics; Supplementary Table S1 for behaviours in the training and test periods; Supplementary Table S2 for relative efficiency of the two extraction techniques). There was no difference in the acquisition of the inefficient method between the 'social information' and 'non-seeded' individuals in terms of the number of extractions made during the training period (Mann-Whitney U = 36, P = 0.529; Supplementary Table S1).
Within the 'social information' groups, to analyse any growing behavioural proficiency, the mean time taken across the first ten extractions using the inefficient method was compared to the mean time taken across the last ten inefficient extractions, using a one-tailed Wilcoxon signed rank test. If an individual did not extract 20 times during the testing period, the mean times taken for inefficient extractions either side of the median extraction were calculated and compared. Individuals became significantly more proficient at the inefficient method over this test period (Z = −2.803, n = 10, P = 0.001, r = −0.63), with a median reduction in extraction latency from 47.5 to 26.2 seconds. Switching behaviours. Across this testing period ('E1'), nine of the 11 individuals in the 'social information' groups and all individuals in the 'non-seeded' groups continued to exclusively use the inefficient method established during the training period ('E0') to extract the token. To test for switching behaviour at the individual level, following van Leeuwen et al. 34 , the number of inefficient and efficient extractions performed during E0 and E1 were compared using a one-tailed Fisher's exact test. Two individuals (from separate groups) demonstrated a significant change of behaviour within this period, switching to using the efficient solution (Individual Se: E0 = 0, 21; E1 = 10, 16; P = 0.001. Individual Sa: E0 = 0, 22; E1 = 179, 0; P < 0.0001. Paired values give the frequencies of efficient and inefficient extractions respectively). 'Naïve' group. One individual, Jy, discovered and used the efficient method within two hours of interaction with the Serialbox. Individual Ua observed Jy's efficient method five times; following three initial failed attempts to open the door, she successfully used the efficient method to extract the token in a subsequent test session.
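Fisher's exact test on an individual's extraction counts can be reproduced directly from the hypergeometric distribution. A minimal pure-Python sketch, using the counts reported above for individual Se (the function name is ours, not from the study):

```python
from math import comb

def fisher_exact_one_tailed(table):
    """One-tailed Fisher's exact test on a 2x2 table [[a, b], [c, d]],
    for the alternative that cell a is small: sums the hypergeometric
    probabilities of every table at least as extreme as the observed one."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d
    def prob(x):  # hypergeometric probability that cell a equals x
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    return sum(prob(x) for x in range(0, a + 1))

# Counts reported above for individual Se, as (efficient, inefficient):
p = fisher_exact_one_tailed([[0, 21],    # E0: baseline period
                             [10, 16]])  # E1: open-diffusion test period
print(round(p, 3))  # 0.001, matching the P value reported for Se
```

Because zero efficient extractions in E0 is the most extreme table in the tested direction, the one-tailed P value here is just the probability of that single table under fixed margins.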
Before Ua witnessed use of the efficient method, she had unsuccessfully interacted with the apparatus, exploring only the holes and lids. Two other individuals witnessed the use of the efficient method just one and five times each and never successfully extracted the token. There was no discovery of the elaborate, inefficient method. Experiment 1: Discussion When chimpanzees used a well-established but laborious solution to successfully gain rewards, most were not seen to further explore alternatives, or to capitalise on social information available about a more efficient approach. The central finding from Experiment 1 was thus of a remarkable degree of conservatism, expressed in perseverance with a well-rehearsed routine despite witnessing a more efficient alternative modelled by another chimpanzee. Such conservatism has been documented in a series of other recent chimpanzee studies 13,16 . By contrast, in the 'naïve' group, the efficient method was discovered, if by only a single persistent individual, and was later adopted by another chimpanzee. The results thus tentatively suggest that having a prior solution may in itself hinder adoption of a superior alternative 16,18 . Such conservatism may have some adaptive value insofar as switching to an alternative may be costly, either through cognitive demands inherent to learning or potential loss of reward through lack of expertise in this method 41,42 . In fact, chimpanzees, who at the start of the testing period were already well practiced at the inefficient method, effectively halved the time taken to successfully extract the token across the testing period. This indicates growing expertise and skill proficiency in their behaviour, and supports previous findings that skill mastery may hinder behavioural change 16,18 .
To further investigate the limits of behavioural conservatism, in Experiment 2 the disparity in efficiency of behaviours was increased such that the inefficient method became not only an unreliable means of foraging but, even when successfully employed, the latency to extraction from point A was typically far higher than for B. In addition, the alternative behaviour needed for extraction at point B was reduced to a single element and did not require use of parts of the inefficient method, so subjects had only to relinquish an established solution and adopt a novel one-stepped alternative with no ratcheting on prior behaviours. Experiment 2: Relinquishing a highly inefficient solution The movement of the token along the length of the apparatus to extraction point A was impeded by placing the token in an indentation in the floor, directly behind extraction point B (Fig. 1), so movement of the token towards A was more awkward to initiate. However, the token could now be extracted from point B simply by pulling the door open. Raising lids and using finger holes were unnecessary. Accordingly, this experimental manipulation made the inefficient method more so, and the efficient method yet easier, enhancing the contrast between them (Supplementary Table S2). The 19 subjects who had met criterion for inclusion in the 'social information' and 'non-seeded' groups were all given a further ten hours of opportunity for solution and open diffusion with the inefficient method partially blocked in this way. Following Yamamoto et al. 26 , if individuals in the 'social information' groups failed to switch, they were provided with salient human demonstrations of the efficient method by SJD after this second period of open diffusion, because our question is not about chimpanzees offering such models, but rather how chimpanzees respond to such models when available.
The 'naïve' group was not included in Experiment 2 as not only were they already exclusively using the efficient method of extraction, but their initial inclusion was designed primarily to investigate how solution-naïve chimpanzees would approach this problem. Experiment 2: Results Extractions within the test period. In the 'social information' groups, the chimpanzee models demonstrated a 100% success rate of token extraction via the efficient method; in contrast, use of the inefficient method had a median success rate of only 25% (range 0-93%) (Supplementary Tables S1 and S2: a failed attempt was one in which a participant manipulated the Serialbox but subsequently left the apparatus without successfully extracting the token). Success rate became significantly lower in Experiment 2 (E2) compared to Experiment 1 when using the inefficient method (one-tailed Wilcoxon signed rank test: Z = −2.84, n = 10, P = 0.001, median E1 = 100%, median E2 = 25%, r = −0.64). If participants were successful in extracting the token via the inefficient method, latency to extraction was almost two and a half times longer than a successful extraction in Experiment 1 (E1 median = 33.6 seconds, range = 24.5-51.8; E2 median = 83 seconds, range 66.1-556; see Supplementary Table S2 for comparisons with models' efficiency). In the 'non-seeded' groups, one individual now discovered and used the easier efficient method (Individual Kt), and was witnessed by two other individuals, Na and Ae. These two did not then acquire the method; however, they had observed Kt only three and two times respectively. No other individual was observed to use the efficient method in the 'non-seeded' groups, with success rate dropping for all other participants (median success rate of 14.3%, range 0-50%).
Success rate was significantly lower in E2 than in E1 for those using the inefficient method in the 'non-seeded' groups (one-tailed Wilcoxon signed rank test: Z = −2.38, n = 7, P = 0.008, median E1 = 100%, median E2 = 14.3%, r = −0.64). Success rate for those using the inefficient method did not differ between the 'social information' and 'non-seeded' groups (Mann-Whitney U = 28, n = 17, P = 0.494). Switching behaviours. To assess switching behaviours in the 'social information' groups, the percentage of efficient extractions observed throughout E2 for each participant was compared with the percentage of efficient extractions observed during E0, using a one-tailed Wilcoxon signed rank test. There was now a significant switch, with five individuals in the 'social information' groups switching from the inefficient method to using the more efficient method that continued to be demonstrated by the model. Human demonstrations. After additional human demonstrations (median demonstrations given = 12, range = 10-17), four additional participants from the remaining six switched to using the efficient method in the 'social information' groups. Use of efficient method in 'social information' and 'non-seeded' groups. To determine the role of social information in behavioural upgrading, a one-tailed Fisher's exact test (applied due to expected values less than 5) compared the frequency of chimpanzees using the alternative method between those in 'non-seeded' groups and the 'social information' groups. A significant association was found between exposure to sustained social information and whether or not individuals switched to using the efficient alternative (P = 0.005) (Fig. 3). Based on the odds ratio, the odds of switching were 31.5 times higher for those in the 'social information' groups than those in the 'non-seeded' groups.
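The reported odds ratio of 31.5 and P = 0.005 follow from a 2×2 table of switchers versus non-switchers: by the end of Experiment 2, 9 of 11 'social information' individuals versus 1 of 8 'non-seeded' individuals had switched. A small sketch with those counts (taken from the results reported above; function names are ours):

```python
from math import comb

def sample_odds_ratio(table):
    """Sample odds ratio of a 2x2 table [[a, b], [c, d]]: (a*d)/(b*c)."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

def fisher_upper_tail(table):
    """One-tailed Fisher's exact P value for cell a being large:
    sums hypergeometric probabilities for x = a up to its maximum."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d
    def prob(x):  # hypergeometric probability that cell a equals x
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    return sum(prob(x) for x in range(a, min(row1, col1) + 1))

# Switched vs did-not-switch by the end of Experiment 2, as reported:
table = [[9, 2],   # 'social information' groups (9 of 11 switched)
         [1, 7]]   # 'non-seeded' groups (1 of 8 switched)
print(sample_odds_ratio(table))            # 31.5
print(round(fisher_upper_tail(table), 3))  # 0.005
```

The odds ratio is simply (9/2)/(1/7) = 31.5, and the exact tail probability recovers the P = 0.005 quoted above.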
As noted above, the two individuals who observed Kt in the 'non-seeded' group performing the efficient method did not acquire it, but they observed only three and two times respectively, whereas those in the 'social information' groups had a median of 31 observations before acquisition (range 15-169; Supplementary Table S3). Experiment 2: Discussion In all, nine of the 11 chimpanzees in the 'social information' groups were eventually able to flexibly change their behaviours by relinquishing their mastered technique and switching to a novel one. We infer that this was due to the greater contrast between participants' inefficient use of extraction at point A and the more efficient use of extraction at point B displayed by the model, a contrast that involved differences in both latency to extraction and proportion of successful extractions. An alternative possibility, that the changes occurred because of the more extended time frame of adding E2 to E1, affording more observations of the model, can be rejected for several reasons. First, E1 involved a long period in which any switching at all was rare; moreover, participants not switching in E1 persevered with their inefficient technique despite both multiple observations of the model (median 18 observations) and multiple token extractions using their inefficient method (median 18 attempts, range 4-119) for those that switched in Experiment 2. In addition, among chimpanzees who did switch at some point, the number of observations of the efficient method did not predict the number of manipulations they would take before switching (final two columns in Supplementary Table S3).
Given these considerations and that (i) only two participants were seen to open the door at point B in E1, and critically, (ii) no other individual was observed to make any persistent attempts to open the door until their behaviours became highly inefficient in E2, we conclude that the switch in behavioural strategy in E2 can be ascribed to the change in the relative efficiency of the options that was experimentally engineered between E1 and E2. Five of the switching chimpanzees showed relatively low levels of behavioural conservatism, with two having previously upgraded their behaviours in E1, the other three adopting the alternative once their own approach became highly inefficient in E2. This was clearly facilitated by social information, as demonstrated by a lack of switching (bar one individual) in the 'non-seeded' groups. The social learning involved may have relied on only relatively simple processes such as stimulus enhancement (of token extraction at point B), or more complex ones, like emulation or imitation, and our study was not designed to discriminate among these. In any case, stimulus enhancement or any other social learning was insufficient for change despite extensive exposure in Experiment 1; it had effects only when the contrast in efficiency became more extreme. Other chimpanzees still displayed a high degree of behavioural conservatism, in line with previous research 13,16 , showing a difficulty in inhibiting use of a highly inefficient established behaviour, with varying levels of perseveration. This was most evident in the 'social information' groups, where despite many observations of a far more efficient alternative, six individuals continued in their old behaviour for some time, with four only switching behaviours following salient social information engineered through human demonstrations, and the two remaining individuals never relinquishing their inefficient solutions.
There was also very little exploratory behaviour in the 'non-seeded' groups, with only one individual discovering the efficient method. Despite witnessing the efficient solution, two individuals within the 'non-seeded' groups never attempted this alternative method. This was most likely due to their more limited and inconsistent exposure to demonstrations of this method, and highlights again the conservative nature of chimpanzee behaviour. Although there was no direct relationship between the number of observations of the model and number of manipulations taken before switching, no individual within the 'social information' groups was seen to switch after as few demonstrations as experienced by these 'non-seeded' individuals, indicating the potential need for relatively sustained social information across repeated attempts to solve the Serialbox. This mirrors findings in humans whereby trial and error learning interacts with repeated exposure to socially available alternatives to produce behavioural change 43 . Whilst these results show some degree of behavioural flexibility, it remained to be seen whether chimpanzees could express such flexibility in a cumulative fashion; that is, could chimpanzees "add an existing technique used in a different context, or an entirely novel technique, to an existing technique, and integrate them functionally" (p. 181 44 ): could they now integrate the efficient method they had acquired (door pull and extraction at point B) with behavioural elements common to the inefficient method (lid lifting and hole poking) to cumulatively produce the efficient solution demanded by the scenario used in Experiment 1? In Experiment 1 only two chimpanzees were observed to do this, with the majority instead sticking to their known behaviours despite potential gains in extraction efficiency.
Now, however, seven additional chimpanzees within the 'social information' groups and one from the 'non-seeded' groups had mastered use of an alternative, independent solution (door pull and extraction at point B), which could potentially be combined with other known behaviours (elements of the inefficient solution) to produce a compound technique that they were previously not seen to use when some of these elements were novel. Experiment 3: Modifying, inhibiting and building on existing behaviours To investigate chimpanzees' potential for such accumulation, the token was repositioned in the same location as in Experiment 1 (i.e. it was removed from the indent in the floor so its movement was no longer impeded), and could now be successfully extracted at either point A using the methods of E0, or from point B (Fig. 1). To extract from point B, individuals had to employ initial elements from their learned, inefficient technique (lid lifting and hole poking) but inhibit the remainder of the sequence resulting in extraction at point A and instead combine lid lifting and poking with the element unique to efficient extraction (the door pull at point B). Alternatively, individuals could now revert back to using their earlier well-practiced inefficient technique, with this method reliably yielding the token, but much more slowly. Experiment 3: Results Extractions within the test period. One individual in the 'social information' groups and three individuals in the 'non-seeded' groups chose not to participate during the test period ('E3'; Supplementary Table S1). Switching behaviours. In the 'social information' groups, there was a significant change of behaviour from use of the earlier, trained inefficient method, with seven individuals now using the more efficient compound solution needed (one-tailed Wilcoxon signed rank test comparing percentage use of efficient behaviours: Z = −2.410, n = 10, P = 0.008, median E0 = 0%, median E3 = 88.2%, r = −0.54; Fig. 2).
In the 'non-seeded' groups, one individual, Kt, also built on her prior solution to use the more efficient method. No additional individuals in the 'non-seeded' group used the efficient method of extraction, with four exclusively sticking with the inefficient solution. At the individual level, of those with personal experience of the efficient method (n = 9 'social information' participants and n = 1 'non-seeded' participant), seven showed a significant change of behaviour from their initial inefficient method to using the efficient compound solution (one-tailed Fisher exact tests with Bonferroni-corrected P value = 0.005), whilst three reverted back to preferentially using the inefficient method (P > 0.005). In sum, five exclusively used the efficient method, three flexibly switched between using both methods, and two exclusively returned to the inefficient method (Fig. 4 and Table 2). Experiment 3: Discussion Seven chimpanzees in the 'social information' groups now displayed the efficient solution employed by the models. Only two of these individuals had previously been seen to use this efficient solution, when this required the addition of a novel element, in E1. The other five, along with the innovator Kt in the 'non-seeded' group, displayed a cumulatively built combination of elements they had learned in E0 and E2. From the results of E3 we conclude that accumulation involved the combination of behaviour routines already in the repertoire. One of these, opening the door at point B (whether this was acquired only by affordance learning about the significance of this door, or by copying the action sequence involved), gave rise to behavioural routines that could be combined with parts of an earlier-acquired procedure, of opening lids and poking, learned via training in E0. Chimpanzees' successes in E3 additionally displayed an ability to flexibly inhibit the remainder of the trained routine for extraction at point A.
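The Bonferroni-corrected threshold of 0.005 quoted above is simply the conventional family-wise alpha divided by the number of individual-level tests; a minimal sketch, assuming alpha = 0.05 and the ten individuals tested:

```python
def bonferroni_threshold(alpha: float, n_tests: int) -> float:
    """Per-test significance threshold under a Bonferroni correction:
    each of n_tests comparisons is judged against alpha / n_tests."""
    return alpha / n_tests

# Ten individual-level Fisher's exact tests at family-wise alpha = 0.05:
print(round(bonferroni_threshold(0.05, 10), 3))  # 0.005
```

Each of the ten individual Fisher tests is therefore called significant only if its P value falls below 0.05/10 = 0.005, which is the criterion applied in the passage above.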
Such capacities for cumulative combination, although modest compared to full cumulative culture, could, we submit, provide important foundations for cumulative culture if present in ancestral states. General Discussion Chimpanzees were trained to use a relatively laborious sequence of actions to extract a valuable food-token from a puzzle-box. This initial method was sufficiently complex to require socially-facilitated acquisition in most chimpanzees and we ensured it was then extensively practiced, to become routine, as in cultural behaviours in the wild. A different, more efficient alternative was then demonstrated by a high ranking female conspecific. This new solution involved partial use of behaviours in common with the established extraction technique as well as the addition of a novel element. When chimpanzees could still successfully forage with their established method (in E1), only a small minority relinquished this and flexibly upgraded to the more efficient alternative witnessed. The predominant failure to switch to the more efficient technique is consistent with earlier reports of chimpanzee conservatism 13,16,18-21 and may offer a partial explanation for the relative stasis of chimpanzee culture. However, when their established behaviours were made considerably more inefficient in E2, most chimpanzees observing a knowledgeable individual were able to relinquish their inefficient behaviour and flexibly switch to using an alternative strategy. When in E3 they were again challenged by the task configuration of E1, the majority of these chimpanzees showed an ability to build on prior behaviours by combining already acquired elements of their learned use of the door for extraction at point B and parts of their earlier technique for extraction at point A. They had not achieved this earlier in E1, when success required the addition of a novel behaviour to the sequence. 
The cumulative combinations recorded in E3 thus stand in contrast to the findings of previous studies where chimpanzees appear behaviourally inflexible 13,16 . Our results suggest that in certain contexts at least, chimpanzees may combine known behaviours to match an efficient compound technique demonstrated by others. Although chimpanzees show a considerable degree of behavioural conservatism, we suggest these results indicate that they also have an ability to combine independent behaviours to produce more efficient compound action sequences. Such an ability, while not yet truly cumulative, may be one of the foundational abilities (or candidate mechanisms) for human cumulative culture, through the ability to "add an existing technique used in a different context … to an existing technique, and integrate them functionally" (p. 181 44 ). This shares similarities with human studies in which recombination of behavioural variants is employed to move solutions closer to an optimum; that is, accumulation may commonly be brought about through novel recombination of existing behaviours, creating "innovations without invention, creativity or trial and error learning" (p. 5 49 ). Whilst we offer evidence for a potential core prerequisite of cumulative culture, this is not evidence of cumulative culture itself, as the behaviours of interest were also produced spontaneously by one chimpanzee we studied, and they do not require the combination of multi-generational contributions by several innovators, which is inherent to full-blown cumulative culture 10 . Further, our study was not designed to dissect exactly how the chimpanzees were learning from the available social information, whereas advanced cultural accumulation is thought to depend on high fidelity transmission 51 , as well as cognitively complex learning heuristics 15,52 .
However, chimpanzees in our study were able to use multiple solutions as well as to build on and combine prior behaviours to efficiently solve an extractive foraging problem, indicating greater potential for cumulative change than found in many earlier studies and emphasized in recent reviews (e.g. ref. 53). The accumulation observed here lends support to the plausibility that some behaviour exhibited by wild chimpanzees is actually the result of a cumulative process, even if elementary compared to that observed in human culture. Experiment 1: Methods Subjects and housing. N = 43 individuals (18 males; average age: 29.1; range: 11.9-50.5 years; Table 1). Procedure. Training phase (5 groups, 38 chimpanzees). Chimpanzees were initially trained to associate a small purple plastic token with a reward by trading this with experimenter SJD in exchange for one grape. The token was then placed inside the apparatus three quarters of the way along the first compartment (Fig. 1). The inefficient method of retrieving the token was demonstrated by SJD three times before participants interacted with the Serialbox. The inefficient method involved the lifting of each of the lids of the four compartments providing access to the finger holes. These holes were used to ferry the token along the compartments of the apparatus until it could be extracted from point A (Supplementary Video 1). Following these demonstrations, the box was pushed to the mesh allowing all individuals in each group access. Once the token was extracted from the apparatus, it was exchanged with SJD for one grape. During the training phase, the efficient method was not available because the pull door was locked shut, preventing extraction from point B.
If an individual was not able to successfully retrieve the token after demonstrations, scaffolding of the solution was provided whereby the token was positioned adjacent to extraction point A until extraction from this point was mastered, with additional demonstrations given if necessary. The token was gradually placed further away until the chimpanzee was manoeuvring the token along the length of the apparatus by opening the lids and using the underlying finger holes. Participants were given the opportunity to engage with the Serialbox until all participating individuals had successfully retrieved the token a minimum of twenty times over no fewer than two training sessions. When an individual was successful in retrieving the token, the apparatus was pulled back from the mesh, reset and re-baited. If an individual showed interest in operating the apparatus but was unable to gain access due to monopolisation by more dominant individuals, they were offered the opportunity to voluntarily enter their indoor enclosures and participate by themselves until they had reached criterion for inclusion in the study. Social information groups: Presence of social demonstrator (Three groups, N = 26). Model training phase. After all participating chimpanzees had reached criterion, a high ranking female chimpanzee voluntarily separated from her group and was trained on how to solve the Serialbox using a more efficient method. This involved pulling the door open, and, due to the positioning of the token a short distance from the extraction point (Fig. 1), lifting one lid and using the underlying finger holes to manoeuvre the token towards point B for efficient retrieval (Supplementary Video 2). Training sessions lasted around twenty minutes. Social demonstration phase. The Serialbox was re-introduced to the entire group with the efficient method no longer locked. The token could now be retrieved via either extraction point A or B.
The model was called by name and vocally encouraged to demonstrate the efficient method, which all models complied with. Following each extraction, the token was exchanged with SJD for one grape. After each participant had witnessed at least ten demonstrations of the more efficient method over no fewer than two separate testing sessions, the entire group was given the opportunity to interact with the Serialbox. A demonstration was taken to occur if an individual was within two metres of the model and the potential observer's head was orientated towards the apparatus. If a participating individual did not come into proximity with the model during the social demonstration phase, they were given the opportunity to voluntarily separate with the model and observe her actions. After the model had successfully retrieved the token, the apparatus was pulled away from the demonstrator, reset and re-baited. Testing phase (N = 11). The apparatus was presented over ten hours to all participating individuals with both the efficient and inefficient methods as viable strategies to extract the token. After each successful extraction, the apparatus was pulled away, reset and re-baited. To avoid cueing of responses, SJD occluded the apparatus and her hand movements with a sheet during interactions with the box. The apparatus was not made available to any non-participating chimpanzee (i.e. any individual who had not met criterion to be included in the study). Non-seeded groups: No social demonstrator (Two groups, N = 12). Control groups experienced the Training phase and Testing phase as above, but no model seeded knowledge of the more efficient method. Naïve group (1 group, N = 5). This control group was exposed to the apparatus with no prior knowledge of any solution over ten hours of open diffusion. Both the efficient and inefficient methods were viable extraction techniques.
Experiment 2: Methods Methods followed those outlined in the Testing phase of Experiment 1 Methods with the exception that the token was now placed in an indent in the floor located directly behind (from the chimpanzee's perspective) extraction point B (Fig. 1). This impeded movement of the token along the length of the apparatus. The 'naïve' group was not included in Experiment 2. Following Yamamoto et al. 26 , if individuals within the 'social information' groups failed to switch, they were provided with salient demonstrations of the efficient method by SJD after this second period of open diffusion (one individual did not receive human demonstrations as she did not wish to separate from her group). To avoid unnecessary voluntary separation of participants from their group, so long as a participant was able to gain access to the Serialbox, human demonstrations were given in the presence of other group members. If instead the participant struggled to gain access, they were offered the opportunity to voluntarily separate and given additional demonstrations over a period lasting no more than 30 minutes. After the participant attempted the inefficient method, SJD pulled the apparatus back and demonstrated use of the door. If participants were still attempting to use the inefficient method, SJD provisioned the apparatus with the door already open, facilitating extraction via point B. Experiment 3: Methods The token was again placed inside the apparatus three quarters of the way along the first compartment (as in Experiment 1). The apparatus was presented over five hours to all participating chimpanzees (19 individuals across the 'social information' and 'non-seeded' control groups), with both the efficient and inefficient methods as viable strategies to extract the token, following the procedure outlined in the Testing phase of Experiment 1 Methods. Analyses.
Records of the social demonstration and testing phases were both narrated and visually recorded using a HC-920 Panasonic camcorder. Responses were coded in situ for all groups, with 'social information' groups' behaviour additionally coded through video analysis. Ethics Statement. Ethical approval was granted for this study by the UTMDACC Institutional Animal Care and Use Committee (IACUC approval number 0894-RN01) and the University of St Andrews' Animal Welfare and Ethics Committee, and was carried out in accordance with approved guidelines.
<gh_stars>0 package morganfield import ( "net" "strings" ) type suffixlist struct { Internal_Host string External_Host string } // Fqdns only // dev.localdomain OK // .localdomain NOT ok func (s suffixlist) PublicSuffix(domain string) string { // By default everything is public -> do not allow setting cookie result := domain // If it's the internal host, allow setting the cookie for this specific domain if domain == s.Internal_Host { result = strings.Join(strings.Split(s.Internal_Host, ".")[1:], ".") } // If it's the external host, allow setting the cookie for this specific domain if domain == s.External_Host { result = strings.Join(strings.Split(s.External_Host, ".")[1:], ".") } return result } // Informational method func (s suffixlist) String() string { return "morganfield" } func get_suffix_list(s Service_Definition) suffixlist { inthost, _, err := net.SplitHostPort(s.Internal_Host) if err != nil { panic(err) } exthost, _, err := net.SplitHostPort(s.Internal_Host) if err != nil { panic(err) } return suffixlist{ Internal_Host: inthost, External_Host: exthost, } }
<filename>Activiti-master/modules/activiti-form-engine/src/main/java/org/activiti/form/engine/impl/persistence/deploy/DeploymentManager.java /* Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.activiti.form.engine.impl.persistence.deploy; import java.util.List; import org.activiti.form.api.Form; import org.activiti.form.engine.ActivitiFormException; import org.activiti.form.engine.ActivitiFormObjectNotFoundException; import org.activiti.form.engine.FormEngineConfiguration; import org.activiti.form.engine.impl.FormQueryImpl; import org.activiti.form.engine.impl.persistence.entity.FormDeploymentEntity; import org.activiti.form.engine.impl.persistence.entity.FormDeploymentEntityManager; import org.activiti.form.engine.impl.persistence.entity.FormEntity; import org.activiti.form.engine.impl.persistence.entity.FormEntityManager; import org.activiti.form.engine.impl.persistence.entity.ResourceEntity; /** * @author <NAME> * @author <NAME> */ public class DeploymentManager { protected FormEngineConfiguration engineConfig; protected DeploymentCache<FormCacheEntry> formCache; protected List<Deployer> deployers; protected FormEntityManager formEntityManager; protected FormDeploymentEntityManager deploymentEntityManager; public DeploymentManager(DeploymentCache<FormCacheEntry> formCache, FormEngineConfiguration engineConfig) { this.formCache = formCache; this.engineConfig = engineConfig; } public void deploy(FormDeploymentEntity deployment) { for (Deployer deployer : deployers) { 
      deployer.deploy(deployment);
    }
  }

  public FormEntity findDeployedFormById(String formId) {
    if (formId == null) {
      throw new ActivitiFormException("Invalid form id : null");
    }

    // first try the cache
    FormCacheEntry cacheEntry = formCache.get(formId);
    FormEntity form = cacheEntry != null ? cacheEntry.getFormEntity() : null;

    if (form == null) {
      form = engineConfig.getFormEntityManager().findById(formId);
      if (form == null) {
        throw new ActivitiFormObjectNotFoundException("no deployed form found with id '" + formId + "'");
      }
      form = resolveForm(form).getFormEntity();
    }
    return form;
  }

  public FormEntity findDeployedLatestFormByKey(String formDefinitionKey) {
    FormEntity form = formEntityManager.findLatestFormByKey(formDefinitionKey);

    if (form == null) {
      throw new ActivitiFormObjectNotFoundException("no forms deployed with key '" + formDefinitionKey + "'");
    }
    form = resolveForm(form).getFormEntity();
    return form;
  }

  public FormEntity findDeployedLatestFormByKeyAndTenantId(String formDefinitionKey, String tenantId) {
    FormEntity form = formEntityManager.findLatestFormByKeyAndTenantId(formDefinitionKey, tenantId);

    if (form == null) {
      throw new ActivitiFormObjectNotFoundException("no forms deployed with key '" + formDefinitionKey + "' for tenant identifier '" + tenantId + "'");
    }
    form = resolveForm(form).getFormEntity();
    return form;
  }

  public FormEntity findDeployedLatestFormByKeyAndParentDeploymentId(String formDefinitionKey, String parentDeploymentId) {
    FormEntity form = formEntityManager.findLatestFormByKeyAndParentDeploymentId(formDefinitionKey, parentDeploymentId);

    if (form == null) {
      throw new ActivitiFormObjectNotFoundException("no forms deployed with key '" + formDefinitionKey + "' for parent deployment id '" + parentDeploymentId + "'");
    }
    form = resolveForm(form).getFormEntity();
    return form;
  }

  public FormEntity findDeployedLatestFormByKeyParentDeploymentIdAndTenantId(String formDefinitionKey, String parentDeploymentId, String tenantId) {
    FormEntity form =
        formEntityManager.findLatestFormByKeyParentDeploymentIdAndTenantId(formDefinitionKey, parentDeploymentId, tenantId);

    if (form == null) {
      throw new ActivitiFormObjectNotFoundException("no forms deployed with key '" + formDefinitionKey +
          "' for parent deployment id '" + parentDeploymentId + "' and tenant identifier '" + tenantId + "'");
    }
    form = resolveForm(form).getFormEntity();
    return form;
  }

  public FormEntity findDeployedFormByKeyAndVersionAndTenantId(String formDefinitionKey, int formVersion, String tenantId) {
    FormEntity form = formEntityManager.findFormByKeyAndVersionAndTenantId(formDefinitionKey, formVersion, tenantId);

    if (form == null) {
      // Original message said "no decisions deployed", a stale copy-paste from the DMN engine.
      throw new ActivitiFormObjectNotFoundException("no forms deployed with key = '" + formDefinitionKey + "' and version = '" + formVersion + "'");
    }

    form = resolveForm(form).getFormEntity();
    return form;
  }

  /**
   * Resolving the form will fetch the form definition, parse it and store the
   * {@link FormCacheEntry} in memory.
   */
  public FormCacheEntry resolveForm(Form form) {
    String formId = form.getId();
    String deploymentId = form.getDeploymentId();

    FormCacheEntry cachedForm = formCache.get(formId);

    if (cachedForm == null) {
      FormDeploymentEntity deployment = engineConfig.getDeploymentEntityManager().findById(deploymentId);
      List<ResourceEntity> resources = engineConfig.getResourceEntityManager().findResourcesByDeploymentId(deploymentId);
      for (ResourceEntity resource : resources) {
        deployment.addResource(resource);
      }

      deployment.setNew(false);
      deploy(deployment);
      cachedForm = formCache.get(formId);

      if (cachedForm == null) {
        throw new ActivitiFormException("deployment '" + deploymentId + "' didn't put form '" + formId + "' in the cache");
      }
    }
    return cachedForm;
  }

  public void removeDeployment(String deploymentId) {
    FormDeploymentEntity deployment = deploymentEntityManager.findById(deploymentId);
    if (deployment == null) {
      throw new ActivitiFormObjectNotFoundException("Could not find a deployment with id '" + deploymentId + "'.");
    }

    // Remove any forms belonging to this deployment from the cache
    List<Form> forms = new FormQueryImpl().deploymentId(deploymentId).list();

    // Delete data
    deploymentEntityManager.deleteDeployment(deploymentId);

    for (Form form : forms) {
      formCache.remove(form.getId());
    }
  }

  public List<Deployer> getDeployers() {
    return deployers;
  }

  public void setDeployers(List<Deployer> deployers) {
    this.deployers = deployers;
  }

  public DeploymentCache<FormCacheEntry> getFormCache() {
    return formCache;
  }

  public void setFormCache(DeploymentCache<FormCacheEntry> formCache) {
    this.formCache = formCache;
  }

  public FormEntityManager getFormEntityManager() {
    return formEntityManager;
  }

  public void setFormEntityManager(FormEntityManager formEntityManager) {
    this.formEntityManager = formEntityManager;
  }

  public FormDeploymentEntityManager getDeploymentEntityManager() {
    return deploymentEntityManager;
  }

  public void setDeploymentEntityManager(FormDeploymentEntityManager deploymentEntityManager) {
    this.deploymentEntityManager = deploymentEntityManager;
  }
}
import { Context } from 'koa';

export interface IHealthcheckResponse {
  readonly status: number;
  readonly body: {
    readonly status: string;
  };
}

export async function getHealthcheck(_ctx: Context): Promise<IHealthcheckResponse> {
  return {
    status: 200,
    body: {
      status: 'OK',
    },
  };
}
// SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
/* Copyright(c) 2014 - 2020 Intel Corporation */
#include "adf_accel_devices.h"
#include "adf_common_drv.h"
#include "adf_transport_internal.h"

#define ADF_ARB_NUM 4
#define ADF_ARB_REG_SIZE 0x4

#define WRITE_CSR_ARB_SARCONFIG(csr_addr, arb_offset, index, value) \
	ADF_CSR_WR(csr_addr, (arb_offset) + \
	(ADF_ARB_REG_SIZE * (index)), value)

#define WRITE_CSR_ARB_WT2SAM(csr_addr, arb_offset, wt_offset, index, value) \
	ADF_CSR_WR(csr_addr, ((arb_offset) + (wt_offset)) + \
	(ADF_ARB_REG_SIZE * (index)), value)

int adf_init_arb(struct adf_accel_dev *accel_dev)
{
	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
	void __iomem *csr = accel_dev->transport->banks[0].csr_addr;
	unsigned long ae_mask = hw_data->ae_mask;
	u32 arb_off, wt_off, arb_cfg;
	const u32 *thd_2_arb_cfg;
	struct arb_info info;
	int arb, i;

	hw_data->get_arb_info(&info);
	arb_cfg = info.arb_cfg;
	arb_off = info.arb_offset;
	wt_off = info.wt2sam_offset;

	/* Service arb configured for 32 bytes responses and
	 * ring flow control check enabled.
	 */
	for (arb = 0; arb < ADF_ARB_NUM; arb++)
		WRITE_CSR_ARB_SARCONFIG(csr, arb_off, arb, arb_cfg);

	/* Map worker threads to service arbiters */
	thd_2_arb_cfg = hw_data->get_arb_mapping();

	for_each_set_bit(i, &ae_mask, hw_data->num_engines)
		WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, thd_2_arb_cfg[i]);

	return 0;
}
EXPORT_SYMBOL_GPL(adf_init_arb);

void adf_update_ring_arb(struct adf_etr_ring_data *ring)
{
	struct adf_accel_dev *accel_dev = ring->bank->accel_dev;
	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
	struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
	u32 tx_ring_mask = hw_data->tx_rings_mask;
	u32 shift = hw_data->tx_rx_gap;
	u32 arben, arben_tx, arben_rx;
	u32 rx_ring_mask;

	/*
	 * Enable arbitration on a ring only if the TX half of the ring mask
	 * matches the RX part. This results in writes to CSR on both TX and
	 * RX update - only one is necessary, but both are done for
	 * simplicity.
	 */
	rx_ring_mask = tx_ring_mask << shift;
	arben_tx = (ring->bank->ring_mask & tx_ring_mask) >> 0;
	arben_rx = (ring->bank->ring_mask & rx_ring_mask) >> shift;
	arben = arben_tx & arben_rx;

	csr_ops->write_csr_ring_srv_arb_en(ring->bank->csr_addr,
					   ring->bank->bank_number, arben);
}

void adf_exit_arb(struct adf_accel_dev *accel_dev)
{
	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
	struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
	u32 arb_off, wt_off;
	struct arb_info info;
	void __iomem *csr;
	unsigned int i;

	hw_data->get_arb_info(&info);
	arb_off = info.arb_offset;
	wt_off = info.wt2sam_offset;

	if (!accel_dev->transport)
		return;

	csr = accel_dev->transport->banks[0].csr_addr;

	/* Reset arbiter configuration */
	for (i = 0; i < ADF_ARB_NUM; i++)
		WRITE_CSR_ARB_SARCONFIG(csr, arb_off, i, 0);

	/* Unmap worker threads from service arbiters */
	for (i = 0; i < hw_data->num_engines; i++)
		WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, 0);

	/* Disable arbitration on all rings */
	for (i = 0; i < GET_MAX_BANKS(accel_dev); i++)
		csr_ops->write_csr_ring_srv_arb_en(csr, i, 0);
}
EXPORT_SYMBOL_GPL(adf_exit_arb);
Duncan: Hey! Nice to meet you too. Just so you guys know, I've got about an hour and then I've got to get back to moving. I'm in the middle of a move, so I'm up to my neck in boring tasks outside of this, which is wonderful. I've got all this gear, packing up cords, incredibly mundane things I have to do.

Euvie: So, what's the least mundane thing that's happening right now?

Duncan: The least mundane thing? I'll tell you, and it actually might seem… wait, where are you guys located?

Euvie: Bulgaria.

Mike: And Istanbul.

Duncan: Ok. This might seem mundane to you guys, but it's raining in Los Angeles, which is glorious because it never rains here. That makes my day. Whenever we get any kind of rain at all, it's just magical. So that, currently, is what I'm the most excited about. Boring.

Mike: That's so earthy.

[laughter]

Duncan: You know, you don't really know how much you need it if you don't get weather. Weather is so important in California because we're in this rotten drought. You just don't get weather, you get a never-ending beautiful day, which sounds fantastic on paper maybe, but in reality it's kind of hellish. It's like something out of some terrible and lazy existentialist play or something. Just unremitting beautiful days where you're forced to recognise the darkness of your own heart.

[laughter]

Duncan: It seems like there is this never-ending trickle of news coming in about discoveries regarding plants and animals that we formerly just wrote off as being unintelligent or completely unaware, which is pretty… what do you call it… anthro…

Euvie: Anthropomorphic.

Duncan: Anthropocentric. Now you find out: oh, actually the tree is incredibly tuned into its environment and connecting and communicating with all the other plant life.
So, when there's a drought happening, what you're really watching is death. You're looking out at the death of all these living things that are just slowly withering because of a lack of rain, watching things essentially die of thirst around you. It reminds you of the climate change that is apparently happening, and that puts you in a nice apocalyptic mind state, especially when you mix that in with the fact that we have nearly legal marijuana here, so it's really easy to be constantly stoned.

Mike: I was just at the fish market today in Istanbul, shopping for dinner, and my friend and I were just looking at the racks of fish and shrimp and everything, and we were saying to each other: you know, in about 20 years none of these fish are going to be around.

Duncan: Yeah.

Mike: That's crazy.

Duncan: Maybe less. I mean, it seems like right now you have to choose between subscribing to the comforting conspiracy theory that there is some cabal of scientists working for the UN who want to convince the people of the Earth that the planet is going through climate change when in fact everything is fine. You either subscribe to that, or you have to subscribe to the probably more likely scenario: that we're entering into a mass extinction, something that people have been writing about for a long time and have names for. Scientists call it climate change; religious people call it the apocalypse. It's the same word. It's just funny that the modern way of talking about the end of the world is "climate change". Even though it's not the end of the world. Actually, one of my favorite George Carlin bits, he says: "The planet's fine. Things die on this planet. They have been dying on this planet for millions of years. It's just a normal thing for beings to be wiped out on this planet, for new life to come. The planet is fine. Humans are fucked."

[laughter]

Duncan: But it's not really scary.
I mean, scary on one level, but on another level, when you go really deep in yourself. When I think about the things I'm truly afraid of, the big ones, like climate change, being wiped out by some plague or a flood or a massive tornado with power never before seen, these things are so distant and impossible to connect to that I don't really feel afraid of them. That's probably the problem for the majority of the people on the planet: connecting with something so massive and so seemingly far away. The ice caps? What does that even mean? Most people have kids, they don't have enough food, you know. One of the major problems our species is facing is the inability to recognise global challenges and to work together in a global way to fix them.

Mike: It's funny how we noticed that transition. It was like: oh, look at the price of tomatoes!

Duncan: Right. Here, there's no avocados. You go to a Mexican restaurant and they're like: oh, there's no guacamole, no avocados. What? You know, we have an avocado problem. What? There's no avocados? What are you talking about? Alright, I guess there's no avocados.

Mike: The world is out of that fish. Out of stock.

Duncan: It's crazy, yeah. And it's funny, because I really like thinking about the singularity, which is another word for the apocalypse, and the way that movies have conditioned us to think: ok, I know what the end of the world is going to look like. Because the way we think the end of the world looks, if you've grown up watching post-apocalyptic movies, is there's this sudden event. A meteor impact, or over the course of a couple of weeks a disaster happens. To see the way it probably is happening, which is, like you're saying, conveniences that we've been taking for granted just one after the other kind of going away. You see an elephant in the zoo, maybe giraffes, and it looks like the giraffes are saying goodbye. It's like the party is beginning to end. The party consisting of all living things on this planet.
Some of them are starting to make French exits. Where did the giraffes go? They didn't even say goodbye. They just vanished, huh? Well, no more giraffes. They were cool.

Euvie: I grew up in the Soviet Union, actually, and that's what it was like in the late 80s. I mean, you just walk into the store and the shelves are empty, and the lady at the front is like: sorry, there's no bread today. Well, what do you have? We have rice. There's this kind of desperation, and everybody is miserable, and there's violence, and there's probably a lot of alcoholism and stuff like that, but there's nothing happening on a day-to-day basis. It's just getting progressively worse.

Mike: I was walking down the street the other day and looking at all the candy shops and thinking: there is conflict in Turkey right now, the Russian ambassador was just shot the other day, and I could hear a car bomb from my window about a week ago. Just crazy s***, right? But I'm walking down the street, and it's sort of a mix of a European, Middle Eastern and Asian market. The street's really lively and cool, and there's candy shops everywhere, like every third shop is a candy shop, and I was thinking: I don't know if it's a farce, if anyone is buying that. How are these guys staying open? Because you sense the desperation, but at the same time there's all of this commerce going on.

Duncan: So the desperation is coming from the turbulence being caused by what?

Mike: The political situation.

Duncan: Can you describe the political situation to me? Because, for one thing, over here we're foggy on what's happening, just because we're not even sure if we can trust our news. There's a confusion happening here, because people are starting to wonder if all of the news that we're getting over here is propaganda. We don't know what's happening. For example, the situation in Syria. There's two different stories that people are telling about that situation.
One of them is: these rebels are the "good guys" and they're being wiped out by Russia and Assad. In the other story it's: no, these are ISIS, radical Islamic militants, and if we don't take care of them now, there's going to be more assassinations, more car bombings. One story is "liberate Syria" and the other is "just get rid of them like they're zombies, wipe them all out completely." Then there's of course shades of those stories. Which one is true?

Mike: I think there's no moderation in news, and that's part of the problem. Everyone's trying to get the most clicks and attention, and that's why clickbait is such a thing. It's not even so much that the story is being manipulated, although I'm sure it is, but there's just such extremism in how the story is being communicated that no one knows what side to believe anymore. It's like: bomb them all, or save them all. I don't know what I'm supposed to do in this situation.

Duncan: And here's another sad thing: so many of these events are happening now that it's difficult to keep up with them, because there are actually now minor car bombings, you know, minor suicide bombings, where they go in and only four people were killed. I'm not even looking at that, it was just four people versus 50. It's crazy, things are getting strange. But there was some video that popped up of one of these guys with two children. Did you see that? The two children who were used as suicide bombers recently, did you see that?

Euvie: No.

Mike: No, I didn't.

Duncan: So he's sitting with them. He says: Islam is the religion of glory, not a religion of humiliation. And these two children were going to be "martyred", as they call it, and he's saying that to the kids: are you afraid to die now? No. You know, they ended up killing a bunch of people, apparently.
Again, is that real? Is it propaganda? I don't know. I saw the video, it looks real. But, like you're saying, we want extremes. Extremes are preferable to shades, because extremes you can deal with. If there's an extreme problem, if we're surrounded by something that is definitely trying to kill us and has a kind of unified goal, then it's easy to fix that problem. Not that you'd want a problem that horrible, but it kind of seems like humanity is entering into this age where certain operating systems aren't working anymore for the kind of software that we're trying to run right now. Some kind of archaic view of religion doesn't work anymore, or it's mixing in with new ways of looking at the planet and it just doesn't work. I mean, living in the United States, we're fully aware of the fact that we are constantly bombing people accidentally. Think about that. People are horrified when a person intentionally walks into a mosque and blows up a bunch of people, but in the United States it's kind of overlooked that we accidentally kill people all the time, which is so f***** up. In fact, the intentional killing… sometimes I think: if you had to choose between being intentionally murdered or accidentally murdered, which would you want? Accidentally getting stepped on by some giant thing drunk on its own power, or being intentionally blown up by some desperate thing? I don't know. I hope I never have to make that decision. Do you guys think the problem is religion, or a faulty or outdated understanding of a world religion?
Euvie: I think a big part of it is that people just aren't looking at the facts, whether it's religion clouding their judgement, or oversimplification, or that they're just not thinking on their own. We have more data available than we have ever had in human history, and that's what blows my mind. So whether it's because of religion or because of something else, I don't think it matters. What matters is that people aren't thinking straight.

Mike: I was going to say the exact same thing: observing facts, you know. I heard something recently, that more people are killed by bee stings than by terrorism, or something like that. That's insane.

Euvie: More people are killed by furniture than terrorists. [laughter] In the States.

Duncan: I don't know why I've decided to blow the apocalypse trumpet with you guys this morning. I didn't want it to get so dark, but let me just point this out. I've thought that too. [00:17:00] I was like, come on, really? More people are killed by sharks, drowning, heart attacks. More people are killed by things falling off overpasses, or by lightning strikes. You name it. You can come up with a lot of… mosquitos, Jesus Christ. One of the things I used to think is: if I really was ISIS, if I was running ISIS, I would just start giving out free cigarettes. Yeah, that's going to kill way more people. [00:17:30] So many more people. The tobacco companies are just wiping people out. No one even mentions that.
Or OxyContin. Really, if ISIS wanted to get organised: start a pharmaceutical company that sells some kind of hyper-addictive opiate, and sell tobacco, and you're going to kill so many infidels. But that being said, I did this interview once with someone over at Singularity University, this guy Aaron Frank, [00:18:00] and they are studying the acceleration of technology, the types of technology that are suddenly accessible to people that formerly weren't. They kind of map which technologies we don't have access to now but eventually will, or that will at least be relatively easy and inexpensive to assemble. One of the things he mentioned is potentially the ability for the individual to shoot their own personal satellite up into space. [00:18:30] The other, of course, is "wet works"…

Mike: Oh, wet labs?

Duncan: Yeah, wet labs. The ability of the individual to genetically modify things, to build things by altering DNA, using technology that right now most people don't have. Also, you have to add to those two things a third thing, which is the unknown technology that isn't here yet, that will come. So what he was saying is: we're eventually going to get to the point where any motivated individual will have the ability to create some kind of weapon that could do catastrophic damage to society. It's not here yet, but it will be. It will definitely be there eventually. So, the idea… I guess bees kill more people, which is absolutely true. Yes, right now bees are more dangerous than terrorists, for sure. But from the ten, twenty, thirty year perspective, if there is some kind of mind virus, whatever you want to call it, if a new religion forms, or if some very charismatic techno cult leader comes up (by the way, I don't think this will happen), imagine some messiah appears. Some merge of an AI and a human, not afraid of death, that has as one of its tenets: if you help me wipe out people, then I'll let you stay alive.
Just some ridiculous sci-fi scenario. Not even Islam. But if there's any kind of cognitive point of view that does not include the entire planet, that doesn't hold what they call the sanctity of life, the importance of, regardless of where a person's at in their life, if they're not harming other people, helping them have a good life… If there's something that's the opposite of that, where right now they're using what technology they have to kill people for an imaginary thing. Again, like when somebody kills people for "freedom", which is an awful mantra that we've heard here: the liberation of Iraq, for example. George W. Bush said something about freedom, and you might as well be saying we liberated Iraq for Allah. Both of them are phrases or words completely up for interpretation. What does that even mean, man? But the idea is that we're looking at a very militant religion, or parts of a religion that are militant, or a better way to say it, militarizing, and there is the potential in the future for them to obtain nuclear weapons, potentially bioweapons, those kinds of things. In the short term it's just chaos in smaller parts of the world, but in the long term it's really bad. And I'm not a right wing person and not a left wing person. I don't want to kill anybody. I want to achieve what they call a Type 1 civilization. I want to achieve full connectivity, so that we can move into space and merge with artificial superintelligence, with technology. That's an outcome I would love to see. Or for the intelligent people on the planet, and then everyone on the planet, to join together to work on telescopes that could potentially discover alien civilizations. Things like that appeal to me; that's what I prefer.
But if there's something standing in the way of that which is equivalent to a child that doesn't want to stop believing in Santa Claus, to the point that the child is willing to start burning down the houses of people who don't believe in Santa Claus, then I think at that point you have to do something that doesn't make sense from the perspective of cultural relativism, of the idea that everyone deserves to live the way they want to live. That doesn't necessarily work if, for some group of people, the way they want to live is by converting an entire planet to a religion, at the cost of: if you don't convert, you're f*****. I'm sorry, I don't even know why I did this.

[laughter]

Duncan: I went on reddit, watchpeopledie, and I was watching these ISIS death videos. Have you guys seen any of those?

Euvie, Mike: Yeah.

Duncan: Jesus, holy f***. Not only the most violent thing I've ever seen in my life, it's also really well shot. Apparently, ISIS has access to some great cinematographer or something.

Euvie: And they take pride in it.

Duncan: Yes!

Mike: They've got dolly shots.

Duncan: Yes! They've got a dolly in a place where they're slicing people's throats. What is happening? That is so absurd to me, and something about the perfection of their techniques for shooting what are essentially snuff films makes me feel a lot less compassion than maybe I formerly felt. It makes me think: oh, these are praying mantises or something. They're fully infected with a mind virus, and I don't see how you can fix that. I don't see how you take somebody who shot a film in a slaughterhouse, of people hanging upside down with their throats being cut… I don't know how many Ram Dass retreats or how many ayahuasca sessions it's going to take to make that person realize that maybe that's not what God wants us to do on this planet.
Allah is not saying: hey, can you guys maybe get some better lighting in there for God when you're slicing these people's throats? You know, it's a problem. It's a real problem. You could say it's an innately s***** problem, because how can you even fix it?

Mike: You asked a few minutes ago what we think the problem is, whether it's religion. Just a few days ago I was looking at Google Maps, kind of looking at the area because I'm in Istanbul, and I zoomed out and saw all of the Middle Eastern countries, and I noticed (it's not the first time I've looked at a map, obviously) there's a giant yellow patch in this area: Algeria, Nigeria, Libya, Egypt, Sudan, Iraq, Saudi Arabia. All of these countries are in this yellow patch that is so concentrated, like nowhere else in the world. And it's not the first time I've heard people say that it's not necessarily religion, it's environment, climate change, access to resources.

Euvie: Climate change.

Mike: Climate change, exactly.

Duncan: Yeah, I've heard this too. And why does the climate change? The climate changes because of industrialization and technology; we're not effectively using the energy supplies, which is creating all these problems. Ok, fine. It's climate change. Fine. So imagine that, instead of us being separated the way we are through the internet, all of us are in a place where a flood is beginning, and we need to work together so that we don't drown. But some of us, I don't know, one of you and me, we start referring to some book of magic, and from reading the book of magic our determination about the rising waters is that we've angered an invisible being, who rather than just having a conversation with us is having a conversation with water. The problem is one of us didn't wear the right headwear. This is why the god is angry, why the water has risen.
So if I stone that person to death, maybe the water will stop rising. Or even better, it'll just please this loony god that likes to test people by murdering them with catastrophes. See, that's the problem. What we need is for everyone to refer to a humanistic concept: look, we did this. There's no angry person saying: why didn't you read the book the way that guy said it, whoever that guy may be, Moses, Jesus. We just need to be more pragmatic. Imagine the amount of heroism and courage in overcoming the fear of death, the first obstacle of the Buddha, that a lot of these militants have achieved (I'm not saying that it's good), but they are f***ing brave. They have overcome some basic human desire to stay alive and are giving their lives for this belief. That's the tragedy, because of that very same energy. Imagine if it was being applied to: let's bring the world into a connected state so that we can use our resources to create the living conditions that will allow people not to suffer. That's what's really sad about it. The very same energy of these people, and I'm not just saying the radical Islamic militants, but anyone who is taking the time to figure out how to build weapons to blow people up: if that was being applied to how do we desalinate ocean water in an energy-efficient way, or how do we get clean water to people? By getting clean water to people you wipe out a huge percentage of the world's diseases. So it's kind of frustrating that instead it's being generated by an operating system that doesn't work anymore. It doesn't work. It will work, by the way, sorry for this rant. It will work. The operating system of any fundamentalist religion will work. It will work after you have completely wiped out civilization. Then it's definitely going to work.
If you can successfully take out the power grid, or you can somehow create a global disruption, a nuclear holocaust, then in the non-radioactive patches of land where you're building your post-apocalyptic village, your fundamentalist religion will definitely, definitely work. You can burn witches again, you know what I mean. It will totally work. You can start burning witches again. But that isn't going to work as long as this new form of life that's forming on the planet keeps happening. The new form of life, by that I mean modern civilization, the world joining together in different ways. And this can't continue unless the people who are the voices of this civilization come up with a way to harmonize, so that this can continue to grow.

Mike: I can't help but feel like the resilience and strength and hardness required to either fight the opposing powers, or do what's necessary to change the world in the right direction, just doesn't exist in the West.

Euvie: People are pu**ies.

Mike: Yeah, they're so comfortable. The fire is not under their asses enough to make that change. It's something we've been talking about quite a bit.

Duncan: Yeah, and also there's been a kind of concerted effort to keep us from feeling a fire under our asses.

Mike: Exactly.

Duncan: True journalism would be if the evening news, instead of showing, like, SNL making fun of Donald Trump last night (they've really got him this time), were showing devastation.

Mike: Aleppo.

Duncan: Aleppo. The horror. The horror, over and over again. Look, this is happening on your planet. This is happening. This is happening. If they were doing that, there would be change. But the problem is that we have these completely unthought-out ideas, apparently, about what we can show people. We can't show them that. Why not? Because of the children. Oh really? You can't show them children being blown up because you want to protect children? That doesn't make any sense at all.
You have to show the people what is actually happening. Whatever the forces are that are keeping the media from illuminating what is actually happening on this planet, or trying to put some kind of, I don't know, some kind of opaque blur in front of the horror, whatever that force is, it's doing it in some rotten, let-me-protect-the-children way. The people of the United States don't need to see that? No, it's just not tasteful to show that decapitated infant lying on bricks in a devastated city. It's not tasteful to show the throats being cut of people who are being treated like human cattle in some slaughterhouse. That's not tasteful, let's not show that. Let's show our own version of the world, where things are kind of bad, but they're not that bad. It's f***** up. What is the force that is getting in between the people and reality? Why is there a force that is masking what is actually happening in the world, or attempting to soften it so that the people don't see what's really happening? What is that force?

Euvie: Well, it's capitalism. The powers that be are interested in things staying the same, or in people feeling like things are the same, so that they continue getting mortgages, and continue buying sh** they don't need, and continue voting for things that don't really change anything. I mean, there are a lot of people in power who have a vested interest in things remaining the way that they are.

Mike: Yeah, exactly. What you were saying about protecting the children from being exposed to that, I wish that was all it was, because maybe at that point you could have a conversation and reason with them, but I really think it's about views, and that's it. News organizations are not going to show what doesn't get views.

Duncan: It took having the guts, or just being awake at night, where I was like, why don't I just see what this is all about.
Just that one video has created a real shift in my thinking, and, you know, what you guys are saying: what is the truth, what is verifiable truth? And the verifiable truth is that there is some organized effort to keep the people of planet Earth away from the truth. That's safe to say, right? There appears to be some group or network of groups that are, at all costs, trying to keep us from seeing what's really happening, and any time anyone shows us what's really happening, Edward Snowden for example, Julian Assange, any time you get a glimpse of the way governments really work, those people are either arrested or they have to go into hiding. That's real, that's true, right? So that means that we are living under some kind of umbrella, some kind of dome of symbols that are being placed in between us and reality. And piece by piece, because of technology, this dome is beginning to evaporate, and that's causing a lot of cognitive dissonance for many of us. When I was growing up in the United States I would regularly take LSD, and if you take LSD and look at dollar bills, or if you take LSD and watch the news, or if you take LSD and do anything at all, any conditioning that's been planted inside of you by corporations, states, countries, or whatever, it doesn't work for a second. A kind of weird window opens up and you see unfiltered reality. When you watch the news, suddenly the reporters seem insane, like, why are they talking like that, what is that rhythm, why are they dressed like that, why are they acting like that? They're not acting like people; people don't act the way they're acting. You look at money and you think, what is this paper, why does this mean so much to me? This is just paper, this isn't real, what is this? This isn't real. Then you start looking at things through that lens and it becomes obvious that you are under some kind of dome of propaganda.
You're just not really sure why it's there, who made it, what it's shielding you from, what it is. Then when you come down, you tell your friends, I think we might be living in a place where we're being brainwashed by some kind of very powerful group of people that's trying to make us think things that are not important are important, and things that are important aren't important. In fact, it appears that we're being hypnotized into believing that the right way to live is to trade our life energy in exchange for pieces of paper, and that's f****** insane, man. You say that when you're 17 or 18 after a great acid trip, and your friends are like, whatever, man, whatever. No, no, no, it's normal, it's the best way to live. And then you can even rest on this idea, like, I just had a heavy acid trip, I'll just lay back, and, you know how they say, if you take more than three hits of acid you become permanently insane. [laughter]

Duncan: This must be it. Because for a second it did really look like, you know, the paper that we call money wasn't important at all. The most important thing was to love people and be in the moment, to connect with nature. But yeah, I was f****** high and stupid, and you could rest back into the idea that you just went a little crazy, and that actually there isn't a dome separating you from reality. But now, thanks to the internet, no matter how hard they try to keep information away, we're getting the information, and I think this is creating a kind of cognitive dissonance that is yet another example of the technological disruptions that are happening all over the planet. A kind of weird cognitive vertigo that a whole lot of people are experiencing right now, because they're having to deal with the fact that we are not being shown the truth.
The people of the Earth are not being shown the truth. You can maybe mine for the data, you can try to find the data, you can go on WikiLeaks, you can search for it, but you sure as f*** won't see it when you turn the TV on, and you're not going to see it when the president talks, and you're not going to see it when you open up the newspaper, because it's been intentionally hidden from us, and now we know it. Which means that you have two choices. You either allow yourself to just subscribe to the propaganda as a form of laziness, or you face the truth, which is that we're being lied to on a daily basis by people who aren't afraid to kill, and f***, man, that's a mind f***, that's a real serious mind f***. I don't know where the in-between place is there. Either you're just like, yeah, I'm just living in another time period where some power structure is using propaganda to make people act like robots, or you start going down that rabbit hole of who the f*** are these people, and why?
    /**
     * This method reads all the lifecycle configuration files from a folder and adds the already added
     * configurations as aspects.
     *
     * @param registry     tenant registry
     * @param rootRegistry root registry
     * @return true if the default lifecycles were added or generated, false otherwise
     * @throws RegistryException
     * @throws FileNotFoundException
     * @throws XMLStreamException
     */
    public static boolean addDefaultLifecyclesIfNotAvailable(Registry registry, Registry rootRegistry)
            throws RegistryException, FileNotFoundException, XMLStreamException {
        if (!registry.resourceExists(RegistryConstants.LIFECYCLE_CONFIGURATION_PATH)) {
            Collection lifeCycleConfigurationCollection = new CollectionImpl();
            registry.put(RegistryConstants.LIFECYCLE_CONFIGURATION_PATH, lifeCycleConfigurationCollection);
            String defaultLifecycleConfigLocation = getDefaltLifecycleConfigLocation();
            File defaultLifecycleConfigDirectory = new File(defaultLifecycleConfigLocation);
            if (!defaultLifecycleConfigDirectory.exists()) {
                return false;
            }
            FilenameFilter filenameFilter = new FilenameFilter() {
                @Override
                public boolean accept(File file, String name) {
                    return name.endsWith(".xml");
                }
            };
            File[] lifecycleConfigFiles = defaultLifecycleConfigDirectory.listFiles(filenameFilter);
            /* Guard against both a null result (I/O error) and an empty directory. The original
               check (lifecycleConfigFiles != null && lifecycleConfigFiles.length == 0) would have
               let a null array reach the loop below and throw a NullPointerException. */
            if (lifecycleConfigFiles == null || lifecycleConfigFiles.length == 0) {
                return false;
            }
            for (File lifecycleConfigFile : lifecycleConfigFiles) {
                String fileName = FilenameUtils.removeExtension(lifecycleConfigFile.getName());
                String resourcePath = RegistryConstants.LIFECYCLE_CONFIGURATION_PATH + fileName;
                String fileContent = null;
                if (!registry.resourceExists(resourcePath)) {
                    try {
                        fileContent = FileUtils.readFileToString(lifecycleConfigFile);
                    } catch (IOException e) {
                        String msg = String.format("Error while reading lifecycle config file %s ", fileName);
                        log.error(msg, e);
                        /* The exception is not rethrown; otherwise the loop would break and the
                           remaining files would not be read. */
                    }
                    if ((fileContent != null) && !fileContent.isEmpty()) {
                        try {
                            OMElement omElement = buildOMElement(fileContent);
                            String aspectName = omElement.getAttributeValue(new QName("name"));
                            if (fileName.equalsIgnoreCase(aspectName)) {
                                addLifecycle(fileContent, registry, rootRegistry);
                            } else {
                                String msg = String.format(
                                        "Configuration file name %s not matched with aspect name %s ",
                                        fileName, aspectName);
                                log.error(msg);
                                /* The error is not thrown; otherwise the loop would break and the
                                   remaining files would not be read. */
                            }
                        } catch (RegistryException e) {
                            String msg = String.format("Error while adding aspect %s ", fileName);
                            log.error(msg, e);
                            /* The exception is not rethrown; otherwise the loop would break and the
                               remaining files would not be read. */
                        }
                    }
                } else {
                    try {
                        generateAspect(resourcePath, registry);
                    } catch (Exception e) {
                        String msg = String.format("Error while generating aspect %s ", fileName);
                        log.error(msg, e);
                        /* The exception is not rethrown; otherwise the loop would break and the
                           remaining aspects would not be added. */
                    }
                }
            }
        } else {
            Resource lifecycleRoot = registry.get(getContextRoot());
            if (!(lifecycleRoot instanceof Collection)) {
                String msg = "Failed to continue as the lifecycle configuration root: " + getContextRoot()
                        + " is not a collection.";
                log.error(msg);
                throw new RegistryException(msg);
            }
            Collection lifecycleRootCol = (Collection) lifecycleRoot;
            String[] lifecycleConfigPaths = lifecycleRootCol.getChildren();
            if (lifecycleConfigPaths != null) {
                for (String lifecycleConfigPath : lifecycleConfigPaths) {
                    generateAspect(lifecycleConfigPath, registry);
                }
            }
        }
        return true;
    }
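The scan-and-validate flow above (read every `*.xml` file in a directory, accept a file only when its name matches the root element's `name` attribute, and log-and-continue past bad files so one broken config cannot block the rest) can be sketched in Python. This is an illustrative sketch, not part of the registry API; `add_default_lifecycles` and its `add_lifecycle` callback are hypothetical names:

```python
import logging
import xml.etree.ElementTree as ET
from pathlib import Path

log = logging.getLogger(__name__)


def add_default_lifecycles(config_dir, add_lifecycle):
    """Scan config_dir for *.xml lifecycle configs and register each one whose
    file name matches its root 'name' attribute (case-insensitively). Errors in
    a single file are logged and skipped so the remaining files are still
    processed. Returns the list of aspect names that were registered."""
    directory = Path(config_dir)
    if not directory.is_dir():
        return []
    added = []
    for config_file in sorted(directory.glob("*.xml")):
        file_name = config_file.stem
        try:
            root = ET.parse(config_file).getroot()
        except (OSError, ET.ParseError):
            log.error("Error while reading lifecycle config file %s", file_name)
            continue  # do not break the loop; other files must still be read
        aspect_name = root.get("name")
        if aspect_name is not None and file_name.lower() == aspect_name.lower():
            add_lifecycle(aspect_name, config_file.read_text())
            added.append(aspect_name)
        else:
            log.error("Configuration file name %s not matched with aspect name %s",
                      file_name, aspect_name)
    return added
```

As in the Java method, the per-file `continue` (rather than raising) is the design choice that keeps one malformed or misnamed file from aborting the whole import.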
When it comes to alcoholic beverages, I'm a wine guy first and foremost, then a Scotch & bourbon guy. Until recently, if someone mentioned rum and didn't say Coke right after it or mention mojitos, I wasn't quite sure what they were getting at. But then aged rum started hitting my periphery. One reason was some friends who enjoyed it and extolled its virtues; another was my love of single malt Scotch. Over the last couple of years it became clear to me that some of my favorite expressions of Scotch were ones finished in more than one type of cask. Among those is The Balvenie 14 Year Old Caribbean Cask, which is aged for 12 years in whiskey barrels and then spends the last two years aging in ex-rum casks. Thus the desire to sample more aged rums increased. The more examples I tried, the clearer it became how distinct they could be from one another, as well as what a world apart they are from the typical rums used as mixers. The next step was obvious: I needed to sample numerous aged rums side by side to see the real differences. Over the last couple of weeks I have tried more than four dozen different rums from a host of producers. I tasted each one several times. My glass of choice was a brandy snifter, and I sampled each rum both neat and with a single ice cube. Some of the rums work better with ice, while most were better neat. Of the many examples of rum I tasted, those listed below are the ones I recommend you spend your money on. Here's why:

Kaniché Réserve — It was distilled and aged in bourbon casks in Barbados. Kaniché was then transported to the Cognac Ferrand estate in France, where it was finished in ex-Cognac casks. This rum sells for right around $18. Caramel and papaya aromas fill the nose here. Spices carry the day through the palate, along with hints of crème brûlée and macadamia nut. It has a nice finish with vanilla bean and bits of clove. At under $20 this is an outstanding value.
For this price you won't mind making cocktails with it, but it's worthy of being sipped neat.

Flor de Cana Gran Reserva 7 — The history of Flor de Cana goes back to 1890, when the distillery was first built in Chichigalpa, Nicaragua. The fifth generation of the Pellas family is running things today. Flor de Cana rums are available in more than 40 countries. Gran Reserva 7 sells for around $23. The nose reveals a hint of roasted coffee. The palate is medium-bodied and offers plenty of depth for the price point. Toffee, dates, and wisps of plantain are all in evidence. Characteristics of roasted pineapple dusted with cinnamon emerge on the finish, along with a hint of cola. There is a little bite at the end that provides a nice final flourish.

Bacardi 8 — This rum was aged for at least eight years in American white oak barrels. Bacardi 8 is the top rum in the range and most often sells for right around $23. Serving this one on the rocks really allowed it to shine. Coconut and pineapple aromas leap from the nose of this offering. The palate is studded with date and almond flavors that are accompanied by spices such as cinnamon and bits of nutmeg. Milk chocolate and caramel flavors emerge on the finish. This is a sip-worthy rum from one of the best-known producers of cocktail rums.

Pyrat XO Reserve — This was produced using a blend of aged Caribbean rums. Barrel aging took place in a combination of French Limousin & American oak. The rums used to create the blend were up to 15 years old. Pyrat XO most often sells for just under $25. This rum leads with a big, boisterous nose that shows off vanilla bean, papaya, and coconut aromas. Baked apple and pineapple flavors dominate the palate. They are joined by honey and molasses characteristics, which provide a gentle and restrained sweetness. Caramel, clove, and cinnamon are all part of the pleasing finish.

Barbancourt Reserve Speciale — This Haitian producer has a history that dates back more than 150 years.
Barbancourt rum undergoes a double distillation process, is aged for eight years in white Limousin oak, and sells for right around $25. The light coppery hue of this rum glistens in the glass as soon as you pour it. Toasted hazelnut and Madagascar vanilla are apparent on the nose. The palate is gentle and layered with nutty flavors underscored by subtle bits of fruit such as date and lychee. White pepper, mesquite honey, and continued roasted-nut elements are all part of a persistent and spice-laden finish. This rum falls squarely on the drier side of the scale and brings to mind an Oloroso sherry in weight and in the prominence of its nut characteristics.

El Dorado 12 Year Old — This offering from Guyana is produced from sugarcane grown in alluvial soils along the Demerara River. The El Dorado 12 is produced from a blend of aged rums, none of them less than 12 years old. Aging occurred in old bourbon casks. This rum sells for right around $25. Allspice and clove aromas jump from the nose of this selection. Cinnamon spice is speckled through a core that features a heavy dose of chocolate characteristics. Bits of Calimyrna fig are present as well. Caramel, fruitcake elements, and bits of churro emerge on the long, spice-laden finish. This is an excellent example of aged rum, featuring an impressively weighty mouthfeel.

Cruzan Single Barrel Rum — This rum is made from a blend that has aged up to 12 years. Each batch is bottled one cask at a time. Named after its homeland of St. Croix, Cruzan most often sells for right around $29. Wisps of maple syrup, white pepper, and vanilla bean fill the nose of this selection. Date, walnut, and hints of savory spices are present on the palate of this widely available rum. Bits of coffee liqueur, nutmeg, and allspice emerge on the finish, which is above average in length. This one really opened up on the rocks.

Mount Gay Black Barrel — Twelve- to 18-month-old sugarcane plants are harvested and crushed the same day.
After fermentation, Mount Gay utilizes both single-column and double copper pot distillates. Oak aging takes place in charred bourbon barrels. Black Barrel sells for right around $29.99. Toasted hazelnut aromas are joined by fruitcake spices and freshly ground allspice on the nose of this offering. The palate brings to mind Jamaican jerk spices, with interwoven bits of fruity sweetness balancing those impressions of spicy heat. Hints of chamomile tea lead the long, lusty finish which, like the palate, is dominated by a strong core of spices appearing in droves.

Berkshire Mountain Distillers Ragged Mountain Rum — This small-batch entry comes from Sheffield, Mass. BMD was founded in 2007 and produces a number of small-lot spirits. They age each overproof batch of rum in oak barrels and then blend in pure water from their on-site historic spring. This rum shimmers in the glass with a light apricot hue. Fresh coconut and pineapple aromas are joined by hints of clove. Brown sugar and a complex mélange of mixed roasted nuts inform the gentle and deeply layered palate. Salted caramel leads a couple of hints of salinity on the lingering, spicy finish. If your interests run toward small-batch producers, here's an offering at around $30 you should consider.

Santa Teresa 1796 — This rum is made in Venezuela using the solera method. This process is also employed in the production of sherry, Madeira, and some Ports. Santa Teresa has been using the solera since 1992. Prior to their rums being put into the solera, they have been aged from four to 35 years. Santa Teresa 1796 sells for right around $35. Dark fruit and toasted walnut aromas waft convincingly from the nose of this rum. Date, coconut, and a bevy of tropical fruit flavors are prominent throughout a medium-bodied palate. Hints of chocolate emerge on the finish, along with clove, black pepper, and a gentle hint of chicory. This is an extremely elegant and complex example of rum for the price.
Papa's Pilar Dark — This rum was inspired by Ernest Hemingway's love of adventure and of the spirit itself. Rums from various countries are gathered and then aged in the U.S. in a variety of cask types using the solera method. The rums in the blend are aged up to 24 years. This rum was developed with the cooperation of the Hemingway Estate, which donates all of its royalty profits to causes that sync up with Hemingway's beliefs and adventurous lifestyle. Papa's Pilar Dark sells for around $39.99. This offering is distinctly dark in the glass. Cinnamon, clove, and dried date aromas fill the nose. The first sip reveals a weighty palate loaded with an impressive amount of depth. Wave after wave of dried fruit flavors are in evidence, with mission fig and plum being the most prominent. Chocolate, mesquite honey, and hints of coffee are all present in the dark, spicy, unctuous, and impossibly long finish, which goes on and on with impressive persistence and measured intensity. This is an incredibly profound and well-rounded rum.

Plantation XO 20th Anniversary — This offering is made by combining some of the company's oldest reserved rums from Barbados. After blending, they allow the married rum to age for another 12 to 18 months in oak casks in France. It has a suggested retail price of $39.99. In the glass it has a deep, dark hue that brings to mind double-brewed tea. The nose is off the charts, with bits of toasty oak, vanilla bean galore, and spices to spare. The palate is generous and powerfully layered with a bevy of precise and complex fruits and spices. Dates, coconut, and mountain fig are all present and accounted for, along with a hint of anise and toasted pecan. The prodigious finish brings to mind Fig Newtons dipped in dark chocolate. A hint of Seville orange provides a final distinct note. I recommend enjoying this impressive rum neat.

Dos Maderas Rum PX (5+5) — This rum was aged in oak for five years in Guyana and Barbados.
Then it was transported to Spain, where it spent three years aging in 20-year-old Palo Cortado sherry casks and a final two years in casks that held 20-year-old Don Guido sherry. This is the only rum that uses these three cask types in conjunction with one another. It typically sells for right around $40. An array of dried fig and plum pudding spice aromas bursts from the inviting nose of this rum. The palate is mellow and easygoing, with tons and tons of gentle, sweet, layered depth. Spices and dried fruits such as date are joined by white pepper and hazelnut flavors. A milk chocolate element emerges on the remarkably long finish, which leans sweet. Coconut, white fig, and continued bits of spice are present as well. This rum begs you back to the glass for sip after sip. You might want to buy two bottles of this one.
import logging
from typing import Any, Dict, Optional

import emails

from application.core.config import config


def send_email(
    email_to: str,
    subject: str = '',
    text: str = '',
    # None sentinel instead of a mutable {} default, which is shared
    # across calls and was previously silenced with "# noqa".
    environment: Optional[Dict[str, Any]] = None,
) -> None:
    assert config.EMAILS_ENABLED, 'no provided configuration for email variables'
    message = emails.Message(
        mail_from=('Project', '<EMAIL>'),
        subject=subject,
        text=text,
    )
    # Include the optional SMTP settings only when they are configured.
    smtp_options = dict(host=config.SMTP_HOST, port=config.SMTP_PORT)
    if config.SMTP_TLS:
        smtp_options['tls'] = True
    if config.SMTP_USER:
        smtp_options['user'] = config.SMTP_USER
    if config.SMTP_PASSWORD:
        smtp_options['password'] = config.SMTP_PASSWORD
    response = message.send(to=email_to, render=environment or {}, smtp=smtp_options)
    logging.info(f'send email result: {response}')


def send_test_email(email_to: str) -> None:
    send_email(
        email_to=email_to,
        subject='Test email',
        text='Test email',
    )
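The conditional construction of `smtp_options` above (base `host`/`port` always present, credentials and TLS added only when configured) can be isolated into a small, testable helper. This is a sketch, not part of the module; `build_smtp_options` is a hypothetical name, and `SimpleNamespace` stands in for the real `config` object:

```python
from types import SimpleNamespace
from typing import Any, Dict


def build_smtp_options(config: Any) -> Dict[str, Any]:
    """Build the keyword options passed to the emails library's SMTP backend,
    including the optional keys only when they are configured."""
    options: Dict[str, Any] = {'host': config.SMTP_HOST, 'port': config.SMTP_PORT}
    if config.SMTP_TLS:
        options['tls'] = True
    if config.SMTP_USER:
        options['user'] = config.SMTP_USER
    if config.SMTP_PASSWORD:
        options['password'] = config.SMTP_PASSWORD
    return options


# Example: anonymous SMTP on localhost with TLS disabled yields only host/port.
local_cfg = SimpleNamespace(SMTP_HOST='localhost', SMTP_PORT=25,
                            SMTP_TLS=False, SMTP_USER=None, SMTP_PASSWORD=None)
```

Keeping unset keys out of the dict (rather than passing `user=None`) lets the SMTP backend fall back to its own defaults for anything not explicitly configured.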
GOCAD TSurf 1 HEADER { name:<NAME> *visible:false *solid*color:0.000000 0.666667 1.000000 1 } GOCAD_ORIGINAL_COORDINATE_SYSTEM NAME Default AXIS_NAME "X" "Y" "Z" AXIS_UNIT "m" "m" "m" ZPOSITIVE Elevation END_ORIGINAL_COORDINATE_SYSTEM TFACE VRTX 1 1728805.20585289 5237972.34014042 -12232.27186601 VRTX 2 1732031.14041671 5237834.46488158 -11235.72861967 VRTX 3 1729892.99019644 5239583.51649068 -9608.74002975 VRTX 4 1731112.64794220 5236722.60649671 -13107.94537209 VRTX 5 1733089.11220852 5236618.64349613 -12524.27838553 VRTX 6 1734259.05356235 5237476.91443869 -10909.49276075 VRTX 7 1733707.74523051 5238869.51398973 -9190.59929312 VRTX 8 1731793.56695658 5239029.49720444 -9674.04653650 VRTX 9 1732123.66354374 5240514.86134651 -7502.87846150 VRTX 10 1729385.69667560 5240708.40638163 -8243.07952267 VRTX 11 1727589.28244794 5240962.99947653 -8552.64515761 VRTX 12 1728441.32690833 5239225.36233652 -10637.03380835 VRTX 13 1727127.35303459 5238739.74433496 -11790.56604068 VRTX 14 1727083.99604195 5237634.39626859 -13331.84120662 VRTX 15 1729373.38498732 5236874.56352103 -13538.11793874 VRTX 16 1730288.24238351 5235571.28803545 -15000.00095724 VRTX 17 1731254.49332951 5235313.68541640 -15000.00095724 VRTX 18 1732220.74427552 5235056.08279735 -15000.00095724 VRTX 19 1733912.07864961 5235577.24768096 -13658.58691210 VRTX 20 1735083.68305445 5236089.91275017 -12520.10806435 VRTX 21 1736659.18032541 5236334.57768939 -11602.86579368 VRTX 22 1736478.55258620 5237440.03102884 -10143.84814454 VRTX 23 1736446.54040814 5238697.28658427 -8420.67625185 VRTX 24 1735315.81796321 5239946.73912284 -7112.48177497 VRTX 25 1733726.54809770 5241003.31712115 -6239.14154755 VRTX 26 1731035.00177377 5242003.62383113 -5848.97447912 VRTX 27 1728801.75007144 5241832.04686996 -6907.34338851 VRTX 28 1728022.67811661 5243344.59461785 -5106.72050531 VRTX 29 1727183.41550892 5242552.46725583 -6508.57639111 VRTX 30 1725497.96255079 5241678.13997617 -8335.17291800 VRTX 31 1726783.44439050 5239893.78144721 
-10324.57541934 VRTX 32 1724120.91173780 5240236.14942560 -10831.65650925 VRTX 33 1725405.23550961 5238606.15620418 -12608.46874986 VRTX 34 1725183.22268938 5237773.40031698 -13839.30703811 VRTX 35 1726423.23859948 5236601.69851165 -15000.00095724 VRTX 36 1727389.48954549 5236344.09589260 -15000.00095724 VRTX 37 1728355.74049149 5236086.49327355 -15000.00095724 VRTX 38 1729321.99143750 5235828.89065450 -15000.00095724 VRTX 39 1725456.98765348 5236859.30113070 -15000.00095724 VRTX 40 1724490.73670747 5237116.90374975 -15000.00095724 VRTX 41 1723524.48576146 5237374.50636880 -15000.00095724 VRTX 42 1723217.19781868 5238785.75458087 -13165.59948195 VRTX 43 1722558.23481546 5237632.10898785 -15000.00095724 VRTX 44 1721366.73770130 5238985.80169319 -13570.31900537 VRTX 45 1721591.98386945 5237889.71160690 -15000.00095724 VRTX 46 1720625.73292344 5238147.31422595 -15000.00095724 VRTX 47 1719659.48197744 5238404.91684500 -15000.00095724 VRTX 48 1719613.88494747 5239759.78484426 -13147.12662947 VRTX 49 1718693.23103143 5238662.51946405 -15000.00095724 VRTX 50 1717255.63039980 5239824.37500266 -13925.58475291 VRTX 51 1717726.98008542 5238920.12208310 -15000.00095724 VRTX 52 1717107.87560757 5239270.15622843 -14744.73679720 VRTX 53 1717403.38519203 5240378.59377689 -13106.43270862 VRTX 54 1719186.79474804 5240793.55190316 -11877.70464219 VRTX 55 1717551.13998425 5240932.81255112 -12287.28066433 VRTX 56 1717698.89477648 5241487.03132535 -11468.12862005 VRTX 57 1717846.64956871 5242041.25009958 -10648.97657576 VRTX 58 1720387.34970265 5241539.96148516 -10406.01945014 VRTX 59 1719540.73420208 5242480.02226103 -9420.24850932 VRTX 60 1717994.40436094 5242595.46887381 -9829.82453147 VRTX 61 1718142.15915317 5243149.68764805 -9010.67248718 VRTX 62 1718289.91394540 5243703.90642228 -8191.52044289 VRTX 63 1720701.68808015 5243053.03422033 -8202.41228106 VRTX 64 1718437.66873763 5244258.12519651 -7372.36839860 VRTX 65 1720370.59690096 5244633.22164398 -6143.64033217 VRTX 66 
1718585.42352986 5244812.34397074 -6553.21635431 VRTX 67 1718733.17832209 5245366.56274497 -5734.06431002 VRTX 68 1718880.93311432 5245920.78151920 -4914.91226573 VRTX 69 1721155.61609886 5245611.15587031 -4505.33624359 VRTX 70 1719028.68790655 5246475.00029343 -4095.76022144 VRTX 71 1720576.51751918 5246952.76406109 -2867.03215501 VRTX 72 1719176.44269878 5247029.21906766 -3276.60817716 VRTX 73 1719324.19749101 5247583.43784190 -2457.45613287 VRTX 74 1719471.95228324 5248137.65661613 -1638.30408858 VRTX 75 1720872.02710364 5248061.20160956 -1228.72806643 VRTX 76 1719619.70707547 5248691.87539036 -819.15204429 VRTX 77 1720286.05186398 5249107.83800887 0.00000000 VRTX 78 1719767.46186770 5249246.09416459 -0.00000000 VRTX 79 1721252.30280999 5248850.23538982 -0.00000000 VRTX 80 1722218.55375599 5248592.63277077 0.00000000 VRTX 81 1721753.14010671 5247232.68655668 -2047.88011072 VRTX 82 1723571.62757051 5247123.18850655 -1529.97003539 VRTX 83 1723184.80470200 5248335.03015172 -0.00000000 VRTX 84 1724151.05564801 5248077.42753267 -0.00000000 VRTX 85 1725117.30659401 5247819.82491362 0.00000000 VRTX 86 1725244.03399033 5246354.19721616 -1975.86967359 VRTX 87 1726365.51301883 5246678.84520506 -1115.28573460 VRTX 88 1726083.55754002 5247562.22229457 -0.00000000 VRTX 89 1727049.80848603 5247304.61967552 -0.00000000 VRTX 90 1728016.05943203 5247047.01705647 -0.00000000 VRTX 91 1728305.72986887 5245819.53729520 -1587.29170523 VRTX 92 1728982.31037804 5246789.41443742 -0.00000000 VRTX 93 1729948.56132405 5246531.81181837 -0.00000000 VRTX 94 1730065.39032514 5245029.05587356 -2030.74634167 VRTX 95 1730914.81227005 5246274.20919932 -0.00000000 VRTX 96 1732130.34669437 5245010.67890682 -1296.41918952 VRTX 97 1731881.06321606 5246016.60658027 -0.00000000 VRTX 98 1732847.31416207 5245759.00396122 -0.00000000 VRTX 99 1733813.56510807 5245501.40134217 0.00000000 VRTX 100 1732900.87214508 5244018.92247487 -2381.52064155 VRTX 101 1734979.15033333 5244050.51479910 -1573.33737202 VRTX 
102 1734779.81605408 5245243.79872312 -0.00000000 VRTX 103 1735746.06700009 5244986.19610407 -0.00000000 VRTX 104 1736712.31794609 5244728.59348502 -0.00000000 VRTX 105 1737881.26175483 5243290.30477373 -1554.71739714 VRTX 106 1737678.56889210 5244470.99086597 0.00000000 VRTX 107 1738644.81983810 5244213.38824692 -0.00000000 VRTX 108 1739860.68145299 5242951.08522530 -1294.60524619 VRTX 109 1739611.07078411 5243955.78562787 -0.00000000 VRTX 110 1740577.32173012 5243698.18300882 0.00000000 VRTX 111 1741543.57267612 5243440.58038977 -0.00000000 VRTX 112 1741545.24827141 5242185.62101030 -1731.16395316 VRTX 113 1742509.82362213 5243182.97777072 0.00000000 VRTX 114 1743561.00765637 5241501.73107767 -1933.31030221 VRTX 115 1743476.07456814 5242925.37515167 0.00000000 VRTX 116 1744442.32551414 5242667.77253262 -0.00000000 VRTX 117 1745205.06983398 5241440.59695273 -1412.83059170 VRTX 118 1745408.57646015 5242410.16991357 0.00000000 VRTX 119 1746374.82740616 5242152.56729452 -0.00000000 VRTX 120 1747505.94461443 5240572.39342454 -1764.42793600 VRTX 121 1747341.07835216 5241894.96467547 0.00000000 VRTX 122 1748307.32929817 5241637.36205642 -0.00000000 VRTX 123 1749517.44413488 5240353.50323745 -1326.46536404 VRTX 124 1749273.58024418 5241379.75943737 -0.00000000 VRTX 125 1750239.83119018 5241122.15681832 0.00000000 VRTX 126 1751206.08213619 5240864.55419927 -0.00000000 VRTX 127 1751198.34489590 5239417.50713233 -1999.69816704 VRTX 128 1752172.33308220 5240606.95158022 -0.00000000 VRTX 129 1753310.98847914 5239365.49861847 -1294.23697520 VRTX 130 1753458.74327137 5239919.71739270 -475.08493091 VRTX 131 1753138.58402820 5240349.34896117 -0.00000000 VRTX 132 1753163.23368691 5238811.27984424 -2113.38901949 VRTX 133 1753015.47889468 5238257.06107001 -2932.54106378 VRTX 134 1750813.62663969 5238298.58341565 -3685.29203134 VRTX 135 1752867.72410245 5237702.84229577 -3751.69310807 VRTX 136 1751248.69929451 5237244.05917579 -4980.42117450 VRTX 137 1752719.96931022 5237148.62352154 
-4570.84515235 VRTX 138 1752572.21451799 5236594.40474731 -5389.99719664 VRTX 139 1752424.45972576 5236040.18597308 -6209.14924093 VRTX 140 1750390.23369121 5236163.32949610 -6787.59832427 VRTX 141 1752276.70493353 5235485.96719885 -7028.30128522 VRTX 142 1752128.95014130 5234931.74842462 -7847.45332951 VRTX 143 1750071.27930445 5235050.05308707 -8441.20501315 VRTX 144 1751981.19534907 5234377.52965039 -8666.60537380 VRTX 145 1750383.73893532 5233912.99639391 -9895.33344023 VRTX 146 1751833.44055684 5233823.31087616 -9485.75741809 VRTX 147 1751685.68576461 5233269.09210192 -10304.90946238 VRTX 148 1751537.93097238 5232714.87332769 -11124.06150667 VRTX 149 1749563.14331886 5232944.54684507 -11533.63752881 VRTX 150 1751390.17618015 5232160.65455346 -11943.21355095 VRTX 151 1751242.42138792 5231606.43577923 -12762.36559524 VRTX 152 1749153.11769732 5231778.13592368 -13294.07177573 VRTX 153 1751094.66659570 5231052.21700500 -13581.51763953 VRTX 154 1749613.26130364 5230419.23565445 -15000.00095724 VRTX 155 1750946.91180347 5230497.99823077 -14400.66968382 VRTX 156 1750579.51224965 5230161.63303540 -15000.00095724 VRTX 157 1748647.01035763 5230676.83827350 -15000.00095724 VRTX 158 1747435.48659775 5231955.41231175 -13681.34665830 VRTX 159 1747680.75941163 5230934.44089255 -15000.00095724 VRTX 160 1746714.50846562 5231192.04351160 -15000.00095724 VRTX 161 1745748.25751961 5231449.64613065 -15000.00095724 VRTX 162 1745761.08454027 5232711.45007876 -13254.05641089 VRTX 163 1744782.00657361 5231707.24874970 -15000.00095724 VRTX 164 1743561.38026353 5232951.67970444 -13731.81116429 VRTX 165 1743815.75562760 5231964.85136875 -15000.00095724 VRTX 166 1742849.50468159 5232222.45398780 -15000.00095724 VRTX 167 1741883.25373559 5232480.05660685 -15000.00095724 VRTX 168 1741739.46170442 5233881.67595060 -13118.73755870 VRTX 169 1740917.00278958 5232737.65922590 -15000.00095724 VRTX 170 1739950.75184357 5232995.26184495 -15000.00095724 VRTX 171 1740134.59076756 5234784.62635843 
-12463.13517778 VRTX 172 1738758.27763687 5234345.28953864 -13575.73600300 VRTX 173 1738984.50089757 5233252.86446400 -15000.00095724 VRTX 174 1738018.24995156 5233510.46708305 -15000.00095724 VRTX 175 1737051.99900555 5233768.06970210 -15000.00095724 VRTX 176 1736653.58978706 5235075.27321431 -13342.69893058 VRTX 177 1736085.74805955 5234025.67232115 -15000.00095724 VRTX 178 1735119.49711354 5234283.27494020 -15000.00095724 VRTX 179 1734153.24616753 5234540.87755925 -15000.00095724 VRTX 180 1733186.99522153 5234798.48017830 -15000.00095724 VRTX 181 1738313.78544585 5235450.51003633 -12214.11398830 VRTX 182 1739846.93529696 5235906.45185569 -11020.89971148 VRTX 183 1738158.35337123 5236668.36786287 -10590.71448821 VRTX 184 1738043.17384125 5237804.82071476 -9064.84103449 VRTX 185 1738366.77731481 5238849.71235117 -7503.89149477 VRTX 186 1737143.93918312 5239935.60235971 -6455.29389685 VRTX 187 1735557.02585105 5241055.73963621 -5493.37812792 VRTX 188 1733818.75045914 5242217.49454691 -4529.71742579 VRTX 189 1732164.53028151 5243086.21412712 -3939.50717612 VRTX 190 1730255.92981894 5243516.17157902 -4048.35159592 VRTX 191 1728622.68419431 5244260.55294108 -3622.00536494 VRTX 192 1726510.84001367 5244198.93423395 -4483.97227200 VRTX 193 1725681.42571636 5243382.69817642 -5915.47379594 VRTX 194 1723882.44629192 5242893.84839853 -7251.89667148 VRTX 195 1722655.92099848 5241832.53367319 -9167.68936450 VRTX 196 1722070.64924394 5240938.97544653 -10616.07282697 VRTX 197 1721634.99057739 5239992.00107209 -12083.12601136 VRTX 198 1722090.40079504 5244015.98908996 -6362.68332085 VRTX 199 1723889.13527504 5244447.92982944 -5104.88215033 VRTX 200 1723230.64486454 5245814.73750611 -3461.01187843 VRTX 201 1725166.12535869 5245166.12697744 -3644.00861227 VRTX 202 1727110.84609137 5245114.89255718 -2999.25713162 VRTX 203 1734369.41842339 5243145.23291531 -3046.89764840 VRTX 204 1736264.07142592 5242953.36601630 -2614.63151808 VRTX 205 1737813.50224750 5242047.80667200 
-3294.23022196 VRTX 206 1739621.94039473 5242030.75592567 -2652.44463866 VRTX 207 1740365.86899597 5240816.12923582 -4054.88060009 VRTX 208 1742318.11302708 5240522.90953711 -3741.28876883 VRTX 209 1743893.26104323 5239866.55253533 -4067.53964028 VRTX 210 1745531.13229131 5240390.15576217 -2742.42960389 VRTX 211 1747164.76951999 5239496.37443370 -3374.79615424 VRTX 212 1749267.03651833 5239414.24028561 -2714.72430239 VRTX 213 1749763.71413870 5237396.16407849 -5316.84223027 VRTX 214 1747029.41210132 5238368.92165172 -4980.42117450 VRTX 215 1745517.09212630 5239284.28790771 -4273.63654654 VRTX 216 1745603.68275129 5237958.00255009 -6071.98695121 VRTX 217 1748044.92504615 5237243.73819324 -6159.51557418 VRTX 218 1748519.42762743 5236070.57678101 -7603.85194807 VRTX 219 1748919.44870273 5234276.13842883 -9932.92038803 VRTX 220 1746653.21306198 5235912.26526259 -8508.88421296 VRTX 221 1746022.05004755 5234862.29846025 -10189.98673953 VRTX 222 1746105.46561281 5233740.32576372 -11707.56410531 VRTX 223 1747684.54835136 5232889.62701782 -12300.54917929 VRTX 224 1744076.96835586 5233764.64113861 -12420.28344194 VRTX 225 1743614.59608016 5235043.07544835 -10826.21312550 VRTX 226 1741575.46737231 5235286.01254886 -11241.15680531 VRTX 227 1741385.05106341 5236376.26963087 -9806.71038452 VRTX 228 1739709.06460039 5237012.75900572 -9544.97375901 VRTX 229 1739985.35188499 5238051.63758191 -8009.72931148 VRTX 230 1740406.79014733 5239636.30400102 -5667.92500742 VRTX 231 1737430.91850899 5241012.29468528 -4863.93484501 VRTX 232 1735547.21625241 5242113.86670951 -4036.82524665 VRTX 233 1742278.43336156 5239402.65218257 -5301.78512844 VRTX 234 1744103.76754748 5238818.16752589 -5436.81364549 VRTX 235 1741786.17300381 5237938.83515601 -7502.87846150 VRTX 236 1743410.72277633 5237293.65649635 -7795.52914415 VRTX 237 1745143.11323725 5236993.58770093 -7572.27167551 VRTX 238 1744912.40140674 5235915.80315543 -9144.43742408 VRTX 239 1743107.34419550 5236322.78972600 -9246.88748697 TRGL 1 
2 3 TRGL 2 1 4 TRGL 2 4 5 TRGL 6 2 5 TRGL 7 2 6 TRGL 2 7 8 TRGL 3 2 8 TRGL 9 3 8 TRGL 3 9 10 TRGL 11 3 10 TRGL 3 11 12 TRGL 1 3 12 TRGL 13 1 12 TRGL 14 1 13 TRGL 1 14 15 TRGL 4 1 15 TRGL 16 4 15 TRGL 16 17 4 TRGL 17 18 4 TRGL 4 18 5 TRGL 5 18 19 TRGL 20 5 19 TRGL 20 6 5 TRGL 6 20 21 TRGL 22 6 21 TRGL 22 7 6 TRGL 7 22 23 TRGL 24 7 23 TRGL 24 9 7 TRGL 9 24 25 TRGL 26 9 25 TRGL 9 26 10 TRGL 26 27 10 TRGL 28 27 26 TRGL 27 28 29 TRGL 11 27 29 TRGL 27 11 10 TRGL 30 11 29 TRGL 30 31 11 TRGL 31 30 32 TRGL 33 31 32 TRGL 31 33 13 TRGL 31 13 12 TRGL 11 31 12 TRGL 33 14 13 TRGL 14 33 34 TRGL 35 14 34 TRGL 35 36 14 TRGL 36 37 14 TRGL 14 37 15 TRGL 37 38 15 TRGL 38 16 15 TRGL 39 35 34 TRGL 40 39 34 TRGL 41 40 34 TRGL 42 41 34 TRGL 43 41 42 TRGL 43 42 44 TRGL 45 43 44 TRGL 46 45 44 TRGL 47 46 44 TRGL 48 47 44 TRGL 49 47 48 TRGL 50 49 48 TRGL 51 49 50 TRGL 50 52 51 TRGL 53 50 48 TRGL 53 48 54 TRGL 55 53 54 TRGL 56 55 54 TRGL 57 56 54 TRGL 58 57 54 TRGL 57 58 59 TRGL 60 57 59 TRGL 61 60 59 TRGL 62 61 59 TRGL 63 62 59 TRGL 64 62 63 TRGL 64 63 65 TRGL 66 64 65 TRGL 67 66 65 TRGL 68 67 65 TRGL 69 68 65 TRGL 70 68 69 TRGL 70 69 71 TRGL 72 70 71 TRGL 73 72 71 TRGL 74 73 71 TRGL 75 74 71 TRGL 76 74 75 TRGL 77 76 75 TRGL 78 76 77 TRGL 79 77 75 TRGL 80 79 75 TRGL 81 80 75 TRGL 80 81 82 TRGL 83 80 82 TRGL 84 83 82 TRGL 85 84 82 TRGL 86 85 82 TRGL 85 86 87 TRGL 88 85 87 TRGL 89 88 87 TRGL 90 89 87 TRGL 91 90 87 TRGL 92 90 91 TRGL 93 92 91 TRGL 94 93 91 TRGL 95 93 94 TRGL 95 94 96 TRGL 97 95 96 TRGL 98 97 96 TRGL 99 98 96 TRGL 100 99 96 TRGL 99 100 101 TRGL 102 99 101 TRGL 103 102 101 TRGL 104 103 101 TRGL 104 101 105 TRGL 106 104 105 TRGL 107 106 105 TRGL 107 105 108 TRGL 109 107 108 TRGL 110 109 108 TRGL 111 110 108 TRGL 112 111 108 TRGL 113 111 112 TRGL 114 113 112 TRGL 115 113 114 TRGL 116 115 114 TRGL 116 114 117 TRGL 118 116 117 TRGL 119 118 117 TRGL 120 119 117 TRGL 121 119 120 TRGL 122 121 120 TRGL 122 120 123 TRGL 124 122 123 TRGL 125 124 123 TRGL 126 125 123 TRGL 127 126 123 TRGL 
128 126 127 TRGL 129 128 127 TRGL 130 128 129 TRGL 128 130 131 TRGL 132 129 127 TRGL 133 132 127 TRGL 134 133 127 TRGL 135 133 134 TRGL 135 134 136 TRGL 137 135 136 TRGL 138 137 136 TRGL 139 138 136 TRGL 140 139 136 TRGL 141 139 140 TRGL 142 141 140 TRGL 143 142 140 TRGL 144 142 143 TRGL 144 143 145 TRGL 146 144 145 TRGL 147 146 145 TRGL 148 147 145 TRGL 149 148 145 TRGL 150 148 149 TRGL 151 150 149 TRGL 152 151 149 TRGL 153 151 152 TRGL 154 153 152 TRGL 154 155 153 TRGL 156 155 154 TRGL 157 154 152 TRGL 157 152 158 TRGL 159 157 158 TRGL 160 159 158 TRGL 161 160 158 TRGL 162 161 158 TRGL 163 161 162 TRGL 163 162 164 TRGL 165 163 164 TRGL 166 165 164 TRGL 167 166 164 TRGL 168 167 164 TRGL 169 167 168 TRGL 170 169 168 TRGL 170 168 171 TRGL 170 171 172 TRGL 173 170 172 TRGL 174 173 172 TRGL 175 174 172 TRGL 176 175 172 TRGL 177 175 176 TRGL 178 177 176 TRGL 178 176 19 TRGL 179 178 19 TRGL 180 179 19 TRGL 18 180 19 TRGL 176 20 19 TRGL 20 176 21 TRGL 176 181 21 TRGL 181 176 172 TRGL 171 181 172 TRGL 181 171 182 TRGL 181 182 183 TRGL 181 183 21 TRGL 183 22 21 TRGL 22 183 184 TRGL 23 22 184 TRGL 185 23 184 TRGL 23 185 186 TRGL 24 23 186 TRGL 187 24 186 TRGL 25 24 187 TRGL 188 25 187 TRGL 188 26 25 TRGL 26 188 189 TRGL 190 26 189 TRGL 190 28 26 TRGL 28 190 191 TRGL 28 191 192 TRGL 193 28 192 TRGL 28 193 29 TRGL 193 30 29 TRGL 30 193 194 TRGL 195 30 194 TRGL 30 195 32 TRGL 32 195 196 TRGL 197 32 196 TRGL 32 197 42 TRGL 33 32 42 TRGL 33 42 34 TRGL 42 197 44 TRGL 197 48 44 TRGL 48 197 54 TRGL 197 58 54 TRGL 58 197 196 TRGL 195 58 196 TRGL 58 195 63 TRGL 58 63 59 TRGL 63 195 194 TRGL 198 63 194 TRGL 63 198 65 TRGL 198 69 65 TRGL 69 198 199 TRGL 200 69 199 TRGL 69 200 71 TRGL 200 81 71 TRGL 81 200 82 TRGL 200 86 82 TRGL 86 200 201 TRGL 202 86 201 TRGL 86 202 87 TRGL 202 91 87 TRGL 91 202 191 TRGL 94 91 191 TRGL 190 94 191 TRGL 190 100 94 TRGL 100 190 189 TRGL 100 189 203 TRGL 101 100 203 TRGL 204 101 203 TRGL 101 204 105 TRGL 105 204 205 TRGL 206 105 205 TRGL 105 206 108 TRGL 
206 112 108 TRGL 112 206 207 TRGL 208 112 207 TRGL 208 114 112 TRGL 114 208 209 TRGL 210 114 209 TRGL 114 210 117 TRGL 210 120 117 TRGL 120 210 211 TRGL 212 120 211 TRGL 120 212 123 TRGL 212 127 123 TRGL 212 134 127 TRGL 212 213 134 TRGL 212 214 213 TRGL 214 212 211 TRGL 214 211 215 TRGL 214 215 216 TRGL 217 214 216 TRGL 213 214 217 TRGL 140 213 217 TRGL 213 140 136 TRGL 134 213 136 TRGL 218 140 217 TRGL 218 143 140 TRGL 218 219 143 TRGL 219 218 220 TRGL 221 219 220 TRGL 219 221 222 TRGL 223 219 222 TRGL 219 223 149 TRGL 219 149 145 TRGL 143 219 145 TRGL 223 152 149 TRGL 152 223 158 TRGL 223 162 158 TRGL 162 223 222 TRGL 224 162 222 TRGL 162 224 164 TRGL 224 168 164 TRGL 168 224 225 TRGL 226 168 225 TRGL 168 226 171 TRGL 171 226 182 TRGL 182 226 227 TRGL 228 182 227 TRGL 182 228 183 TRGL 183 228 184 TRGL 184 228 229 TRGL 185 184 229 TRGL 230 185 229 TRGL 185 230 186 TRGL 230 231 186 TRGL 231 230 207 TRGL 231 207 205 TRGL 232 231 205 TRGL 187 231 232 TRGL 231 187 186 TRGL 188 187 232 TRGL 203 188 232 TRGL 189 188 203 TRGL 204 203 232 TRGL 204 232 205 TRGL 207 206 205 TRGL 207 230 233 TRGL 208 207 233 TRGL 208 233 209 TRGL 233 234 209 TRGL 235 234 233 TRGL 234 235 236 TRGL 216 234 236 TRGL 215 234 216 TRGL 234 215 209 TRGL 215 210 209 TRGL 211 210 215 TRGL 237 216 236 TRGL 237 217 216 TRGL 237 220 217 TRGL 220 237 238 TRGL 221 220 238 TRGL 225 221 238 TRGL 221 225 222 TRGL 225 224 222 TRGL 239 225 238 TRGL 239 226 225 TRGL 226 239 227 TRGL 239 235 227 TRGL 235 239 236 TRGL 239 238 236 TRGL 238 237 236 TRGL 235 229 227 TRGL 235 230 229 TRGL 230 235 233 TRGL 229 228 227 TRGL 220 218 217 TRGL 94 100 96 TRGL 191 202 192 TRGL 202 201 192 TRGL 201 199 192 TRGL 200 199 201 TRGL 199 193 192 TRGL 194 193 199 TRGL 198 194 199 TRGL 81 75 71 TRGL 7 9 8 END
/// Create a new HibernateKeyManager, with no keys.
pub fn new() -> Self {
    HibernateKeyManager {
        private_key: None,
        public_key: None,
    }
}
/**
 * This function is called once when teleop is enabled.
 */
@Override
public void teleopInit() {
    if (wasAuto) {
        wasAuto = false;
        Drivetrain.getInstance().getPigeon().setYaw(
            Drivetrain.getInstance().getOdometry().getPoseMeters().getRotation().getDegrees());
        Drivetrain.getInstance().getOdometry().resetPosition(
            Drivetrain.getInstance().getOdometry().getPoseMeters(),
            Drivetrain.getInstance().getOdometry().getPoseMeters().getRotation());
        SwerveManual.pigeonAngle = 0;
    }
    if (RobotMap.DEMO_MODE) {
        (new ZeroHood()).schedule();
    }
    Limelight.setLEDS(true);
}
def update(self, true_labels, pred_labels):
    # Collapse one-hot / probability predictions to class indices.
    if len(pred_labels.shape) > 1:
        pred_labels = np.argmax(pred_labels, axis=-1)
    # Map each (true, pred) pair to a unique bin (true * n_classes + pred)
    # and count occurrences in a single vectorized pass.
    conf_matrix = np.bincount(
        self.n_classes * true_labels.astype(int) + pred_labels.astype(int),
        minlength=self.n_classes ** 2
    ).reshape(self.n_classes, self.n_classes)
    self.confusion_matrix += conf_matrix
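A minimal standalone sketch of the same `bincount` trick used in `update` above; the function name and sample arrays here are illustrative, not part of the original class:

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, n_classes):
    # Each (true, pred) pair maps to a unique bin: true * n_classes + pred.
    # bincount then counts occurrences of every bin in one vectorized pass.
    return np.bincount(
        n_classes * true_labels.astype(int) + pred_labels.astype(int),
        minlength=n_classes ** 2,
    ).reshape(n_classes, n_classes)

true_y = np.array([0, 0, 1, 2, 2])
pred_y = np.array([0, 1, 1, 2, 0])
cm = confusion_matrix(true_y, pred_y, n_classes=3)
# Row i = true class i, column j = predicted class j.
```

The `minlength` argument guarantees the result always reshapes to `(n_classes, n_classes)`, even when some classes never occur in the batch.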
package org.jjche.system.modules.system.service;

import cn.hutool.core.util.StrUtil;
import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
import lombok.RequiredArgsConstructor;
import org.jjche.cache.service.RedisService;
import org.jjche.common.constant.CacheKey;
import org.jjche.common.util.ValidationUtil;
import org.jjche.core.util.FileUtil;
import org.jjche.mybatis.base.service.MyServiceImpl;
import org.jjche.mybatis.param.MyPage;
import org.jjche.mybatis.param.PageParam;
import org.jjche.mybatis.param.SortEnum;
import org.jjche.mybatis.util.MybatisUtil;
import org.jjche.system.modules.system.api.dto.DictDTO;
import org.jjche.system.modules.system.api.dto.DictQueryCriteriaDTO;
import org.jjche.system.modules.system.domain.DictDO;
import org.jjche.system.modules.system.mapper.DictMapper;
import org.jjche.system.modules.system.mapstruct.DictMapStruct;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.*;

/**
 * <p>DictService class.</p>
 *
 * @author <NAME>
 * @version 1.0.8-SNAPSHOT
 * @since 2019-04-10
 */
@Service
@RequiredArgsConstructor
public class DictService extends MyServiceImpl<DictMapper, DictDO> {

    private final DictMapStruct dictMapper;
    private final RedisService redisService;

    /**
     * Build the query wrapper for list queries.
     *
     * @param criteria query criteria
     * @return sql
     */
    private LambdaQueryWrapper queryWrapper(DictQueryCriteriaDTO criteria) {
        LambdaQueryWrapper queryWrapper = MybatisUtil.assemblyLambdaQueryWrapper(criteria, SortEnum.ID_DESC);
        String blurry = criteria.getBlurry();
        if (StrUtil.isNotBlank(blurry)) {
            queryWrapper.apply("(name LIKE {0} OR description LIKE {0})", "%" + blurry + "%");
        }
        return queryWrapper;
    }

    /**
     * Paged query.
     *
     * @param criteria query criteria
     * @param pageable paging parameters
     * @return /
     */
    public MyPage queryAll(DictQueryCriteriaDTO criteria, PageParam pageable) {
        LambdaQueryWrapper queryWrapper = queryWrapper(criteria);
        MyPage<DictDO> myPage = this.page(pageable, queryWrapper);
        List<DictDTO> list = dictMapper.toVO(myPage.getRecords());
        myPage.setNewRecords(list);
        return myPage;
    }

    /**
     * Query all records.
     *
     * @param criteria /
     * @return /
     */
    public List<DictDTO> queryAll(DictQueryCriteriaDTO criteria) {
        LambdaQueryWrapper queryWrapper = queryWrapper(criteria);
        return dictMapper.toVO(this.list(queryWrapper));
    }

    /**
     * Create.
     *
     * @param resources /
     */
    @Transactional(rollbackFor = Exception.class)
    public void create(DictDO resources) {
        this.save(resources);
    }

    /**
     * Update.
     *
     * @param resources /
     */
    @Transactional(rollbackFor = Exception.class)
    public void update(DictDO resources) {
        // Clear the cache
        delCaches(resources);
        DictDO dict = this.getById(resources.getId());
        ValidationUtil.isNull(dict.getId(), "DictDO", "id", resources.getId());
        resources.setId(dict.getId());
        this.updateById(resources);
    }

    /**
     * Delete.
     *
     * @param ids /
     */
    @Transactional(rollbackFor = Exception.class)
    public void delete(Set<Long> ids) {
        // Clear the cache
        List<DictDO> dicts = this.listByIds(ids);
        for (DictDO dict : dicts) {
            delCaches(dict);
        }
        this.removeByIds(ids);
    }

    /**
     * Export data.
     *
     * @param dictDtos data to export
     * @param response /
     * @throws java.io.IOException if any.
     */
    public void download(List<DictDTO> dictDtos, HttpServletResponse response) throws IOException {
        List<Map<String, Object>> list = new ArrayList<>();
        for (DictDTO dictDTO : dictDtos) {
            Map<String, Object> map = new LinkedHashMap<>();
            map.put("字典名称", dictDTO.getName());
            map.put("字典描述", dictDTO.getDescription());
            map.put("字典标签", null);
            map.put("字典值", null);
            list.add(map);
        }
        FileUtil.downloadExcel(list, response);
    }

    /**
     * Get by dictionary name.
     *
     * @param name dictionary name
     * @return /
     */
    public DictDO getByName(String name) {
        LambdaQueryWrapper<DictDO> queryWrapper = new LambdaQueryWrapper<>();
        queryWrapper.eq(DictDO::getName, name);
        return this.getOne(queryWrapper);
    }

    /**
     * <p>delCaches.</p>
     *
     * @param dict a {@link DictDO} object.
     */
    public void delCaches(DictDO dict) {
        redisService.delete(CacheKey.DIC_NAME + dict.getName());
    }
}
# Find the cell containing 1 in a 5x5 grid and count the unit moves
# needed to bring it to the center (row 3, 1-based; column index 2, 0-based).
length = 0
amount = 0
for _ in range(5):
    row = input()
    length += 1
    if '1' in row:
        row = row.split()
        index = row.index('1')
        current = (length, index)

length = current[0]
while length != 3:
    if length < 3:
        length += 1
    else:
        length -= 1
    amount += 1
while index != 2:
    if index < 2:
        index += 1
    else:
        index -= 1
    amount += 1
print(amount)
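The step-by-step loops above just walk the 1 to the center one cell at a time, so the answer equals the Manhattan distance to the center. A sketch of that shortcut (a reimplementation for illustration, not the original submission):

```python
def moves_to_center(row, col):
    # row is 1-based (1..5) and col is 0-based (0..4), matching the
    # counters in the solution above; the center of the 5x5 grid is
    # row 3, column index 2.
    return abs(row - 3) + abs(col - 2)
```

Because each move changes exactly one coordinate by one, no shorter path exists.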
package main

func jump(nums []int) int {
	indexDistanceMap := make(map[int]int)
	indexDistanceMap[0] = 0
	length := len(nums)
	innerJump(nums, 0, length, indexDistanceMap)
	return indexDistanceMap[length-1]
}

func innerJump(nums []int, index int, length int, indexDistanceMap map[int]int) {
	n := nums[index]
	distance := indexDistanceMap[index]
	if n == 0 {
		return
	}
	for i := n; i > 0; i-- {
		next := index + i
		if next >= length {
			continue
		}
		if v, ok := indexDistanceMap[next]; ok {
			if v > distance+1 {
				indexDistanceMap[next] = distance + 1
			} else {
				continue
			}
		} else {
			indexDistanceMap[next] = distance + 1
		}
		innerJump(nums, next, length, indexDistanceMap)
	}
}
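The recursive map-based search above can revisit the same index many times. The standard greedy alternative solves Jump Game II in a single O(n) pass; a hedged Python sketch of that approach (a different technique, not a translation of the Go code):

```python
def jump(nums):
    # Greedy: grow the farthest reachable index; each time the scan
    # reaches the end of the current jump's window, one more jump is
    # needed and the window extends to the farthest point seen so far.
    jumps = 0
    current_end = 0
    farthest = 0
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == current_end:
            jumps += 1
            current_end = farthest
    return jumps
```

The loop stops before the last index, so a single-element array correctly needs zero jumps.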
// Last returns the most recent datapoint for a metric+tagset. The metric+tagset
// string should be formatted like os.cpu{host=foo}. The tag portion expects
// that the keys will be in alphabetical order.
func Last(t miniprofiler.Timer, w http.ResponseWriter, r *http.Request) (interface{}, error) {
	var counter bool
	if r.FormValue("counter") != "" {
		counter = true
	}
	val, timestamp, err := schedule.Search.GetLast(r.FormValue("metric"), r.FormValue("tagset"), counter)
	return struct {
		Value     float64
		Timestamp int64
	}{
		val,
		timestamp,
	}, err
}
Nine years have passed since Fred Clarke and Cesar Pelli conceived the design notion of a San Francisco transit center wrapped in an undulating skin and topped by a quarter-mile-long park. Now the two leaders of Pelli Clarke Pelli Architects can stand inside and look out through sections of that skin. The glossy visions of 2007 are becoming reality and, so far, the pair's architectural ambition has shaken off the politics and budgetary problems that still could pull it down.

"It's going to really glow," Clarke said a few days ago as he stood on the elevated open-air deck where buses someday will arrive from the Bay Bridge, a space screened by aluminum panels punched through with a crystalline pattern and coated in iridescent white paint. "Look at the sparkle on the edge of the punches. You can design a space and simulate it (in computer renderings), but you don't have any idea that will happen."

The screen's not the only hint of what will greet us by the end of 2017, when the huge complex that runs behind Mission Street from Beale Street west nearly to Second Street is scheduled to open. The structural braces along its outer edge are being painted a thick, clean white. Inside their crisscrossed grid, glass storefronts for future retail shops are being installed. This smooth progress is a counterpoint to the spring's flurry of stories about the transit center and its daunting price tag.

Pleasant commute touches

The budget for the first phase is now $2.4 billion, twice the amount estimated when the competition was held in 2007, driven higher by a local building boom where construction firms can name their price for unusual projects. In April the longtime director of the Transbay Joint Powers Authority, Maria Ayerdi-Kaplan, was sent packing so that City Hall could take a more direct role in keeping costs and timelines on track. That's the news. The reality is the efficient way the vast structure has emerged from the ground, taken form, jumped streets.
Better yet, the finishes beginning to be applied — the layers that visitors will encounter long after the politics fade — offer the promise of a lyricism to the machine-like mass. For evidence, walk south on First Street from Mission and take a right on Natoma: The curves of perforated aluminum are a smooth-flowing horizontal river amid the verticality on all sides. In direct sun, the skin is a bright, shimmering canvas. When the sun moves behind the panels, light streams through to give the screen and its spidery bracing the look of a taut sculpture. Taut — but lithe, unlike the heaviness of the perforated steel panels that cloak the San Francisco Federal Building at Seventh and Mission streets. That slab-like tower, designed by the Los Angeles firm Morphosis, can be exhilarating. It also can turn brooding and grim.

Emphasizing white, not gray

Given how the transit center extends over streets between alleyways, the sinuous white wave makes sense. "We designed the frame with triangular pieces because it's the simplest, lightest way" to support the aluminum skin, said Clarke, citing Buckminster Fuller as an inspiration. Similar care went into the structural bracing's soft hue: "We chose a slightly gray color so the white is what you focus on. The gray disappears."

This clarity is a hallmark of the Connecticut firm founded by Pelli in 1977. So is pragmatism, as when Pelli and Clarke and the rest of the Transbay team changed the skin from glass to metal in 2013 in response to concerns about cost and security (think: mangled metal versus falling shards if a bomb is detonated). The change could have been bleak. Instead, it's turning out to be an improvement. "I think I actually prefer the (white) metal to glass — with a facade as important as this one is, making it so visually striking is a real plus," said John Rahaim, the city's planning director. He toured the complex-in-progress last week and, like me, was impressed by what he saw.
"In general, my reactions were quite positive. The size and scale is really strong."

Aside from Pelli Clarke Pelli's attention to detail, the saving grace for the transit center is that the political clouds didn't gather until nearly all bidding for construction and materials was complete. If City Hall had taken the reins a year earlier, the quest to trim costs by cheapening the finishes could have grabbed a few daily headlines but dumbed down what we'll be living with for decades to come. Was there value-engineering along the way? You bet. It looks as though a line of aluminum panels was trimmed from the bottom and top of the facade, turning an evening gown into a cocktail dress. Overall, though, the evolving edifice retains the visual power that's needed in this constrained setting.

The low-slung form amid towers is shaping up to be a centerpiece, which is essential — because the concept of a Transbay "district" is no longer merely the pipe dream of planners and policy wonks. Month by month, the skyline and sidewalks around the transit center are in transition: The steel frames of two towers are rising along its rippled form, each including a bridge that will connect to the rooftop park. There's a new apartment tower on Fremont Street one block to the south. Another on First Street has just started construction.

To be sure, design perils lie ahead. The park will be installed next year, and if the design by PWP Landscape Architecture of Berkeley isn't as lavish as the initial vision — welcome to the world of architectural competitions — it offers a tapestry of landscapes and habitats. That acclaimed firm, however, no longer is designing the plaza at Mission and Fremont streets that marks the main entrance to the transit center.

Crucial role in daily life

The plaza is part of the Salesforce Tower project starting to scrape the sky next door, and the development team of Boston Properties and Hines apparently has let PWPLA go.
The idea is to get rid of the redwoods in the approved design and open things up. "We'll be keeping close tabs on that," Rahaim said. Good thing. That plaza needs to function in part as the ground-level foyer to the park set 70 feet in the air. If a new design turns the open corner into what feels like the quad of a Salesforce campus, with nothing about it that invites people to explore what's up above, the potential of the publicly funded park above the transit center could be lost.

The role of that landscaped rooftop isn't to look good from nearby towers, but to be a vital part of the everyday city. The closer we get to opening day, the easier it is to lose sight of the final round of decision making. The thing is, this can be the most important round of all.

John King is the San Francisco Chronicle's urban design critic. Email: [email protected] Twitter: johnkingsfchron
from aws_cdk import core, aws_iam as iam, aws_lambda as _lambda
from aws_cdk.core import CustomResource
import aws_cdk.aws_logs as logs
import aws_cdk.custom_resources as cr
import uuid
import json


class ScpPolicyResource(core.Construct):
    def __init__(
        self,
        scope: core.Construct,
        id: str,
        service_control_policy_string: str,
        description: str,
        name: str,
    ) -> None:
        super().__init__(scope, id)

        POLICY_ID_LOOKUP = "Policy.PolicySummary.Id"

        # https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Organizations.html
        # https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html#physical-resource-id-parameter
        on_create_policy = cr.AwsSdkCall(
            action="createPolicy",
            service="Organizations",
            physical_resource_id=cr.PhysicalResourceId.from_response(POLICY_ID_LOOKUP),
            parameters={
                "Content": service_control_policy_string,
                "Description": description,
                "Name": name,
                "Type": "SERVICE_CONTROL_POLICY",
            },
            output_path=POLICY_ID_LOOKUP,
        )

        on_update_policy = cr.AwsSdkCall(
            action="updatePolicy",
            service="Organizations",
            physical_resource_id=cr.PhysicalResourceId.from_response(POLICY_ID_LOOKUP),
            parameters={
                "Content": service_control_policy_string,
                "Description": description,
                "Name": name,
                "PolicyId": cr.PhysicalResourceIdReference(),
            },
            output_path=POLICY_ID_LOOKUP,
        )

        on_delete_policy = cr.AwsSdkCall(
            action="deletePolicy",
            service="Organizations",
            parameters={
                "PolicyId": cr.PhysicalResourceIdReference(),
            },
        )

        policy = cr.AwsCustomResourcePolicy.from_sdk_calls(
            resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE
        )

        scp_create = cr.AwsCustomResource(
            self,
            "ServiceControlPolicyCreate",
            install_latest_aws_sdk=True,
            policy=policy,
            on_create=on_create_policy,
            on_update=on_update_policy,
            on_delete=on_delete_policy,
            resource_type="Custom::ServiceControlPolicy",
        )

        self.policy_id = scp_create.get_response_field(POLICY_ID_LOOKUP)
package com.s14014.tau.service;

/*
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import java.text.SimpleDateFormat;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.annotation.Rollback;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;
import org.springframework.transaction.annotation.Transactional;

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;
import com.github.springtestdbunit.annotation.ExpectedDatabase;
import com.github.springtestdbunit.assertion.DatabaseAssertionMode;
import com.s14014.tau.domain.Pierwiastek;
import com.s14014.tau.domain.Inventor;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:/beans.xml"})
@Rollback
@Transactional(transactionManager = "txManager")
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
        DirtiesContextTestExecutionListener.class,
        TransactionalTestExecutionListener.class,
        DbUnitTestExecutionListener.class })
public class SortManagerDBUnitTest {

    @Autowired
    SortManager sortManager;

    @Test
    @DatabaseSetup("/fullDatapierwiastki.xml")
    @ExpectedDatabase(value = "/addInventorData.xml", assertionMode = DatabaseAssertionMode.NON_STRICT)
    public void addInventorCheck() throws Exception {
        assertEquals(3, sortManager.getAllInventors().size());
        Inventor i = new Inventor();
        i.setImie("Taehyung");
        i.setNazwisko("Kim");
        i.setPesel("95122113440");
        i.setFirstInventDate(new SimpleDateFormat("yyyy-MM-dd").parse("1874-05-14"));
        sortManager.addInventor(i);
        assertEquals(4, sortManager.getAllInventors().size());
    }

    @Test
    @DatabaseSetup("/fullDatapierwiastki.xml")
    @ExpectedDatabase(value = "/deleteData.xml", assertionMode = DatabaseAssertionMode.NON_STRICT)
    public void deleteInventorCheck() throws Exception {
        assertEquals(3, sortManager.getAllInventors().size());
        Inventor inventor = sortManager.findInventorByPesel("85043021547");
        sortManager.deleteInventor(inventor);
        assertEquals(2, sortManager.getAllInventors().size());
    }

    @Test
    @DatabaseSetup("/fullDatapierwiastki.xml")
    @ExpectedDatabase(value = "/updateData.xml", assertionMode = DatabaseAssertionMode.NON_STRICT)
    public void updateInventorCheck() throws Exception {
        Inventor inventor = sortManager.findInventorByPesel("43012144859");
        inventor.setNazwisko("Update");
        sortManager.updateInventor(inventor);
        assertEquals(sortManager.findInventorByPesel("43012144859").getNazwisko(), inventor.getNazwisko());
    }

    @Test
    @DatabaseSetup("/fullDatapierwiastki.xml")
    @ExpectedDatabase(value = "/fullDatapierwiastki.xml", assertionMode = DatabaseAssertionMode.NON_STRICT)
    public void getInventorCheck() throws Exception {
        Inventor inventor = sortManager.findInventorByPesel("12043021547");
        assertNotNull(inventor);
        assertEquals(sortManager.findInventorByPesel("12043021547").getNazwisko(), inventor.getNazwisko());
    }

    @Test
    @DatabaseSetup("/fullDatapierwiastki.xml")
    @ExpectedDatabase(value = "/disposePierwiastek.xml", assertionMode = DatabaseAssertionMode.NON_STRICT)
    public void disposePerwiastekCheck() throws Exception {
        Inventor inventor = sortManager.findInventorByPesel("12043021547");
        assertEquals(2, inventor.getPierwiastki().size());
        Pierwiastek pierwiastek = inventor.getPierwiastki().get(0);
        sortManager.disposePierwiastek(inventor, pierwiastek);
        assertEquals(1, inventor.getPierwiastki().size());
    }

    @Test
    @DatabaseSetup("/fullDatapierwiastki.xml")
    @ExpectedDatabase(value = "/pierwiastki.xml", assertionMode = DatabaseAssertionMode.NON_STRICT)
    public void getInventorsPierwiastki() throws Exception {
        Inventor inventor = sortManager.findInventorByPesel("12043021547");
        assertNotNull(inventor);
        assertNotNull(inventor.getPierwiastki());
        assertEquals(2, inventor.getPierwiastki().size());
    }
}
*/