The issue of workers not being paid for the hours they put in is a major problem in our society. Wage theft is one of the labor violations low-wage workers face most often: many workers simply do not get paid what they have earned. If we truly honor the contributions of workers in this nation, structural racism and what amounts to stealing labor should not be how we show appreciation for their hard work. Wage theft persists because of limited oversight, legal loopholes, and the institutional and structural racism that people of color endure. The UCLA Labor Center defines wage theft as the unlawful practice of not paying workers for all of their labor, which includes paying below the minimum wage and requiring workers to work off the clock. The Fair Labor Standards Act (FLSA), signed into law by Franklin Delano Roosevelt in 1938, is the foundation of current labor law in America. Establishing the minimum wage and overtime pay are among the protections of the FLSA. When it was first enacted, however, traditional Black labor sites in the fields and in homes were excluded from its protection. It was only amended to cover most farmworkers and domestic workers in 1966 and 1974, respectively, after persistent protests by women’s rights and civil rights advocates. “The country is experiencing a wage theft epidemic of staggering proportions,” the National Employment Law Project has noted. Taxi drivers, for example, are not entitled to overtime pay even though a majority of them work extra hours. Restaurant workers, who earn far below the federal minimum wage, are classified as “tipped earners” because the tips they receive are assumed to fill the wage gap. The vulnerability of the undocumented workforce, which largely consists of people of color, cannot be overstated; most of these workers are in low-skill, low-wage jobs. According to the Economic Policy Institute (EPI), undocumented workers lose $50 billion annually to wage theft. Labor Day was celebrated a few days back, but it should be a celebration for the whole labor force and not a few, because the wage inequalities in our society are stark. Boosting economic security for the most vulnerable workers and curbing wage theft should be a priority. Source: ColorLines
<filename>src/main/java/com/oracle/truffle/js/runtime/array/ByteBufferSupport.java /* * Copyright (c) 2020, 2020, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * The Universal Permissive License (UPL), Version 1.0 * * Subject to the condition set forth below, permission is hereby granted to any * person obtaining a copy of this software, associated documentation and/or * data (collectively the "Software"), free of charge and under any and all * copyright rights in the Software, and any and all patent rights owned or * freely licensable by each licensor hereunder covering either (i) the * unmodified Software as contributed to or provided by such licensor, or (ii) * the Larger Works (as defined below), to deal in both * * (a) the Software, and * * (b) any piece of software and/or hardware listed in the lrgrwrks.txt file if * one is included with the Software each a "Larger Work" to which the Software * is contributed by such licensors), * * without restriction, including without limitation the rights to copy, create * derivative works of, display, perform, and distribute the Software and make, * use, sell, offer for sale, import, export, have made, and have sold the * Software and the Larger Work(s), and to sublicense the foregoing rights on * either these or other terms. * * This license is subject to the following condition: * * The above copyright notice and either this complete permission notice or at a * minimum a reference to the UPL must be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ package com.oracle.truffle.js.runtime.array; import java.lang.reflect.Field; import java.nio.Buffer; import java.nio.ByteBuffer; import java.nio.ByteOrder; import com.oracle.truffle.api.CompilerDirectives; import sun.misc.Unsafe; final class ByteBufferSupport { private static final ByteBufferAccess LITTLE_ENDIAN; private static final ByteBufferAccess BIG_ENDIAN; private static final ByteBufferAccess NATIVE_ORDER; private ByteBufferSupport() { } static ByteBufferAccess littleEndian() { return LITTLE_ENDIAN; } static ByteBufferAccess bigEndian() { return BIG_ENDIAN; } static ByteBufferAccess nativeOrder() { return NATIVE_ORDER; } static { // We only use Unsafe for architectures that we know support unaligned accesses. 
String arch = System.getProperty("os.arch"); boolean unaligned = arch.equals("amd64") || arch.equals("aarch64") || arch.equals("x86_64"); if (unaligned) { if (ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN) { LITTLE_ENDIAN = NativeUnsafeByteBufferAccess.INSTANCE; BIG_ENDIAN = ReservedUnsafeByteBufferAccess.INSTANCE; } else { LITTLE_ENDIAN = ReservedUnsafeByteBufferAccess.INSTANCE; BIG_ENDIAN = NativeUnsafeByteBufferAccess.INSTANCE; } NATIVE_ORDER = NativeUnsafeByteBufferAccess.INSTANCE; } else { LITTLE_ENDIAN = LittleEndianByteBufferAccess.INSTANCE; BIG_ENDIAN = BigEndianByteBufferAccess.INSTANCE; NATIVE_ORDER = NativeByteBufferAccess.INSTANCE; } } } abstract class UnsafeByteBufferAccess extends ByteBufferAccess { private static final Unsafe UNSAFE; private static final long BUFFER_ADDRESS_FIELD_OFFSET; private static int checkIndex(ByteBuffer buffer, int i, int nb) { if (nb < 1 || i < 0 || i > buffer.limit() - nb) { CompilerDirectives.transferToInterpreterAndInvalidate(); throw new IndexOutOfBoundsException(); } return i; } private static long getBufferAddress(ByteBuffer buffer) { return UNSAFE.getLong(buffer, BUFFER_ADDRESS_FIELD_OFFSET); } private static long getAddress(ByteBuffer buffer, int index) { return getBufferAddress(buffer) + index; } @Override public int getInt16(ByteBuffer buffer, int index) { return UNSAFE.getShort(getAddress(buffer, checkIndex(buffer, index, Short.BYTES))); } @Override public int getInt32(ByteBuffer buffer, int index) { return UNSAFE.getInt(getAddress(buffer, checkIndex(buffer, index, Integer.BYTES))); } @Override public long getInt64(ByteBuffer buffer, int index) { return UNSAFE.getLong(getAddress(buffer, checkIndex(buffer, index, Long.BYTES))); } @Override public float getFloat(ByteBuffer buffer, int index) { return UNSAFE.getFloat(getAddress(buffer, checkIndex(buffer, index, Float.BYTES))); } @Override public double getDouble(ByteBuffer buffer, int index) { return UNSAFE.getDouble(getAddress(buffer, checkIndex(buffer, index, Double.BYTES))); } @Override public void putInt16(ByteBuffer buffer, int index, int value) { UNSAFE.putShort(getAddress(buffer, checkIndex(buffer, index, Short.BYTES)), (short) value); } @Override public void putInt32(ByteBuffer buffer, int index, int value) { UNSAFE.putInt(getAddress(buffer, checkIndex(buffer, index, Integer.BYTES)), value); } @Override public void putInt64(ByteBuffer buffer, int index, long value) { UNSAFE.putLong(getAddress(buffer, checkIndex(buffer, index, Long.BYTES)), value); } @Override public void putFloat(ByteBuffer buffer, int index, float value) { UNSAFE.putFloat(getAddress(buffer, checkIndex(buffer, index, Float.BYTES)), value); } @Override public void putDouble(ByteBuffer buffer, int index, double value) { UNSAFE.putDouble(getAddress(buffer, checkIndex(buffer, index, Double.BYTES)), value); } static { try { Field theUnsafeInstance = Unsafe.class.getDeclaredField("theUnsafe"); theUnsafeInstance.setAccessible(true); UNSAFE = (Unsafe) theUnsafeInstance.get(Unsafe.class); } catch (NoSuchFieldException | IllegalAccessException e) { throw new RuntimeException("exception while trying to get Unsafe.theUnsafe via reflection:", e); } try { Field bufferAddressField = Buffer.class.getDeclaredField("address"); BUFFER_ADDRESS_FIELD_OFFSET = UNSAFE.objectFieldOffset(bufferAddressField); } catch (NoSuchFieldException e) { throw new RuntimeException(e); } } } final class NativeUnsafeByteBufferAccess extends UnsafeByteBufferAccess { static final ByteBufferAccess INSTANCE = new NativeUnsafeByteBufferAccess(); } final 
class ReservedUnsafeByteBufferAccess extends UnsafeByteBufferAccess { static final ByteBufferAccess INSTANCE = new ReservedUnsafeByteBufferAccess(); @Override public int getInt16(ByteBuffer buffer, int index) { return Short.reverseBytes((short) super.getInt16(buffer, index)); } @Override public int getInt32(ByteBuffer buffer, int index) { return Integer.reverseBytes(super.getInt32(buffer, index)); } @Override public long getInt64(ByteBuffer buffer, int index) { return Long.reverseBytes(super.getInt64(buffer, index)); } @Override public float getFloat(ByteBuffer buffer, int index) { return Float.intBitsToFloat(getInt32(buffer, index)); } @Override public double getDouble(ByteBuffer buffer, int index) { return Double.longBitsToDouble(getInt64(buffer, index)); } @Override public void putInt16(ByteBuffer buffer, int index, int value) { super.putInt16(buffer, index, Short.reverseBytes((short) value)); } @Override public void putInt32(ByteBuffer buffer, int index, int value) { super.putInt32(buffer, index, Integer.reverseBytes(value)); } @Override public void putInt64(ByteBuffer buffer, int index, long value) { super.putInt64(buffer, index, Long.reverseBytes(value)); } @Override public void putFloat(ByteBuffer buffer, int index, float value) { putInt32(buffer, index, Float.floatToRawIntBits(value)); } @Override public void putDouble(ByteBuffer buffer, int index, double value) { putInt64(buffer, index, Double.doubleToRawLongBits(value)); } }
The Congenial Calories of the Candy Shop. At a time when conservation of foodstuffs, and especially of sugar, is one of the vital necessities of our country and the fundamental duty of every one of us, the soda-guzzling youngsters and the candy nibbling matinée followers are properly looked on with suspicion. There is no doubt that before the war our consumption of soda water, ice cream and confectionery was growing at a rapid rate, and there is every reason to believe that when the restrictions are removed and the prices abate, these typically American habits will continue as before, or increase. Just how large an amount of nutriment is taken in this way by many people is probably not generally appreciated, and formerly we had no definite information on which to base an estimate. This has now been furnished in a reliable and exact form from the Nutrition Laboratory of the Carnegie Institution.1 By aid of the method of direct determination of the heat of combustion in the calorimetric bomb, the total calories furnished by numerous popular sweets and beverages have been ascertained. These analyses reveal that an ordinary six-cent (war price) bar of sweet chocolate will furnish ordinarily 200 or 300 calories. Taking most of these chocolate candies sold in bar form, whether sweet, milk or nut chocolate, ten cents will purchase as high as 735 calories in some brands, ordinarily from 300 to 600 calories, and in only a few cases less than 200 calories. As 500 calories is nearly one third of the basal caloric requirement of normal man, and may represent from one fifth to one sixth of the total daily requirements of the average man not at severe muscular labor, a bar of chocolate candy, or its equivalent from the bonbon box, means a very considerable addition to the food supply which is usually a between-meals addition that does not materially curtail the food eaten at each meal.... America’s most prominent inventions in the line of beverages are undoubtedly the cocktail and the ice cream soda. What amount of fire the former provides, the Benedicts do not state; probably the calorimetric bomb could not do it justice anyway, for a calorimetric bomb is a soulless thing, without a particle of poetry or imagination. But a cold and clammy ice cream soda apparently digests readily in the alimentary apparatus of the bomb, and registers from 202 to 467 calories, depending on the liberality of the soda clerk. There seems to be some honesty in the soda trade, for the fifteen centers give more calories than the ten centers. The “sundaes” or “college ices” furnish a large fraction of a full meal, yielding generally from 300 to 500 calories. When you order a nut sundae you get from 4 to 6 gm. of protein to help balance the ration. And the children’s favorite, the ice cream cone, comes through with about 100 calories for a nickel. So it isn’t surprising or undesirable if Willie doesn’t eat all of his dinner when he has had two ice cream cones, a bar of chocolate almonds and an ice cream soda. He has had about a thousand calories in the mess, which is about one third as many calories as father needs for a day’s work. Even the simpler beverages furnish numerous calories. Ginger ale gives about 150 calories to the pint, and grape juice about twice as much. The plain vanilla or chocolate soda without cream averages about 200 calories. 
Ordinary “crackers” furnish about 4.5 to 5 calories per gram, which means that the usual soda cracker gives about 30 calories, or about one-tenth to one-fifteenth as much as ten cents’ worth of chocolate candy, although it does provide much more protein. So the afternoon tea, with candies and fancy crackers, furnishes not a few calories, usually to people who do not seem to need them very much. At a boys’ school it was found that the between-meals candies, sodas, etc., amounted to about 640 calories, which is a mere trifle in the nutrition of the adolescent male. We are tantalized by the statement that at Vassar these extras totaled 10 per cent. of the daily intake—we long for the details. Physicians will see in these figures reason for careful inquiry into the sweet-tooth habits of their patients, as well as their meal-time consumption of foods...
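The arithmetic behind Willie's thousand-calorie afternoon can be retraced from the figures quoted above. The short tally below simply adds them up; where the article gives a range, a rough midpoint is chosen as an assumption.

```python
# Rough tally of the between-meals items described above, using the
# article's own per-item estimates (midpoints assumed where a range is given).
items = {
    "ice cream cone": 100,        # "about 100 calories for a nickel"
    "chocolate almond bar": 450,  # bars run "ordinarily from 300 to 600 calories"
    "ice cream soda": 335,        # "from 202 to 467 calories"
}
total = 2 * items["ice cream cone"] + items["chocolate almond bar"] + items["ice cream soda"]
print(total)  # roughly 985, matching the "about a thousand calories" in the text

# "about one third as many calories as father needs for a day's work"
fathers_daily_requirement = 3 * total
print(fathers_daily_requirement)
```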
/** * Asynchronous: Native(exception at callback from Java, then Occurred-throw new-clear). * @return status code */ public static int nativeClearExceptionTest() { int result1 = 4; /* STATUS_FAILED */ NativeClearExceptionTest cTest = new NativeClearExceptionTest(); try { cTest.nativeNativeClearExceptionTest(); processResult -= 1; } catch (NumberFormatException e1) { result1 = 3; System.out.println("======>" + e1.getMessage()); processResult -= 10; } catch (InvalidPathException e2) { result1 = 3; System.out.println("======>" + e2.getMessage()); processResult -= 10; } catch (StringIndexOutOfBoundsException e3) { result1 = 3; System.out.println("======>" + e3.getMessage()); processResult -= 10; } processResult -= 3; return result1; }
package submarineragent_test import ( "context" "testing" "time" . "github.com/onsi/ginkgo" . "github.com/onsi/gomega" "github.com/open-cluster-management/submariner-addon/pkg/helpers" testingHelpers "github.com/open-cluster-management/submariner-addon/pkg/helpers/testing" "k8s.io/apimachinery/pkg/api/meta" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" addonv1alpha1 "open-cluster-management.io/api/addon/v1alpha1" addonFake "open-cluster-management.io/api/client/addon/clientset/versioned/fake" ) const ( clusterName = "test" ) func TestSubmarinerAgent(t *testing.T) { RegisterFailHandler(Fail) RunSpecs(t, "Spoke Submariner Agent Suite") } type managedClusterAddOnTestBase struct { addOn *addonv1alpha1.ManagedClusterAddOn addOnClient *addonFake.Clientset } func (t *managedClusterAddOnTestBase) init() { t.addOn = newAddOn() t.addOnClient = addonFake.NewSimpleClientset() } func (t *managedClusterAddOnTestBase) run() { if t.addOn != nil { _, err := t.addOnClient.AddonV1alpha1().ManagedClusterAddOns(t.addOn.Namespace).Create(context.TODO(), t.addOn, metav1.CreateOptions{}) Expect(err).To(Succeed()) } t.addOnClient.ClearActions() } func (t *managedClusterAddOnTestBase) awaitManagedClusterAddOnStatusCondition(expCond *metav1.Condition) { testingHelpers.AwaitStatusCondition(expCond, func() ([]metav1.Condition, error) { config, err := t.addOnClient.AddonV1alpha1().ManagedClusterAddOns(clusterName).Get(context.TODO(), helpers.SubmarinerAddOnName, metav1.GetOptions{}) if err != nil { return nil, err } return config.Status.Conditions, nil }) } func (t *managedClusterAddOnTestBase) awaitNoManagedClusterAddOnStatusCondition(condType string) { Consistently(func() *metav1.Condition { config, err := t.addOnClient.AddonV1alpha1().ManagedClusterAddOns(clusterName).Get(context.TODO(), helpers.SubmarinerConfigName, metav1.GetOptions{}) Expect(err).To(Succeed()) return meta.FindStatusCondition(config.Status.Conditions, condType) }, 300*time.Millisecond).Should(BeNil()) } func newAddOn() *addonv1alpha1.ManagedClusterAddOn { return &addonv1alpha1.ManagedClusterAddOn{ ObjectMeta: metav1.ObjectMeta{ Name: helpers.SubmarinerAddOnName, Namespace: clusterName, }, } }
// src/hooks/web/useDarkMode.ts
function isNowNight(): boolean {
  const now = new Date();
  const month = now.getMonth(); // 0-based: 0 = January, 11 = December
  const hour = now.getHours();
  // Roughly June through November: night is before 6:00 or from 19:00.
  if (month > 4 && month <= 10) {
    return hour < 6 || hour >= 19;
  }
  // Rest of the year: night is before 7:00 or from 18:00.
  return hour < 7 || hour >= 18;
}

export function useDarkMode() {
  return {
    isNowNight,
  };
}
package com.vaadin.tests.util; import java.lang.reflect.Modifier; import java.util.HashSet; import java.util.List; import java.util.Set; import com.vaadin.tests.VaadinClasses; public class GraphVizClassHierarchyCreator { public static void main(String[] args) { String gv = getGraphVizHierarchy((List) VaadinClasses.getComponents(), "com.vaadin"); System.out.println(gv); } private static String getGraphVizHierarchy(List<Class> classes, String packageToInclude) { boolean includeInterfaces = false; StringBuilder header = new StringBuilder(); header.append("digraph finite_state_machine {\n" + " rankdir=BT;\n" + " dpi=\"150\";\n" + " ratio=\"0.25\";\n"); StringBuilder sb = new StringBuilder(); Set<Class> classesAndParents = new HashSet<Class>(); for (Class<?> cls : classes) { addClassAndParents(classesAndParents, cls, packageToInclude); } Set<Class> interfaces = new HashSet<Class>(); for (Object cls : classesAndParents.toArray()) { for (Class<?> c : ((Class) cls).getInterfaces()) { addClassAndParentInterfaces(classesAndParents, c, packageToInclude); } } for (Class<?> c : classesAndParents) { appendClass(sb, c, c.getSuperclass(), packageToInclude, includeInterfaces); for (Class ci : c.getInterfaces()) { appendClass(sb, c, ci, packageToInclude, includeInterfaces); } } header.append(" node [shape = ellipse, style=\"dotted\"] "); for (Class c : classesAndParents) { if (!c.isInterface() && Modifier.isAbstract(c.getModifiers())) { header.append(c.getSimpleName() + " "); } } if (includeInterfaces) { System.out.print(" node [shape = ellipse, style=\"solid\"] "); for (Class c : classesAndParents) { if (c.isInterface()) { header.append(c.getSimpleName() + " "); } } header.append(";\n"); } header.append(";\n"); header.append(" node [shape = rectangle, style=\"solid\"];\n"); return header.toString() + sb.toString() + "}"; } private static void addClassAndParents(Set<Class> classesAndParents, Class<?> cls, String packageToInclude) { if (cls == null) { return; } if (classesAndParents.contains(cls)) { return; } if (!cls.getPackage().getName().startsWith(packageToInclude)) { return; } classesAndParents.add(cls); addClassAndParents(classesAndParents, cls.getSuperclass(), packageToInclude); } private static void addClassAndParentInterfaces( Set<Class> classesAndParents, Class<?> cls, String packageToInclude) { if (cls == null) { return; } if (classesAndParents.contains(cls)) { return; } if (!cls.getPackage().getName().startsWith(packageToInclude)) { return; } classesAndParents.add(cls); for (Class iClass : cls.getInterfaces()) { addClassAndParentInterfaces(classesAndParents, iClass, packageToInclude); } } private static void appendClass(StringBuilder sb, Class<?> c, Class<?> superClass, String packageToInclude, boolean includeInterfaces) { if (superClass == null) { return; } if (!c.getPackage().getName().startsWith(packageToInclude)) { return; } if (!superClass.getPackage().getName().startsWith(packageToInclude)) { return; } if (!includeInterfaces && (c.isInterface() || superClass.isInterface())) { return; } sb.append(c.getSimpleName()).append(" -> ") .append(superClass.getSimpleName()).append("\n"); } private static void addInterfaces(Set<Class> interfaces, Class<?> cls) { if (interfaces.contains(cls)) { return; } if (cls.isInterface()) { interfaces.add(cls); } for (Class c : cls.getInterfaces()) { addInterfaces(interfaces, c); } } }
<gh_stars>100-1000 ''' Automatron: Monitoring * Get all Targets and Runbooks * Schedule Checks * Execute Checks per schedule * Notify Check status * Listen for Runbook and Target changes * Reschedule Checks ''' import sys import signal import json import tempfile import types import fabric.api from apscheduler.schedulers.background import BackgroundScheduler from apscheduler.triggers.cron import CronTrigger import core.common import core.logs import core.db import core.fab def monitor(runbook, target, config, dbc, logger): ''' Execute monitor against target ''' # Clear out APSchedulers default loggin import logging logging.getLogger('apscheduler.scheduler').setLevel('WARNING') logging.getLogger('apscheduler.scheduler').propagate = False runbook_status = { 'msg_type' : 'runbook_status', 'runbook' : runbook, 'target' : target['hostname'], 'checks' : {} } logger.debug("Executing runbook {0} against target {1}".format(runbook, target['hostname'])) # Setup default fabric environment fabric.api.env = core.fab.set_env(config, fabric.api.env) fabric.api.env.host_string = target['ip'] for check_name in target['runbooks'][runbook]['checks'].keys(): check = target['runbooks'][runbook]['checks'][check_name] # Check for Credentials override if "credentials" in check: fabric.api.env = core.fab.set_env(config, fabric.api.env, override=check['credentials']) # Start Execution for plugin type if "plugin" in check['type']: plugin_file = check['plugin'] plugin_file = '{0}/checks/{1}'.format(config['plugin_path'], plugin_file) dest_name = next(tempfile._get_candidate_names()) destination = "{0}/{1}".format(config['monitoring']['upload_path'], dest_name) with fabric.api.hide('output', 'running', 'warnings'): try: if "target" in check["execute_from"]: logger.debug("Placing plugin script into {0}".format(destination)) fabric.api.put(plugin_file, destination) fabric.api.run("chmod 700 {0}".format(destination)) cmd = "{0} {1}".format(destination, check['args']) results = fabric.api.run(cmd) fabric.api.run("rm {0}".format(destination)) elif "remote" in check["execute_from"]: cmd = "{0} {1}".format(plugin_file, check['args']) results = fabric.api.local(cmd, capture=True) else: logger.warn('Unknown "execute_from" specified in check') return False except Exception as e: logger.debug("Could not put plugin file {0} on remote host {1}".format( plugin_file, target['ip'])) # Start Executing command type else: cmd = check['cmd'] # Perform Check with fabric.api.hide('output', 'running', 'warnings'): try: if "target" in check["execute_from"]: results = fabric.api.run(cmd) elif "remote" in check["execute_from"]: results = fabric.api.local(cmd, capture=True) else: logger.warn('Unknown "execute_from" specified in check') return False except Exception as e: logger.debug("Could not execute command {0}".format(cmd)) # Check results if results.return_code == 0: check_return = "OK" elif results.return_code == 1: check_return = "WARNING" elif results.return_code == 2: check_return = "CRITICAL" else: check_return = "UNKNOWN" runbook_status['checks'][check_name] = check_return logger.info("Check {0} for target {1} returned {2}".format( check_name, target['hostname'], check_return)) dbc.notify("check:results", runbook_status) def schedule(scheduler, runbook, target, config, dbc, logger): ''' Setup schedule for new runbooks and targets ''' # Default schedule (every minute) task_schedule = { 'second' : 0, 'minute' : '*', 'hour' : '*', 'day' : '*', 'month' : '*', 'day_of_week' : '*' } # If schedule is present override default if 'schedule' 
in target['runbooks'][runbook].keys(): if type(target['runbooks'][runbook]['schedule']) == types.DictType: for key in target['runbooks'][runbook]['schedule'].keys(): task_schedule[key] = target['runbooks'][runbook]['schedule'][key] elif type(target['runbooks'][runbook]['schedule']) == types.StringType: breakdown = target['runbooks'][runbook]['schedule'].split(" ") task_schedule = { 'second' : 0, 'minute' : breakdown[0], 'hour' : breakdown[1], 'day' : breakdown[2], 'month' : breakdown[3], 'day_of_week' : breakdown[4] } cron = CronTrigger( second=task_schedule['second'], minute=task_schedule['minute'], hour=task_schedule['hour'], day=task_schedule['day'], month=task_schedule['month'], day_of_week=task_schedule['day_of_week'], ) return scheduler.add_job( monitor, trigger=cron, args=[runbook, target, config, dbc, logger] ) def listen(scheduler, config, dbc, logger): ''' Listen for new events and schedule runbooks ''' logger.info("Starting subscription to monitors channel") pubsub = dbc.subscribe("monitors") for msg in pubsub.listen(): logger.debug("Got message: {0}".format(msg)) try: item = dbc.process_subscription(msg) logger.debug("Received {0} notification for {1}".format( item['msg_type'], item['target'])) target = dbc.get_target(target_id=item['target']) logger.debug("Found target: {0}".format(json.dumps(target))) job = schedule(scheduler, item['runbook'], target, config, dbc, logger) if job: name = "{0}:{1}".format( target['runbooks'][item['runbook']]['name'], target['hostname']) jobs.update({name : job}) logger.info("Scheduled runbook {0} for target {1}".format( item['runbook'], item['target'])) except Exception as e: logger.warn("Unable to process message: {0}".format(e.message)) def initialize(config, dbc, scheduler, logger): ''' Grab existing targets and setup monitors ''' targets = dbc.get_target() scheduled = 0 jobs = {} for target in targets.keys(): for runbook in targets[target]['runbooks'].keys(): job = schedule(scheduler, runbook, targets[target], config, dbc, logger) if job: name = "{0}:{1}".format( targets[target]['runbooks'][runbook]['name'], target) logger.debug("Scheduled runbook {0} for target {1}".format(runbook, target)) jobs.update({name : job}) scheduled = scheduled + 1 return jobs, scheduled def shutdown(signum, frame): ''' Shutdown this process ''' dbc.disconnect() # Remove jobs for job in jobs: jobs[job].remove() if signum == 15 or signum == 2: logger.info("Received signal {0} shutting down".format(signum)) sys.exit(0) elif signum == 0: sys.exit(1) else: logger.error("Received signal {0} shutting down".format(signum)) sys.exit(1) if __name__ == "__main__": config = core.common.get_config(description="Automatron: Monitoring") if config is False: print("Could not get configuration") sys.exit(1) # Setup Logging logs = core.logs.Logger(config=config, proc_name="monitoring") logger = logs.getLogger() # Listen for signals signal.signal(signal.SIGTERM, shutdown) signal.signal(signal.SIGINT, shutdown) # Open Datastore Connection db = core.db.SetupDatastore(config=config) try: dbc = db.get_dbc() except Exception as e: logger.error("Failed to connect to datastore: {0}".format(e.message)) shutdown(0, None) # Start Scheduler scheduler = BackgroundScheduler() scheduler.start() logger.info("Grabbing targets for initial scheduling") jobs, scheduled = initialize(config, dbc, scheduler, logger) logger.info("Scheduled {0} checks".format(scheduled)) while True: listen(scheduler, config, dbc, logger)
// coalpha/ts-steam-webapi
import {final, newtype} from "../core/newtype";

type achievement_name_t = {
   [final]: "achievement_name";
   [newtype]: achievement_name_t;
};

/** This is a game-local achievement name */
export type achievement_name = string & achievement_name_t;
<filename>wetterdienst/settings.py # -*- coding: utf-8 -*- # Copyright (C) 2018-2022, earthobservations developers. # Distributed under the MIT License. See LICENSE for more info. from contextvars import ContextVar from dataclasses import dataclass from environs import Env @dataclass class Settings: """Wetterdienst class for general settings""" env = Env() env.read_env() with env.prefixed("WD_"): # cache cache_disable: bool = env.bool("CACHE_DISABLE", False) with env.prefixed("SCALAR_"): # scalar humanize: bool = env.bool("HUMANIZE", True) tidy: bool = env.bool("TIDY", True) si_units: bool = env.bool("SI_UNITS", True) skip_empty: bool = env.bool("SKIP_EMPTY", False) skip_threshold: bool = env.float("SKIP_THRESHOLD", 0.95) dropna: bool = env.bool("DROPNA", False) @classmethod def reset(cls): """Reset Wetterdienst Settings to start""" cls.env.read_env() cls.__init__(cls) @classmethod def default(cls): """Ignore environmental variables and use all default arguments as defined above""" # Put empty env to force using the given defaults cls.env = Env() cls.__init__(cls) _local_settings = ContextVar("local_settings") _local_settings_token = None # Context manager for managing settings in concurrent situations def __enter__(self): settings_token = self._local_settings.set(self) self._local_settings.get()._local_settings_token = settings_token return self._local_settings.get() def __exit__(self, type_, value, traceback): self._local_settings.reset(self._local_settings_token) # this is not the same object as the original one return Settings.__init__(self) Settings = Settings()
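Judging only from the context-manager and class methods defined above, the module-level `Settings` instance appears intended to be used along the following lines. The import path is inferred from the filename shown, and the surrounding request code is purely illustrative, not part of the original file.

```python
# Sketch of how the Settings object above looks meant to be used, inferred
# from its __enter__/__exit__/reset/default methods (an assumption, not docs).
from wetterdienst.settings import Settings  # path follows the filename shown above

print(Settings.tidy)        # defaults read from WD_SCALAR_* environment variables

with Settings:              # __enter__ stores the instance in a ContextVar
    Settings.tidy = False
    Settings.humanize = False
    # ... run requests that should see the overridden values here ...

# __exit__ re-initializes the settings, so changes do not leak out of the block.
Settings.reset()            # re-read environment variables explicitly if needed
```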
// writes an ASCII dump of this value and its contained values (NOT following pointers). size_t Value::dump(std::ostream &out, bool wide, int indent, const void *base) const { size_t pos = _byte - (uint8_t*)base; char buf[64]; sprintf(buf, "%04zx: %02x %02x", pos, _byte[0], _byte[1]); out << buf; auto size = dataSize(); if (wide && size < kWide) size = kWide; if (size > 2) { sprintf(buf, " %02x %02x", _byte[2], _byte[3]); out << buf; out << (size > 4 ? "…" : " "); } else { out << " "; } out << ": "; while (indent-- > 0) out << " "; writeDumpBrief(out, base, (size > 2)); switch (tag()) { case kArrayTag: { out << ":\n"; for (auto i = asArray()->begin(); i; ++i) { size += i.rawValue()->dump(out, isWideArray(), 1, base); } break; } case kDictTag: { out << ":\n"; for (auto i = asDict()->begin(); i; ++i) { size += i.rawKey() ->dump(out, isWideArray(), 1, base); size += i.rawValue()->dump(out, isWideArray(), 2, base); } break; } default: out << "\n"; break; } return size + (size & 1); }
/**
 * Parses an integer value from a scalar.
 *
 * @param obj object to parse from.
 * @return parsed int value, or {@code 0} if parsing failed.
 */
private static int parseIntFromScalar(Object obj) {
    if (obj instanceof Integer) {
        return (Integer) obj;
    } else if (obj instanceof Long) {
        return (int) (long) (Long) obj;
    } else if (obj instanceof Short) {
        return (int) (short) (Short) obj;
    } else if (obj instanceof Byte) {
        return (int) (byte) (Byte) obj;
    } else if (obj instanceof Float) {
        return (int) (float) (Float) obj;
    } else if (obj instanceof Double) {
        return (int) (double) (Double) obj;
    }
    return 0;
}
# iapain/smartwebapps
from django.contrib import admin

from models import Cluster, News, NewsSource

admin.site.register(NewsSource)
admin.site.register(Cluster)
admin.site.register(News)
An Analysis of the Expense Ratio Pricing of SMB, HML, and UMD Exposure in U.S. Equity Mutual Funds. The expense ratio price of U.S. equity market exposure is close to zero, with funds such as the Vanguard Total Stock Market Index (ticker: VTSAX) charging an expense ratio of just 5 bps. An interesting, and more difficult, question is: how much are mutual fund companies charging investors to gain exposure to small-capitalization, value, and positive-momentum stocks? The authors answer this question using monthly returns data for low-cost mutual funds and ETFs in tandem with Fama–French three-factor and Carhart four-factor equity pricing models and current fund expense ratios. They find strong evidence that fund companies charge for exposure to individual factor premiums. Additionally, they find significant variation in how aggressively factor exposure is priced, both across fund companies and across portfolios offering identical levels of factor premium exposure.
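A minimal sketch of the kind of estimation the abstract describes: regress each fund's monthly excess returns on the Carhart four factors to obtain its SMB, HML, and UMD loadings, then relate expense ratios to those loadings across funds. The data here is simulated, the factor column order is assumed, and this is an illustration of the general approach rather than a reproduction of the authors' method.

```python
import numpy as np

def factor_loadings(excess_returns, factors):
    """OLS of a fund's monthly excess returns on the four Carhart factors.

    excess_returns: (T,) array of fund return minus risk-free rate.
    factors: (T, 4) array with assumed column order [MKT-RF, SMB, HML, UMD].
    Returns (alpha, betas), where betas holds one loading per factor.
    """
    X = np.column_stack([np.ones(len(excess_returns)), factors])
    coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    return coef[0], coef[1:]

def expense_ratio_pricing(expense_ratios, loadings):
    """Cross-sectional regression of expense ratios on factor loadings.

    A positive coefficient on, say, the HML loading would indicate that
    funds charge more per unit of value exposure.
    """
    X = np.column_stack([np.ones(len(expense_ratios)), loadings])
    coef, *_ = np.linalg.lstsq(X, expense_ratios, rcond=None)
    return coef  # [intercept, price of MKT, SMB, HML, UMD exposure]

# Hypothetical example with simulated data for two hundred funds.
rng = np.random.default_rng(0)
T, n_funds = 120, 200
factors = rng.normal(0, 0.03, size=(T, 4))
true_betas = rng.uniform(-0.5, 1.5, size=(n_funds, 4))
loadings = np.array([
    factor_loadings(factors @ b + rng.normal(0, 0.01, T), factors)[1]
    for b in true_betas
])
expense_ratios = 0.0005 + loadings @ np.array([0.0, 0.001, 0.0015, 0.002])
print(expense_ratio_pricing(expense_ratios, loadings))
```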
/** * ( -- String ) Returns the Decision Table and Action number * * @author paul snow * */ public static class ActionString extends ROperator { ActionString(){super("actionstring");} public void execute(DTState state) throws RulesException { state.datapush( RString.newRString( state.getCurrentTable().getName().stringValue()+" "+ state.getCurrentTableSection()+" "+ (state.getNumberInSection()+1))); } }
package Genreic;

/**
 * Defines an interface that declares a generic type parameter.
 * Format:
 *     modifiers interface InterfaceName<type variable> {
 *         ...
 *     }
 */
public interface MyInterface<E> {
    public void show(E e);
}
<gh_stars>0 fn solve2(x1: u64, x2: u64) -> u64 { if x2 == 0 { 1 } else { (x1 % 10).pow(if x2 % 4 == 0 { 4 } else { x2 % 4 } as u32) % 10 } } fn solve3(x1: u64, x2: u64, x3: u64) -> u64 { let pow_x2_x3_equal_0 = x2 == 0 && x3 != 0; let pow_x2_x3_mod_4 = match x2 % 4 { 0 => if x3 == 0 { 1 } else { 0 }, 1 => 1, 2 => match x3 { 0 => 1, 1 => 2, _ => 0 }, _ => if x3 % 2 == 1 { 3 } else { 1 } }; // below are same with solve2, but replace x2 with pow(x2, x3) if pow_x2_x3_equal_0 { 1 } else { (x1 % 10).pow( if pow_x2_x3_mod_4 == 0 { 4 } else { pow_x2_x3_mod_4 } as u32 ) % 10 } } fn solve4(x1: u64, x2: u64, x3: u64, x4: u64) -> u64 { let pow_x3_x4_equal_0 = x3 == 0 && x4 != 0; let pow_x3_x4_is_odd = x3 % 2 == 1 || x4 == 0; let pow_x3_x4_is_one = x3 == 1 || x4 == 0; // below are same with solve3, but replace x3 with pow(x3, x4) let pow_x2_x3_equal_0 = x2 == 0 && !pow_x3_x4_equal_0; let pow_x2_x3_mod_4 = match x2 % 4 { 0 => if pow_x3_x4_equal_0 { 1 } else { 0 }, 1 => 1, 2 => if pow_x3_x4_equal_0 { 1 } else if pow_x3_x4_is_one { 2 } else { 0 }, _ => if pow_x3_x4_is_odd { 3 } else { 1 } }; // below are same with solve2, but replace x2 with pow(x2, x3) if pow_x2_x3_equal_0 { 1 } else { (x1 % 10).pow( if pow_x2_x3_mod_4 == 0 { 4 } else { pow_x2_x3_mod_4 } as u32 ) % 10 } } fn pow_chain_is_zero(lst: &[u64]) -> bool { if lst.len() > 1 { lst.first().unwrap().clone() == 0 && !pow_chain_is_zero(&lst[1..]) } else { lst.first().unwrap().clone() == 0 } } fn last_digit(lst: &[u64]) -> u64 { match lst.len() { 0 => 1, 1 => lst[0] % 10, 2 => solve2(lst[0], lst[1]), 3 => solve3(lst[0], lst[1], lst[2]), _ => { solve4(lst[0], lst[1], lst[2], if pow_chain_is_zero(&lst[3..]) { 0 } else { 1 }) } } }
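The solve functions above rest on the fact that the last digit of x1^x2 depends only on x1 mod 10 and the exponent mod 4, with zero exponents handled separately. Below is a quick brute-force check of that shortcut, written in Python as a sanity test rather than in the original Rust.

```python
# Brute-force check that the last digit of x1 ** x2 depends only on
# x1 % 10 and on x2 % 4 (with x2 == 0 handled separately), mirroring
# the modular shortcut used in solve2 above.
def solve2(x1: int, x2: int) -> int:
    if x2 == 0:
        return 1
    e = x2 % 4 if x2 % 4 != 0 else 4
    return pow(x1 % 10, e, 10)

for x1 in range(1, 30):
    for x2 in range(0, 12):
        assert solve2(x1, x2) == pow(x1, x2) % 10, (x1, x2)
print("solve2 shortcut matches direct computation")
```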
15,000 new members joined the Labour party within 24 hours of the election and they are entering an organisation which is not as welcoming as it could be. I am one of the new, a returning member, so to be fair I knew the score. I attended the first Stroud constituency group since the Corbyn surge on Friday. It was a packed house with a couple of great talks, one on Human Rights and the other on the local response to the Refugee Crisis. I chatted to a couple of new members, but I think the three of us, out of around fifty or sixty people, might have been the only new members there. Which you might think a little disappointing given the national surge in members, and if the numbers of locals joining the party on my Facebook feed are anything to go by. But I’m not sure people did. The thing is, as far as I can tell, that Labour has never really had a good system for welcoming new members, what they call in business jargon ‘onboarding’. At present you are sent a welcome pack, which includes your card, and if you are lucky you’ll get added to the local email list. I did receive a ‘welcome’ call from party HQ, but it wasn’t very welcoming. Instead it was just a list of questions about who I’d voted for before and whether I’d ever been a member of a different party. It wasn’t a welcome call, it was a vetting call. This attitude of vetting new members goes to the heart of how Labour needs to change. Instead of seeing new members as people that need to be checked to see if they should belong to our gang, new members need to be embraced and supported to achieve the change they, the individual new member, want to see in the world. Instead of being asked whether you’ve ever voted for a different party, you need to be asked what you want to see change and how we, the party, can help you achieve that. For members that don’t have something they want to do, the party just needs to make them feel loved and that they belong. Sadly this is very far from what is happening; and it is potentially fatally wounding for team Corbyn and profoundly damaging for Labour. Many of the new members are young people who have never been involved in politics before. They will, however, have been members of other things like 38 Degrees, Greenpeace, Amnesty, Tesco Clubcard, The Costa Club to name a few. Each of these makes it very clear that you are hugely valued by them and that they exist in a major part to serve you. With the real clincher being that they don’t want much back in return, either a few clicks of your mouse or a few pounds spent. Labour on the other hand makes it clear that they expect you to support their vision, and right from the word go you are bombarded by emails requesting more and more of your cash and time. This model of top-down politics will alienate many of the new members, and as the tough business of opposition starts many will drift away, feeling as if no one really listened to them. This cannot be allowed to happen. If Labour is to stand any chance of winning in 2020 these new members need to be cared for and not taken for granted. Corbyn’s team right now should be creating a new onboarding process for each member where their starting position is to ask: not what can you do for the Labour Party, but what can the Labour Party do for you?
import { EventEmitter } from "events"; import { Pool } from "pg"; import { InfiniteStream, TrackedDomainEventMessage, Logger, TrackingToken, PositionalTrackingToken } from "@eventia/core"; import { PostgresqlCursor } from "./PostgreSQLCursor"; import { PostgreSQLEventQuery } from "./PostgreSQLQuery"; export class PostgreSQLEventStorageStream extends EventEmitter implements InfiniteStream<TrackedDomainEventMessage> { protected readonly logger: Logger; protected readonly pool: Pool; protected readonly trackingToken: TrackingToken; protected closed: boolean; protected position: number; protected readonly cursor: PostgresqlCursor<StoredEvent>; protected iterator: AsyncIterableIterator<StoredEvent>; public constructor(logger: Logger, pool: Pool, trackingToken: TrackingToken) { super(); this.logger = logger; this.pool = pool; this.trackingToken = trackingToken; this.closed = false; this.position = 0; const query = PostgreSQLEventQuery .fromTrackingToken(this.logger, trackingToken) .build(); this.cursor = new PostgresqlCursor<StoredEvent>( this.logger, this.pool, query ); } public [Symbol.asyncIterator](): AsyncIterableIterator<TrackedDomainEventMessage> { return this; } public async next(): Promise<IteratorResult<TrackedDomainEventMessage>> { if (this.iterator === undefined) { const i = await this.cursor.execute(); // HACK: this.iterator = i[Symbol.asyncIterator](); this.iterator = i[Symbol.asyncIterator]() as unknown as AsyncIterableIterator<StoredEvent>; } const item = await this.iterator.next(); if (item.value !== undefined) { const storedEvent = item.value; const metadata = storedEvent.metadata || {}; if (storedEvent.tenantidentifier) { metadata.tenantId = storedEvent.tenantidentifier; } if (storedEvent.useridentifier) { metadata.userId = storedEvent.useridentifier; } return { done: false, value: new TrackedDomainEventMessage({ identifier: storedEvent.id, timestamp: storedEvent.logdate, aggregateIdentifier: storedEvent.aggregateidentifier, sequenceNumber: storedEvent.sequencenumber, payloadType: storedEvent.payloadtype, payload: storedEvent.payload, metadata: metadata, trackingToken: new PositionalTrackingToken( parseInt(storedEvent.position, 10) ), aggregateType: storedEvent.aggregatetype }) }; } return this.return(); } public async return(): Promise<IteratorResult<TrackedDomainEventMessage>> { this.close(); return { done: true, value: undefined as unknown as TrackedDomainEventMessage }; } public close(): void { if (this.iterator !== undefined && this.iterator.return !== undefined) { this.iterator.return(); } this.iterator = undefined as unknown as AsyncIterableIterator<StoredEvent>; this.closed = true; } }
// WriteBadBlock serializes the bad block into the database. If the cumulated // bad blocks exceeds the limitation, the oldest will be dropped. func WriteBadBlock(db ethdb.KeyValueStore, block *types.Block) { blob, err := db.Get(badBlockKey) if err != nil { log.Warn("Failed to load old bad blocks", "error", err) } var badBlocks badBlockList if len(blob) > 0 { if err := rlp.DecodeBytes(blob, &badBlocks); err != nil { log.Crit("Failed to decode old bad blocks", "error", err) } } for _, b := range badBlocks { if b.Header.Number.Uint64() == block.NumberU64() && b.Header.Hash() == block.Hash() { log.Info("Skip duplicated bad block", "number", block.NumberU64(), "hash", block.Hash()) return } } badBlocks = append(badBlocks, &badBlock{ Header: block.Header(), Body: block.Body(), }) sort.Sort(sort.Reverse(badBlocks)) if len(badBlocks) > badBlockToKeep { badBlocks = badBlocks[:badBlockToKeep] } data, err := rlp.EncodeToBytes(badBlocks) if err != nil { log.Crit("Failed to encode bad blocks", "err", err) } if err := db.Put(badBlockKey, data); err != nil { log.Crit("Failed to write bad blocks", "err", err) } }
n, d = map(int, input().split())
max = (n - d) - d
count = 1
for i in range(1000):
    if max <= 1:
        break
    max = (max - (d * 2) - 1)
    count += 1
print(count)
#!/usr/bin/python3 import sys def input(): return sys.stdin.readline().rstrip('\n') N = int(input()) S = input() #D = list(map(int, input().split())) alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" # ord and chr asciiA = ord('A') def ord_rel_a(c): return ord(c) - asciiA def chr_rel_a(i): j = i + asciiA return chr(j) iZ = ord_rel_a('Z') Shift = "" for c in S: i = ord_rel_a(c) j = (i + N) % (iZ+1) Shift += chr_rel_a(j) print(Shift)
HaCaT cell line as a model system for vitamin D3 metabolism in human skin. Synthesis and catabolism of calcitriol (1,25(OH)2D3) were studied using the HaCaT cell line as a cell culture model. Our results indicate that stimulation of HaCaT cells with epidermal growth factor (EGF) or transforming growth factor-alpha (TGF-alpha) within 16 h just prior to reaching confluence amplified the production of calcitriol when calcidiol (3H-25OHD3) was used as a substrate. EGF- and TGF-alpha-induced (0.1-10 nM) 1-hydroxylation of 3H-25OHD3 was concentration-dependent but showed different kinetics. Synthesis of calcitriol induced by EGF was inversely related to the degree of cellular confluence. Stimulation by EGF was an actinomycin D- and cycloheximide-sensitive process. Independently of the growth factor used, the production of 3H-24R,25(OH)2D3 and the catabolism of 3H-1,25(OH)2D3 to 3H-1,24,25(OH)3D3 were unexpectedly low (≤5% and ≤2%, respectively), as compared to the amount of calcitriol generated. Exogenous addition of unlabeled 1,25(OH)2D3, 1,24R(OH)2D3, calcipotriol, or 24R,25(OH)2D3 at concentrations as low as 10^(-11) M potently inhibited the 3H-1,25(OH)2D3 production. These results suggest that EGF-treated HaCaT keratinocytes could serve for further studies of the vitamin D3 pathway and its relationship to proliferation and differentiation, but differences in calcitriol synthesis and catabolism from those in cultured primary keratinocytes or other cell lines must be considered.
#pragma once #include "treedefinitions.h" #include "foe.h" #include "sprite.h" #include <SFML/Graphics/RenderWindow.hpp> // Box + ID typedef std::pair<Box, int> BoxID; // Region Tree typedef bgi::rtree<BoxID, bgi::quadratic<16>> RTree; class FoeWrapper final { RTree* tree; std::vector<Foe*> foes; std::vector<std::vector<cmm::Sprite*>> sprites; // Process ------------------------ sf::RenderWindow* window; double elapsedTime; Rect* screen; Rect* character; // character box Rect* characterAttack; // attack box bool characterHasAttacked; float characterDamage; float characterHP; float characterArmour; // -------------------------------- public: FoeWrapper(); ~FoeWrapper(); void free(); void clean(); void incinerate(); void insert(Foe* foe); void load(std::vector<std::string>&, std::vector<int>&, std::vector<int>&); void update(sf::RenderWindow* &, double &, Rect* &, Rect* &, Rect* &, bool &, float &, float &, float &); void process(); };
/** * Executes the given command, returning its success. * <br> * If false is returned, then the "usage" plugin.yml entry for this command * (if defined) will be sent to the player. * * @param sender Source of the command * @param command Command which was executed * @param label Alias of the command which was used * @param args Passed command arguments * @return true if a valid command, otherwise false */ @Override public boolean onCommand(@NotNull CommandSender sender, @NotNull Command command, @NotNull String label, @NotNull String[] args) { User user = new User(sender); if (user.hasPermission(Permission.UNKICK)) { if (args.length != 1) { executeInvalid(user, command, label); return true; } User target = new User(args[0]); user.sendMessage(!target.isValidUser(), "command.target.invalid"); if (target.isValidUser()) { if (target.isKicked()) { target.setKick(false, "", ""); user.sendMessage(target, "user.punishment.removed"); } else { user.sendMessage(target, "user.punishment.inactive"); } } } else { executeNoAccess(user); } return true; }
What the ‘greater good’ excludes: Patients left behind by pre‐operative COVID‐19 screening in an Ethiopian town Abstract During the coronavirus disease 2019 (COVID‐19) pandemic, bioethical analyses often emphasized population health and societal benefit. Hospital policies frequently focused on reducing risk of transmitting SARS‐CoV‐2 by restricting visitors; requiring protective equipment; and screening staff, patients and visitors. While restrictions can be burdensome, they are often justified as essential measures to protect the whole population against a virus with high rates of transmission, morbidity and mortality. Yet communities are not monolithic, and the impacts of these restrictions affect different groups differently. An ophthalmological unit outreach program in Ethiopia serves to illustrate. Pre‐operative screening policies were designed to protect as many patients as possible but had adverse impacts on underserved communities. As this case study demonstrates, creating hospital policies that truly serve the good of the society may require a more holistic review of impacts on inequitably positioned communities. policymakers who strive for a universal good overlook the systematic effects policies have on disadvantaged individuals and groups. Over the course of the pandemic, there has been greater recognition of the disproportionate burdens of COVID-19 infection on populations already suffering health disparities. 2 In what follows, we describe how policies that use a 'benefit the whole' calculus play out in clinical settings, using as an example a hospital policy in Ethiopia that requires screening for COVID-19 using a SARS-CoV-2 PCR test prior to surgery. As this example demonstrates, hospital policies that aim to protect the good of the whole may, in effect, protect only or primarily the good of socioeconomically advantaged communities, with no or limited benefits for socioeconomically disadvantaged communities. Ultimately, we demonstrate a situation in which weighing of "universal good" against "individual good" neglects health disparities that just policies must not neglect. Because the effects of this neglect are morally serious and the damage of entrenching health disparities unfair, we propose rethinking clinical policies aimed at the good of an undifferentiated 'whole'. Section I presents the case study. Section II identifies three alternative ways to incorporate health justice considerations in hospitals caring for underserved patients: justice as fairness, justice as equity, and responsive justice. Section III concludes that health equity can and should be dealt with by public health policy in a context-sensitive way that disaggregates groups that benefit from groups that suffer harm. In the final analysis, even when policies aim for a public good, such as preventing virus transmission, they can unwittingly create a public harm, further advantaging people already advantaged. When this occurs, public policies fall short of realizing a truly just and truly universal good. | CASE STUDY: DISPROPORTIONATE EFFECTS OF A COVID-19 MITIGATION STRATEGY ON LOW-INCOME PATIENTS A hospital in an urban area of Ethiopia has an outreach program that provides ophthalmology services for underserved low-income and impoverished people in collaboration with local churches and governmental and development institutions. 3 The costs of the service are covered by international donors. An eye nurse from the team treats patients with medications at outreach centers. 
But if patients' impairments require intervention, such as cataract surgery, they are scheduled for the procedure at the hospital, where a negative SARS-CoV-2 PCR test is required before surgery. PCR testing has not been free for preoperative testing purposes in Ethiopia, and there are no government subsidies for preoperative testing. The PCR testing requirement is hospital specific and remains in place as of April 2022 even though the number of daily nationwide cases has dropped to less than 100 (less than one per million per day). The Eye Unit requested the hospital waive the PCR test for low-income patients who cannot afford it and justified this request by noting that this group of patients has a relatively low risk of transmitting COVID-19. The risk of transmission for these patients was argued to be low for two reasons. First, the procedure is short (under an hour), which involves less time at the hospital than a visit to the outpatient department or an MRI examination. These outpatient visits, tests and procedures do not require a SARS-CoV-2 PCR test and yet, theoretically, bring greater risk of transmission based on time of exposure. Second, general anesthesia with mechanical ventilation is not required for these eye surgeries, which means that the procedure is not aerosol-generating, and patients would be masked by the surgical drapes. Despite these arguments, the hospital administration declined the request for a waiver. The refusal might be based on the concern that procedures can lead to cardiac arrests and resuscitation attempts which, though exceedingly rare in this context, are aerosol-generating and place staff and patients at risk. A second basis for refusal might be the precedent that an exception sets. By granting this request, hospital administration could face additional waiver requests from the various hospital departments which manage approximately 1000 outpatients per day. Managing many waiver requests can be challenging and can require increased resources. Given resource constraints, especially during a pandemic, the judicious use of staff time and attention is a paramount concern. This PCR testing policy raises the ethical question of whether the calculation of the "greater good" in this case fairly represents the needs of low-income communities. Testing aims to limit transmission of COVID-19 in the hospital, and in turn, reduce transmission in the wider community. For those who can afford testing, it represents a small inconvenience for the broader benefit of preventing COVID-19 transmission. However, for those who cannot easily afford testing, the sacrifice is much larger. The outreach program could pay the additional cost of testing, but this would mean 60% fewer patients could receive operations, given limited available funds. In other words, more than half of the number of patients currently receiving cataract surgery would remain visually impaired or blind. While the impact of vision impairment on individual patients and communities can vary widely, the burden can be significant. In 2020 in Ethiopia, 8.8 million people were estimated to have vision loss, and 780,000 people were blind. 5 In a 2015 study by Naidoo et al., Ethiopia was identified as having the second highest age-adjusted burden of blindness in the world after Afghanistan. 6 About half of blindness is due to cataracts.
7 For comparison, as of April 2022, the Ethiopian Public Health Institute estimates a total of 469,879 COVID-19 cases and 7,508 deaths, with 450,425 recovered. Vision impairment and blindness are associated with increased risk of death, 9,10 as well as diminished educational, economic and employment opportunities. 11,12,13 According to the 2019 Global Burden of Diseases (GBD) Injuries and Risk Factors Study, blindness and low vision ranked eighth against all causes of disease by years lived with disability (YLDs) in those aged 50-69 years, but ranked fourth (behind age-related hearing loss, diabetes, and low back pain) in those aged 70 years and older. 14,15 The 1993 World Bank Disability-Adjusted Life Years assessment evaluated the burden of blindness more severely than the GBD Study, valuing blindness 60% as severe as death. 16 For patients in communities that are already low resourced, limited sight can result in lost wages and exacerbation of poverty as well as create challenges to maintaining healthy behaviors. One might reason that while a mild case of COVID-19 is not as bad as blindness, death or severe chronic debilitation from severe COVID sequelae is likely comparable to or worse than blindness. However, even if the community served by the outreach program faces the same risks caused by the pandemic at baseline, this community additionally experiences ongoing disability in the context of social disparity. For this community, the high risks of lost wages and ongoing poverty due to untreated vision impairment may outweigh the low risks of transmission of COVID-19 during inpatient treatment. When those made to sacrifice on behalf of the greater good are disproportionately socioeconomically disadvantaged, how should hospitals respond? Is it fair to burden the poorest members of the community to protect the greater good? It is to this question that we now turn. | POTENTIAL SOLUTIONS: RESPONSIVENESS AND EQUITY IN PUBLIC HEALTH POLICY In this section, we offer suggestions for maintaining the dual goals of reducing harm through preventing transmission and equitably protecting fair access to healthcare for poor communities. While we focus on the case presented in Section I, we argue that the analysis extends to many hospital, governmental and public health policies targeting the greater good while inadvertently burdening the least well-off members of a community. Whether each person subject to the testing requirement is equally resourced to afford the test is not taken into account in appealing to the greatest good. Nor does appealing to the greatest good consider whether individuals' health is equally protected by this particular effort to mitigate transmission. If this policy were applied in a different setting, such as a high-income setting without income inequality, or in a setting of universal healthcare coverage for COVID-19 testing where every patient had access to PCR testing and/or eye treatment without charge, then a utilitarian calculus might suffice. For example, the analysis would be very different in a setting like South Africa where PCR testing in public sector hospitals is free. However, in this community, and in many low-income communities across sub-Saharan Africa, patients live hand-to-mouth and do not have discretionary income to pay for COVID testing. Ultimately, health equity may depend on progress towards Universal Health Coverage, which, if available, would eliminate the need to evaluate how to distribute costs in a public health emergency.
However, as we work towards more universal approaches, individual healthcare professionals and institutions are forced to grapple with trade-off situations. Here, absent universal health coverage, the Eye Unit must advocate for an approach that effects justice for the communities they serve when the standard utilitarian framework fails. Societies without universal health coverage should proceed in a way that covers high-priority health services first, according to the World Health Organization. 19 In the case at hand, the priority of mandating PCR testing for a low-risk shortduration procedure must be balanced against the priority of preserving vision for low-income communities. Three options demonstrate alternative methods for incorporating health equity in a more holistic manner in the ophthalmology outreach program, while also continuing to mitigate risks of COVID-19 transmission for the population as a whole. 3.1 | Justice as fairness: distributed and sliding scale fees for services A practical and simple solution is to impose a small surcharge on all clinical services rather than a large surcharge only on those who need surgery. Since all patients seeking care at the hospital benefit from COVID-19 transmission prevention through PCR testing, a universal surcharge would arguably offer a fairer distribution of the burden of testing. Ethical frameworks that center justice as fairness support this approach. Consider a Rawlsian approach (with the caveat that Rawls did not propose this approach as a tool to be applied directly to health policy evaluation). Rawls' difference principle holds that social and economic opportunities are to be arranged so that they are to the greatest benefit for the least well-off. 20 Expanding the distribution of the burden through a universal surcharge in this case would be justified on the ground that it maximally protects those who are least well-off. It aims to protect all patients against COVID-19 transmission while simultaneously better protecting the opportunity of medically underserved patients to secure basic ophthalmological services that are already available to the well-resourced. Additionally, the fee could be adjusted on a sliding scale in order to better accommodate income-based disparities that obstruct fairness in access to healthcare. For some hospitals, fees like this vary at baseline, for instance, charging a larger fee for patients who are expatriates. A sliding scale for a COVID-19 surcharge on all clinical services aligns with the difference principle because it supports fairer access to health resources by ensuring that the least-advantaged members of the community served by the hospital benefit the most from the policy (i.e., are charged the least). In other words, all patients are charged a COVID-19 fee to cover PCR testing, and all are protected from COVID-19 transmission through PCR testing; the sliding scale allows equal access to healthcare services for underserved patients who would otherwise lack the same opportunities for healthcare in the pandemic as well-resourced patients. Establishing an income-adjusted fee for COVID-19 transmission prevention might be easier to implement in a hospital that has already adopted a sliding-scale for all services. For hospitals without this baseline, COVID-19 surcharges may offer an opportunity to advocate for piloting sliding-scale fees relative to income or other socioeconomic factors. 
Hospitals can define differences in fees according to nationally predefined limits and boundaries that correlate with income-based disparities or disparities in access to healthcare, such as national definitions of "poverty" and geographic zones that correlate with poverty.

These recommendations are not without some concerns. A universal surcharge may hold broader appeal to all patients since it is grounded in the value of treating everyone the same; however, it risks not reducing the cost of the test far enough for individuals who face economic inequities. Similarly, a sliding scale approach may be difficult to implement, especially in the context of a global pandemic where data is unavailable or may not reflect real-time loss of income, housing, and other inequities. Ultimately, improving systems to incorporate justice as fairness effectively will depend on understanding the particular hospital's context pre-implementation, as well as the ability to track the effects of distribution policies in real time, review data and make necessary adjustments.

3.2 | Justice as equity: defining policy exemptions

To address this gap, healthcare establishments should consider charging more for tests given to those who are well-resourced in order to fund tests for those who are under-resourced. An equity-based approach promotes justice insofar as it acknowledges that different responses are ethically warranted based on patients' different underlying needs. The claim is not that it would be benevolent or kind to help the poor, as a charity-based framework suggests, but that people without the means to pay for basic health care deserve or have a right to financial support, because everyone is entitled to have access to a basic level of healthcare.

One might argue that logistics for this kind of policy could be challenging. Given the complexities of intersecting health disparities for different marginalized communities, it may become difficult to assess exemptions equitably. Therefore, the healthcare institution should strive to consider advantages and disadvantages beyond ophthalmology, incorporating the needs of patients across different care services when establishing exemptions. Further, individual and interpersonal biases of those with the power to make these decisions could create unfair advantages and disadvantages, which is one reason hospital leaders tend to prefer simple, clear-cut policies. In reply, when hospitals partner with underserved communities and define equity-based criteria using known metrics, the imperfections of such case-by-case judgments can be reduced.

4 | CONCLUSION: THE VITAL ROLE OF HEALTH EQUITY IN PUBLIC HEALTH POLICY

Cataract surgery is considered an elective procedure, but it avoids debilitating harms, such as blindness, and it carries relatively low risk of perioperative COVID-19 transmission. The Ethiopian hospital that implemented mandatory PCR testing sought to create the greatest good for the community, in alignment with prevailing practice in many medical institutions during the pandemic. Unwittingly, the policy (and probably other similar policies) exacerbated health inequities; for that reason, it should be amended. A more just policy must be context-sensitive and speak to the realities of the community's healthcare needs, barriers and insights. It requires engagement with members of the community, especially those most affected.
With COVID-19 testing stretched in Ethiopia (5000-8000 tests/day for a population of more than 110 million at the time of the case's presentation), most tests go to people who can afford the cost in order for the system to be self-sustaining, given the limited resources to pay for it. In the context of the above case, this means that already highly resourced communities are able to afford the test and proceed with beneficial medical care. At the same time, for under-resourced communities the cost of the test is prohibitive. As a result, poorer communities are unable to accept free medical care that is highly beneficial, and are arguably forced to endure risks equivalent to or greater than the consequences of COVID-19 transmission. This narrative is likely being replayed in multiple healthcare systems all over the world, and the issues extend well beyond Ethiopia. While advantaged communities continue to benefit from ongoing societal resources and medical care, disadvantaged communities face greater barriers to medical care without societal resources to protect them.

Social justice demands that this pattern be reversed. At minimum, those who benefit the most should take on the greatest cost and responsibility. The COVID-19 pandemic has amply shown that fair policies must go beyond simple utilitarian formulas aimed at serving the aggregate good and must consider health equity. The tensions faced by an Eye Unit and hospital in Ethiopia serve to illustrate this point on a small scale. The above recommendations should be viewed as a starting point for a broader conversation about how to move towards health equity in public health emergencies. In the final analysis, just policies must include a holistic, equitable and responsive assessment of benefit by asking: Whom is it for? Which communities are advantaged, and which communities are burdened? For hospital administrators and healthcare professionals alike, who themselves often come from communities of privilege, embracing a holistic approach can be difficult or impossible without involving individuals with relevant lived experiences.
import xml.etree.ElementTree as ET


def get_input_data_xml(file_name):
    """Parse the given XML file and return its root element."""
    mytree = ET.parse(file_name)
    myroot = mytree.getroot()
    return myroot
package main import ( "encoding/xml" ) type Request struct { XMLName xml.Name `xml:"Request"` Subject Subject `xml:"Subject"` Resource Resource `xml:"Resource"` Action Action `xml:"Action"` Environment Environment `xml:"Environment"` } type Subject struct { XMLName xml.Name `xml:"Subject"` Attribute []Attribute `xml:"Attribute"` } type Resource struct { XMLName xml.Name `xml:"Resource"` Attribute []Attribute `xml:"Attribute"` } type Action struct { XMLName xml.Name `xml:"Action"` Attribute []Attribute `xml:"Attribute"` } type Environment struct { XMLName xml.Name `xml:"Environment"` Attribute []Attribute `xml:"Attribute"` } type Attribute struct { XMLName xml.Name `xml:"Attribute"` AttributeValue string `xml:"AttributeValue"` }
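A minimal usage sketch for the structs above, assuming they are meant to be populated from an XACML-style request document. The helper name parseRequest is an illustrative addition, not part of the original file; only encoding/xml's standard Unmarshal is used.

// parseRequest is a hypothetical helper showing how the structs above map to
// an XML document via their struct tags: Unmarshal fills Subject, Resource,
// Action and Environment together with their Attribute lists.
func parseRequest(data []byte) (Request, error) {
	var req Request
	if err := xml.Unmarshal(data, &req); err != nil {
		return Request{}, err
	}
	return req, nil
}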
// Method to draw the counter badge on top of the shopping cart icon in the toolbar
private Drawable buildCounterDrawable(int count, int backgroundImageId) {
    LayoutInflater inflater = LayoutInflater.from(this);
    View view = inflater.inflate(R.layout.cart_menuitem_layout, null);
    view.setBackgroundResource(backgroundImageId);

    if (count == 0) {
        // Hide the counter panel entirely when the cart is empty.
        View counterTextPanel = view.findViewById(R.id.cartMenuLayout);
        counterTextPanel.setVisibility(View.GONE);
    } else {
        TextView textView = (TextView) view.findViewById(R.id.cartCounter);
        // Use the count passed in (rather than the cartCount field) so the badge
        // always reflects the value this method was asked to render.
        textView.setText(String.valueOf(count));
    }

    // Measure and lay out the view so it can be rendered into a bitmap.
    view.measure(
            View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED),
            View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
    view.layout(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());

    view.setDrawingCacheEnabled(true);
    view.setDrawingCacheQuality(View.DRAWING_CACHE_QUALITY_HIGH);
    Bitmap bitmap = Bitmap.createBitmap(view.getDrawingCache());
    view.setDrawingCacheEnabled(false);

    return new BitmapDrawable(getResources(), bitmap);
}
#include <iostream> #include <cstdlib> #include <cstdio> #include <algorithm> #include <string> #include <map> #include <vector> using namespace std; const int N = (int)1e5 + 100; int n; string f[N], l[N]; vector <string> all; map<string, int> inv; vector<int> can[N]; int pos[2*N]; int p[N], r[N], u[N]; int main() { ios_base::sync_with_stdio(0); cin >> n; for (int i = 1 ; i <= n ;i ++) { cin >> f[i] >> l[i]; inv[f[i]] = i; inv[l[i]] = i; all.push_back(f[i]); all.push_back(l[i]); } sort(all.begin(), all.end()); for (int i = 1 ; i <= n ; i ++) { cin >> p[i]; r[p[i]] = i; } for (int i = 0 ; i < all.size() ; i ++) { pos[i+1] = r[inv[all[i]]]; //cout << pos[i+1] << ' '; } //cout << endl; int cur_pos = 1; for (int i = 0 ; i < all.size() ; i ++) { int ind = inv[all[i]]; if (u[ind]) continue; if (pos[i+1] == cur_pos) { u[ind] = 1; cur_pos ++; } } //cout << cur_pos << endl; if (cur_pos != n + 1) cout << "NO\n"; else cout << "YES\n"; return 0; }
/**
 * Returns a sequence of tokens starting at the given index.
 * At most {@code count} tokens are returned; scanning stops before the EOF token
 * ({@link JavadocTokenKind#EOF}), so the EOF token and its trailing tokens are never included.
 * @param scanner the target scanner
 * @param start the starting index
 * @param count the max token count
 * @return the tokens in the range
 */
public static List<JavadocToken> lookaheadTokens(JavadocScanner scanner, int start, int count) {
    if (count < 0) {
        throw new IllegalArgumentException();
    }
    List<JavadocToken> tokens = new ArrayList<>(count);
    for (int i = 0; i < count; i++) {
        JavadocToken token = scanner.lookahead(start + i);
        if (token.getKind() == JavadocTokenKind.EOF) {
            break;
        }
        tokens.add(token);
    }
    return tokens;
}
fn main() {
    println!("{}", Solution::missing_number1(vec![3, 0, 1]));
    println!(
        "{}",
        Solution::missing_number1(vec![9, 6, 4, 2, 3, 5, 7, 0, 1])
    );
}

struct Solution;

impl Solution {
    /// Sum-based approach: the missing value is the difference between the
    /// expected sum of 0..=n and the actual sum of the input.
    pub fn missing_number(nums: Vec<i32>) -> i32 {
        (0..=nums.len() as i32).sum::<i32>() - nums.into_iter().sum::<i32>()
    }

    /// XOR-based approach: XOR-ing every index, every value and n together
    /// cancels all numbers that are present, leaving only the missing one.
    pub fn missing_number1(nums: Vec<i32>) -> i32 {
        let l = nums.len() as i32;
        nums.into_iter()
            .enumerate()
            .fold(l, |l, (x, y)| x as i32 ^ y ^ l)
    }
}
/*
Given an array of integers, find the second largest value

Example 01:
Input: [3,2,1,9,8,4,6,5,4]
Output: 8

Example 02:
Input: []
Output: error (input must contain at least two elements)
*/
package main

import (
	"errors"
	"log"
)

func main() {
	tests := [][]int{{2, 7, 4, 1, 8, 1}, {1, 3}, {9, 3, 2, 10}, {0}, {}}
	for _, test := range tests {
		index, value, err := secLargest(test)
		if err != nil {
			log.Println(err)
			continue
		}
		log.Printf("secLargest(%v) == test[%d] == %d\n", test, index, value)
	}
}

// given an array of ints return the index and value of the second largest element
func secLargest(x []int) (int, int, error) {
	var secLargestVal int
	var secLargestIndex int

	// a second largest element only exists when there are at least two elements
	if len(x) < 2 {
		return secLargestIndex, secLargestVal, errors.New("secLargest: input must contain at least two elements")
	}

	// find index of largest value
	var largestVal int
	var largestValIndex int
	for k, v := range x {
		if k == 0 {
			largestVal = v
			largestValIndex = k
		} else {
			if v > largestVal {
				largestVal = v
				largestValIndex = k
			}
		}
	}

	// scan again, skipping the largest element, to find the runner-up
	first := true
	for k, v := range x {
		if k != largestValIndex {
			// second largest value candidate
			if first {
				secLargestVal = v
				secLargestIndex = k
				first = false
			} else {
				if v > secLargestVal {
					secLargestVal = v
					secLargestIndex = k
				}
			}
		}
	}

	return secLargestIndex, secLargestVal, nil
}
<filename>Remoting/Views/Testing/Cxx/TestImageScaleFactors.cxx<gh_stars>1-10 /*========================================================================= Program: ParaView Module: TestImageScaleFactors.cxx Copyright (c) Kitware, Inc. All rights reserved. See Copyright.txt or http://www.paraview.org/HTML/Copyright.html for details. This software is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the above copyright notice for more information. =========================================================================*/ #include "vtkSMSaveScreenshotProxy.h" #include "vtkVectorOperators.h" #include <cassert> namespace { void Test(const vtkVector2i& target, const vtkVector2i& maxsize, bool expect_approx = false) { bool approx; vtkVector2i size(maxsize); vtkVector2i mag = vtkSMSaveScreenshotProxy::GetScaleFactorsAndSize(target, size, &approx); cout << "----------------------------------------------------" << endl; cout << " Target: " << target << " Max size: " << maxsize << endl; cout << " Achieved: " << (size * mag) << " Approx: " << approx << endl; cout << " New size: " << size << " scale: " << mag << endl << endl; if (size * mag != target && expect_approx == false) { throw false; } } } // Tests code in vtkSMSaveScreenshotProxy to compute scale factors when saving // images at target resolution. int TestImageScaleFactors(int, char* []) { try { // totally crazy sizes. Test(vtkVector2i(2188, 1236), vtkVector2i(538, 638), true); // preserves aspect ratio Test(vtkVector2i(1280, 800), vtkVector2i(800, 800)); // let's try a prime target. Test(vtkVector2i(1280, 811), vtkVector2i(800, 800), true); // let's try a prime max. Test(vtkVector2i(1280, 815), vtkVector2i(811, 811)); } catch (bool) { return EXIT_FAILURE; } return EXIT_SUCCESS; }
<filename>collector/parsers.go package collector import ( "github.com/prometheus/client_golang/prometheus" "strconv" ) func parseConstMetric(stat JvbStat, desc *prometheus.Desc, valType prometheus.ValueType) (prometheus.Metric, error) { if value, err := strconv.ParseFloat(stat.Value, 32); err == nil { return prometheus.NewConstMetric(desc, valType, value) } else { return nil, err } } func ParseGauge(stat JvbStat, desc *prometheus.Desc) (prometheus.Metric, error) { return parseConstMetric(stat, desc, prometheus.GaugeValue) } func ParseCounter(stat JvbStat, desc *prometheus.Desc) (prometheus.Metric, error) { return parseConstMetric(stat, desc, prometheus.CounterValue) } func ParseHistogram(stat JvbStat, desc *prometheus.Desc) (prometheus.Metric, error) { // try to parse JSON array value that represents our bucketed values bucketedValues, err := ParseArray(stat.Value) if err != nil { return nil, err } var ( // we will have as many buckets as we got from the stats buckets = make(map[float64]uint64, len(bucketedValues)) count = uint64(0) sum = float64(0) ) for bucketSize, bucketCount := range bucketedValues { // Prometheus histograms are cumulative - increase count and then set bucket value count += bucketCount buckets[float64(bucketSize)] = count sum += float64(bucketSize) * float64(bucketCount) } return prometheus.NewConstHistogram(desc, count, sum, buckets) }
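A hedged usage sketch for the parsers above. The metric name, the help text and the JvbStat literal (only its Value field is set) are assumptions for illustration and are not part of the original collector; prometheus.NewDesc is the standard client_golang constructor.

// exampleGauge is a hypothetical illustration: it builds a Prometheus
// descriptor and converts a single stat value ("42") into a gauge metric
// using the ParseGauge helper defined above.
func exampleGauge() (prometheus.Metric, error) {
	desc := prometheus.NewDesc(
		"jitsi_example_stat", // assumed metric name
		"Example gauge built from a JVB stat.", // assumed help text
		nil, nil)
	return ParseGauge(JvbStat{Value: "42"}, desc)
}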
Foreign Direct Investment in East Africa - A Comparative Analysis of its Investment Laws

Foreign direct investment (FDI) as a means of economic growth has been welcomed by most East African nations in recent times. Considerable liberalization of the prevalent investment regulations has been undertaken to facilitate the smooth flow of FDI into these economies. National foreign investment laws, policies and the international investment legal framework matter for attracting FDI to a larger number of developing countries, East Africa included. The regulation of FDI activities in a host country occurs at three different levels: the national level, the regional level, and the international level. In this paper, emphasis shall be given to the model investment code of the East African Community and the foreign investment laws of the partner states. A comparative analysis will be made of the different investment laws, covering: the entry of FDI into East Africa, the treatment of foreign direct investors and their investments, the transfer of foreign-invested funds, and the dispute resolution mechanisms available. Recommendations shall be made to the partner states whose foreign investment laws are considered to be below standard in comparison to the other East African states.
/** * * @author Matti Tahvonen */ public class GridWithEditorRow extends AbstractTest { @Override public Component getTestComponent() { List<Person> listOfPersons = Service.getListOfPersons(100); final Person selectedDude = listOfPersons.get(2); MGrid<Person> g = new MGrid<Person>().withProperties("firstName", "lastName", "age"); g.setRows(listOfPersons); g.setEditorEnabled(true); return g; } }
<reponame>lucaspouzac/algoliasearch-client-java-2<filename>algoliasearch-apache/src/test/java/com/algolia/search/client/TimeoutTest.java package com.algolia.search.client; import static com.algolia.search.integration.TestHelpers.ALGOLIA_ADMIN_KEY_1; import static com.algolia.search.integration.TestHelpers.ALGOLIA_APPLICATION_ID_1; import com.algolia.search.DefaultSearchClient; import com.algolia.search.SearchClient; import com.algolia.search.SearchConfig; class TimeoutTest extends com.algolia.search.integration.client.TimeoutTest { protected SearchConfig.Builder createBuilder() { return new SearchConfig.Builder(ALGOLIA_APPLICATION_ID_1, ALGOLIA_ADMIN_KEY_1); } protected SearchClient createClient(SearchConfig config) { return DefaultSearchClient.create(config); } }
/** * Reset parser for given input. * @param input string input * @return this parser. */ public PegLegParser<V> using(CharSequence input) { this.source = new Source(input); frame = new LinkedList<>(); values = new Values<>(); lastReturn = null; lastSuccessfulReturn = null; farthestSuccessfulPos = new SourcePosition(); pushFrame(); return this; }
/*! * @file * @brief Stuff for working with configuration. */ #pragma once #include <arataga/exception.hpp> #include <arataga/bandlim_config.hpp> #include <spdlog/spdlog.h> #include <asio/ip/address.hpp> #include <algorithm> #include <chrono> #include <memory> #include <ostream> #include <string_view> #include <tuple> #include <variant> namespace arataga { // // denied_ports_config_t // /*! * @brief Config for denied TCP-ports. */ struct denied_ports_config_t { //! Type for holding port number. using port_t = std::uint16_t; //! A case when a single port is blocked. struct single_port_case_t { port_t m_port; // For debugging purposes only. [[nodiscard]] bool operator==( const single_port_case_t & b ) const noexcept { return this->m_port == b.m_port; } }; //! A case when a range of ports is blocked. /*! * Holds a range in the form [low, high]. */ struct ports_range_case_t { port_t m_low; port_t m_high; // For debugging purposes only. [[nodiscard]] bool operator==( const ports_range_case_t & b ) const noexcept { return this->m_low == b.m_low && this->m_high == b.m_high; } }; //! Description of a single case. using denied_case_t = std::variant< single_port_case_t, ports_range_case_t >; //! Type of storage for several cases. using case_container_t = std::vector< denied_case_t >; //! Description of denied ports. /*! * This container can be empty. It means that client can connect * to any port. */ case_container_t m_cases; //! Helper function for checking is specified port denied or not. [[nodiscard]] bool is_denied( port_t port ) const noexcept { struct checker_t { port_t m_port; bool operator()( const single_port_case_t & c ) const noexcept { return c.m_port == m_port; } bool operator()( const ports_range_case_t & c ) const noexcept { return c.m_low <= m_port && m_port <= c.m_high; } }; return std::any_of( m_cases.begin(), m_cases.end(), [port]( const auto & c ) noexcept { return std::visit( checker_t{ port }, c ); } ); } }; // // acl_protocol_t // /*! * @brief Type of protocol to be used by an ACL (http, socks, etc). */ enum class acl_protocol_t { //! ACL should detect the protocol automatically. autodetect, //! ACL should use SOCKS only. socks, //! ACL should use HTTP only. http }; // For debugging purposes only. std::ostream & operator<<( std::ostream & to, acl_protocol_t proto ); // // acl_config_t // /*! * @brief Config for a single ACL. */ struct acl_config_t { //! Type for TCP-port. using port_t = std::uint16_t; //! The protocol for that ACL. acl_protocol_t m_protocol; //! TCP-port for that ACL. /*! * The ACL opens an incoming socket on that port and accepts new * connections from clients on that port. */ port_t m_port; //! IP-address for incoming connections to that ACL. /*! * The ACL opens an incoming socket on that address. * Clients will use that address to connect to arataga. * * Only IPv4 addresses are supported now. */ asio::ip::address_v4 m_in_addr; //! IP-address for outgoing connections by that ACL. /*! * The ACL will use this address for outgoing connections to target * hosts during serving client's requests. */ asio::ip::address m_out_addr; //! Initializing constructor. acl_config_t( acl_protocol_t protocol, port_t port, asio::ip::address_v4 in_addr, asio::ip::address out_addr ) : m_protocol{ protocol } , m_port{ port } , m_in_addr{ std::move(in_addr) } , m_out_addr{ std::move(out_addr) } {} // For debugging purposes only. 
[[nodiscard]] bool operator==( const acl_config_t & b ) const noexcept { const auto tup = []( const auto & v ) { return std::tie( v.m_protocol, v.m_port, v.m_in_addr, v.m_out_addr ); }; return tup( *this ) == tup( b ); } }; // For debugging purposes only. std::ostream & operator<<( std::ostream & to, const acl_config_t & acl ); // // http_message_value_limits_t // /*! * @brief Set of constraints for elements of HTTP protocol. */ struct http_message_value_limits_t { //! Length of request-target in start-line of HTTP-request. std::size_t m_max_request_target_length{ 8u*1024u }; //! Length of HTTP-field name. std::size_t m_max_field_name_length{ 2u*1024u }; //! Length of HTTP-field value. std::size_t m_max_field_value_length{ 10u*1024u }; //! Total size of all HTTP-fields. std::size_t m_max_total_headers_size{ 80u*1024u }; //! Length of status-line of HTTP-response. std::size_t m_max_status_line_length{ 1u*1024u }; }; // // common_acl_params_t // /*! * @brief Set of common for all ACL parameters. */ struct common_acl_params_t { /*! * @brief The max count of parallel active connections to one ACL. */ unsigned int m_maxconn{ 100u }; /*! * @brief The default band-limits for a client. * * Those constraits are applied if there is no personal limits * for a client. */ bandlim_config_t m_client_bandlim; /*! * @brief Time-out before sending negative authentification response. */ std::chrono::milliseconds m_failed_auth_reply_timeout{ 750 }; /*! * @name Various time-outs used during handling of client connections. * @{ */ std::chrono::milliseconds m_protocol_detection_timeout{ 3'000 }; std::chrono::milliseconds m_socks_handshake_phase_timeout{ 5'000 }; std::chrono::milliseconds m_dns_resolving_timeout{ 4'000 }; std::chrono::milliseconds m_authentification_timeout{ 1'500 }; std::chrono::milliseconds m_connect_target_timeout{ 5'000 }; std::chrono::milliseconds m_socks_bind_timeout{ 20'000 }; std::chrono::milliseconds m_idle_connection_timeout{ 300'000 }; std::chrono::milliseconds m_http_headers_complete_timeout{ 5'000 }; std::chrono::milliseconds m_http_negative_response_timeout{ 2'000 }; /*! * @} */ /*! * @brief The size of one buffer for I/O ops. * * This size is used for accepted connections for those handshaking * and authentification are completed. During the handshaking * buffers of different sizes could be used. */ std::size_t m_io_chunk_size{ 8u * 1024u }; /*! * @brief Max count of buffers for I/O ops on single connection. * * Since v.0.2.0 several buffers can be used for I/O operations * for data transfer. While one buffer is used for reading another * buffer can be used for writting. * * This parameters sets number of buffers to be used for a single * connection. * * Please note that arataga uses one connection from a client to an ACL * and another connection from the ACL to the target host. It means * that there will be 2*m_io_chunk_count buffers (becasue every * connection uses own set of buffers). * * @since v.0.2.0 */ std::size_t m_io_chunk_count{ 4u }; /*! * @brief Constraints for values of HTTP-protocols. */ http_message_value_limits_t m_http_message_limits{}; }; /*! * @brief Configuration for the whole arataga. */ struct config_t { //! Type of container for IPs of name servers. /*! * @since v.0.4.0 */ using nameserver_ip_container_t = std::vector< asio::ip::address >; /*! * @brief Log level to be used for logging. * * The value spdlog::level::off means that logging should * be disabled. */ spdlog::level::level_enum m_log_level{ spdlog::level::info }; /*! 
* @brief Clearing period for DNS cache. */ std::chrono::milliseconds m_dns_cache_cleanup_period{ 30*1000 }; /*! * @brief IPs of name servers to be used. * * @attention * Shouldn't be empty. * * @since v.0.4.0 */ nameserver_ip_container_t m_nameserver_ips; /*! * @brief Denied TCP-ports. * * Clients can't use those ports on target hosts. */ denied_ports_config_t m_denied_ports; /*! * @brief Common parameters for all ACL. */ common_acl_params_t m_common_acl_params; /*! * @brief Type of storage for ACL configs. */ using acl_container_t = std::vector< acl_config_t >; /*! * @brief List of ACL. * * Can be empty. */ acl_container_t m_acls; }; // // config_parser_t // /*! * @brief A class for parsing arataga's config. * * It's supposed that an instance of that class is created just * once and then reused. */ class config_parser_t { public: //! Type of exception for parsing errors. struct parser_exception_t : public exception_t { public: parser_exception_t( const std::string & what ); }; config_parser_t(); ~config_parser_t(); //! Parse the content of the config. /*! * @throw parser_exception_t in the case of an error. */ [[nodiscard]] config_t parse( std::string_view content ); private: struct impl_t; std::unique_ptr<impl_t> m_impl; }; } /* namespace arataga */
<reponame>hucsmn/peg<filename>peg.go<gh_stars>1-10 // Package peg implements the Parsing Expression Grammars inspired by LPeg. // // This package implements the Parsing Expression Grammars (PEGs), // a powerful tool for pattern matching and writing top-down parsers. // PEGs were designed to focus on expressing the match or parsing progress, // rather than to describe what text should be matched as regexps do. // The package was strongly influenced by LPeg for lua, see: // http://www.inf.puc-rio.br/~roberto/lpeg/. // Take a look at it for further readings. // // Overlook // // There were four methods for PEGs pattern matching: // // MatchedPrefix(pat, text) (prefix, ok) // IsFullMatched(pat, text) ok // Parse(pat, text) (captures, err) // Match(pat, text) (result, err) // // The most general one is `config.Match(pat, text)`, which returns a `*Result` // typed match result and an error if any error occured. // // The config tells the max recursion level, the max repeatition times and // whether grouping or capturing is enabled. The default config enables // both grouping and capturing, while limits for recursion and repeat are // setup to DefaultCallstackLimit and DefaultRepeatLimit. // // The result of `config.Match(pat, text)` contains: // whether pattern was matched, how many bytes were matched, // the saved groups and the parser captures. // Saved groups are text pieces captured with an optional name. // Parser captures are parse trees or user defined structures constructed // during the parsing process. // // Note that, both `MatchedPrefix` and `IsFullMatched` disables capturing. // That is, the side effects of user defined constructors won't be triggered. // // Categories of patterns // // Basic patterns, which matches a single rune or a piece of text, // are listed below: // // T(text), TI(insensitivetext), TS(text, ...), TSI(insensitivetext, ...) // Dot, S(runes), NS(excluderunes), R(low, high, ...), NR(low, high, ...) // U(unicoderangename) // // Patterns are combined by sequence or alternation: // // Seq(sequence...), Alt(choices...) // // Predicators test if pattern would be matched, but consume no text: // // True, False, SOL, EOL, EOF // B(text), Test(cond), Not(cond), And(assertions...), Or(posiblities...), Abort(msg) // When(cond, pat), If(cond, yes, no), Switch(cond, pat, ..., [otherwise]) // // Available pattern qualifiers are: // // Skip(n), Until(pat), UntilB(pat) // Q0(choices...), Q1(choices...), Qn(atleast, choices...) // Q01(choices...), Q0n(atmost, choices...), Qnn(exact, choices...), Qmn(from, to, choices...) // // Pattern where item separated by sep could be expressed using: // // J0(item, sep), J1(item, sep), Jn(atleast, item, sep) // J0n(atmost, item, sep), Jnn(exact, item, sep), Jmn(from, to, item, sep) // // Functionalities for groups, references, triggers and injectors: // // G(pat), NG(groupname, pat) // Ref(groupname), RefB(groupname) // Trigger(hook, pat), Inject(injector, pat) // Check(checker, pat), Trunc(maxrune, pat) // // Functionalities for grammars and parsing captures: // // Let(scope, pat), V(varname), CV(varname), CK(tokentype, pat) // CC(nontermcons, pat), CT(termcons, pat) // // Common mistakes // // Greedy qualifiers: // // The qualifiers are designed to be greedy. Thus, considering the pattern // `Seq(Q0(A), B)`, text supposed to be matched by `B` could be swallowed // ahead of time by the preceding `A`, which is usually unexpected. // It is recommended to wrap `A` with an additional assertion to avoid this. 
// // For example, `Seq(Q0(R('0', '9')), S("02468"), T(" is even"))` is incorrect, // because the greedy `Q0(R('0', '9'))` would consume the last digit, thus the // following `S("02468")` would always dismatch. To make everything right, // `Q0(R('0', '9'))` should be replaced by a pattern like // `Q0(Seq(R('0', '9'), Test(R('0', '9'))))` (assert one digit follow it), // which won't consume the last digit. // // Unreachable branches: // // Branch of `Seq` or `Alt` could be unreachable, considering that Seq searches // the first dismatch in the sequence, while Alt searches the first match in the // choices. Thus, a pattern like `Alt(T("match"), T("match more"))` would get an // unexpected match result, becuase longer patterns are not in prior order. // // Infinite loops: // // Any pattern which could macth an empty string should not be nested inside // qualifiers like `Q0`, `Q1`, `Qn`, for this would cause infinite loops. // // For example, `Q1(True)` or `Q0(Q0(T("not empty")))` would loop until // `config.RepeatLimit` is reached. // // Left recursion: // // PEG parsers are top-down, that is, the grammar rules would be expanded // immediately, thus a left recursion never terminates until // `config.CallstackLimit` is reached. // // For example, `Let(map[string]Pattern{"var": Seq(T("A"), V("var"))}, V("var"))` // terminates, while // `Let(map[string]Pattern{"var": Seq(V("var"), T("A"))}, V("var"))` won't // terminate until `CallstackLimit` is reached. package peg // import "github.com/hucsmn/peg" import ( "fmt" "strings" ) // Default limits of pattern matching. const ( DefaultCallstackLimit = 500 DefaultRepeatLimit = 500 ) var ( defaultConfig = Config{ CallstackLimit: DefaultCallstackLimit, RepeatLimit: DefaultRepeatLimit, DisableLineColumnCounting: false, DisableGrouping: false, DisableCapturing: false, } ) type ( // Pattern is the tree representation for Parse Grammar Expression. Pattern interface { match(ctx *context) error String() string } // Config contains configration for pattern matching. Config struct { // Maximum callstack size, zero or negative for unlimited. CallstackLimit int // Maximum qualifier repeatition times, zero or negative for unlimited. RepeatLimit int // Determines if the position calculation is disabled. DisableLineColumnCounting bool // Determines if grouping is disabled. DisableGrouping bool // Determines if parse tree capturing is disabled. DisableCapturing bool } // Result stores the results from pattern matching. Result struct { // Is pattern matched and how many bytes matched. Ok bool N int // Grouped text pieces with optional names. Groups []string NamedGroups map[string]string // Parse captures. Captures []Capture } // Capture stores structures from parse capturing. // User defined structures (the types implemented Capture interface other // than the predefined Variable type and Token type) are constructed by // customed TerminalConstructor or NonTerminalConstructor. Capture interface { // IsTerminal tells if it is a terminal type. IsTerminal() bool } // Variable is a predefined non-terminal type for PEG variable capturing. Variable struct { Name string Subs []Capture } // Token is a predefined terminal type stores a piece of typed text // and its position in the source text. Token struct { Type int Value string Position Position } // TerminalConstructor is customed terminal type constructor. TerminalConstructor func(string, Position) (Capture, error) // NonTerminalConstructor is customed non-terminal type constructor. 
NonTerminalConstructor func([]Capture) (Capture, error) ) // MatchedPrefix returns the matched prefix of text when successfully matched. func MatchedPrefix(pat Pattern, text string) (prefix string, ok bool) { return defaultConfig.MatchedPrefix(pat, text) } // IsFullMatched tells if given pattern matches the full text. // It is recommended to use Seq(Alt(...), EOF) rather than use Alt(...) when // testing IsFullMatched. // For example, IsFullMatched(Alt(T("match"), T("match more")), "match more") // returns false rather than true counter-intuitively. func IsFullMatched(pat Pattern, text string) bool { return defaultConfig.IsFullMatched(pat, text) } // Parse runs pattern matching on given text, guaranteeing that the text must // only be full-matched when success. func Parse(pat Pattern, text string) (caps []Capture, err error) { return defaultConfig.Parse(pat, text) } // Match runs pattern matching on given text, using the default configuration. // The default configuration uses DefaultCallstackLimit and DefaultLoopLimit, // while line-column counting, grouping and parse capturing is enabled. // Returns nil result if any error occurs. func Match(pat Pattern, text string) (result *Result, err error) { return defaultConfig.Match(pat, text) } // MatchedPrefix returns the matched prefix of text when successfully matched. func (cfg Config) MatchedPrefix(pat Pattern, text string) (prefix string, ok bool) { // disable capturing. config := cfg config.DisableLineColumnCounting = true config.DisableCapturing = true r, err := config.Match(pat, text) if err != nil || !r.Ok { return "", false } return text[:r.N], true } // IsFullMatched tells if given pattern matches the full text. // It is recommended to use Seq(Alt(...), EOF) rather than use Alt(...) when // testing IsFullMatched. // For example, IsFullMatched(Alt(T("match"), T("match more")), "match more") // returns false rather than true counter-intuitively. func (cfg Config) IsFullMatched(pat Pattern, text string) bool { // disable capturing. config := cfg config.DisableLineColumnCounting = true config.DisableCapturing = true r, err := config.Match(pat, text) return err == nil && r.Ok && r.N == len(text) } // Parse runs pattern matching on given text, guaranteeing that the text must // only be full-matched when success. func (cfg Config) Parse(pat Pattern, text string) (caps []Capture, err error) { // enable capturing. config := cfg config.DisableLineColumnCounting = false config.DisableCapturing = false r, err := config.Match(pat, text) if err != nil { return nil, err } if !r.Ok { return nil, errorDismatch } if r.N != len(text) { return nil, errorNotFullMatched } return r.Captures, nil } // Match runs pattern matching on given text, using the default configuration. // The default configuration uses DefaultCallstackLimit and DefaultLoopLimit, // while line-column counting, grouping and parse capturing is enabled. // Returns nil result if any error occurs. func (cfg Config) Match(pat Pattern, text string) (result *Result, err error) { if pat == nil { return nil, errorNilMainPattern } ctx := newContext(pat, text, cfg) err = ctx.match() if err != nil { return nil, err } if ctx.ret.ok { return &Result{ Ok: true, N: ctx.ret.n, Groups: ctx.groups, NamedGroups: ctx.namedGroups, Captures: ctx.capstack[0].args, }, nil } return &Result{ Ok: false, N: 0, Groups: nil, NamedGroups: nil, Captures: nil, }, nil } // IsTerminal method of the Variable type always returns false. 
func (v *Variable) IsTerminal() bool { return false } // IsTerminal method of the Token type always returns true. func (tok *Token) IsTerminal() bool { return true } func (v *Variable) String() string { strs := make([]string, len(v.Subs)) for i := range v.Subs { strs[i] = fmt.Sprint(v.Subs[i]) } return fmt.Sprintf("%s(%s)", v.Name, strings.Join(strs, ", ")) } func (tok *Token) String() string { return fmt.Sprintf("token_%d%q@%s", tok.Type, tok.Value, tok.Position.String()) }
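A small, hedged usage sketch based on the API documented in the package comment above (T, S, R, Seq, Q0, Test, IsFullMatched); the function below is an illustrative addition, not part of the original package, and mirrors the "is even" example and the greedy-qualifier workaround given in the overview.

// exampleIsEven demonstrates the greedy-qualifier workaround described in the
// package comment: digits are consumed only while another digit follows, so the
// final digit is left for S("02468") to check.
func exampleIsEven() {
	even := Seq(
		Q0(Seq(R('0', '9'), Test(R('0', '9')))), // all digits except the last one
		S("02468"),                              // the last digit must be even
		T(" is even"))
	fmt.Println(IsFullMatched(even, "128 is even")) // expected: true
	fmt.Println(IsFullMatched(even, "127 is even")) // expected: false
}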
<filename>pkg/apis/eventbus/v1alpha1/generated.pb.go /* Copyright 2020 BlackRock, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Code generated by protoc-gen-gogo. DO NOT EDIT. // source: github.com/argoproj/argo-events/pkg/apis/eventbus/v1alpha1/generated.proto package v1alpha1 import ( fmt "fmt" common "github.com/argoproj/argo-events/pkg/apis/common" io "io" proto "github.com/gogo/protobuf/proto" github_com_gogo_protobuf_sortkeys "github.com/gogo/protobuf/sortkeys" k8s_io_api_core_v1 "k8s.io/api/core/v1" v1 "k8s.io/api/core/v1" resource "k8s.io/apimachinery/pkg/api/resource" math "math" math_bits "math/bits" reflect "reflect" strings "strings" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package func (m *BusConfig) Reset() { *m = BusConfig{} } func (*BusConfig) ProtoMessage() {} func (*BusConfig) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{0} } func (m *BusConfig) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *BusConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *BusConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_BusConfig.Merge(m, src) } func (m *BusConfig) XXX_Size() int { return m.Size() } func (m *BusConfig) XXX_DiscardUnknown() { xxx_messageInfo_BusConfig.DiscardUnknown(m) } var xxx_messageInfo_BusConfig proto.InternalMessageInfo func (m *ContainerTemplate) Reset() { *m = ContainerTemplate{} } func (*ContainerTemplate) ProtoMessage() {} func (*ContainerTemplate) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{1} } func (m *ContainerTemplate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *ContainerTemplate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *ContainerTemplate) XXX_Merge(src proto.Message) { xxx_messageInfo_ContainerTemplate.Merge(m, src) } func (m *ContainerTemplate) XXX_Size() int { return m.Size() } func (m *ContainerTemplate) XXX_DiscardUnknown() { xxx_messageInfo_ContainerTemplate.DiscardUnknown(m) } var xxx_messageInfo_ContainerTemplate proto.InternalMessageInfo func (m *EventBus) Reset() { *m = EventBus{} } func (*EventBus) ProtoMessage() {} func (*EventBus) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{2} } func (m *EventBus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *EventBus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) 
if err != nil { return nil, err } return b[:n], nil } func (m *EventBus) XXX_Merge(src proto.Message) { xxx_messageInfo_EventBus.Merge(m, src) } func (m *EventBus) XXX_Size() int { return m.Size() } func (m *EventBus) XXX_DiscardUnknown() { xxx_messageInfo_EventBus.DiscardUnknown(m) } var xxx_messageInfo_EventBus proto.InternalMessageInfo func (m *EventBusList) Reset() { *m = EventBusList{} } func (*EventBusList) ProtoMessage() {} func (*EventBusList) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{3} } func (m *EventBusList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *EventBusList) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *EventBusList) XXX_Merge(src proto.Message) { xxx_messageInfo_EventBusList.Merge(m, src) } func (m *EventBusList) XXX_Size() int { return m.Size() } func (m *EventBusList) XXX_DiscardUnknown() { xxx_messageInfo_EventBusList.DiscardUnknown(m) } var xxx_messageInfo_EventBusList proto.InternalMessageInfo func (m *EventBusSpec) Reset() { *m = EventBusSpec{} } func (*EventBusSpec) ProtoMessage() {} func (*EventBusSpec) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{4} } func (m *EventBusSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *EventBusSpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *EventBusSpec) XXX_Merge(src proto.Message) { xxx_messageInfo_EventBusSpec.Merge(m, src) } func (m *EventBusSpec) XXX_Size() int { return m.Size() } func (m *EventBusSpec) XXX_DiscardUnknown() { xxx_messageInfo_EventBusSpec.DiscardUnknown(m) } var xxx_messageInfo_EventBusSpec proto.InternalMessageInfo func (m *EventBusStatus) Reset() { *m = EventBusStatus{} } func (*EventBusStatus) ProtoMessage() {} func (*EventBusStatus) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{5} } func (m *EventBusStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *EventBusStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *EventBusStatus) XXX_Merge(src proto.Message) { xxx_messageInfo_EventBusStatus.Merge(m, src) } func (m *EventBusStatus) XXX_Size() int { return m.Size() } func (m *EventBusStatus) XXX_DiscardUnknown() { xxx_messageInfo_EventBusStatus.DiscardUnknown(m) } var xxx_messageInfo_EventBusStatus proto.InternalMessageInfo func (m *NATSBus) Reset() { *m = NATSBus{} } func (*NATSBus) ProtoMessage() {} func (*NATSBus) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{6} } func (m *NATSBus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *NATSBus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *NATSBus) XXX_Merge(src proto.Message) { xxx_messageInfo_NATSBus.Merge(m, src) } func (m *NATSBus) XXX_Size() int { return m.Size() } func (m *NATSBus) XXX_DiscardUnknown() { xxx_messageInfo_NATSBus.DiscardUnknown(m) } var xxx_messageInfo_NATSBus proto.InternalMessageInfo func (m *NATSConfig) Reset() { *m = NATSConfig{} } func (*NATSConfig) ProtoMessage() {} func (*NATSConfig) Descriptor() ([]byte, []int) { return 
fileDescriptor_871e47633eb7aad4, []int{7} } func (m *NATSConfig) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *NATSConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *NATSConfig) XXX_Merge(src proto.Message) { xxx_messageInfo_NATSConfig.Merge(m, src) } func (m *NATSConfig) XXX_Size() int { return m.Size() } func (m *NATSConfig) XXX_DiscardUnknown() { xxx_messageInfo_NATSConfig.DiscardUnknown(m) } var xxx_messageInfo_NATSConfig proto.InternalMessageInfo func (m *NativeStrategy) Reset() { *m = NativeStrategy{} } func (*NativeStrategy) ProtoMessage() {} func (*NativeStrategy) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{8} } func (m *NativeStrategy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *NativeStrategy) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *NativeStrategy) XXX_Merge(src proto.Message) { xxx_messageInfo_NativeStrategy.Merge(m, src) } func (m *NativeStrategy) XXX_Size() int { return m.Size() } func (m *NativeStrategy) XXX_DiscardUnknown() { xxx_messageInfo_NativeStrategy.DiscardUnknown(m) } var xxx_messageInfo_NativeStrategy proto.InternalMessageInfo func (m *PersistenceStrategy) Reset() { *m = PersistenceStrategy{} } func (*PersistenceStrategy) ProtoMessage() {} func (*PersistenceStrategy) Descriptor() ([]byte, []int) { return fileDescriptor_871e47633eb7aad4, []int{9} } func (m *PersistenceStrategy) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *PersistenceStrategy) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } func (m *PersistenceStrategy) XXX_Merge(src proto.Message) { xxx_messageInfo_PersistenceStrategy.Merge(m, src) } func (m *PersistenceStrategy) XXX_Size() int { return m.Size() } func (m *PersistenceStrategy) XXX_DiscardUnknown() { xxx_messageInfo_PersistenceStrategy.DiscardUnknown(m) } var xxx_messageInfo_PersistenceStrategy proto.InternalMessageInfo func init() { proto.RegisterType((*BusConfig)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.BusConfig") proto.RegisterType((*ContainerTemplate)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.ContainerTemplate") proto.RegisterType((*EventBus)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.EventBus") proto.RegisterType((*EventBusList)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.EventBusList") proto.RegisterType((*EventBusSpec)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.EventBusSpec") proto.RegisterType((*EventBusStatus)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.EventBusStatus") proto.RegisterType((*NATSBus)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.NATSBus") proto.RegisterType((*NATSConfig)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.NATSConfig") proto.RegisterType((*NativeStrategy)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.NativeStrategy") proto.RegisterMapType((map[string]string)(nil), "github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.NativeStrategy.NodeSelectorEntry") proto.RegisterType((*PersistenceStrategy)(nil), 
"github.com.argoproj.argo_events.pkg.apis.eventbus.v1alpha1.PersistenceStrategy") } func init() { proto.RegisterFile("github.com/argoproj/argo-events/pkg/apis/eventbus/v1alpha1/generated.proto", fileDescriptor_871e47633eb7aad4) } var fileDescriptor_871e47633eb7aad4 = []byte{ // 1386 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xb4, 0x57, 0xdf, 0x6f, 0x1b, 0xc5, 0x13, 0xcf, 0x25, 0x8e, 0x63, 0xaf, 0xdd, 0xc4, 0xde, 0xf6, 0xfb, 0xd5, 0x11, 0x15, 0xbb, 0xb2, 0x54, 0x14, 0x89, 0xf6, 0x4c, 0x2b, 0x04, 0xa5, 0x2f, 0xc5, 0x97, 0xa6, 0xd0, 0x52, 0xa7, 0x61, 0xdd, 0x56, 0x02, 0x2a, 0xca, 0xe6, 0x32, 0x71, 0x2e, 0xf1, 0xdd, 0xb9, 0xb7, 0x7b, 0x56, 0xcc, 0x13, 0xe2, 0x2f, 0x40, 0x3c, 0x20, 0xfe, 0x03, 0xfe, 0x95, 0x3e, 0xf0, 0xd0, 0x37, 0xfa, 0x82, 0xd5, 0xba, 0xe2, 0x9f, 0xe8, 0x13, 0xda, 0xbd, 0xbd, 0x1f, 0xf6, 0x39, 0x25, 0x25, 0xe1, 0xc9, 0xb7, 0xb3, 0x33, 0x9f, 0xcf, 0xcc, 0xec, 0xce, 0xcc, 0x1a, 0xdd, 0xe9, 0xda, 0x7c, 0x2f, 0xd8, 0x36, 0x2c, 0xcf, 0x69, 0x52, 0xbf, 0xeb, 0xf5, 0x7d, 0x6f, 0x5f, 0x7e, 0x5c, 0x86, 0x01, 0xb8, 0x9c, 0x35, 0xfb, 0x07, 0xdd, 0x26, 0xed, 0xdb, 0xac, 0x29, 0xd7, 0xdb, 0x01, 0x6b, 0x0e, 0xae, 0xd0, 0x5e, 0x7f, 0x8f, 0x5e, 0x69, 0x76, 0xc1, 0x05, 0x9f, 0x72, 0xd8, 0x31, 0xfa, 0xbe, 0xc7, 0x3d, 0x7c, 0x3d, 0xc1, 0x32, 0x22, 0x2c, 0xf9, 0xf1, 0x38, 0xc4, 0x32, 0xfa, 0x07, 0x5d, 0x43, 0x60, 0x19, 0x11, 0x96, 0x11, 0x61, 0xad, 0xde, 0x38, 0xb6, 0x1f, 0x96, 0xe7, 0x38, 0x9e, 0x3b, 0x4d, 0xbe, 0x7a, 0x39, 0x05, 0xd0, 0xf5, 0xba, 0x5e, 0x53, 0x8a, 0xb7, 0x83, 0x5d, 0xb9, 0x92, 0x0b, 0xf9, 0xa5, 0xd4, 0x1b, 0x07, 0xd7, 0x98, 0x61, 0x7b, 0x02, 0xb2, 0x69, 0x79, 0x3e, 0x34, 0x07, 0x99, 0x78, 0x56, 0x3f, 0x4c, 0x74, 0x1c, 0x6a, 0xed, 0xd9, 0x2e, 0xf8, 0xc3, 0xc8, 0x8f, 0xa6, 0x0f, 0xcc, 0x0b, 0x7c, 0x0b, 0xde, 0xca, 0x8a, 0x35, 0x1d, 0xe0, 0x74, 0x16, 0x57, 0xf3, 0x28, 0x2b, 0x3f, 0x70, 0xb9, 0xed, 0x64, 0x69, 0x3e, 0xfa, 0x27, 0x03, 0x66, 0xed, 0x81, 0x43, 0xa7, 0xed, 0x1a, 0x4f, 0x50, 0xd1, 0x0c, 0xd8, 0xba, 0xe7, 0xee, 0xda, 0x5d, 0xbc, 0x83, 0x72, 0x2e, 0xe5, 0x4c, 0xd7, 0x2e, 0x68, 0x6b, 0xa5, 0xab, 0xb7, 0x8c, 0x7f, 0x7f, 0x80, 0xc6, 0x66, 0xeb, 0x7e, 0x27, 0x44, 0x35, 0x0b, 0xe3, 0x51, 0x3d, 0x27, 0xd6, 0x44, 0xa2, 0x37, 0x5c, 0x54, 0x5d, 0xf7, 0x5c, 0x4e, 0x85, 0x8f, 0xf7, 0xc1, 0xe9, 0xf7, 0x28, 0x07, 0xfc, 0x15, 0x2a, 0x46, 0x29, 0x8c, 0xf8, 0xd7, 0x8c, 0x30, 0x26, 0x41, 0x61, 0x88, 0x43, 0x31, 0x06, 0x57, 0x0c, 0xa2, 0x94, 0x08, 0x3c, 0x09, 0x6c, 0x1f, 0x1c, 0xe1, 0x87, 0x59, 0x7d, 0x3a, 0xaa, 0xcf, 0x8d, 0x47, 0xf5, 0x62, 0xb4, 0xcb, 0x48, 0x82, 0xd6, 0xf8, 0x7d, 0x1e, 0x15, 0x36, 0x84, 0x83, 0x66, 0xc0, 0xf0, 0x77, 0xa8, 0x20, 0x72, 0xbe, 0x43, 0x39, 0x55, 0x34, 0x1f, 0xa4, 0x68, 0xe2, 0xd4, 0x25, 0xa1, 0x09, 0x6d, 0x41, 0x7c, 0x6f, 0x7b, 0x1f, 0x2c, 0xde, 0x06, 0x4e, 0x4d, 0xac, 0xe8, 0x50, 0x22, 0x23, 0x31, 0x2a, 0xde, 0x47, 0x39, 0xd6, 0x07, 0x4b, 0x9f, 0x97, 0xe8, 0x9f, 0x9f, 0x24, 0x89, 0x91, 0xd7, 0x9d, 0x3e, 0x58, 0x66, 0x59, 0xb1, 0xe6, 0xc4, 0x8a, 0x48, 0x0e, 0xec, 0xa3, 0x3c, 0xe3, 0x94, 0x07, 0x4c, 0x5f, 0x90, 0x6c, 0x77, 0x4e, 0x85, 0x4d, 0x22, 0x9a, 0xcb, 0x8a, 0x2f, 0x1f, 0xae, 0x89, 0x62, 0x6a, 0xfc, 0xa1, 0xa1, 0x72, 0xa4, 0x7a, 0xd7, 0x66, 0x1c, 0x3f, 0xca, 0xa4, 0xd4, 0x38, 0x5e, 0x4a, 0x85, 0xb5, 0x4c, 0x68, 0x45, 0x51, 0x15, 0x22, 0x49, 0x2a, 0x9d, 0x36, 0x5a, 0xb4, 0x39, 0x38, 0x4c, 0x9f, 0xbf, 0xb0, 0xb0, 0x56, 0xba, 0x7a, 0xf3, 0x34, 0x22, 0x34, 0xcf, 0x28, 0xc2, 0xc5, 0xdb, 0x02, 0x9a, 0x84, 0x0c, 0x8d, 0x27, 0x49, 0x60, 0x22, 0xc7, 0x98, 0x4e, 0x94, 0xc3, 0xfa, 0x49, 0xcb, 0x41, 0x10, 
0x4f, 0xd7, 0xc2, 0x0b, 0x0d, 0x2d, 0x4f, 0xe6, 0x1d, 0x3f, 0x8e, 0xcf, 0x34, 0xe4, 0xfd, 0xf8, 0xf8, 0xbc, 0x61, 0x2f, 0x34, 0xde, 0x7c, 0x80, 0xd8, 0x41, 0x79, 0x4b, 0x56, 0xa6, 0xba, 0xa2, 0x1b, 0x27, 0x09, 0x2c, 0x6e, 0x1e, 0x09, 0x5d, 0xb8, 0x26, 0x8a, 0xa4, 0xf1, 0x97, 0x86, 0x96, 0x54, 0xf8, 0xd8, 0x45, 0x79, 0x97, 0x72, 0x7b, 0x00, 0x2a, 0xb6, 0x13, 0xdd, 0xd7, 0x4d, 0x89, 0xd4, 0xe1, 0xa2, 0x9d, 0x75, 0x87, 0x26, 0x12, 0xdc, 0xa1, 0x8c, 0x28, 0x16, 0xbc, 0x8f, 0xf2, 0x70, 0xe8, 0x71, 0x3b, 0xaa, 0xc6, 0xd3, 0x6a, 0x69, 0x92, 0x6b, 0x43, 0x22, 0x13, 0xc5, 0xd0, 0x78, 0xa5, 0x21, 0x94, 0xa8, 0xe0, 0x77, 0xd1, 0x42, 0xe0, 0xf7, 0x64, 0x9c, 0x45, 0xb3, 0xa4, 0x72, 0xb3, 0xf0, 0x80, 0xdc, 0x25, 0x42, 0x8e, 0xdf, 0x47, 0x45, 0xab, 0x17, 0x30, 0x0e, 0xfe, 0xed, 0x9b, 0xd2, 0xb9, 0xa2, 0x79, 0x46, 0x74, 0xb0, 0xf5, 0x48, 0x48, 0x92, 0x7d, 0x7c, 0x09, 0xe5, 0x68, 0xc0, 0xf7, 0x64, 0x91, 0x17, 0x4d, 0x5d, 0xdc, 0xa1, 0x56, 0xc0, 0xf7, 0x5e, 0x8f, 0xea, 0x65, 0xf1, 0x1b, 0xa5, 0x80, 0x48, 0x2d, 0xfc, 0x0d, 0x2a, 0x53, 0xcb, 0x02, 0xc6, 0x3a, 0x60, 0xf9, 0xc0, 0xf5, 0x9c, 0x0c, 0xfd, 0xe2, 0xac, 0x6e, 0x1a, 0x6a, 0x7c, 0x01, 0xc3, 0x0e, 0xf4, 0xc0, 0xe2, 0x9e, 0x6f, 0x56, 0xc6, 0x02, 0x34, 0x65, 0x4e, 0x26, 0xc0, 0x1a, 0x7f, 0x96, 0xd1, 0xf2, 0x64, 0xe2, 0xf1, 0x25, 0x54, 0xf0, 0xa1, 0xdf, 0xb3, 0x2d, 0x1a, 0x5e, 0xd9, 0xc5, 0xa4, 0x9e, 0x89, 0x92, 0x93, 0x58, 0x23, 0x8e, 0x65, 0xfe, 0x58, 0xb1, 0x98, 0xa8, 0x4c, 0x5d, 0x6e, 0xb7, 0x76, 0x77, 0x6d, 0xd7, 0xe6, 0x43, 0x99, 0x81, 0x82, 0x59, 0x53, 0xf8, 0xff, 0xbf, 0x09, 0x7d, 0x1f, 0x2c, 0x31, 0xce, 0x5a, 0x29, 0x2d, 0x32, 0x61, 0x83, 0x7f, 0xd4, 0x50, 0xa9, 0x0f, 0x3e, 0xb3, 0x19, 0x07, 0xd7, 0x02, 0x95, 0x8f, 0x7b, 0x27, 0xb9, 0x0a, 0x5b, 0x09, 0x5c, 0x7c, 0xff, 0x56, 0xc6, 0xa3, 0x7a, 0x29, 0xb5, 0x41, 0xd2, 0xa4, 0xf8, 0x67, 0x0d, 0x55, 0xad, 0xe9, 0xa9, 0xa7, 0x2f, 0x4a, 0x57, 0xda, 0x27, 0x71, 0x25, 0x33, 0x4a, 0xcd, 0xff, 0x8d, 0x47, 0xf5, 0xec, 0x84, 0x25, 0x59, 0x7a, 0xfc, 0x9b, 0x86, 0x74, 0x07, 0xb8, 0x6f, 0x5b, 0x2c, 0xa3, 0xaf, 0xe7, 0xff, 0x0b, 0xdf, 0xce, 0x8f, 0x47, 0x75, 0xbd, 0x7d, 0x04, 0x25, 0x39, 0xd2, 0x19, 0xfc, 0x8b, 0x86, 0xca, 0xae, 0xb7, 0x03, 0xd1, 0x3d, 0xd5, 0x97, 0xe4, 0x34, 0x78, 0x74, 0x7a, 0xfd, 0xc3, 0xd8, 0x4c, 0xc1, 0x6f, 0xb8, 0xdc, 0x1f, 0x9a, 0xe7, 0xd4, 0x35, 0x2b, 0xa7, 0xb7, 0xc8, 0x84, 0x1f, 0xf8, 0x01, 0x2a, 0x71, 0xaf, 0x27, 0x9e, 0x54, 0xb6, 0xe7, 0x32, 0xbd, 0x20, 0xdd, 0xaa, 0xcd, 0xaa, 0xb5, 0xfb, 0xb1, 0x9a, 0x79, 0x56, 0x01, 0x97, 0x12, 0x19, 0x23, 0x69, 0x1c, 0x6c, 0xa5, 0x66, 0x6a, 0x51, 0x1e, 0xc4, 0x27, 0x6f, 0x3d, 0x06, 0xda, 0x0a, 0xc0, 0x2c, 0x8b, 0x52, 0x8c, 0x56, 0xa9, 0xd1, 0x0a, 0x68, 0x85, 0x81, 0x15, 0xf8, 0x36, 0x1f, 0x8a, 0x8c, 0xc3, 0x21, 0xd7, 0x91, 0xe4, 0x7a, 0x6f, 0x96, 0xff, 0x5b, 0xde, 0x4e, 0x67, 0x52, 0xdb, 0x3c, 0x3b, 0x1e, 0xd5, 0x57, 0xa6, 0x84, 0x64, 0x1a, 0x13, 0x37, 0x50, 0xde, 0xa1, 0x87, 0xad, 0x2e, 0xe8, 0x25, 0x59, 0xf3, 0xb2, 0x79, 0xb6, 0xa5, 0x84, 0xa8, 0x1d, 0xec, 0xa2, 0x8a, 0xed, 0xd0, 0x2e, 0x6c, 0x05, 0xbd, 0x5e, 0xd8, 0x69, 0x98, 0x5e, 0x96, 0xb9, 0x9c, 0xf9, 0x0a, 0xbc, 0xeb, 0x59, 0xb4, 0x17, 0xbe, 0xbe, 0x08, 0xec, 0x82, 0x2f, 0x4a, 0xcc, 0xd4, 0x55, 0x56, 0x2b, 0xb7, 0xa7, 0x90, 0x48, 0x06, 0x1b, 0xdf, 0x41, 0x98, 0x81, 0x3f, 0xb0, 0x2d, 0x68, 0x59, 0x96, 0x17, 0xb8, 0x7c, 0x93, 0x3a, 0xa0, 0x9f, 0x91, 0xfe, 0xad, 0x2a, 0x1c, 0xdc, 0xc9, 0x68, 0x90, 0x19, 0x56, 0xf8, 0x33, 0x54, 0xed, 0xfb, 0xb6, 0x27, 0x43, 0xee, 0x51, 0xc6, 0x24, 0xd4, 0xb2, 0x84, 0x7a, 0x47, 0x41, 0x55, 0xb7, 0xa6, 0x15, 0x48, 0xd6, 0x06, 0xaf, 0xa1, 0x42, 0x24, 0xd4, 0x57, 
0x64, 0x23, 0x95, 0x27, 0x17, 0xd9, 0x92, 0x78, 0x17, 0xdf, 0x42, 0x05, 0x1a, 0xb5, 0xc4, 0x8a, 0x3c, 0xb2, 0xf3, 0xb3, 0xd2, 0x14, 0xb5, 0xc0, 0x10, 0x27, 0x6e, 0x8f, 0xb1, 0x2d, 0xbe, 0x88, 0x96, 0x1c, 0x7a, 0xd8, 0x66, 0x5d, 0xa6, 0x57, 0x2f, 0x68, 0x6b, 0x39, 0xb3, 0x34, 0x1e, 0xd5, 0x97, 0xda, 0xa1, 0x88, 0x44, 0x7b, 0xc2, 0x31, 0x87, 0x1e, 0x9a, 0x43, 0x0e, 0x4c, 0xc7, 0x32, 0xb0, 0xf0, 0x4a, 0x29, 0x19, 0x89, 0x77, 0x57, 0x6f, 0xa0, 0x6a, 0xa6, 0x8e, 0x70, 0x05, 0x2d, 0x1c, 0xc0, 0x30, 0x1c, 0x85, 0x44, 0x7c, 0xe2, 0x73, 0x68, 0x71, 0x40, 0x7b, 0x01, 0x84, 0x53, 0x80, 0x84, 0x8b, 0xeb, 0xf3, 0xd7, 0xb4, 0xc6, 0xaf, 0xf3, 0xe8, 0xec, 0x8c, 0xee, 0x8a, 0x3f, 0x45, 0x15, 0xc6, 0x3d, 0x9f, 0x76, 0x21, 0xc9, 0x71, 0x38, 0x5b, 0xcf, 0x89, 0x23, 0xef, 0x4c, 0xed, 0x91, 0x8c, 0x36, 0x7e, 0x8c, 0x50, 0x38, 0xc9, 0xda, 0xde, 0x8e, 0x22, 0x36, 0x6f, 0x88, 0x57, 0x7c, 0x2b, 0x96, 0xbe, 0x1e, 0xd5, 0x2f, 0x67, 0xff, 0x08, 0x26, 0xdd, 0x9e, 0x3f, 0xf4, 0x7a, 0x81, 0x03, 0x89, 0x01, 0x49, 0x41, 0xe2, 0x6f, 0x11, 0x1a, 0xc8, 0xfd, 0x8e, 0xfd, 0x3d, 0xa8, 0x07, 0xf9, 0x1b, 0x5f, 0xc2, 0x46, 0xf4, 0x1f, 0xc5, 0xf8, 0x32, 0x10, 0x13, 0x8b, 0x0f, 0xcd, 0x65, 0xe1, 0xd0, 0xc3, 0x18, 0x85, 0xa4, 0x10, 0x4d, 0xe3, 0xe9, 0xcb, 0xda, 0xdc, 0xb3, 0x97, 0xb5, 0xb9, 0xe7, 0x2f, 0x6b, 0x73, 0x3f, 0x8c, 0x6b, 0xda, 0xd3, 0x71, 0x4d, 0x7b, 0x36, 0xae, 0x69, 0xcf, 0xc7, 0x35, 0xed, 0xc5, 0xb8, 0xa6, 0xfd, 0xf4, 0xaa, 0x36, 0xf7, 0x75, 0x21, 0xea, 0x6f, 0x7f, 0x07, 0x00, 0x00, 0xff, 0xff, 0xd9, 0xdd, 0x9f, 0xc3, 0xcc, 0x0f, 0x00, 0x00, } func (m *BusConfig) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *BusConfig) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *BusConfig) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if m.NATS != nil { { size, err := m.NATS.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } func (m *ContainerTemplate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *ContainerTemplate) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *ContainerTemplate) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l { size, err := m.Resources.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa return len(dAtA) - i, nil } func (m *EventBus) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *EventBus) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *EventBus) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l { size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x1a { size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, 
uint64(size)) } i-- dAtA[i] = 0x12 { size, err := m.ObjectMeta.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa return len(dAtA) - i, nil } func (m *EventBusList) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *EventBusList) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *EventBusList) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if len(m.Items) > 0 { for iNdEx := len(m.Items) - 1; iNdEx >= 0; iNdEx-- { { size, err := m.Items[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x12 } } { size, err := m.ListMeta.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa return len(dAtA) - i, nil } func (m *EventBusSpec) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *EventBusSpec) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *EventBusSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if m.NATS != nil { { size, err := m.NATS.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } func (m *EventBusStatus) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *EventBusStatus) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *EventBusStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l { size, err := m.Config.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x12 { size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa return len(dAtA) - i, nil } func (m *NATSBus) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *NATSBus) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *NATSBus) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if m.Exotic != nil { { size, err := m.Exotic.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x12 } if m.Native != nil { { size, err := m.Native.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } func (m *NATSConfig) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if 
err != nil { return nil, err } return dAtA[:n], nil } func (m *NATSConfig) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *NATSConfig) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if m.AccessSecret != nil { { size, err := m.AccessSecret.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x22 } if m.Auth != nil { i -= len(*m.Auth) copy(dAtA[i:], *m.Auth) i = encodeVarintGenerated(dAtA, i, uint64(len(*m.Auth))) i-- dAtA[i] = 0x1a } if m.ClusterID != nil { i -= len(*m.ClusterID) copy(dAtA[i:], *m.ClusterID) i = encodeVarintGenerated(dAtA, i, uint64(len(*m.ClusterID))) i-- dAtA[i] = 0x12 } i -= len(m.URL) copy(dAtA[i:], m.URL) i = encodeVarintGenerated(dAtA, i, uint64(len(m.URL))) i-- dAtA[i] = 0xa return len(dAtA) - i, nil } func (m *NativeStrategy) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *NativeStrategy) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *NativeStrategy) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if m.MaxBytes != nil { i -= len(*m.MaxBytes) copy(dAtA[i:], *m.MaxBytes) i = encodeVarintGenerated(dAtA, i, uint64(len(*m.MaxBytes))) i-- dAtA[i] = 0x1 i-- dAtA[i] = 0x92 } if m.MaxMsgs != nil { i = encodeVarintGenerated(dAtA, i, uint64(*m.MaxMsgs)) i-- dAtA[i] = 0x1 i-- dAtA[i] = 0x88 } if m.Affinity != nil { { size, err := m.Affinity.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x1 i-- dAtA[i] = 0x82 } if m.Priority != nil { i = encodeVarintGenerated(dAtA, i, uint64(*m.Priority)) i-- dAtA[i] = 0x78 } i -= len(m.PriorityClassName) copy(dAtA[i:], m.PriorityClassName) i = encodeVarintGenerated(dAtA, i, uint64(len(m.PriorityClassName))) i-- dAtA[i] = 0x72 i -= len(m.ServiceAccountName) copy(dAtA[i:], m.ServiceAccountName) i = encodeVarintGenerated(dAtA, i, uint64(len(m.ServiceAccountName))) i-- dAtA[i] = 0x6a if len(m.ImagePullSecrets) > 0 { for iNdEx := len(m.ImagePullSecrets) - 1; iNdEx >= 0; iNdEx-- { { size, err := m.ImagePullSecrets[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x62 } } if m.MaxAge != nil { i -= len(*m.MaxAge) copy(dAtA[i:], *m.MaxAge) i = encodeVarintGenerated(dAtA, i, uint64(len(*m.MaxAge))) i-- dAtA[i] = 0x5a } if m.SecurityContext != nil { { size, err := m.SecurityContext.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x52 } if m.Metadata != nil { { size, err := m.Metadata.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x4a } if len(m.Tolerations) > 0 { for iNdEx := len(m.Tolerations) - 1; iNdEx >= 0; iNdEx-- { { size, err := m.Tolerations[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x42 } } if len(m.NodeSelector) > 0 { keysForNodeSelector := make([]string, 0, len(m.NodeSelector)) for k := range m.NodeSelector { keysForNodeSelector = append(keysForNodeSelector, 
string(k)) } github_com_gogo_protobuf_sortkeys.Strings(keysForNodeSelector) for iNdEx := len(keysForNodeSelector) - 1; iNdEx >= 0; iNdEx-- { v := m.NodeSelector[string(keysForNodeSelector[iNdEx])] baseI := i i -= len(v) copy(dAtA[i:], v) i = encodeVarintGenerated(dAtA, i, uint64(len(v))) i-- dAtA[i] = 0x12 i -= len(keysForNodeSelector[iNdEx]) copy(dAtA[i:], keysForNodeSelector[iNdEx]) i = encodeVarintGenerated(dAtA, i, uint64(len(keysForNodeSelector[iNdEx]))) i-- dAtA[i] = 0xa i = encodeVarintGenerated(dAtA, i, uint64(baseI-i)) i-- dAtA[i] = 0x3a } } if m.MetricsContainerTemplate != nil { { size, err := m.MetricsContainerTemplate.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x32 } if m.ContainerTemplate != nil { { size, err := m.ContainerTemplate.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x2a } if m.Persistence != nil { { size, err := m.Persistence.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x22 } i-- if m.DeprecatedAntiAffinity { dAtA[i] = 1 } else { dAtA[i] = 0 } i-- dAtA[i] = 0x18 if m.Auth != nil { i -= len(*m.Auth) copy(dAtA[i:], *m.Auth) i = encodeVarintGenerated(dAtA, i, uint64(len(*m.Auth))) i-- dAtA[i] = 0x12 } i = encodeVarintGenerated(dAtA, i, uint64(m.Replicas)) i-- dAtA[i] = 0x8 return len(dAtA) - i, nil } func (m *PersistenceStrategy) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *PersistenceStrategy) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *PersistenceStrategy) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if m.VolumeSize != nil { { size, err := m.VolumeSize.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x1a } if m.AccessMode != nil { i -= len(*m.AccessMode) copy(dAtA[i:], *m.AccessMode) i = encodeVarintGenerated(dAtA, i, uint64(len(*m.AccessMode))) i-- dAtA[i] = 0x12 } if m.StorageClassName != nil { i -= len(*m.StorageClassName) copy(dAtA[i:], *m.StorageClassName) i = encodeVarintGenerated(dAtA, i, uint64(len(*m.StorageClassName))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int { offset -= sovGenerated(v) base := offset for v >= 1<<7 { dAtA[offset] = uint8(v&0x7f | 0x80) v >>= 7 offset++ } dAtA[offset] = uint8(v) return base } func (m *BusConfig) Size() (n int) { if m == nil { return 0 } var l int _ = l if m.NATS != nil { l = m.NATS.Size() n += 1 + l + sovGenerated(uint64(l)) } return n } func (m *ContainerTemplate) Size() (n int) { if m == nil { return 0 } var l int _ = l l = m.Resources.Size() n += 1 + l + sovGenerated(uint64(l)) return n } func (m *EventBus) Size() (n int) { if m == nil { return 0 } var l int _ = l l = m.ObjectMeta.Size() n += 1 + l + sovGenerated(uint64(l)) l = m.Spec.Size() n += 1 + l + sovGenerated(uint64(l)) l = m.Status.Size() n += 1 + l + sovGenerated(uint64(l)) return n } func (m *EventBusList) Size() (n int) { if m == nil { return 0 } var l int _ = l l = m.ListMeta.Size() n += 1 + l + sovGenerated(uint64(l)) if len(m.Items) > 0 { for _, e := range m.Items { l = 
e.Size() n += 1 + l + sovGenerated(uint64(l)) } } return n } func (m *EventBusSpec) Size() (n int) { if m == nil { return 0 } var l int _ = l if m.NATS != nil { l = m.NATS.Size() n += 1 + l + sovGenerated(uint64(l)) } return n } func (m *EventBusStatus) Size() (n int) { if m == nil { return 0 } var l int _ = l l = m.Status.Size() n += 1 + l + sovGenerated(uint64(l)) l = m.Config.Size() n += 1 + l + sovGenerated(uint64(l)) return n } func (m *NATSBus) Size() (n int) { if m == nil { return 0 } var l int _ = l if m.Native != nil { l = m.Native.Size() n += 1 + l + sovGenerated(uint64(l)) } if m.Exotic != nil { l = m.Exotic.Size() n += 1 + l + sovGenerated(uint64(l)) } return n } func (m *NATSConfig) Size() (n int) { if m == nil { return 0 } var l int _ = l l = len(m.URL) n += 1 + l + sovGenerated(uint64(l)) if m.ClusterID != nil { l = len(*m.ClusterID) n += 1 + l + sovGenerated(uint64(l)) } if m.Auth != nil { l = len(*m.Auth) n += 1 + l + sovGenerated(uint64(l)) } if m.AccessSecret != nil { l = m.AccessSecret.Size() n += 1 + l + sovGenerated(uint64(l)) } return n } func (m *NativeStrategy) Size() (n int) { if m == nil { return 0 } var l int _ = l n += 1 + sovGenerated(uint64(m.Replicas)) if m.Auth != nil { l = len(*m.Auth) n += 1 + l + sovGenerated(uint64(l)) } n += 2 if m.Persistence != nil { l = m.Persistence.Size() n += 1 + l + sovGenerated(uint64(l)) } if m.ContainerTemplate != nil { l = m.ContainerTemplate.Size() n += 1 + l + sovGenerated(uint64(l)) } if m.MetricsContainerTemplate != nil { l = m.MetricsContainerTemplate.Size() n += 1 + l + sovGenerated(uint64(l)) } if len(m.NodeSelector) > 0 { for k, v := range m.NodeSelector { _ = k _ = v mapEntrySize := 1 + len(k) + sovGenerated(uint64(len(k))) + 1 + len(v) + sovGenerated(uint64(len(v))) n += mapEntrySize + 1 + sovGenerated(uint64(mapEntrySize)) } } if len(m.Tolerations) > 0 { for _, e := range m.Tolerations { l = e.Size() n += 1 + l + sovGenerated(uint64(l)) } } if m.Metadata != nil { l = m.Metadata.Size() n += 1 + l + sovGenerated(uint64(l)) } if m.SecurityContext != nil { l = m.SecurityContext.Size() n += 1 + l + sovGenerated(uint64(l)) } if m.MaxAge != nil { l = len(*m.MaxAge) n += 1 + l + sovGenerated(uint64(l)) } if len(m.ImagePullSecrets) > 0 { for _, e := range m.ImagePullSecrets { l = e.Size() n += 1 + l + sovGenerated(uint64(l)) } } l = len(m.ServiceAccountName) n += 1 + l + sovGenerated(uint64(l)) l = len(m.PriorityClassName) n += 1 + l + sovGenerated(uint64(l)) if m.Priority != nil { n += 1 + sovGenerated(uint64(*m.Priority)) } if m.Affinity != nil { l = m.Affinity.Size() n += 2 + l + sovGenerated(uint64(l)) } if m.MaxMsgs != nil { n += 2 + sovGenerated(uint64(*m.MaxMsgs)) } if m.MaxBytes != nil { l = len(*m.MaxBytes) n += 2 + l + sovGenerated(uint64(l)) } return n } func (m *PersistenceStrategy) Size() (n int) { if m == nil { return 0 } var l int _ = l if m.StorageClassName != nil { l = len(*m.StorageClassName) n += 1 + l + sovGenerated(uint64(l)) } if m.AccessMode != nil { l = len(*m.AccessMode) n += 1 + l + sovGenerated(uint64(l)) } if m.VolumeSize != nil { l = m.VolumeSize.Size() n += 1 + l + sovGenerated(uint64(l)) } return n } func sovGenerated(x uint64) (n int) { return (math_bits.Len64(x|1) + 6) / 7 } func sozGenerated(x uint64) (n int) { return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } func (this *BusConfig) String() string { if this == nil { return "nil" } s := strings.Join([]string{`&BusConfig{`, `NATS:` + strings.Replace(this.NATS.String(), "NATSConfig", "NATSConfig", 1) + `,`, `}`, }, "") 
return s } func (this *ContainerTemplate) String() string { if this == nil { return "nil" } s := strings.Join([]string{`&ContainerTemplate{`, `Resources:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Resources), "ResourceRequirements", "v1.ResourceRequirements", 1), `&`, ``, 1) + `,`, `}`, }, "") return s } func (this *EventBus) String() string { if this == nil { return "nil" } s := strings.Join([]string{`&EventBus{`, `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v11.ObjectMeta", 1), `&`, ``, 1) + `,`, `Spec:` + strings.Replace(strings.Replace(this.Spec.String(), "EventBusSpec", "EventBusSpec", 1), `&`, ``, 1) + `,`, `Status:` + strings.Replace(strings.Replace(this.Status.String(), "EventBusStatus", "EventBusStatus", 1), `&`, ``, 1) + `,`, `}`, }, "") return s } func (this *EventBusList) String() string { if this == nil { return "nil" } repeatedStringForItems := "[]EventBus{" for _, f := range this.Items { repeatedStringForItems += strings.Replace(strings.Replace(f.String(), "EventBus", "EventBus", 1), `&`, ``, 1) + "," } repeatedStringForItems += "}" s := strings.Join([]string{`&EventBusList{`, `ListMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ListMeta), "ListMeta", "v11.ListMeta", 1), `&`, ``, 1) + `,`, `Items:` + repeatedStringForItems + `,`, `}`, }, "") return s } func (this *EventBusSpec) String() string { if this == nil { return "nil" } s := strings.Join([]string{`&EventBusSpec{`, `NATS:` + strings.Replace(this.NATS.String(), "NATSBus", "NATSBus", 1) + `,`, `}`, }, "") return s } func (this *EventBusStatus) String() string { if this == nil { return "nil" } s := strings.Join([]string{`&EventBusStatus{`, `Status:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Status), "Status", "common.Status", 1), `&`, ``, 1) + `,`, `Config:` + strings.Replace(strings.Replace(this.Config.String(), "BusConfig", "BusConfig", 1), `&`, ``, 1) + `,`, `}`, }, "") return s } func (this *NATSBus) String() string { if this == nil { return "nil" } s := strings.Join([]string{`&NATSBus{`, `Native:` + strings.Replace(this.Native.String(), "NativeStrategy", "NativeStrategy", 1) + `,`, `Exotic:` + strings.Replace(this.Exotic.String(), "NATSConfig", "NATSConfig", 1) + `,`, `}`, }, "") return s } func (this *NATSConfig) String() string { if this == nil { return "nil" } s := strings.Join([]string{`&NATSConfig{`, `URL:` + fmt.Sprintf("%v", this.URL) + `,`, `ClusterID:` + valueToStringGenerated(this.ClusterID) + `,`, `Auth:` + valueToStringGenerated(this.Auth) + `,`, `AccessSecret:` + strings.Replace(fmt.Sprintf("%v", this.AccessSecret), "SecretKeySelector", "v1.SecretKeySelector", 1) + `,`, `}`, }, "") return s } func (this *NativeStrategy) String() string { if this == nil { return "nil" } repeatedStringForTolerations := "[]Toleration{" for _, f := range this.Tolerations { repeatedStringForTolerations += fmt.Sprintf("%v", f) + "," } repeatedStringForTolerations += "}" repeatedStringForImagePullSecrets := "[]LocalObjectReference{" for _, f := range this.ImagePullSecrets { repeatedStringForImagePullSecrets += fmt.Sprintf("%v", f) + "," } repeatedStringForImagePullSecrets += "}" keysForNodeSelector := make([]string, 0, len(this.NodeSelector)) for k := range this.NodeSelector { keysForNodeSelector = append(keysForNodeSelector, k) } github_com_gogo_protobuf_sortkeys.Strings(keysForNodeSelector) mapStringForNodeSelector := "map[string]string{" for _, k := range keysForNodeSelector { mapStringForNodeSelector += fmt.Sprintf("%v: %v,", 
k, this.NodeSelector[k]) } mapStringForNodeSelector += "}" s := strings.Join([]string{`&NativeStrategy{`, `Replicas:` + fmt.Sprintf("%v", this.Replicas) + `,`, `Auth:` + valueToStringGenerated(this.Auth) + `,`, `DeprecatedAntiAffinity:` + fmt.Sprintf("%v", this.DeprecatedAntiAffinity) + `,`, `Persistence:` + strings.Replace(this.Persistence.String(), "PersistenceStrategy", "PersistenceStrategy", 1) + `,`, `ContainerTemplate:` + strings.Replace(this.ContainerTemplate.String(), "ContainerTemplate", "ContainerTemplate", 1) + `,`, `MetricsContainerTemplate:` + strings.Replace(this.MetricsContainerTemplate.String(), "ContainerTemplate", "ContainerTemplate", 1) + `,`, `NodeSelector:` + mapStringForNodeSelector + `,`, `Tolerations:` + repeatedStringForTolerations + `,`, `Metadata:` + strings.Replace(fmt.Sprintf("%v", this.Metadata), "Metadata", "common.Metadata", 1) + `,`, `SecurityContext:` + strings.Replace(fmt.Sprintf("%v", this.SecurityContext), "PodSecurityContext", "v1.PodSecurityContext", 1) + `,`, `MaxAge:` + valueToStringGenerated(this.MaxAge) + `,`, `ImagePullSecrets:` + repeatedStringForImagePullSecrets + `,`, `ServiceAccountName:` + fmt.Sprintf("%v", this.ServiceAccountName) + `,`, `PriorityClassName:` + fmt.Sprintf("%v", this.PriorityClassName) + `,`, `Priority:` + valueToStringGenerated(this.Priority) + `,`, `Affinity:` + strings.Replace(fmt.Sprintf("%v", this.Affinity), "Affinity", "v1.Affinity", 1) + `,`, `MaxMsgs:` + valueToStringGenerated(this.MaxMsgs) + `,`, `MaxBytes:` + valueToStringGenerated(this.MaxBytes) + `,`, `}`, }, "") return s } func (this *PersistenceStrategy) String() string { if this == nil { return "nil" } s := strings.Join([]string{`&PersistenceStrategy{`, `StorageClassName:` + valueToStringGenerated(this.StorageClassName) + `,`, `AccessMode:` + valueToStringGenerated(this.AccessMode) + `,`, `VolumeSize:` + strings.Replace(fmt.Sprintf("%v", this.VolumeSize), "Quantity", "resource.Quantity", 1) + `,`, `}`, }, "") return s } func valueToStringGenerated(v interface{}) string { rv := reflect.ValueOf(v) if rv.IsNil() { return "nil" } pv := reflect.Indirect(rv).Interface() return fmt.Sprintf("*%v", pv) } func (m *BusConfig) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: BusConfig: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: BusConfig: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field NATS", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.NATS == nil { m.NATS = &NATSConfig{} } if err := m.NATS.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || 
(iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *ContainerTemplate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: ContainerTemplate: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: ContainerTemplate: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Resources.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *EventBus) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: EventBus: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: EventBus: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return 
err } iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *EventBusList) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: EventBusList: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: EventBusList: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } m.Items = append(m.Items, EventBus{}) if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *EventBusSpec) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 
4 { return fmt.Errorf("proto: EventBusSpec: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: EventBusSpec: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field NATS", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.NATS == nil { m.NATS = &NATSBus{} } if err := m.NATS.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *EventBusStatus) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: EventBusStatus: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: EventBusStatus: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Config", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Config.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *NATSBus) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return 
io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: NATSBus: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: NATSBus: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Native", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.Native == nil { m.Native = &NativeStrategy{} } if err := m.Native.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Exotic", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.Exotic == nil { m.Exotic = &NATSConfig{} } if err := m.Exotic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *NATSConfig) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: NATSConfig: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: NATSConfig: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field URL", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } m.URL = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ClusterID", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift 
if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } s := string(dAtA[iNdEx:postIndex]) m.ClusterID = &s iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Auth", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } s := AuthStrategy(dAtA[iNdEx:postIndex]) m.Auth = &s iNdEx = postIndex case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field AccessSecret", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.AccessSecret == nil { m.AccessSecret = &v1.SecretKeySelector{} } if err := m.AccessSecret.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *NativeStrategy) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: NativeStrategy: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: NativeStrategy: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field Replicas", wireType) } m.Replicas = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ m.Replicas |= int32(b&0x7F) << shift if b < 0x80 { break } } case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Auth", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } s := AuthStrategy(dAtA[iNdEx:postIndex]) m.Auth = &s iNdEx = postIndex 
case 3: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field DeprecatedAntiAffinity", wireType) } var v int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ v |= int(b&0x7F) << shift if b < 0x80 { break } } m.DeprecatedAntiAffinity = bool(v != 0) case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Persistence", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.Persistence == nil { m.Persistence = &PersistenceStrategy{} } if err := m.Persistence.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ContainerTemplate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.ContainerTemplate == nil { m.ContainerTemplate = &ContainerTemplate{} } if err := m.ContainerTemplate.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 6: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field MetricsContainerTemplate", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.MetricsContainerTemplate == nil { m.MetricsContainerTemplate = &ContainerTemplate{} } if err := m.MetricsContainerTemplate.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 7: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field NodeSelector", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.NodeSelector == nil { m.NodeSelector = make(map[string]string) } var mapkey string var mapvalue string for iNdEx < postIndex { entryPreIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) if fieldNum == 1 { var stringLenmapkey uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLenmapkey |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLenmapkey := int(stringLenmapkey) if intStringLenmapkey < 0 { return ErrInvalidLengthGenerated } postStringIndexmapkey := iNdEx + intStringLenmapkey if postStringIndexmapkey < 0 { return ErrInvalidLengthGenerated } if postStringIndexmapkey > l { return io.ErrUnexpectedEOF } mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) iNdEx = postStringIndexmapkey } else if fieldNum == 2 { var stringLenmapvalue uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLenmapvalue |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLenmapvalue := int(stringLenmapvalue) if intStringLenmapvalue < 0 { return ErrInvalidLengthGenerated } postStringIndexmapvalue := iNdEx + intStringLenmapvalue if postStringIndexmapvalue < 0 { return ErrInvalidLengthGenerated } if postStringIndexmapvalue > l { return io.ErrUnexpectedEOF } mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) iNdEx = postStringIndexmapvalue } else { iNdEx = entryPreIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > postIndex { return io.ErrUnexpectedEOF } iNdEx += skippy } } m.NodeSelector[mapkey] = mapvalue iNdEx = postIndex case 8: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Tolerations", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } m.Tolerations = append(m.Tolerations, v1.Toleration{}) if err := m.Tolerations[len(m.Tolerations)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 9: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.Metadata == nil { m.Metadata = &common.Metadata{} } if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 10: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field SecurityContext", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.SecurityContext == nil { m.SecurityContext = &v1.PodSecurityContext{} } if err := m.SecurityContext.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return 
err } iNdEx = postIndex case 11: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field MaxAge", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } s := string(dAtA[iNdEx:postIndex]) m.MaxAge = &s iNdEx = postIndex case 12: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ImagePullSecrets", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } m.ImagePullSecrets = append(m.ImagePullSecrets, v1.LocalObjectReference{}) if err := m.ImagePullSecrets[len(m.ImagePullSecrets)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 13: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field ServiceAccountName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } m.ServiceAccountName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 14: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field PriorityClassName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } m.PriorityClassName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 15: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field Priority", wireType) } var v int32 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ v |= int32(b&0x7F) << shift if b < 0x80 { break } } m.Priority = &v case 16: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Affinity", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if 
m.Affinity == nil { m.Affinity = &v1.Affinity{} } if err := m.Affinity.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 17: if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field MaxMsgs", wireType) } var v uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ v |= uint64(b&0x7F) << shift if b < 0x80 { break } } m.MaxMsgs = &v case 18: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field MaxBytes", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } s := string(dAtA[iNdEx:postIndex]) m.MaxBytes = &s iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *PersistenceStrategy) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: PersistenceStrategy: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: PersistenceStrategy: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field StorageClassName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } s := string(dAtA[iNdEx:postIndex]) m.StorageClassName = &s iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field AccessMode", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } s := k8s_io_api_core_v1.PersistentVolumeAccessMode(dAtA[iNdEx:postIndex]) m.AccessMode = &s iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field VolumeSize", wireType) } var msglen int for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } if m.VolumeSize == nil { m.VolumeSize = &resource.Quantity{} } if err := m.VolumeSize.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func skipGenerated(dAtA []byte) (n int, err error) { l := len(dAtA) iNdEx := 0 depth := 0 for iNdEx < l { var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } wireType := int(wire & 0x7) switch wireType { case 0: for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } iNdEx++ if dAtA[iNdEx-1] < 0x80 { break } } case 1: iNdEx += 8 case 2: var length int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ length |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if length < 0 { return 0, ErrInvalidLengthGenerated } iNdEx += length case 3: depth++ case 4: if depth == 0 { return 0, ErrUnexpectedEndOfGroupGenerated } depth-- case 5: iNdEx += 4 default: return 0, fmt.Errorf("proto: illegal wireType %d", wireType) } if iNdEx < 0 { return 0, ErrInvalidLengthGenerated } if depth == 0 { return iNdEx, nil } } return 0, io.ErrUnexpectedEOF } var ( ErrInvalidLengthGenerated = fmt.Errorf("proto: negative length found during unmarshaling") ErrIntOverflowGenerated = fmt.Errorf("proto: integer overflow") ErrUnexpectedEndOfGroupGenerated = fmt.Errorf("proto: unexpected end of group") )
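// Example (not part of the generated file): the Marshal/Unmarshal pairs
// emitted above are the standard gogo/protobuf entry points for these types,
// so a wire-format round trip is just a Marshal followed by an Unmarshal on a
// fresh value. roundTripEventBus is a hypothetical helper shown only to
// illustrate how the generated methods are meant to be used; it is not
// defined in this package.
//
//	func roundTripEventBus(in *EventBus) (*EventBus, error) {
//		raw, err := in.Marshal() // produces the protobuf wire encoding
//		if err != nil {
//			return nil, err
//		}
//		out := &EventBus{}
//		if err := out.Unmarshal(raw); err != nil { // parses it back field by field
//			return nil, err
//		}
//		return out, nil
//	}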
// Init initializes the application configuration: it resets the global viper
// instance, binds environment variables and command-line flags, and, when a
// config file is supplied via an Option, reads that file as well.
func Init(opts ...Option) error {
	viper.Reset()
	vip := viper.GetViper()

	// Environment variables are matched automatically; dots in config keys
	// map to underscores in the variable names (e.g. "a.b" -> "A_B").
	vip.AutomaticEnv()
	vip.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))

	// Merge standard-library flags into pflag, parse, and bind the result
	// into viper so flag values are readable as config keys.
	pflag.CommandLine.AddGoFlagSet(flag.CommandLine)
	pflag.Parse()
	if err := vip.BindPFlags(pflag.CommandLine); err != nil {
		return fmt.Errorf("bind flags: %w", err)
	}

	option := applyOptions(opts)
	if option.file != "" {
		vip.SetConfigFile(option.file)
		if option.fs != nil {
			// Read the file through the supplied fs.FS instead of the OS filesystem.
			vip.SetFs(afero.FromIOFS{FS: option.fs})
		}
		if err := vip.ReadInConfig(); err != nil {
			return fmt.Errorf("read config file %s: %w", option.file, err)
		}
	}
	return nil
}
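// Usage sketch (illustrative, not part of this package): after Init has run,
// values are read back through the global viper instance. The option
// constructors that populate option.file / option.fs are defined elsewhere in
// this package and are not shown here, so the call below passes no options
// and relies on flags and environment variables only; "config" is an assumed
// package name.
//
//	if err := config.Init(); err != nil {
//		log.Fatalf("init config: %v", err)
//	}
//	// Thanks to AutomaticEnv and the "." -> "_" key replacer, the key
//	// "server.addr" can also be supplied via the SERVER_ADDR environment variable.
//	addr := viper.GetString("server.addr")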
/** \brief RMII watchdog thread.
 *
 * Works around an RMII synchronisation issue: once good unicast frames are
 * being received (MMCRGUFCR > 0) the link is considered healthy and the
 * thread sleeps forever; while CRC errors accumulate (MMCRFCECR > 10) the
 * MII/RMII selection bit is toggled to re-synchronise the interface and the
 * MMC counters are reset.
 *
 * \param[in] pvParameters pointer to the interface data
 */
void STM32_EMAC::rmii_watchdog_thread_function(void *pvParameters)
{
    struct STM32_EMAC *stm32_enet = static_cast<STM32_EMAC *>(pvParameters);

    while (1) {
        if (stm32_enet->EthHandle.Instance->MMCRGUFCR > 0) {
            // Good unicast frames are being received: nothing left to do.
            while (1) {
                osDelay(0xFFFFFFFF);
            }
        } else if (stm32_enet->EthHandle.Instance->MMCRFCECR > 10) {
            // CRC errors keep arriving: toggle MII/RMII selection to
            // re-synchronise and reset the MMC counters, then check again.
            SYSCFG->PMC &= ~SYSCFG_PMC_MII_RMII_SEL;
            SYSCFG->PMC |= SYSCFG_PMC_MII_RMII_SEL;
            stm32_enet->EthHandle.Instance->MMCCR |= ETH_MMCCR_CR;
        } else {
            osDelay(100);
        }
    }
}
//
// Created by <NAME> on 2022/02/04.
//

#include "Base/Input.h"

Input::Input(float last_xpos, float last_ypos, float keyboard_speed, float mouse_speed)
    : last_xpos_(last_xpos),
      last_ypos_(last_ypos),
      keyboard_speed_(keyboard_speed),
      mouse_speed_(mouse_speed) {}

Input::~Input() {}

// Translate the camera along its front/right axes based on WASD key state,
// scaled by the keyboard speed and the frame's delta time.
void Input::processKeyboard(const Uint8 *state, const float &delta_time, Camera &cam) {
    const float velocity = keyboard_speed_ * delta_time;
    if (state[SDL_SCANCODE_W]) cam.setPosition(cam.getPosition() + cam.getFront() * velocity);
    if (state[SDL_SCANCODE_S]) cam.setPosition(cam.getPosition() - cam.getFront() * velocity);
    if (state[SDL_SCANCODE_A]) cam.setPosition(cam.getPosition() - cam.getRight() * velocity);
    if (state[SDL_SCANCODE_D]) cam.setPosition(cam.getPosition() + cam.getRight() * velocity);
}

// Convert mouse movement into yaw/pitch deltas, clamping pitch so the camera
// cannot flip over the vertical axis.
void Input::processMouse(const int &xpos, const int &ypos, Camera &cam) {
    xoffset_ = xpos - last_xpos_;
    yoffset_ = last_ypos_ - ypos;  // reversed: window y grows downwards
    last_xpos_ = xpos;
    last_ypos_ = ypos;

    xoffset_ *= mouse_speed_;
    yoffset_ *= mouse_speed_;

    cam.setYaw(cam.getYaw() + xoffset_);
    cam.setPitch(cam.getPitch() + yoffset_);

    if (cam.getPitch() > 89.0f) cam.setPitch(89.0f);
    if (cam.getPitch() < -89.0f) cam.setPitch(-89.0f);
}
// NewSolidityContract creates a templated solidityContract from a native contract description.
func NewSolidityContract(contract *native.Contract) *solidityContract {
	return &solidityContract{
		SolidityPragmaVersion: ">=0.4.24",
		Contract:              contract,
	}
}
<filename>src/cache/clock.rs use std::marker::PhantomData; use std::ptr::NonNull; struct ClockEntry<T> { value: T, next: NonNull<ClockEntry<T>>, } /// A simple clock, i.e. a circular singly linked list. pub struct Clock<T> { /// Contains the pointer to the current tail of the circular list. tail: Option<NonNull<ClockEntry<T>>>, } unsafe impl<T: Sync> Sync for Clock<T> {} unsafe impl<T: Send> Send for Clock<T> {} impl<T> Default for Clock<T> { fn default() -> Self { Clock { tail: None } } } impl<T> Clock<T> { /// Returns the number of elements in this clock. pub fn len(&self) -> usize { self.iter().count() } /// Removes the element at the head of the list. pub fn pop_front(&mut self) -> Option<T> { self.tail.map(|mut tail| { let head = unsafe { tail.as_ref() }.next; if head != tail { unsafe { tail.as_mut() }.next = unsafe { head.as_ref() }.next; } else { self.tail = None; } unsafe { Box::from_raw(head.as_ptr()) }.value }) } /// Peeks at the element at the head of the list. pub fn peek_front(&self) -> Option<&T> { let tail = self.tail?; let head = unsafe { tail.as_ref() }.next; let value = &unsafe { &*head.as_ptr() }.value; Some(value) } /// Increments the *hand* so that the head of the list becomes the tail. pub fn next(&mut self) { if let Some(ref mut tail) = self.tail { *tail = unsafe { tail.as_ref() }.next; }; } /// Inserts the given element at the end of the list. pub fn push_back(&mut self, value: T) { let entry = Box::new(ClockEntry { value, next: NonNull::dangling(), }); let entry = Box::into_raw(entry); let mut entry = NonNull::new(entry).unwrap(); if let Some(ref mut tail) = self.tail { unsafe { entry.as_mut() }.next = unsafe { tail.as_ref() }.next; unsafe { tail.as_mut() }.next = entry; } else { unsafe { entry.as_mut() }.next = entry; } self.tail = Some(entry); } /// Returns an iterator that iterates over the list. pub fn iter(&self) -> ClockIter<T> { if let Some(tail) = self.tail { ClockIter { current: Some(unsafe { tail.as_ref() }.next), last: tail, marker: PhantomData, } } else { ClockIter { current: None, last: NonNull::dangling(), marker: PhantomData, } } } /// Returns an iterator that iterates over the list that allows /// modification. pub fn iter_mut(&self) -> ClockIterMut<T> { if let Some(tail) = self.tail { ClockIterMut { current: Some(unsafe { tail.as_ref() }.next), last: tail, marker: PhantomData, } } else { ClockIterMut { current: None, last: NonNull::dangling(), marker: PhantomData, } } } /// Retains only the elements in the list for which `f` returns `true`. pub fn retain<F>(&mut self, mut f: F) where F: FnMut(&T) -> bool, { unsafe { let tail = match self.tail { None => return, Some(tail) => tail, }; let mut last = tail; let mut current = tail.as_ref().next; while tail != current { if f(&current.as_ref().value) { // Retain element last = current; current = current.as_ref().next; } else { // Remove element let next = current.as_ref().next; Box::from_raw(current.as_ptr()); last.as_mut().next = next; current = next; } } if !f(&current.as_ref().value) { // Remove tail element of list. 
if last == current { // List contains only one element self.tail = None; } else { self.tail = Some(last); last.as_mut().next = current.as_ref().next; } Box::from_raw(current.as_ptr()); } } } } impl<T> Drop for Clock<T> { fn drop(&mut self) { while self.pop_front().is_some() {} } } /// Immutable clock iterator pub struct ClockIter<'a, T: 'a> { current: Option<NonNull<ClockEntry<T>>>, last: NonNull<ClockEntry<T>>, marker: PhantomData<&'a T>, } impl<'a, T> Iterator for ClockIter<'a, T> { type Item = &'a T; fn next(&mut self) -> Option<Self::Item> { self.current.take().map(|current| { let entry = unsafe { &*current.as_ptr() }; self.current = if current == self.last { None } else { Some(entry.next) }; &entry.value }) } } /// Mutable clock iterator pub struct ClockIterMut<'a, T: 'a> { current: Option<NonNull<ClockEntry<T>>>, last: NonNull<ClockEntry<T>>, marker: PhantomData<&'a mut T>, } impl<'a, T> Iterator for ClockIterMut<'a, T> { type Item = &'a mut T; fn next(&mut self) -> Option<Self::Item> { self.current.take().map(|current| { let entry = unsafe { &mut *current.as_ptr() }; self.current = if current == self.last { None } else { Some(entry.next) }; &mut entry.value }) } } #[cfg(test)] mod tests { use super::Clock; use std::cell::Cell; use std::collections::VecDeque; #[test] fn push_pop_next_retain() { let mut clock = Clock::default(); let mut vec_deque = VecDeque::new(); assert!(clock.iter().eq(&vec_deque)); for i in 0..1000 { if i % 3 == 0 { assert_eq!(clock.pop_front(), vec_deque.pop_front()); } else if i % 5 == 0 { clock.next(); if let Some(entry) = vec_deque.pop_front() { vec_deque.push_back(entry); } } else { clock.push_back(i); vec_deque.push_back(i); } assert!(clock.iter().eq(&vec_deque)); } for i in &[4, 5, 1] { clock.retain(|e| e % i == 1); vec_deque.retain(|e| e % i == 1); assert!(clock.iter().eq(&vec_deque)); } } #[test] fn test_drop() { struct DropMe<'a>(&'a Cell<bool>); impl<'a> Drop for DropMe<'a> { fn drop(&mut self) { self.0.set(true); } } { let dropped = Cell::new(false); let mut clock = Clock::default(); clock.push_back(DropMe(&dropped)); assert!(!dropped.get()); clock.pop_front(); assert!(dropped.get()); } { let first = Cell::new(false); let second = Cell::new(false); let mut clock = Clock::default(); clock.push_back(DropMe(&first)); clock.push_back(DropMe(&second)); let mut cnt = 0; clock.retain(|_| { cnt += 1; cnt == 1 }); assert!(!first.get()); assert!(second.get()); clock.retain(|_| false); assert!(first.get()); } } }
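A small usage sketch against the Clock API defined above, assuming the type is in scope; the asserted values follow from the push/next/retain semantics shown:

fn main() {
    let mut clock: Clock<u32> = Clock::default();
    clock.push_back(1);
    clock.push_back(2);
    clock.push_back(3);

    // The head is the element that was inserted first.
    assert_eq!(clock.peek_front(), Some(&1));

    // Advancing the hand makes the old head the new tail,
    // so the following element becomes the head.
    clock.next();
    assert_eq!(clock.peek_front(), Some(&2));

    // Keep only odd values; 1 and 3 survive.
    clock.retain(|v| v % 2 == 1);
    assert_eq!(clock.len(), 2);
}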
// Commit: Substitute pragma once for ifndef guards

struct Sprite
{
	SDL_Rect Rect;
	iPoint Pivot;
};
# widgets.py import tkinter as tk from tkinter import ttk import sqlite3 from tkinter import messagebox from styles import make_formats_dict, NEUTRAL_COLOR from utes import create_tooltip from query_strings import select_current_person_id import dev_tools as dt from dev_tools import looky, seeline formats = make_formats_dict() # print('formats is', formats) # formats is {'bg': '#34615f', 'highlight_bg': '#4a8a87', 'head_bg': '#486a8c', 'fg': '#b9ddd9', 'output_font': ('courier', 16), 'input_font': ('tahoma', 16), 'heading1': ('courier', 32, 'bold'), 'heading2': ('courier', 24, 'bold'), 'heading3': ('courier', 17, 'bold'), 'heading4': ('courier', 13, 'bold'), 'status': ('tahoma', 13), 'boilerplate': ('tahoma', 10), 'show_font': ('tahoma', 16, 'italic'), 'titlebar_0': ('tahoma', 10, 'bold'), 'titlebar_1': ('tahoma', 14, 'bold'), 'titlebar_2': ('tahoma', 16, 'bold'), 'titlebar_3': ('tahoma', 20, 'bold'), 'titlebar_hilited_0': ('tahoma', 10), 'titlebar_hilited_1': ('tahoma', 14), 'titlebar_hilited_2': ('tahoma', 16), 'titlebar_hilited_3': ('tahoma', 20), 'unshow_font': ('tahoma', 14, 'italic')} class Framex(tk.Frame): def __init__(self, master, *args, **kwargs): tk.Frame.__init__(self, master, *args, **kwargs) pass def winfo_subclass(self): ''' Like built-in tkinter method w.winfo_class() except it gets subclass names. ''' subclass = type(self).__name__ return subclass class FrameStay(Framex): ''' Frame background color will not change when color scheme changes. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) pass class Frame(Framex): ''' Frame background color changes when color scheme changes. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg=formats['bg']) class FrameTest(Framex): ''' Frame background color can be altered for testing/visibility. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg='orange') class FrameTest2(Framex): ''' Frame background color can be altered for testing/visibility. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg='green') class FrameTitleBar(Framex): ''' Frame hilited by border and a different background color. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg=NEUTRAL_COLOR) class FrameHilited(Framex): ''' Frame hilited by groove border and background color. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg=formats['highlight_bg'], bd=3, relief='groove') class FrameHilited1(Framex): ''' Used for narrow resizing sash on left edge of attributes table. Could be used as vertical separator. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg=formats['highlight_bg'], bd=6, relief='ridge') class FrameHilited2(Framex): ''' Frame hilited by border and a different background color. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg=formats['head_bg']) class FrameHilited3(Framex): ''' Frame hilited by different background color but not border. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg=formats['highlight_bg']) class FrameHilited4(Framex): ''' Frame hilited by sunken border and background color. 
''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg=formats['highlight_bg'], bd=2, relief='sunken') class FrameHilited5(Framex): ''' Frame hilited by sunken border and background color. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.config(bg=formats['highlight_bg'], bd=1, relief='solid') class FrameHilited6(Framex): ''' Frame hilited by sunken border only. ''' def __init__(self, master, *args, **kwargs): Framex.__init__(self, master, *args, **kwargs) self.bd = 2 self.config(bg=formats['bg'], bd=self.bd, relief='sunken') class LabelFramex(tk.LabelFrame): def __init__(self, master, *args, **kwargs): tk.LabelFrame.__init__(self, master, *args, **kwargs) pass def winfo_subclass(self): ''' Like built-in tkinter method w.winfo_class() except it gets subclass names. ''' subclass = type(self).__name__ return subclass class LabelFrame(LabelFramex): def __init__(self, master, *args, **kwargs): LabelFramex.__init__(self, master, *args, **kwargs) self.config( bg=formats['bg'], fg=formats['fg'], font=formats['output_font']) class Labelx(tk.Label): def __init__(self, master, *args, **kwargs): tk.Label.__init__(self, master, *args, **kwargs) def winfo_subclass(self): ''' a method that works like built-in tkinter method w.winfo_class() except it gets subclass names of widget classes custom-made by inheritance ''' subclass = type(self).__name__ return subclass class Label(Labelx): ''' If this subclass is detected it will be reconfigured according to user preferences. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg=formats['bg'], fg=formats['fg'], font=formats['output_font']) class LabelTest(Labelx): ''' Color can be changed for testing/visibility. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg='purple', fg=formats['fg'], font=formats['output_font']) class LabelItalic(Labelx): ''' Uses input font and italics to display errors & such. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg=formats['bg'], fg=formats['fg'], font=formats['show_font']) class LabelHeader(Labelx): def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg=formats['highlight_bg'], fg=formats['fg'], font=formats['heading3'], bd=1, relief='raised') class LabelHilited(Labelx): ''' Like Label with a different background. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.formats = make_formats_dict() self.config( bg=self.formats['highlight_bg'], fg=self.formats['fg'], font=self.formats['output_font']) def highlight(self, evt): self.config(bg=self.formats['head_bg']) def unhighlight(self, evt): self.config(bg=self.formats['highlight_bg']) class LabelHilited2(Labelx): ''' Like Label with a different background. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg=formats['head_bg'], fg=formats['fg'], font=formats['output_font']) class LabelHilited3(Labelx): ''' Like Label with a different background and a monospaced sans-serif font. Because it's monospaced, this font is ideal for places such as dropdown menus where a single label needs to have both flush left and flush right text with variable space in the middle keeping both strings flush. 
''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg=formats['highlight_bg'], fg=formats['fg'], font=formats['input_font'] ) class LabelTip(LabelHilited): ''' Like Label with a different background. For tooltips. ''' def __init__(self, master, *args, **kwargs): LabelHilited.__init__(self, master, *args, **kwargs) self.config(font=formats['status'], bd=0, relief='solid') class LabelTip2(LabelHilited2): ''' Like Label with a different background. For tooltips. ''' def __init__(self, master, *args, **kwargs): LabelHilited2.__init__(self, master, *args, **kwargs) self.config(font=formats['status'], bd=1, relief='solid') class LabelTipBold(LabelHilited): ''' Like Label with a different background. ''' def __init__(self, master, *args, **kwargs): LabelTip.__init__(self, master, *args, **kwargs) self.config(font=formats['titlebar_1']) class LabelNegative(Labelx): ''' Usual bg and fg reversed. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg=formats['fg'], fg=formats['bg'], font=formats['output_font']) class LabelH1(Label): ''' For largest subheadings. ''' def __init__(self, master, *args, **kwargs): Label.__init__(self, master, *args, **kwargs) self.config(font=formats['heading1']) class LabelH2(Label): ''' For large subheadings. ''' def __init__(self, master, *args, **kwargs): Label.__init__(self, master, *args, **kwargs) self.config(font=formats['heading2']) class LabelH3(Label): ''' For small subheadings. ''' def __init__(self, master, *args, **kwargs): Label.__init__(self, master, *args, **kwargs) self.config(font=formats['heading3']) class LabelH4(Label): ''' For tiny subheadings. ''' def __init__(self, master, *args, **kwargs): Label.__init__(self, master, *args, **kwargs) self.config(font=formats['heading4']) class LabelButtonImage(Labelx): ''' A label that looks and works like a button. Good for images since it sizes itself to its contents, so don't add width and height to this class or change its color. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.bind('<FocusIn>', self.show_focus) self.bind('<FocusOut>', self.unshow_focus) self.bind('<Button-1>', self.on_press) self.bind('<ButtonRelease-1>', self.on_release) self.bind('<Enter>', self.on_hover) self.bind('<Leave>', self.on_unhover) def show_focus(self, evt): self.config(borderwidth=2) def unshow_focus(self, evt): self.config(borderwidth=1) def on_press(self, evt): formats = make_formats_dict() self.config(relief='sunken', bg=formats['head_bg']) def on_release(self, evt): formats = make_formats_dict() self.config(relief='raised', bg=formats['bg']) def on_hover(self, evt): self.config(relief='groove') def on_unhover(self, evt): self.config(relief='raised') class LabelButtonText(LabelButtonImage): ''' A label that looks and works like a button. Displays Text. ''' def __init__(self, master, width=8, *args, **kwargs): LabelButtonImage.__init__(self, master, *args, **kwargs) self.formats = make_formats_dict() self.config( anchor='center', borderwidth=1, relief='raised', takefocus=1, bg=self.formats['bg'], width=width, font=self.formats['input_font'], fg=self.formats['fg']) class LabelDots(LabelButtonText): ''' Display clickable dots if more info, no dots if no more info. 
''' def __init__( self, master, dialog_class, treebard, *args, **kwargs): LabelButtonText.__init__(self, master, *args, **kwargs) self.master = master self.dialog_class = dialog_class self.treebard = treebard self.current_person = None self.root = master.master self.finding_id = None self.header = [] self.config(width=5, font=formats['heading3']) self.bind('<Button-1>', self.open_dialog) self.bind('<Return>', self.open_dialog) self.bind('<space>', self.open_dialog) def open_dialog(self, evt): dlg = self.dialog_class( self.master, self.finding_id, self.header, self.current_person, self.treebard, pressed=evt.widget) class LabelBoilerplate(Labelx): ''' Like Label for fine print. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg=formats['bg'], fg=formats['fg'], font=formats['boilerplate']) class LabelTitleBar(Labelx): ''' Like Label for fine print. Can be sized independently of other font sizes so users who want larger fonts elsewhere can keep titles tiny if they want. Used for window titlebar and menu strip since people are so used to Windows' tiny fonts on these widgets that some people will not want to see the font get bigger even if they can't read it. ''' def __init__(self, master, size='tiny', *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config( bg=NEUTRAL_COLOR, fg=formats['fg']) if size == 'tiny': self.config(font=formats['titlebar_0']) elif size == 'small': self.config(font=formats['titlebar_1']) elif size == 'medium': self.config(font=formats['titlebar_2']) elif size == 'large': self.config(font=formats['titlebar_3']) class LabelMenuBarTest(LabelTitleBar): ''' Color can be changed for testing/visibility. ''' def __init__(self, master, size='tiny', *args, **kwargs): LabelTitleBar.__init__(self, master,**options) self.config(bg='blue') self.bind('<Enter>', self.enrise) self.bind('<Leave>', self.flatten) self.bind('<Button-1>', self.sink) if size == 'tiny': self.config(font=formats['titlebar_hilited_0']) elif size == 'small': self.config(font=formats['titlebar_hilited_1']) elif size == 'medium': self.config(font=formats['titlebar_hilited_2']) elif size == 'large': self.config(font=formats['titlebar_hilited_3']) def enrise(self, evt): evt.widget.config(relief='raised') def flatten(self, evt): evt.widget.config(relief='flat') def sink(self, evt): evt.widget.config(relief='sunken') class LabelMenuBar(Labelx): ''' Like LabelTitleBar but normal font weight. ''' def __init__(self, master, size='tiny', *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config(bg=formats['head_bg']) if size == 'tiny': self.config(font=formats['titlebar_hilited_0']) elif size == 'small': self.config(font=formats['titlebar_hilited_1']) elif size == 'medium': self.config(font=formats['titlebar_hilited_2']) elif size == 'large': self.config(font=formats['titlebar_hilited_3']) class LabelTitleBarHilited(Labelx): ''' Like LabelTitleBar but instead of using a highlight color as its normal background, it uses the normal background color so it will be highlighted. 
''' def __init__(self, master, size='tiny', *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.config(bg=formats['highlight_bg']) if size == 'tiny': self.config(font=formats['titlebar_hilited_0']) elif size == 'small': self.config(font=formats['titlebar_hilited_1']) elif size == 'medium': self.config(font=formats['titlebar_hilited_2']) elif size == 'large': self.config(font=formats['titlebar_hilited_3']) class LabelStay(Labelx): ''' If this subclass is detected its background won't be reconfigured. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) pass class LabelButtonImage(Labelx): ''' A label that looks and works like a button. Good for images since it sizes itself to its contents, so don't add width and height to this class or change its color. ''' def __init__(self, master, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.bind('<FocusIn>', self.show_focus) self.bind('<FocusOut>', self.unshow_focus) self.bind('<Button-1>', self.on_press) self.bind('<ButtonRelease-1>', self.on_release) self.bind('<Enter>', self.on_hover) self.bind('<Leave>', self.on_unhover) def show_focus(self, evt): self.config(borderwidth=2) def unshow_focus(self, evt): self.config(borderwidth=1) def on_press(self, evt): formats = make_formats_dict() self.config(relief='sunken', bg=formats['head_bg']) def on_release(self, evt): formats = make_formats_dict() self.config(relief='raised', bg=formats['bg']) def on_hover(self, evt): self.config(relief='groove') def on_unhover(self, evt): self.config(relief='raised') class LabelButtonText(LabelButtonImage): ''' A label that looks and works like a button. Displays Text. ''' def __init__(self, master, width=8, *args, **kwargs): LabelButtonImage.__init__(self, master, *args, **kwargs) self.config( bg=formats['bg'], fg=formats['fg'], font=formats['input_font'], anchor='center', borderwidth=1, relief='raised', takefocus=1, width=width) class LabelMovable(LabelHilited): ''' A label that can be moved to a different grid position by trading places with another widget on press of an arrow key. The master can't contain anything but LabelMovables. The ipadx, ipady, padx, pady, and sticky grid options can be used as long as they're the same for every LabelMovable in the master. With some more coding, columnspan and rowspan could be set too but as is the spans should be left at their default values which is 1. ''' def __init__(self, master, first_column=0, first_row=0, *args, **kwargs): LabelHilited.__init__(self, master, *args, **kwargs) self.formats = make_formats_dict() self.master = master self.first_column = first_column self.first_row = first_row self.config( takefocus=1, bg=formats['highlight_bg'], fg=formats['fg'], font=formats['output_font']) self.bind('<FocusIn>', self.highlight_on_focus) self.bind('<FocusOut>', self.unhighlight_on_unfocus) self.bind('<Key>', self.locate) self.bind('<Key>', self.move) def locate(self, evt): ''' Get the grid position of the two widgets that will trade places. 
''' self.mover = evt.widget mover_dict = self.mover.grid_info() self.old_col = mover_dict['column'] self.old_row = mover_dict['row'] self.ipadx = mover_dict['ipadx'] self.ipady = mover_dict['ipady'] self.pady = mover_dict['pady'] self.padx = mover_dict['padx'] self.sticky = mover_dict['sticky'] self.less_col = self.old_col - 1 self.less_row = self.old_row - 1 self.more_col = self.old_col + 1 self.more_row = self.old_row + 1 self.last_column = self.master.grid_size()[0] - 1 self.last_row = self.master.grid_size()[1] - 1 def move(self, evt): ''' Determine which arrow key was pressed and make the trade. ''' def move_left(): if self.old_col > self.first_column: for child in self.master.winfo_children(): if (child.grid_info()['column'] == self.less_col and child.grid_info()['row'] == self.old_row): movee = child movee.grid_forget() movee.grid( column=self.old_col, row=self.old_row, ipadx=self.ipadx, ipady=self.ipady, padx=self.padx, pady=self.pady, sticky=self.sticky) self.mover.grid_forget() self.mover.grid( column=self.less_col, row=self.old_row, ipadx=self.ipadx, ipady=self.ipady, padx=self.padx, pady=self.pady, sticky=self.sticky) def move_right(): if self.old_col < self.last_column: for child in self.master.winfo_children(): if (child.grid_info()['column'] == self.more_col and child.grid_info()['row'] == self.old_row): movee = child movee.grid_forget() movee.grid( column=self.old_col, row=self.old_row, ipadx=self.ipadx, ipady=self.ipady, padx=self.padx, pady=self.pady, sticky=self.sticky) self.mover.grid_forget() self.mover.grid( column=self.more_col, row=self.old_row, ipadx=self.ipadx, ipady=self.ipady, padx=self.padx, pady=self.pady, sticky=self.sticky) def move_up(): if self.old_row > self.first_row: for child in self.master.winfo_children(): if (child.grid_info()['column'] == self.old_col and child.grid_info()['row'] == self.less_row): movee = child movee.grid_forget() movee.grid( column=self.old_col, row=self.old_row, ipadx=self.ipadx, ipady=self.ipady, padx=self.padx, pady=self.pady, sticky=self.sticky) self.mover.grid_forget() self.mover.grid( column=self.old_col, row=self.less_row, ipadx=self.ipadx, ipady=self.ipady, padx=self.padx, pady=self.pady, sticky=self.sticky) def move_down(): if self.old_row < self.last_row: for child in self.master.winfo_children(): if (child.grid_info()['column'] == self.old_col and child.grid_info()['row'] == self.more_row): movee = child movee.grid_forget() movee.grid( column=self.old_col, row=self.old_row, ipadx=self.ipadx, ipady=self.ipady, padx=self.padx, pady=self.pady, sticky=self.sticky) self.mover.grid_forget() self.mover.grid( column=self.old_col, row=self.more_row, ipadx=self.ipadx, ipady=self.ipady, padx=self.padx, pady=self.pady, sticky=self.sticky) self.locate(evt) keysyms = { 'Left' : move_left, 'Right' : move_right, 'Up' : move_up, 'Down' : move_down} for k,v in keysyms.items(): if evt.keysym == k: v() self.fix_tab_order() def fix_tab_order(self): new_order = [] for child in self.master.winfo_children(): new_order.append(( child, child.grid_info()['column'], child.grid_info()['row'])) new_order.sort(key=lambda i: (i[1], i[2])) for tup in new_order: widg = tup[0] widg.lift() def highlight_on_focus(self, evt): evt.widget.config(bg=self.formats['head_bg']) def unhighlight_on_unfocus(self, evt): evt.widget.config(bg=self.formats['highlight_bg']) class Buttonx(tk.Button): def __init__(self, master, *args, **kwargs): tk.Button.__init__(self, master, *args, **kwargs) pass def winfo_subclass(self): ''' a method that works like built-in tkinter 
method w.winfo_class() except it gets subclass names of widget classes custom-made by inheritance ''' subclass = type(self).__name__ return subclass # BUTTONS should not use a medium background color because the highlightthickness # and highlightcolor options don't work and the button highlight focus might not # be visible since Tkinter or Windows is choosing the color of the focus highlight # and it can't be made thicker. class Button(Buttonx): ''' Includes tk.Button in the colorizer scheme. ''' def __init__(self, master, *args, **kwargs): Buttonx.__init__(self, master, *args, **kwargs) self.config( font=(formats['output_font']), overrelief=tk.GROOVE, activebackground=formats['head_bg'], bg=formats['bg'], fg=formats['fg']) class ButtonBigPic(Buttonx): ''' Used for top_pic on person tab and tree decoration on opening_dialog. ''' def __init__(self, master, *args, **kwargs): Buttonx.__init__(self, master, *args, **kwargs) self.formats = make_formats_dict() self.config( bd=0, relief="flat", bg=self.formats['highlight_bg'], fg=self.formats['bg'], cursor='hand2') self.bind('<FocusIn>', self.highlight) self.bind('<FocusOut>', self.unhighlight) def highlight(self, evt): self.config(bg=self.formats['fg']) def unhighlight(self, evt): self.config(bg=self.formats['bg']) class ButtonFlatHilited(Buttonx): ''' A button with no relief or border. Used for the Combobox dropdown. ''' def __init__(self, master, *args, **kwargs): Buttonx.__init__(self, master, *args, **kwargs) self.config( bg=formats['highlight_bg'], relief='flat', fg=formats['fg'], activebackground=formats['fg'], # bg color while pressed activeforeground=formats['bg'], # fg color while pressed overrelief='flat', # relief when hovered by mouse bd=0) # prevents sunken relief while pressed self.grid_configure(sticky='ew') def highlight(self, evt): self.config( bg=formats['highlight_bg'], fg=formats['fg'], activebackground=formats['fg'], activeforeground=formats['bg']) class ButtonPlain(Buttonx): ''' Used for icon menu ''' def __init__(self, master, *args, **kwargs): Buttonx.__init__(self, master, *args, **kwargs) self.config( font=(formats['input_font']), bd=0, activebackground=formats['head_bg'], bg=formats['bg'], fg=formats['fg'], cursor='hand2') self.bind('<FocusIn>', self.highlight) self.bind('<FocusOut>', self.unhighlight) def highlight(self, evt): self.config(bg=formats['head_bg']) def unhighlight(self, evt): self.config(bg=formats['bg']) class ButtonQuiet(Buttonx): ''' Same color as background, no text. ''' def __init__(self, master, *args, **kwargs): Buttonx.__init__(self, master, *args, **kwargs) self.config( text='', width=3, overrelief=tk.GROOVE, activebackground=formats['head_bg'], bg=formats['bg'], fg=formats['fg'], font=formats['boilerplate']) class Entryx(tk.Entry): def __init__(self, master, *args, **kwargs): tk.Entry.__init__(self, master, *args, **kwargs) pass def winfo_subclass(self): ''' a method that works like built-in tkinter method w.winfo_class() except it gets subclass names of widget classes custom-made by inheritance ''' subclass = type(self).__name__ return subclass class Entry(Entryx): def __init__(self, master, *args, **kwargs): Entryx.__init__(self, master, *args, **kwargs) self.config( bg=formats['highlight_bg'], fg=formats['fg'], font=formats['input_font'], insertbackground=formats['fg']) class EntryUnhilited(Entryx): ''' Looks like a Label. 
''' def __init__(self, master, *args, **kwargs): Entryx.__init__(self, master, *args, **kwargs) self.config( bd=0, bg=formats['bg'], fg=formats['fg'], font=formats['input_font'], insertbackground=formats['fg']) class EntryHilited1(Entryx): ''' Looks like a Label but different background color. ''' def __init__(self, master, *args, **kwargs): Entryx.__init__(self, master, *args, **kwargs) self.config( bd=0, bg=formats['highlight_bg'], fg=formats['fg'], font=formats['input_font'], insertbackground=formats['fg']) class EntryHilited2(Entryx): ''' Looks like a Label but different background color. ''' def __init__(self, master, *args, **kwargs): Entryx.__init__(self, master, *args, **kwargs) self.config( bd=0, bg=formats['head_bg'], fg=formats['fg'], font=formats['output_font'], insertbackground=formats['fg']) class EntryAutofill(EntryUnhilited): ''' SUPERCEDED BY EntryAutofill Simple case-insensitive autofill entry with no dropdown list, lets you type as fast as you want. Values option is not a real tkinter option, so you can't use instance.config(values=new_values). Change values list like this: instance.values = [5, 15, 19, 42]. Autofills nothing till you type up to the first unique character. Example: If the list has "Bill" and "Bilbo", nothing will autofill till you type the second b or the l. You can backspace and keep typing a different word with no extra key strokes or controls and it still fills correctly. Width is set to fit the longest item in the values list. instance.config(textvariable=instance.var) is required in the instance to turn on the autofill functionality. ''' def __init__(self, master, *args, **kwargs): EntryUnhilited.__init__(self, master, *args, **kwargs) self.config(width=1) self.values = ['red', 'rust', 'black', 'blue', 'Bill', 'Bilbo', 'billboard'] self.autofill = False self.var = tk.StringVar() self.bind('<KeyRelease>', self.get_typed) self.bind('<Key>', self.detect_pressed) def match_string(self): hits = [] got = self.var.get() for item in self.values: if item.lower().startswith(got.lower()): hits.append(item) return hits def get_typed(self, event): if self.autofill is False: return if len(event.keysym) == 1: hits = self.match_string() self.show_hit(hits) def detect_pressed(self, event): if self.autofill is False: return key = event.keysym pos = self.index('insert') self.delete(pos, 'end') def show_hit(self, lst): if len(lst) == 1: self.var.set(lst[0]) class EntryAutofillHilited(EntryAutofill): ''' Same as EntryAutofill but has a highlighted background like a typical Entry. ''' def __init__(self, master, *args, **kwargs): EntryAutofill.__init__(self, master, *args, **kwargs) self.config(bg=formats['highlight_bg']) class EntryDefaultText(Entry): def __init__(self, master, default_text, *args, **kwargs): Entry.__init__(self, master, *args, **kwargs) ''' For entries that need to have instructions/default text. Can't use this with a widget that automatically comes into focus since the default text would be cleared. 
''' self.default_text = default_text self.formats = make_formats_dict() var = tk.StringVar() var.set(self.default_text) self.config( fg=self.formats['head_bg'], bg=self.formats['highlight_bg'], font=self.formats['show_font'], textvariable=var) self.textvariable = var self.bind('<Button-1>', self.clear_default_text) self.bind('<FocusIn>', self.clear_default_text) self.bind('<FocusOut>', self.clear_selection) def clear_default_text(self, evt=None): if self.cget('state') == 'disabled': print('disabled') return if self.get() == self.default_text: self.delete(0, 'end') self.config( bg=self.formats['highlight_bg'], font=self.formats['input_font']) else: self.config( bg=self.formats['highlight_bg'], font=self.formats['input_font'], fg=self.formats['fg']) def clear_selection(self, evt): if len(self.get()) == 0: self.insert(0, self.default_text) self.config( font=self.formats['show_font'], fg=formats['head_bg']) self.select_clear() def replace_default_text(self): self.insert(0, self.default_text) self.config(fg=formats['head_bg'], font=self.formats['show_font']) class LabelCopiable(Entryx): ''' To use as a Label whose text can be selected with mouse, set the state to disabled after constructing the widget and giving it text. Enable temporarily to change color or text, for example. ''' def __init__(self, master, *args, **kwargs): Entryx.__init__(self, master, *args, **kwargs) self.config( readonlybackground=self.cget('background'), justify='center', bd=0, takefocus=0) class LabelGoTo(Labelx): ''' Ctrl+click runs code relevant to the entity named in the clicked Label. For example, if label says John Doe, Ctrl+click label can be used to make <NAME> the current person. The subject_id parameter can be used for any entity with an ID such as person, place, citation. The EntryLabel/LabelGoTo in dialogs can't be used with Ctrl+click to change the current person. Probably could be done once but not twice because the findings table that existed when the dialog was made would be destroyed upon making a new table for a new current person. So trying to change the current person wouldn't work a 2nd time, so I'm not going to allow it at all. ''' def __init__( self, master, table=None, change_person=None, subject_id=None, place_id=None, source_id=None, citation_id=None, *args, **kwargs): Labelx.__init__(self, master, *args, **kwargs) self.formats = make_formats_dict() self.table = table self.change_person = change_person self.subject_id = subject_id self.bind('<Enter>', self.highlight_on_enter) self.bind('<Leave>', self.unhighlight_on_leave) self.bind('<Button-1>', self.set_focus, add='+') self.bind('<Control-Button-1>', self.go_to_entity) self.config( takefocus=1, anchor='w', bg=self.formats['bg'], fg=self.formats['fg'], font=self.formats['input_font']) self.grid_configure(sticky='ew') def go_to_entity(self, evt): ''' self.change_person is for the name of the function passed to this widget when it's made which changes the current person displayed on the persons tab. 
''' if self.table is None: return self.change_person( self.table.master, self.table.main.persons.attributes_content, self.table.main.new_person_fill, self.table.main.persons.top_pic_button, self.table.main, self.table.main.tabs.store['person'], self.subject_id) def set_focus(self, evt): self.focus_set() def highlight_on_enter(self, evt): self.config(bg=self.formats['highlight_bg']) def unhighlight_on_leave(self, evt): self.config(bg=self.formats['bg']) # for demo of LabelCopiable see label_with_selectable_text.py class Textx(tk.Text): def __init__(self, master, *args, **kwargs): tk.Text.__init__(self, master, *args, **kwargs) def winfo_subclass(self): ''' ''' subclass = type(self).__name__ return subclass class Text(Textx): def __init__(self, master, *args, **kwargs): Textx.__init__(self, master, *args, **kwargs) self.config( wrap='word', bg=formats['highlight_bg'], fg=formats['fg'], font=formats['input_font'], insertbackground=formats['fg']) self.bind("<Tab>", self.focus_next_window) self.bind("<Shift-Tab>", self.focus_prev_window) # make the Text widget use tab key for traversal like other widgets # I think return('break') prevents the built-in binding to Tab def focus_next_window(self, evt): evt.widget.tk_focusNext().focus() return('break') def focus_prev_window(self, evt): evt.widget.tk_focusPrev().focus() return('break') class LabelStylable(Textx): def __init__(self, master, *args, **kwargs): Textx.__init__(self, master, *args, **kwargs) self.master = master self.bind('<Map>', lambda event: self.set_height()) self.tag_config('bold', font="courier 12 bold") self.tag_config('italic', font="courier 12 italic") self.config(wrap='word', padx=12, pady=12, bd=0) def set_height(self): height = self.count(1.0, 'end', 'displaylines') self.config(height=height) self.configure(state="disabled") # # to use LabelStylable: # stylin = LabelStylable(root, width=75) # stylin.insert("end", "Hello, ") # stylin.insert("end", "silly ", "italic") # stylin.insert("end", "world", "bold") class MessageCopiable(Textx): ''' To use as a Label whose text can be selected with mouse, set the state to disabled after constructing the widget and giving it text. Enable temporarily to change color or text, for example. ''' def __init__(self, master, *args, **kwargs): Textx.__init__(self, master, *args, **kwargs) self.config( bg=formats['bg'], fg=formats['fg'], borderwidth=0, wrap='word', state='disabled', font=(formats['output_font']), takefocus=0) def set_height(self): # answer is wrong first time thru mainloop so update: self.update_idletasks() lines = self.count('1.0', 'end', 'displaylines') self.config(height=lines) self.tag_configure('left', justify='left') self.tag_add('left', '1.0', 'end') self.config(state='disabled') # How to use: # www = MessageCopiable(root, width=32) # www.insert(1.0, # 'Maecenas quis elit eleifend, lobortis turpis at, iaculis ' # 'odio. Phasellus congue, urna sit amet posuere luctus, mauris ' # 'risus tincidunt sapien, vulputate scelerisque ipsum libero at ' # 'neque. Nunc accumsan pellentesque nulla, a ultricies ex ' # 'convallis sit amet. Etiam ut sollicitudi felis, sit amet ' # 'dictum lacus. Mauris sed mattis diam. Pellentesque eu malesuada ' # 'ipsum, vitae sagittis nisl Morbi a mi vitae nunc varius ' # 'ullamcorper in ut urna. Maecenas auctor ultrices orci. ' # 'Donec facilisis a tortor pellentesque venenatis. Curabitur ' # 'pulvinar bibendum sem, id eleifend lorem sodales nec. Mauris ' # 'eget scelerisque libero. Lorem ipsum dolor sit amet, consectetur ' # 'adipiscing elit. 
Integer vel tellus nec orci finibus ornare. ' # 'Praesent pellentesque aliquet augue, nec feugiat augue posuere ') # www.grid() # www.set_height() class Checkbuttonx(tk.Checkbutton): def __init__(self, master, *args, **kwargs): tk.Checkbutton.__init__(self, master, *args, **kwargs) pass def winfo_subclass(self): ''' ''' subclass = type(self).__name__ return subclass class Checkbutton(Checkbuttonx): def __init__(self, master, *args, **kwargs): Checkbuttonx.__init__(self, master, *args, **kwargs) ''' To see selection set the selectcolor option to either bg or highlight_bg. ''' self.config( bg=formats['bg'], fg=formats['fg'], activebackground=formats['highlight_bg'], selectcolor=formats['bg'], padx=6, pady=6) class Radiobuttonx(tk.Radiobutton): def __init__(self, master, *args, **kwargs): tk.Radiobutton.__init__(self, master, *args, **kwargs) pass def winfo_subclass(self): ''' ''' subclass = type(self).__name__ return subclass class Radiobutton(Radiobuttonx): def __init__(self, master, *args, **kwargs): Radiobuttonx.__init__(self, master, *args, **kwargs) ''' To see selection set the selectcolor option to either bg or highlight_bg. ''' self.config( bg=formats['bg'], fg=formats['fg'], activebackground=formats['highlight_bg'], selectcolor=formats['highlight_bg'], padx=6, pady=6) class RadiobuttonBig(Radiobutton): def __init__(self, master, *args, **kwargs): Radiobutton.__init__(self, master, *args, **kwargs) ''' If the main content of a dialog is a set of radiobuttons, use standard text size. ''' self.config(font=formats['output_font']) class RadiobuttonHilited(Radiobuttonx): def __init__(self, master, *args, **kwargs): Radiobuttonx.__init__(self, master, *args, **kwargs) self.config( bg=formats['highlight_bg'], activebackground=formats['bg'], highlightthickness=3, overrelief='sunken', font=formats['output_font'], fg=formats['fg'], selectcolor=formats['highlight_bg'], padx=6, pady=6) class Toplevelx(tk.Toplevel): ''' All my toplevels have to declare a parent whether they need one or not. This keeps the code consistent and symmetrical across all widgets, even though Tkinter doesn't require a parent for its Toplevel. 
''' def __init__(self, master, *args, **kwargs): tk.Toplevel.__init__(self, master, *args, **kwargs) def winfo_subclass(self): ''' ''' subclass = type(self).__name__ return subclass class Toplevel(Toplevelx): def __init__(self, master, *args, **kwargs): Toplevelx.__init__(self, master, *args, **kwargs) self.config(bg=formats['bg']) class ToplevelHilited(Toplevelx): def __init__(self, *args, **kwargs): Toplevelx.__init__(self, *args, **kwargs) self.config(bg=formats['highlight_bg']) class Scalex(tk.Scale): def __init__(self, master, *args, **kwargs): tk.Scale.__init__(self, master, *args, **kwargs) def winfo_subclass(self): ''' ''' subclass = type(self).__name__ return subclass class Scale(Scalex): def __init__(self, master, *args, **kwargs): Scalex.__init__(self, master, *args, **kwargs) self.config( bg=formats['bg'], fg=formats['fg'], font=formats['output_font'], troughcolor=formats['highlight_bg'], activebackground=formats['head_bg'], highlightthickness=0) class Canvasx(tk.Canvas): def __init__(self, master, *args, **kwargs): tk.Canvas.__init__(self, master, *args, **kwargs) pass def winfo_subclass(self): ''' ''' subclass = type(self).__name__ return subclass class Canvas(Canvasx): def __init__(self, master, *args, **kwargs): Canvasx.__init__(self, master, *args, **kwargs) self.config(bg=formats['bg'], bd=0, highlightthickness=0) class CanvasHilited(Canvasx): def __init__(self, master, *args, **kwargs): Canvasx.__init__(self, master, *args, **kwargs) self.config(bg=formats['highlight_bg'], bd=0, highlightthickness=0)
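A brief, illustrative sketch of how these themed subclasses are meant to be combined, assuming this file is importable as widgets and that styles.make_formats_dict works outside the full application:

import tkinter as tk
from widgets import Frame, LabelH3, Entry, Button  # assumes this module is saved as widgets.py

root = tk.Tk()

outer = Frame(root)
outer.grid(padx=12, pady=12)

LabelH3(outer, text="Search").grid(row=0, column=0, sticky="w")
entry = Entry(outer, width=24)
entry.grid(row=0, column=1, padx=6)
Button(outer, text="Go", command=lambda: print(entry.get())).grid(row=0, column=2)

root.mainloop()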
// Inserts the product at the end of the list
void inseri_produto_fim(Listaproduto *listaproduto, produto dado)
{
    Noproduto *novo = (Noproduto *) malloc(sizeof(Noproduto));
    novo->dado = dado;
    novo->prox = NULL;

    if (listaproduto->inicio == NULL) {
        // Empty list: the new node becomes the first element
        listaproduto->inicio = novo;
    } else {
        // Walk to the last node and append the new one
        Noproduto *pi;
        for (pi = listaproduto->inicio; pi->prox != NULL; pi = pi->prox);
        pi->prox = novo;
    }
}
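The function above depends on type definitions that are not part of the snippet. A hypothetical reconstruction consistent with how the fields are used; the members of produto are invented for illustration:

#include <stdlib.h>

/* Hypothetical product record; the real fields are not shown in the snippet. */
typedef struct {
    int   codigo;
    char  nome[64];
    float preco;
} produto;

/* Singly linked node holding one product. */
typedef struct Noproduto {
    produto dado;
    struct Noproduto *prox;
} Noproduto;

/* List handle pointing at the first node. */
typedef struct {
    Noproduto *inicio;
} Listaproduto;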
N = int(input())
a = list(map(int, input().split()))
a.sort()

# Candidate representatives of the three equal-sized groups, in sorted order
x = a[0]
y = a[N // 3]
z = a[-N // 3]

if a[-1] == 0:
    # Largest value is zero: with non-negative input, every value is zero
    print('Yes')
else:
    if N % 3 == 0:
        if a[N // 3 - 1] == 0 and y == a[-1]:
            # The first third is all zeros and the remaining two thirds share one value
            print('Yes')
        else:
            # Each of x, y, z must appear exactly N/3 times and their
            # pairwise XORs must reproduce the remaining value
            if a.count(x) == N // 3 and a.count(y) == N // 3 and a.count(z) == N // 3:
                if x ^ y == z and y ^ z == x and z ^ x == y:
                    print('Yes')
                else:
                    print('No')
            else:
                print('No')
    else:
        print('No')
package lib

import (
	"log"
	"math/big"
	"time"

	tele "gopkg.in/telebot.v3"
)

func CreateBot(telegramBot string) *tele.Bot {
	pref := tele.Settings{
		Token:  telegramBot,
		Poller: &tele.LongPoller{Timeout: 10 * time.Second},
	}

	b, err := tele.NewBot(pref)
	if err != nil {
		log.Fatal(err)
	}

	return b
}

func StartBot(b *tele.Bot, config *Config) {
	b.Handle("/healthz", func(c tele.Context) error {
		health := Query(config)
		if health.Cmp(big.NewFloat(float64(config.Treshold))) == -1 {
			return c.Send("⚠ XToken health factor\n" + health.String())
		} else {
			return c.Send("ℹ XToken health factor\n" + health.String())
		}
	})

	b.Start()
}

func Warn(b *tele.Bot, receiver int64, message string) {
	user := tele.ChatID(receiver)
	b.Send(user, "⚠ XToken health factor\n "+message, &tele.SendOptions{})
}
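A hypothetical wiring sketch for the helpers above; the import path, the Config literal, and the token value are all assumptions (Config and Query are defined elsewhere in the package):

package main

import (
	"log"

	"example.com/healthbot/lib" // hypothetical import path for the package above
)

func main() {
	// The Config fields and the token are placeholders for illustration only.
	cfg := &lib.Config{Treshold: 1.05}

	bot := lib.CreateBot("TELEGRAM_BOT_TOKEN")
	log.Println("bot created, starting long poller")

	// Registers the /healthz handler and blocks on the poller.
	lib.StartBot(bot, cfg)
}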
package meta import ( "io/ioutil" "os" "path/filepath" "testing" "github.com/gobuffalo/envy" "github.com/stretchr/testify/require" ) func Test_ModulesPackageName(t *testing.T) { r := require.New(t) tmp := os.TempDir() modsOn = true r.NoError(os.Chdir(tmp)) tcases := []struct { Content string PackageName string }{ {"module github.com/wawandco/zekito", "github.com/wawandco/zekito"}, {"module zekito", "zekito"}, {"module gopkg.in/some/friday.v2", "gopkg.in/some/friday.v2"}, {"", "zekito"}, } for _, tcase := range tcases { envy.Set("GOPATH", tmp) t.Run(tcase.Content, func(st *testing.T) { r := require.New(st) r.NoError(ioutil.WriteFile("go.mod", []byte(tcase.Content), 0644)) a := New(filepath.Join(tmp, "zekito")) r.Equal(tcase.PackageName, a.PackagePkg) }) } }
Early last year, we [made some adjustments](http://boards.na.leagueoflegends.com/en/c/riot-official/nWyb57Ti-rp-price-adjustment-in-canada) to the price of RP in Canada due to the weakening of the Canadian dollar in comparison to the US dollar. Since then, we’ve continued to monitor the gap in RP purchasing power between US and Canadian players, which has unfortunately widened over time. As a result, we’ll once again implement a slight adjustment to the cost of RP in Canadian dollars on February 15th 2016. The adjustment will be an ~10% increase in the cost of RP for players paying in Canadian dollars. We'll effect this increase by adjusting the amount of RP available at the different price points (e.g., The 10 CAD pack will now give 1000 RP instead of 1100 RP). Please note that this change will only affect RP purchases made after February 15th. The RP price of content will be unaffected, and Canadian players can continue to pay in US dollars with no change in RP amounts. Price increases obviously suck regardless of the reasons behind them, but we’d like to provide some background as to why this price change and others like it are sometimes necessary. Generally, we attempt to keep RP prices in relative balance globally. In some regions that just means keeping up with high inflation rates, but in regions like NA that have multiple currencies, it also means ensuring that players are getting similar value for their money regardless of home country. We try to think long-term, so roughly once per year we evaluate (and where necessary, adjust) the RP to currency ratios for each region to ensure that RP prices remain relatively similar. We had hoped the Canadian dollar would strengthen or at least hold steady after last year’s adjustment, but due to recent economic trends it's continued to fall relative to the US dollar, and the discrepancy has now reached the point where further adjustment is necessary. Typically in cases like this we leave prices slightly lower for the affected country, so Canadian prices will still be a bit cheaper than US ones in order to allow a cushion for any reversals in rates. We've also tried to make sure that the new price points are still above certain thresholds to make sure you can still get the same content with the same pack (e.g. 10CAD still gives enough RP for a 975 skin). We’re giving the usual two-week notice for this change so we hopefully don't catch players by surprise. Here are the new amounts of RP for all price points as of February 15th for all payment methods: ***2/16 updated for Mystery gift at 5 CAD Pack, new prices are now live*** 5 CAD = 490 RP 10 CAD = 980 RP + 40 Bonus RP 20 CAD = 1960 RP + 115 Bonus RP 25 CAD = 2450 RP + 150 Bonus RP (Prepaid only) 35 CAD = 3430 RP + 270 Bonus RP 50 CAD = 4900 RP + 450 Bonus RP 100 CAD = 9800 RP + 1200 Bonus RP I'll also be hanging around in this thread to clear up any confusion / address any issues. - Wingfield --- ** FAQ ** ** I'm Canadian, why am I getting charged in USD? ** Canadian players can currently purchase RP in either USD or CAD at their preference (it should default to CAD). You can change the payment option to USD or CAD at the bottom of "Step 1: Select Purchase Method" in the Purchase RP flow. ** I'm already paying in USD, so won't this change impact me twice? ** If you're already paying in USD, this adjustment will have no effect on you. Your preferred payment method was/is already converting the US price to Canadian dollars at the current exchange rate. 
Since you're paying in USD you will get the USD RP amounts, which are not changing in this adjustment. ** Wtf Riot? Does Canada really pay more than the US? ** In terms of RP purchases, Canadians are actually currently paying less than US players for the same amount of RP in equivalent currencies. Math below: Canadian players paying in CAD are currently getting approximately 155 RP to the US dollar (using the 10 CAD pack), while US/Canadian players paying in USD are getting 138 RP to the US dollar. There's currently a significant inequality between players on the same server (NA). Assuming current USD to CAD exchange rates hold where they are as of writing, on 02/15 Canadian players paying in CAD will be getting 141 RP to the US dollar, still slightly more than their US counterparts.
def process_options(self, func: FunctionType, options, context=None): raise NotImplementedError
class Compiler: """ Given a intent with underspecified inputs, compile the intent into fully specified visualizations for visualization. """ def __init__(self): self.name = "Compiler" warnings.formatwarning = lux.warning_format def __repr__(self): return f"<Compiler>" @staticmethod def compile_vis(ldf: LuxDataFrame, vis: Vis) -> Vis: """ Root method for compiling visualizations Parameters ---------- ldf : LuxDataFrame vis : Vis Returns ------- Vis Compiled Vis object """ if vis: # autofill data type/model information Compiler.populate_data_type_model(ldf, [vis]) # remove invalid visualizations from collection Compiler.remove_all_invalid([vis]) # autofill viz related information Compiler.determine_encoding(ldf, vis) ldf._compiled = True return vis @staticmethod def compile_intent(ldf: LuxDataFrame, _inferred_intent: List[Clause]) -> VisList: """ Compiles input specifications in the intent of the ldf into a collection of lux.vis objects for visualization. 1) Enumerate a collection of visualizations interested by the user to generate a vis list 2) Expand underspecified specifications(lux.Clause) for each of the generated visualizations. 3) Determine encoding properties for each vis Parameters ---------- ldf : lux.core.frame LuxDataFrame with underspecified intent. vis_collection : list[lux.vis.Vis] empty list that will be populated with specified lux.Vis objects. Returns ------- vis_collection: list[lux.Vis] vis list with compiled lux.Vis objects. """ valid_intent = _inferred_intent # ensures intent is non-empty if valid_intent and Validator.validate_intent(_inferred_intent, ldf, True): vis_collection = Compiler.enumerate_collection(_inferred_intent, ldf) # autofill data type/model information Compiler.populate_data_type_model(ldf, vis_collection) # remove invalid visualizations from collection if len(vis_collection) >= 1: vis_collection = Compiler.remove_all_invalid(vis_collection) for vis in vis_collection: # autofill viz related information Compiler.determine_encoding(ldf, vis) ldf._compiled = True return vis_collection elif _inferred_intent: return [] @staticmethod def enumerate_collection(_inferred_intent: List[Clause], ldf: LuxDataFrame) -> VisList: """ Given specifications that have been expanded thorught populateOptions, recursively iterate over the resulting list combinations to generate a vis list. Parameters ---------- ldf : lux.core.frame LuxDataFrame with underspecified intent. Returns ------- VisList: list[lux.Vis] vis list with compiled lux.Vis objects. """ import copy intent = Compiler.populate_wildcard_options(_inferred_intent, ldf) attributes = intent["attributes"] filters = intent["filters"] if len(attributes) == 0 and len(filters) > 0: return [] collection = [] # TODO: generate combinations of column attributes recursively by continuing to accumulate attributes for len(colAtrr) times def combine(col_attrs, accum): last = len(col_attrs) == 1 n = len(col_attrs[0]) for i in range(n): column_list = copy.deepcopy(accum + [col_attrs[0][i]]) if last: # if we have filters, generate combinations for each row. 
if len(filters) > 0: for row in filters: _inferred_intent = copy.deepcopy(column_list + [row]) vis = Vis(_inferred_intent) collection.append(vis) else: vis = Vis(column_list) collection.append(vis) else: combine(col_attrs[1:], column_list) combine(attributes, []) return VisList(collection) @staticmethod def populate_data_type_model(ldf, vlist): """ Given a underspecified Clause, populate the data_type and data_model information accordingly Parameters ---------- ldf : lux.core.frame LuxDataFrame with underspecified intent vis_collection : list[lux.vis.Vis] List of lux.Vis objects that will have their underspecified Clause details filled out. """ # TODO: copy might not be neccesary from lux.utils.date_utils import is_datetime_string data_model_lookup = lux.config.executor.compute_data_model_lookup(ldf.data_type) for vis in vlist: for clause in vis._inferred_intent: if clause.description == "?": clause.description = "" # TODO: Note that "and not is_datetime_string(clause.attribute))" is a temporary hack and breaks the `test_row_column_group` example if clause.attribute != "" and clause.attribute != "Record": if clause.data_type == "": clause.data_type = ldf.data_type[clause.attribute] if clause.data_type == "id": clause.data_type = "nominal" if clause.data_type == "geographical": clause.data_type = "nominal" if clause.data_model == "": clause.data_model = data_model_lookup[clause.attribute] if clause.value != "": # If user provided title for Vis, then don't override. if vis.title == "": if isinstance(clause.value, np.datetime64): chart_title = date_utils.date_formatter(clause.value, ldf) else: chart_title = clause.value vis.title = f"{clause.attribute} {clause.filter_op} {chart_title}" vis._ndim = 0 vis._nmsr = 0 for clause in vis._inferred_intent: if clause.value == "": if clause.data_model == "dimension": vis._ndim += 1 elif clause.data_model == "measure" and clause.attribute != "Record": vis._nmsr += 1 @staticmethod def remove_all_invalid(vis_collection: VisList) -> VisList: """ Given an expanded vis list, remove all visualizations that are invalid. Currently, the invalid visualizations are ones that do not contain: - two of the same attribute, - more than two temporal attributes, - no overlapping attributes (same filter attribute and visualized attribute), - more than 1 temporal attribute with 2 or more measures Parameters ---------- vis_collection : list[lux.vis.Vis] empty list that will be populated with specified lux.Vis objects. Returns ------- lux.vis.VisList vis list with compiled lux.Vis objects. """ new_vc = [] for vis in vis_collection: num_temporal_specs = 0 attribute_set = set() for clause in vis._inferred_intent: attribute_set.add(clause.attribute) if clause.data_type == "temporal": num_temporal_specs += 1 all_distinct_specs = 0 == len(vis._inferred_intent) - len(attribute_set) if ( num_temporal_specs < 2 and all_distinct_specs and not (vis._nmsr == 2 and num_temporal_specs == 1) ): new_vc.append(vis) # else: # warnings.warn("\nThere is more than one duplicate attribute specified in the intent.\nPlease check your intent specification again.") return VisList(new_vc) @staticmethod def determine_encoding(ldf: LuxDataFrame, vis: Vis): """ Populates Vis with the appropriate mark type and channel information based on ShowMe logic Currently support up to 3 dimensions or measures Parameters ---------- ldf : lux.core.frame LuxDataFrame with underspecified intent vis : lux.vis.Vis Returns ------- None Notes ----- Implementing automatic encoding from Tableau's VizQL Mackinlay, J. 
D., Hanrahan, P., & Stolte, C. (2007). Show Me: Automatic presentation for visual analysis. IEEE Transactions on Visualization and Computer Graphics, 13(6), 1137–1144. https://doi.org/10.1109/TVCG.2007.70594 """ # Count number of measures and dimensions ndim = vis._ndim nmsr = vis._nmsr # preserve to add back to _inferred_intent later filters = utils.get_filter_specs(vis._inferred_intent) # Helper function (TODO: Move this into utils) def line_or_bar_or_geo(ldf, dimension: Clause, measure: Clause): dim_type = dimension.data_type # If no aggregation function is specified, then default as average if measure.aggregation == "": measure.set_aggregation("mean") if dim_type == "temporal" or dim_type == "oridinal": if isinstance(dimension.attribute, pd.Timestamp): # If timestamp, use the _repr_ (e.g., TimeStamp('2020-04-05 00.000')--> '2020-04-05') attr = str(dimension.attribute._date_repr) else: attr = dimension.attribute if ldf.cardinality[attr] == 1: return "bar", {"x": measure, "y": dimension} else: return "line", {"x": dimension, "y": measure} else: # unordered categorical # if cardinality large than 5 then sort bars if ldf.cardinality[dimension.attribute] > 5: dimension.sort = "ascending" if utils.like_geo(dimension.get_attr()): return "geographical", {"x": dimension, "y": measure} return "bar", {"x": measure, "y": dimension} # ShowMe logic + additional heuristics # count_col = Clause( attribute="count()", data_model="measure") count_col = Clause( attribute="Record", aggregation="count", data_model="measure", data_type="quantitative", ) auto_channel = {} if ndim == 0 and nmsr == 1: # Histogram with Count measure = vis.get_attr_by_data_model("measure", exclude_record=True)[0] if len(vis.get_attr_by_attr_name("Record")) < 0: vis._inferred_intent.append(count_col) # If no bin specified, then default as 10 if measure.bin_size == 0: measure.bin_size = 10 auto_channel = {"x": measure, "y": count_col} vis._mark = "histogram" elif ndim == 1 and (nmsr == 0 or nmsr == 1): # Line or Bar Chart if nmsr == 0: vis._inferred_intent.append(count_col) dimension = vis.get_attr_by_data_model("dimension")[0] measure = vis.get_attr_by_data_model("measure")[0] vis._mark, auto_channel = line_or_bar_or_geo(ldf, dimension, measure) elif ndim == 2 and (nmsr == 0 or nmsr == 1): # Line or Bar chart broken down by the dimension dimensions = vis.get_attr_by_data_model("dimension") d1 = dimensions[0] d2 = dimensions[1] if ldf.cardinality[d1.attribute] < ldf.cardinality[d2.attribute]: # d1.channel = "color" vis.remove_column_from_spec(d1.attribute) dimension = d2 color_attr = d1 else: # if same attribute then remove_column_from_spec will remove both dims, we only want to remove one if d1.attribute == d2.attribute: vis._inferred_intent.pop(0) else: vis.remove_column_from_spec(d2.attribute) dimension = d1 color_attr = d2 # Colored Bar/Line chart with Count as default measure if not ldf.pre_aggregated: if nmsr == 0 and not ldf.pre_aggregated: vis._inferred_intent.append(count_col) measure = vis.get_attr_by_data_model("measure")[0] vis._mark, auto_channel = line_or_bar_or_geo(ldf, dimension, measure) auto_channel["color"] = color_attr elif ndim == 0 and nmsr == 2: # Scatterplot vis._mark = "scatter" vis._inferred_intent[0].set_aggregation(None) vis._inferred_intent[1].set_aggregation(None) auto_channel = {"x": vis._inferred_intent[0], "y": vis._inferred_intent[1]} elif ndim == 1 and nmsr == 2: # Scatterplot broken down by the dimension measure = vis.get_attr_by_data_model("measure") m1 = measure[0] m2 = measure[1] 
vis._inferred_intent[0].set_aggregation(None) vis._inferred_intent[1].set_aggregation(None) color_attr = vis.get_attr_by_data_model("dimension")[0] vis.remove_column_from_spec(color_attr) vis._mark = "scatter" auto_channel = {"x": m1, "y": m2, "color": color_attr} elif ndim == 0 and nmsr == 3: # Scatterplot with color vis._mark = "scatter" auto_channel = { "x": vis._inferred_intent[0], "y": vis._inferred_intent[1], "color": vis._inferred_intent[2], } relevant_attributes = [auto_channel[channel].attribute for channel in auto_channel] relevant_min_max = dict( (attr, ldf._min_max[attr]) for attr in relevant_attributes if attr != "Record" and attr in ldf._min_max ) # Replace scatterplot with heatmap if vis.mark == "scatter" and lux.config.heatmap and len(ldf) > lux.config._heatmap_start: vis._postbin = True ldf._message.add_unique( f"Large scatterplots detected: Lux is automatically binning scatterplots to heatmaps.", priority=98, ) vis._mark = "heatmap" vis._min_max = relevant_min_max if auto_channel != {}: vis = Compiler.enforce_specified_channel(vis, auto_channel) vis._inferred_intent.extend(filters) # add back the preserved filters @staticmethod def enforce_specified_channel(vis: Vis, auto_channel: Dict[str, str]): """ Enforces that the channels specified in the Vis by users overrides the showMe autoChannels. Parameters ---------- vis : lux.vis.Vis Input Vis without channel specification. auto_channel : Dict[str,str] Key-value pair in the form [channel: attributeName] specifying the showMe recommended channel location. Returns ------- vis : lux.vis.Vis Vis with channel specification combining both original and auto_channel specification. Raises ------ ValueError Ensures no more than one attribute is placed in the same channel. """ # result of enforcing specified channel will be stored in result_dict result_dict = {} # specified_dict={"x":[],"y":[list of Dobj with y specified as channel]} specified_dict = {} # create a dictionary of specified channels in the given dobj for val in auto_channel.keys(): specified_dict[val] = vis.get_attr_by_channel(val) result_dict[val] = "" # for every element, replace with what's in specified_dict if specified for sVal, sAttr in specified_dict.items(): if len(sAttr) == 1: # if specified in dobj # remove the specified channel from auto_channel (matching by value, since channel key may not be same) for i in list(auto_channel.keys()): # need to ensure that the channel is the same (edge case when duplicate Cols with same attribute name) if ( auto_channel[i].attribute == sAttr[0].attribute and auto_channel[i].channel == sVal ): auto_channel.pop(i) break sAttr[0].channel = sVal result_dict[sVal] = sAttr[0] elif len(sAttr) > 1: raise ValueError( "There should not be more than one attribute specified in the same channel." ) # For the leftover channels that are still unspecified in result_dict, # and the leftovers in the auto_channel specification, # step through them together and fill it automatically. 
leftover_channels = list(filter(lambda x: result_dict[x] == "", result_dict)) for leftover_channel, leftover_encoding in zip(leftover_channels, auto_channel.values()): leftover_encoding.channel = leftover_channel result_dict[leftover_channel] = leftover_encoding vis._inferred_intent = list(result_dict.values()) return vis @staticmethod # def populate_wildcard_options(ldf: LuxDataFrame) -> dict: def populate_wildcard_options(_inferred_intent: List[Clause], ldf: LuxDataFrame) -> dict: """ Given wildcards and constraints in the LuxDataFrame's intent, return the list of available values that satisfies the data_type or data_model constraints. Parameters ---------- ldf : LuxDataFrame LuxDataFrame with row or attributes populated with available wildcard options. Returns ------- intent: Dict[str,list] a dictionary that holds the attributes and filters generated from wildcards and constraints. """ import copy from lux.utils.utils import convert_to_list inverted_data_type = lux.config.executor.invert_data_type(ldf.data_type) data_model = lux.config.executor.compute_data_model(ldf.data_type) intent = {"attributes": [], "filters": []} for clause in _inferred_intent: spec_options = [] if clause.value == "": # attribute if clause.attribute == "?": options = set(list(ldf.columns)) # all attributes if clause.data_type != "": options = options.intersection(set(inverted_data_type[clause.data_type])) if clause.data_model != "": options = options.intersection(set(data_model[clause.data_model])) options = list(options) else: options = convert_to_list(clause.attribute) for optStr in options: if str(optStr) not in clause.exclude: spec_copy = copy.copy(clause) spec_copy.attribute = optStr spec_options.append(spec_copy) intent["attributes"].append(spec_options) else: # filters attr_lst = convert_to_list(clause.attribute) for attr in attr_lst: options = [] if clause.value == "?": options = ldf.unique_values[attr] specInd = _inferred_intent.index(clause) _inferred_intent[specInd] = Clause( attribute=clause.attribute, filter_op="=", value=list(options), ) else: options.extend(convert_to_list(clause.value)) for optStr in options: if str(optStr) not in clause.exclude: spec_copy = copy.copy(clause) spec_copy.attribute = attr spec_copy.value = optStr spec_options.append(spec_copy) intent["filters"].extend(spec_options) return intent
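Putting these Compiler steps together: in a Jupyter notebook, setting an intent on a Lux-enabled pandas DataFrame and displaying it drives the pipeline above (wildcard expansion, data-type/model population, encoding determination). A minimal sketch follows; the CSV path and column name are hypothetical and only the public Lux intent API is assumed.

import lux
import pandas as pd

df = pd.read_csv("cars.csv")          # hypothetical dataset
df.intent = ["?", "MilesPerGal"]      # wildcard attribute plus one measure
df                                    # displaying the frame triggers intent expansion,
                                      # Vis compilation, and encoding determination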
def buffer(qgeometry, distance: float, resolution=None, cap_style=CAP_STYLE.flat,
           join_style=JOIN_STYLE.mitre, mitre_limit=None, overwrite=False):
    """Buffer `qgeometry` by `distance`, delegating each geometry's ``buffer``
    method (shapely-style keyword arguments) via ``_iter_func_geom_``.
    ``resolution`` and ``mitre_limit`` fall back to the values in
    ``DefaultMetalOptions`` when not supplied."""
    if mitre_limit is None:
        mitre_limit = DefaultMetalOptions.default_generic.geometry.buffer_mitre_limit
    if resolution is None:
        resolution = DefaultMetalOptions.default_generic.geometry.buffer_resolution

    def buffer_me(obj, *args, **kwargs):
        return obj.buffer(*args, **kwargs)

    return _iter_func_geom_(buffer_me, qgeometry, distance, resolution=resolution,
                            cap_style=cap_style, join_style=join_style,
                            mitre_limit=mitre_limit, overwrite=overwrite)
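Because the keyword arguments are passed straight through to each geometry's buffer call, a single geometry behaves exactly like a plain shapely buffer (assuming, as the cap/join options suggest, that the geometries are shapely objects). A standalone sketch, independent of `_iter_func_geom_` and `DefaultMetalOptions`, using shapely 1.x-style imports:

from shapely.geometry import LineString, CAP_STYLE, JOIN_STYLE

line = LineString([(0, 0), (1, 0)])
strip = line.buffer(0.1,
                    cap_style=CAP_STYLE.flat,
                    join_style=JOIN_STYLE.mitre,
                    mitre_limit=5.0)
print(round(strip.area, 3))  # 0.2: a flat-capped strip of width 2 * 0.1 around a unit segment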
# -*- coding: utf-8 -*-
"""Plane (1st order) surfaces."""
from .surfaces import SymbolicSurface


class Plane(SymbolicSurface):
    """Plane surface (not yet implemented)."""

    def __init__(self):
        super().__init__()
        raise NotImplementedError
/// Creates a new `RegisteredProtocol`. pub fn new(protocol: impl Into<ProtocolId>, versions: &[u8], handshake_message: Arc<RwLock<Vec<u8>>>) -> Self { let protocol = protocol.into(); let mut base_name = b"/substrate/".to_vec(); base_name.extend_from_slice(protocol.as_bytes()); base_name.extend_from_slice(b"/"); RegisteredProtocol { base_name, id: protocol, supported_versions: { let mut tmp = versions.to_vec(); tmp.sort_unstable_by(|a, b| b.cmp(&a)); tmp }, handshake_message, } }
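For orientation, a hypothetical call site is sketched below as comments; it assumes `ProtocolId` can be built from a `&str`, which is not shown here and may differ in the actual codebase.

// Hypothetical usage (types and conversions assumed, not verified here):
//
//   let handshake = Arc::new(RwLock::new(Vec::new()));
//   let proto = RegisteredProtocol::new("dot", &[4, 6, 5], handshake);
//   // proto.base_name          == b"/substrate/dot/".to_vec()
//   // proto.supported_versions == vec![6, 5, 4]   // sorted newest-first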
<filename>graph/src/blockchain/mod.rs<gh_stars>0 //! The `blockchain` module exports the necessary traits and data structures to integrate a //! blockchain into Graph Node. A blockchain is represented by an implementation of the `Blockchain` //! trait which is the centerpiece of this module. pub mod block_ingestor; pub mod block_stream; mod types; // Try to reexport most of the necessary types use crate::{ components::store::DeploymentLocator, data::subgraph::{Mapping, Source}, prelude::DataSourceContext, runtime::AscType, }; use crate::{ components::store::{BlockNumber, ChainStore}, prelude::{thiserror::Error, DeploymentHash, LinkResolver}, }; use anyhow::Error; use async_trait::async_trait; use slog; use slog::Logger; use std::collections::HashMap; use std::fmt::Debug; use std::sync::Arc; use web3::types::H256; pub use block_stream::{ BlockStream, ChainHeadUpdateListener, ChainHeadUpdateStream, TriggersAdapter, }; pub use types::{BlockHash, BlockPtr}; use self::block_stream::BlockStreamMetrics; pub trait Block: Send + Sync { fn ptr(&self) -> BlockPtr; fn parent_ptr(&self) -> Option<BlockPtr>; fn number(&self) -> i32 { self.ptr().number } fn hash(&self) -> BlockHash { self.ptr().hash } fn parent_hash(&self) -> Option<BlockHash> { self.parent_ptr().map(|ptr| ptr.hash) } } pub trait Blockchain: Sized + Send + Sync + 'static { type Block: Block; type DataSource: DataSource<C = Self>; type DataSourceTemplate; type Manifest: Manifest<Self>; type TriggersAdapter: TriggersAdapter<Self>; /// Trigger data as parsed from the triggers adapter. type TriggerData; /// Decoded trigger ready to be processed by the mapping. type MappingTrigger: AscType; /// Trigger filter used as input to the triggers adapter. type TriggerFilter: TriggerFilter<Self>; type NodeCapabilities: std::fmt::Display; type IngestorAdapter: IngestorAdapter<Self>; // type RuntimeAdapter: RuntimeAdapter; // ...WIP fn reorg_threshold() -> u32; fn triggers_adapter( &self, loc: &DeploymentLocator, capabilities: &Self::NodeCapabilities, ) -> Result<Arc<Self::TriggersAdapter>, Error>; fn new_block_stream( &self, deployment: DeploymentLocator, start_blocks: Vec<BlockNumber>, filter: Self::TriggerFilter, metrics: Arc<BlockStreamMetrics>, ) -> Result<BlockStream<Self>, Error>; fn ingestor_adapter(&self) -> Arc<Self::IngestorAdapter>; fn chain_store(&self) -> Arc<dyn ChainStore>; } pub type BlockchainMap<C> = HashMap<String, Arc<C>>; #[derive(Error, Debug)] pub enum IngestorError { /// The Ethereum node does not know about this block for some reason, probably because it /// disappeared in a chain reorg. #[error("Block data unavailable, block was likely uncled (block hash = {0:?})")] BlockUnavailable(H256), /// An unexpected error occurred. #[error("Ingestor error: {0}")] Unknown(Error), } impl From<Error> for IngestorError { fn from(e: Error) -> Self { IngestorError::Unknown(e) } } #[async_trait] pub trait IngestorAdapter<C: Blockchain> { fn logger(&self) -> &Logger; /// How many ancestors of the current chain head to ingest. For chains /// that can experience reorgs, this should be large enough to cover all /// blocks that could be subject to reorgs to ensure that `graph-node` /// has enough blocks in its local cache to traverse a sidechain back to /// the main chain even if those blocks get removed from the network /// client. 
fn ancestor_count(&self) -> BlockNumber; /// Get the latest block from the chain async fn latest_block(&self) -> Result<BlockPtr, IngestorError>; /// Retrieve all necessary data for the block `hash` from the chain and /// store it in the database async fn ingest_block(&self, hash: &BlockHash) -> Result<Option<BlockHash>, IngestorError>; /// Return the chain head that is stored locally, and therefore visible /// to the block streams of subgraphs fn chain_head_ptr(&self) -> Result<Option<BlockPtr>, Error>; /// Remove old blocks from the database cache and return a pair /// containing the number of the oldest block retained and the number of /// blocks deleted if anything was removed. This is generally only used /// in small test installations, and can remain a noop without /// influencing correctness. fn cleanup_cached_blocks(&self) -> Result<Option<(i32, usize)>, Error> { Ok(None) } } pub trait TriggerFilter<C: Blockchain>: Default + Clone + Send + Sync { fn from_data_sources<'a>( data_sources: impl Iterator<Item = &'a C::DataSource> + Clone, ) -> Self { let mut this = Self::default(); this.extend(data_sources); this } fn extend<'a>(&mut self, data_sources: impl Iterator<Item = &'a C::DataSource> + Clone); fn node_capabilities(&self) -> C::NodeCapabilities; } // ETHDEP: `Source` and `Mapping`, at least, are Ethereum-specific. pub trait DataSource: 'static + Sized + Send + Sync { type C: Blockchain; fn mapping(&self) -> &Mapping; fn source(&self) -> &Source; fn from_manifest( kind: String, network: Option<String>, name: String, source: Source, mapping: Mapping, context: Option<DataSourceContext>, ) -> Result<Self, Error>; fn name(&self) -> &str; fn kind(&self) -> &str; fn network(&self) -> Option<&str>; fn context(&self) -> Option<&DataSourceContext>; fn creation_block(&self) -> Option<BlockNumber>; /// Checks if `trigger` matches this data source, and if so decodes it into a `MappingTrigger`. /// A return of `Ok(None)` mean the trigger does not match. fn match_and_decode( &self, trigger: &<Self::C as Blockchain>::TriggerData, block: Arc<<Self::C as Blockchain>::Block>, logger: &Logger, ) -> Result<Option<<Self::C as Blockchain>::MappingTrigger>, Error>; fn is_duplicate_of(&self, other: &Self) -> bool; } #[async_trait] pub trait Manifest<C: Blockchain>: Sized { async fn resolve_from_raw( id: DeploymentHash, raw: serde_yaml::Mapping, resolver: &impl LinkResolver, logger: &Logger, ) -> Result<Self, Error>; fn data_sources(&self) -> &[C::DataSource]; fn templates(&self) -> &[C::DataSourceTemplate]; }
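To make the `Block` trait concrete, here is a purely illustrative toy implementation. It is not part of graph-node and assumes `BlockPtr` implements `Clone` (it is a hash plus a number in this module).

// Illustrative only, not part of graph-node.
struct ToyBlock {
    ptr: BlockPtr,
    parent: Option<BlockPtr>,
}

impl Block for ToyBlock {
    fn ptr(&self) -> BlockPtr {
        self.ptr.clone()
    }

    fn parent_ptr(&self) -> Option<BlockPtr> {
        self.parent.clone()
    }
    // number(), hash() and parent_hash() come from the trait's default methods.
}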
/**
 * End the boss fight
 * @param forced If the end of the fight was forced (By a command or otherwise). If forced is false it is assumed
 *               the fight was ended by killing the boss
 */
public static void endFight(boolean forced) {
    String endMessage = Messages.get("Status-Ended");
    if (forced) {
        endMessage = Messages.get("Status-Ended-Forced");
    }

    for (Player p : trackedPlayers) {
        TitleAPI.clearTitle(p);
        TitleAPI.sendTitle(p, 20, 200, 20, endMessage, null);
    }

    for (Player p : trackedPlayers) {
        removePlayer(p);
    }

    trackedPlayers.clear();
    playerBoss = null;
    started = false;
}
// ending in every node inside a cluster. The above connections are updating the queryPatter. private static void connectClustersAccordingToNode( List<Cluster> clusters, ORMNode interconnectNode, List<ORMNode> containedSchemaNodes, List<PatternNode> containedPatterNodes, ORMSchemaGraph schemaGraph, QueryPattern queryPattern) { for (Cluster cluster: clusters) { ORMNode schemaNode = schemaGraph.getORMNode(cluster.nodes.get(0).getReferredRelationName()); List<ORMNode> distinctContainedSchemaNodes = new ArrayList<>(containedSchemaNodes); distinctContainedSchemaNodes.add(schemaNode); Graph<ORMNode, NoLabel> path = schemaGraph.getPathConnecting2Nodes(interconnectNode, schemaNode); for (PatternNode node: cluster.nodes) { List<PatternNode> distinctContainedPatternNodes = new ArrayList<>(containedPatterNodes); distinctContainedPatternNodes.add(node); completeQueryPatternLikeGraph( queryPattern, path, distinctContainedSchemaNodes, distinctContainedPatternNodes ); } } }
<filename>src/app/pages/professeurs/professeurs.component.ts import { Component, OnInit } from "@angular/core"; import {ModalDismissReasons, NgbModal} from '@ng-bootstrap/ng-bootstrap'; import {MatDialog} from '@angular/material/dialog'; import {Router} from '@angular/router'; import {Eleve} from '../../../assets/models/eleve'; import {FormProfComponent} from './form-prof/form-prof.component'; import {Professeur} from '../../../assets/models/professeur'; import {DataHelperService} from '../../services/dataHelper.service'; import {Classe} from '../../../assets/models/classe'; import {FormClasseComponent} from '../classe/list-classe/form-classe/form-classe.component'; import {MessageService} from '../../services/message-service.service'; @Component({ selector: "app-inscription", templateUrl: "professeurs.component.html", styleUrls: ['./professeurs.component.css'] }) export class ProfesseursComponent implements OnInit { closeResult: string; dialogValue: Professeur; default = new Professeur(); professeurs: Professeur[]; sendValue: string; constructor( public dialog: MatDialog, private router: Router, private dataHelper: DataHelperService, private messageService: MessageService) { } ngOnInit() { this.dataHelper.getItems("Professeur").subscribe( res => { this.professeurs = res; }, error => console.log(error) ) } openDialog(item: Professeur): void { const dialogRef = this.dialog.open(FormProfComponent, { width: '500px', height: '590px', backdropClass: 'custom-dialog-backdrop-class', panelClass: 'custom-dialog-panel-class', data: { pageValue: item } }); dialogRef.afterClosed().subscribe(result => { console.log('The dialog was closed', result); if (result !== undefined){ this.dialogValue = result.data; if (result.data !== null) if (this.dialogValue.id == null){ this.dataHelper.addItem(this.dialogValue, "Professeur").subscribe( res => { this.professeurs.push(res); this.messageService.sendMessage("professeur enregistré avec succès") }, error => console.log(error) ) }else { this.dataHelper.updateItem(this.dialogValue, "Professeur").subscribe( res => { for (let i = 0; i++; i < this.professeurs.length) { if (this.professeurs[i].id = res.id) this.professeurs[i] = res } this.messageService.sendMessage("Professeur mis à jour avec succès") },error => console.log(error) ) } } }); } delete(items: Professeur){ console.log(items); this.dataHelper.deleteItem(items, "Professeur").subscribe( response => { this.professeurs = this.professeurs.filter(item => item.id !== items.id); this.messageService.sendMessage("Professeur Supprimer avec succès") }, error1 => { console.log(error1); } ) } getInfo(id: any){ console.log(id); this.router.navigate(["prof", id]) } }
/**
 * System: CleanBnB
 * Name: TeacherDaoImpl
 * Description: DAO implementation for persisting and querying TeacherEntity records
 *
 * @author carlosdeltoro
 * @version 1.0
 * @since 11/21/21
 */
public class TeacherDaoImpl extends SessionCourse implements TeacherDao {

    private SessionCourse _sessionCourse;

    public TeacherDaoImpl() {
        _sessionCourse = new SessionCourse();
    }

    @Override
    public void saveTeacher(TeacherEntity teacher) {
        _sessionCourse.getSession().persist(teacher);
        _sessionCourse.getSession().getTransaction().commit();
        _sessionCourse.closeSession();
    }

    @Override
    public List<TeacherEntity> findAllTeachers() {
        return _sessionCourse.getSession()
                .createQuery("SELECT a FROM TeacherEntity a", TeacherEntity.class).getResultList();
    }

    @Override
    public void deleteTeacher(long id) {
        // not yet implemented
    }

    @Override
    public void updateTeacher(TeacherEntity teacher) {
        // not yet implemented
    }

    @Override
    public TeacherEntity findById(long id) {
        return null; // not yet implemented
    }

    @Override
    public TeacherEntity findByName(String name) {
        return null; // not yet implemented
    }
}
// Help returns long-form help text.
func (c *CmdGenConfig) Help() string {
	txt := fmt.Sprintf(`
Usage:
    %s %s name

Description:
    %s

Required Args:
    name
        The software name that you want to generate a configuration for.
        Available names are "postfix" and "dovecot".
`, c.CmdName, c.SubCmdName, c.Synopsis())

	// Drop the leading newline that the raw string literal starts with.
	return txt[1:]
}
The owner of dachshund dogs and puppies worth £20,000 stolen from a house says she has received a phone call demanding a ransom.

Julie Knight, who is offering a reward for information leading to their return, believes a phone call demanding money for their return is a hoax and reported it to police.

Her pregnant dogs Fraya and Angel and four puppies were stolen in a burglary at her home in Swansea on July 23, along with a safe containing cash and jewellery and a Mercedes car which was found later.

The two dogs are due to give birth to their first litters of puppies, while the stolen puppies, now 11 days old, are likely to die without being fed by their mother Cleo, who was left behind by the burglars.

Mrs Knight said a caller rang the telephone number featured on posters appealing for the dogs' return, demanding a £3,000 ransom for their safe return.

She said she wouldn't pay a ransom and believes the call was a hoax because when she asked to see photographs of her dogs the caller refused.

Mrs Knight is offering a reward of an undisclosed sum for anyone with information leading to the return of her pedigree dogs.

Anyone with any information is asked to ring 07414 702 430 or 01792 896 869 or Crimestoppers on 0800 555111.
President and Chief Operating Officer at AirTronic USA Richard Vandiver says that his company began cooperation with Ukraine two years ago, and first batches of Precision Shoulder-fired Rocket Launchers (PSRLs) were shipped to Ukraine in 2016, VoA said. The PSRL is a U.S.-manufactured advanced version of the Soviet-developed RPG-7. Vandiver says that PSRL shipments continue in "very close coordination" with the U.S. Embassy in Kyiv, the U.S. Department of State, the Pentagon, and the Ukrainian government, yet he declines to disclose details. The private arms manufacturer also said it had not been easy to obtain a DSP-5 export license from the U.S. Directorate of Defense Trade Controls, as the supplies should be in line with the Minsk 2 peace agreement on Donbas. As Vandiver assures, everything went through the traditional process of filling out documents for obtaining the license. Read alsoTrump to be presented with $47mln deal to arm Ukraine against Russia – ABC NewsAccording to Vandiver, the U.S. Embassy in Kyiv coordinates actions with the Ministry of Defense of Ukraine to monitor how U.S. weapons are used in Ukraine. The business contract also prohibits Ukrainian consumers from reselling or re-exporting the manufacturer's products. The contract is updated every year depending on Ukraine's needs and is open-ended, he said. The Embassy of Ukraine in the United States also joined the process. Ukraine's Ambassador to the United States Valeriy Chaly says that contracts and licenses are both political decisions and commercial issues. "Creating conditions for establishing such cooperation is what we have been doing. We once brought together AirTronic USA and [Ukraine's arms manufacturing concern] Ukroboronprom, and they further developed this cooperation," the ambassador said. The companies started to establish first contacts in 2015. Valery Chaly notes that there is a difference between such commercial contracts and the U.S. military assistance Ukraine expects to obtain, as the latter suggests more powerful defensive weapons. The AirTronic USA PSRL is a lethal type of weapons, yet it has a limited destruction range, up to 1,000 meters, so if it is used from the territory of Ukraine, its shells are unable to cross the contact line with the Russian-occupied areas in eastern Ukraine. Thus, the U.S. government guarantees that the PSRLs will only be used for defensive purposes, VoA said.
// filter/clusterAccessLog.go
package filter

import (
	"time"

	motan "github.com/weibocom/motan-go/core"
)

type ClusterAccessLogFilter struct {
	next motan.ClusterFilter
}

func (t *ClusterAccessLogFilter) GetIndex() int {
	return 1
}

func (t *ClusterAccessLogFilter) GetName() string {
	return ClusterAccessLog
}

func (t *ClusterAccessLogFilter) NewFilter(url *motan.URL) motan.Filter {
	return &ClusterAccessLogFilter{}
}

func (t *ClusterAccessLogFilter) Filter(haStrategy motan.HaStrategy, loadBalance motan.LoadBalance, request motan.Request) motan.Response {
	start := time.Now()
	response := t.GetNext().Filter(haStrategy, loadBalance, request)
	doAccessLog(clientAgentRole, "", t.GetName(), start, request, response)
	return response
}

func (t *ClusterAccessLogFilter) HasNext() bool {
	return t.next != nil
}

func (t *ClusterAccessLogFilter) SetNext(nextFilter motan.ClusterFilter) {
	t.next = nextFilter
}

func (t *ClusterAccessLogFilter) GetNext() motan.ClusterFilter {
	return t.next
}

func (t *ClusterAccessLogFilter) GetType() int32 {
	return motan.ClusterFilterType
}
// loop for handling metric events (appends and Get/Update DB responses) // TODO: we can use multiple Go routines and spread the metrics across based on Hash LSB func (mc *MetricsCache) Start() error { go func() { for { select { case resp := <-mc.responseChan: mc.rmapMtx.Lock() metric, ok := mc.requestsMap[resp.ID] delete(mc.requestsMap, resp.ID) mc.rmapMtx.Unlock() respErr := resp.Error if respErr != nil { mc.logger.ErrorWith("failed v3io Update request", "metric", resp.ID, "err", respErr, "request", *resp.Request().Input.(*v3io.UpdateItemInput).Expression, "key", resp.Request().Input.(*v3io.UpdateItemInput).Path) } else { mc.logger.DebugWith("Process Update resp", "id", resp.ID, "request", *resp.Request().Input.(*v3io.UpdateItemInput).Expression, "key", resp.Request().Input.(*v3io.UpdateItemInput).Path) } resp.Release() if ok { metric.Lock() if respErr == nil { metric.store.ProcessWriteResp() } else { metric.retryCount++ if metric.retryCount == MAX_WRITE_RETRY { metric.err = errors.Wrap(respErr, "chunk update failed") } } err := metric.store.WriteChunks(mc, metric) if err != nil { mc.logger.ErrorWith("Submit failed", "metric", metric.Lset, "err", err) metric.err = errors.Wrap(err, "chunk write submit failed") } metric.Unlock() } else { mc.logger.ErrorWith("Req ID not found", "id", resp.ID) } case app := <-mc.asyncAppendChan: metric := app.metric metric.Lock() if metric.store.GetState() == storeStateInit { err := metric.store.GetChunksState(mc, metric, app.t) if err != nil { metric.err = err } } metric.store.Append(app.t, app.v) if metric.store.IsReady() { err := metric.store.WriteChunks(mc, metric) if err != nil { mc.logger.ErrorWith("Async Submit failed", "metric", metric.Lset, "err", err) metric.err = err } } metric.Unlock() case resp := <-mc.getRespChan: mc.rmapMtx.Lock() metric, ok := mc.requestsMap[resp.ID] delete(mc.requestsMap, resp.ID) mc.rmapMtx.Unlock() respErr := resp.Error if respErr != nil { mc.logger.DebugWith("failed v3io GetItem request", "metric", resp.ID, "err", respErr, "key", resp.Request().Input.(*v3io.GetItemInput).Path) } else { mc.logger.DebugWith("Process GetItem resp", "id", resp.ID, "key", resp.Request().Input.(*v3io.GetItemInput).Path) } if ok { metric.Lock() metric.store.ProcessGetResp(mc, metric, resp) if metric.store.IsReady() { err := metric.store.WriteChunks(mc, metric) if err != nil { mc.logger.ErrorWith("Async Submit failed", "metric", metric.Lset, "err", err) metric.err = err } } metric.Unlock() } else { mc.logger.ErrorWith("GetItem Req ID not found", "id", resp.ID) } resp.Release() } } }() return nil }
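The TODO at the top of this loop suggests spreading metric events across several goroutines keyed by the low bits of a metric hash. A minimal, self-contained sketch of that idea follows; all names, types and channel sizes are invented for illustration and this is not the implementation used above.

// Sketch of the sharding idea from the TODO: route events for the same metric
// to the same worker goroutine using the low bits of a hash of the metric name.
package sharding

const numShards = 8 // power of two so the LSB mask below works

type appendEvent struct {
	metricHash uint64
	t          int64
	v          float64
}

// startSharded launches one event loop per shard and returns the input channels.
func startSharded(handle func(appendEvent)) []chan appendEvent {
	shards := make([]chan appendEvent, numShards)
	for i := range shards {
		ch := make(chan appendEvent, 1024)
		shards[i] = ch
		go func() {
			for ev := range ch {
				handle(ev) // per-metric ordering is preserved within a shard
			}
		}()
	}
	return shards
}

// dispatch picks the shard from the hash's least-significant bits.
func dispatch(shards []chan appendEvent, ev appendEvent) {
	shards[ev.metricHash&(numShards-1)] <- ev
}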
import { ScaleValue, ScaleItem } from "../types" /** * Takes a keypress on a div that's imitating a radio and determines where the selection should go. * Uses the following as a guide, with one modification - allowing space press on a selected item to unselect: * https://www.w3.org/TR/wai-aria-practices/examples/radio/radio-1/radio-1.html */ const SCALE_VALUE_RESPONSE = new Map<number, ScaleValue>([ [-1, -1], [1, 1], [2, 2], [3, 3], [4, 4], [5, 5], ]) const determineSelectionFromKeyPress = ( keyCode: number, currentSelection: ScaleItem | null, focusedItem: ScaleItem ): null | ScaleValue => { const supportedKeyCodes = [32, 37, 38, 39, 40] if (supportedKeyCodes.indexOf(keyCode) === -1) { return null } const spacePressed = keyCode === 32 const backPressed = keyCode === 37 || keyCode === 38 const forwardPressed = keyCode === 39 || keyCode === 40 const noCurrentSelection = !currentSelection || currentSelection.value <= 0 if (spacePressed) { return noCurrentSelection ? focusedItem.value : -1 } if (noCurrentSelection || !currentSelection) { if (backPressed) { return oneSelectionBackward(focusedItem.value) } return oneSelectionForward(focusedItem.value) } if (backPressed) { return oneSelectionBackward(currentSelection.value) } if (forwardPressed) { return oneSelectionForward(currentSelection.value) } return null } const oneSelectionForward = (value: ScaleValue) => { if (value === 5) { return 1 } const calculatedPosition = SCALE_VALUE_RESPONSE.get(value + 1) return calculatedPosition || null } const oneSelectionBackward = (value: ScaleValue) => { if (value === 1) { return 5 } const calculatedPosition = SCALE_VALUE_RESPONSE.get(value - 1) return calculatedPosition || null } export default determineSelectionFromKeyPress
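A few hypothetical calls illustrate the resulting keyboard behaviour; the object literals and casts assume only that `ScaleItem` carries at least a `value` field.

// Hypothetical usage; casts are only to satisfy the ScaleItem type.
const focused = { value: 3 } as ScaleItem

// ArrowRight (39) with nothing selected: move one step forward from the focused item.
determineSelectionFromKeyPress(39, null, focused) // -> 4

// Space (32) on an already-selected item: unselect.
determineSelectionFromKeyPress(32, { value: 3 } as ScaleItem, focused) // -> -1

// ArrowUp (38) when 1 is selected: wraps around to 5.
determineSelectionFromKeyPress(38, { value: 1 } as ScaleItem, focused) // -> 5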
-- fhs-project/hello-web/Main.hs
{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty

hi :: ActionM ()
hi = html "Hello!"

main :: IO ()
main = scotty 3000 $ do
    get "/hi" $ hi
    get "/hello" $ hi
    get "/echo/:word" $ do
        word <- param "word"
        html $ mconcat [ "<h1>Hello! You said ", word, "!</h1>" ]
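Once the server is running, the routes can be exercised from a shell; the port comes from the `scotty 3000` call above, and the run command shown is just one plausible option.

-- After e.g. `runghc Main.hs` (or the project's usual build tool):
--   curl http://localhost:3000/hi        -- Hello!
--   curl http://localhost:3000/echo/dog  -- <h1>Hello! You said dog!</h1>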
// misfits42/advent_of_code_2017
use super::utils::collections::Spinlock;

#[aoc_generator(day17)]
fn generate_input(input: &str) -> usize {
    return input.trim().parse::<usize>().unwrap();
}

#[aoc(day17, part1)]
fn solve_part_1(input: &usize) -> usize {
    // Create a new spinlock
    let skip_size = *input;
    let mut spinlock = Spinlock::new(skip_size);
    // Conduct insertions into spinlock
    for value in 1..=2017 {
        spinlock.skip_forward();
        spinlock.insert_after_cursor(value);
    }
    // Return the value after the last value inserted into the spinlock
    return spinlock.peek_after_cursor();
}

#[aoc(day17, part2)]
fn solve_part_2(input: &usize) -> usize {
    // Initialise values to keep track of result
    let mut value_after_0: usize = 0;
    let skip_size = *input;
    let mut cursor = 0;
    for value in 1..=50000000 {
        // Skip cursor forward within circular buffer and move to new insert index
        cursor = (cursor + skip_size) % value + 1;
        // Check if a new value would be inserted directly after the value 0 - which remains at
        // index 0 due to implementation of the spinlock
        if cursor == 1 {
            value_after_0 = value;
        }
    }
    return value_after_0;
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_d17_p1_proper() {
        let input = generate_input(&std::fs::read_to_string("./input/2017/day17.txt").unwrap());
        let result = solve_part_1(&input);
        assert_eq!(1642, result);
    }

    #[test]
    fn test_d17_p2_proper() {
        let input = generate_input(&std::fs::read_to_string("./input/2017/day17.txt").unwrap());
        let result = solve_part_2(&input);
        assert_eq!(33601318, result);
    }
}
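To see why part 2 only needs the insert position rather than the whole buffer, trace the cursor arithmetic for a few steps with the example step size of 3 from the original puzzle description:

// cursor = (cursor + skip) % value + 1, starting from cursor = 0, skip = 3:
//   value = 1: (0 + 3) % 1 + 1 = 1  -> lands right after index 0, value_after_0 = 1
//   value = 2: (1 + 3) % 2 + 1 = 1  -> lands right after index 0, value_after_0 = 2
//   value = 3: (1 + 3) % 3 + 1 = 2  -> lands elsewhere, value_after_0 stays 2
// Index 0 always holds the value 0, so tracking insertions at index 1 is enough.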
package gruff import ( "fmt" ) // Inference is an edge from the target (a Claim or Argument) of an Argument // to the Argument that is making the inference type Inference struct { Edge } // ArangoObject interface func (i Inference) CollectionName() string { return "inferences" } func (i Inference) ArangoKey() string { return i.Key } func (i Inference) ArangoID() string { return fmt.Sprintf("%s/%s", i.CollectionName(), i.ArangoKey()) } func (i Inference) DefaultQueryParameters() ArangoQueryParameters { return DEFAULT_QUERY_PARAMETERS } func (i *Inference) Create(ctx *ServerContext) Error { return CreateArangoObject(ctx, i) } func (i *Inference) Update(ctx *ServerContext, updates Updates) Error { return NewServerError("This item cannot be modified") } func (i *Inference) Delete(ctx *ServerContext) Error { return DeleteArangoObject(ctx, i) }
package com.codeborne.selenide.conditions; import com.codeborne.selenide.CheckResult; import com.codeborne.selenide.Condition; import com.codeborne.selenide.Driver; import org.openqa.selenium.WebElement; import javax.annotation.Nonnull; import javax.annotation.ParametersAreNonnullByDefault; @ParametersAreNonnullByDefault public abstract class TextCondition extends Condition { private final String expectedText; protected TextCondition(String name, String expectedText) { super(name); this.expectedText = expectedText; } protected abstract boolean match(String actualText, String expectedText); protected String getText(Driver driver, WebElement element) { return element.getText(); } @Nonnull @Override public CheckResult check(Driver driver, WebElement element) { String elementText = getText(driver, element); return new CheckResult(match(elementText, expectedText), String.format("text=\"%s\"", elementText)); } @Override public final String toString() { return String.format("%s \"%s\"", getName(), expectedText); } }
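As a sketch of how this extension point is used, a hypothetical subclass is shown below; Selenide ships its own concrete text conditions, so this class is illustrative only and not part of the library.

// Illustrative subclass, not part of Selenide.
import com.codeborne.selenide.conditions.TextCondition;

class TrimmedTextCondition extends TextCondition {
  TrimmedTextCondition(String expectedText) {
    super("trimmed text", expectedText);
  }

  @Override
  protected boolean match(String actualText, String expectedText) {
    return actualText != null && actualText.trim().equals(expectedText.trim());
  }
}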
import numpy as np
import tensorflow as tf

"""
Disclaimer: the weight_variable_glorot function from this file comes from
tkipf/gae original repository on Graph Autoencoders
"""


def weight_variable_glorot(input_dim, output_dim, name=""):
    """Create a weight variable with Glorot & Bengio (AISTATS 2010) initialization."""
    init_range = np.sqrt(6.0 / (input_dim + output_dim))
    initial = tf.random_uniform([input_dim, output_dim],
                                minval=-init_range,
                                maxval=init_range,
                                dtype=tf.float32)
    return tf.Variable(initial, name=name)
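The `init_range` above is the Glorot/Xavier bound sqrt(6 / (fan_in + fan_out)). A typical call in the same TF 1.x graph-mode style, with arbitrary example layer sizes:

# Hypothetical usage, TF 1.x graph mode:
W0 = weight_variable_glorot(input_dim=1433, output_dim=32, name="gcn_weights_0")
# For input_dim=1433 and output_dim=32 the weights are drawn uniformly from
# roughly [-0.0640, 0.0640], since sqrt(6 / 1465) ~= 0.0640.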
Advertisement Photo: Nautilus Minerals Like a Tank: The heavy-duty equipment for mining the ocean floor, like the partially assembled machine shown here, is built to withstand punishing conditions 1,600 meters below the waves. For decades, futurists have predicted that commercial miners would one day tap the unimaginable mineral wealth of the world’s ocean floor. Soon, that subsea gold rush could finally begin: The world’s first deep-sea mining robots are poised to rip into rich deposits of copper, gold, and silver 1,600 meters down at the bottom of the Bismarck Sea, near Papua New Guinea. The massive machines, which are to be tested sometime in 2016, are part of a high-stakes gamble for the Toronto-based mining company Nautilus Minerals. Photos: Nautilus Minerals I, Robot Miner: Three different types of remotely operated machines will be used: a “bulk cutter” [top], a “collecting machine” [center], and an “auxiliary cutter” [see photo “Like A Tank,” above]. Clenched in a manipulator arm is a sample of the kind of metal-rich rock these robots will retrieve from the ocean floor [bottom]. Nautilus’s machines have been ready to go since 2012, when a dispute between the firm and the Papua New Guinean government stalled the project. What broke the impasse was the company’s offer, in 2014, to provide Papua New Guinea with certain intellectual property from the mining project. The deal enabled Nautilus to get financing to build a €127 million ship, the first of its kind, which will deploy the subsea mining robots and process the ore they recover. This 227-meter-long production vessel is now being built in a Chinese shipyard and is scheduled to depart for Papua New Guinea in early 2018. The mining robots were built for Nautilus by Soil Machine Dynamics, based in the United Kingdom, which supplies construction equipment for laying undersea cables, servicing offshore oil platforms, and other heavy-duty deep-sea jobs. The main robots are a pair of tractor-trailer-size excavators. One uses 4-meter-wide counterrotating heads studded with tungsten carbide picks to chew through the metal-rich chimneys that form around superhot water spewing from sulfurous vents in the seafloor. Its partner adds brute strength, using a studded drum that is 2.5 meters in diameter and 4 meters wide to pulverize rock walls. Dredge pumps built into these machines will push the smashed ore back to a central pile on the seafloor, where a third Nautilus robot will feed a slurry of crushed rock and water up a pipe dangling from the production vessel. There the water will be wrung out from the ore, which will be loaded on another ship and carried to China for processing. Matter of Fact A ship called the Hughes Glomar Explorer was constructed in the 1970s, ostensibly for deep-sea mining, although it was in fact used to recover a sunken Soviet submarine. As 2015 drew to a close, Nautilus was still negotiating for access to a shallow-water site for an initial subsea test of these machines, which it hoped to begin in mid-2016. The plan is to do some rock cutting, though in an interview Nautilus’s CEO, Michael Johnston, says it is “difficult getting materials that are a good proxy for the materials we’ll be mining.” If time allows, the machines will also get a deep-sea trial before they are integrated with the production vessel, Johnston adds. Barring that, they will have to prove their stuff at Nautilus’s first mining site, called Solwara 1, which is located some 30 kilometers from shore in Papua New Guinea’s New Ireland province. 
Assuming all goes well, the robotic diggers will spend 30 months scouring the Solwara 1 site, bringing up 2.5 million metric tons of ore containing metals worth more than US $1.5 billion at today’s prices. Next, the robots will likely set to work on one of Nautilus’s 18 other prospects in the Bismarck Sea or one of its 19 discoveries off the shores of the Polynesian archipelago of Tonga. Illustration: James Provost Competitors are staking out deep-sea mining sites of their own, with much of the development activity focused on rich deposits of polymetallic nodules in a vast region southeast of Hawaii known as the Clarion-Clipperton Fracture Zone. The potato-size nodules, found in waters more than 4 km deep, contain manganese along with nickel, cobalt, and other metals. But some marine biologists warn that deep-sea mining interests are outpacing the readiness of scientists and governments to assess and manage the environmental impact. Verena Tunnicliffe, a specialist in deep-sea vent ecosystems at the University of Victoria, in British Columbia, Canada, says robo-miners will strip away deep-sea ecosystems that are as unique as they are poorly understood. Johnston points out that Nautilus is taking pains to study these ecosystems and will protect them to the extent possible. A refuge zone within the leased area, for example, will provide a source of local fauna for recolonization of the company’s deep-sea strip mine. Tunnicliffe worries that this vision for recolonization could prove wildly optimistic: “The habitat is going to be pulverized, and the energy flow of the system will be completely altered. I do not believe recolonization of these types of populations is going to happen.” Other marine biologists are more sanguine, however. With luck, the mining will prove no more devastating to these vent communities over the long term than the frequent earthquakes and outpourings of lava that these amazing deep-sea creatures are somehow able to survive. This article originally appeared in print as “Robot Miners of the Briny Deep.”
Incidence and impact of surgical site infections on length of stay and cost of care for patients undergoing open procedures Background Surgical site infections (SSIs) are associated with increased morbidity and mortality; however, current SSI rates across open procedures and their effect on healthcare delivery are unknown. The objective of this study was to examine incidence of SSIs for open surgical procedures in the United States and impact on length of stay (LOS) and costs. Methods This retrospective study utilizing 2019–2020 data from Medicare and Premier identified patients with SSIs occurring during hospitalization or within 90 days of discharge. Propensity score matching was used to calculate incremental LOS and costs attributable to SSIs. Mean LOS and costs attributable to SSIs for the index admission, readmissions, and outpatient visits were summed by procedure and Charlson Comorbidity Index score to estimate the overall impact of an SSI on LOS and costs across healthcare settings. Results SSI rates were 2.0% for 2,696,986 Medicare and 1.8% for 1,846,254 Premier open surgeries. Total incremental LOS and cost per SSI, including index admission, readmissions, and outpatient visits were 9.3 days and $18,626 for Medicare patients and 7.8 days and $20,979 for Premier patients. SSI rates were higher for urgent/emergency surgeries compared to overall SSI rates. Although less common that superficial SSIs, deep SSIs resulted in higher incremental LOS and index costs for the index admission and for SSI-related readmissions. Conclusions This study of SSIs utilizing two large national databases provides robust data and analytics reinforcing and bolstering current evidence that SSIs occur infrequently but are detrimental to patients in terms of increased LOS and care costs. Background Surgical site infections (SSIs) are post-operative infections of incisions, organs, and/or spaces involved in a surgical procedure and result in increased patient morbidity and mortality . Many patient and facility level factors can increase SSI risk, but it is estimated that approximately half of SSIs may be eliminated or mitigated with the use of emergent, evidence-based practices . Patients who experience SSIs often have increased health care utilization, including increased length of hospital stay (LOS), emergency department visits, readmissions, and outpatient visits . As a result, SSIs are costly care complications in terms of both patient health and increased financial costs to patients, providers, and payers . Several studies have examined incidence of SSIs and their negative impact on patient outcomes and costs . However, literature on postoperative complications across surgical subspecialties is limited , and no large national studies have been conducted recently to provide updated rates of SSIs and their impact on LOS, readmissions, and costs of care, especially using data collected during the COVID-19 pandemic. The impact of underlying comorbidities on the risk of developing an SSI has not been quantified by comorbidity score across surgical categories. This information is crucial to determining where implementation of evidence-based interventions shown to reduce SSIs and associated negative outcomes can provide the most benefit for patients. The objective of this study was to identify incidence of SSIs in patients undergoing open surgical procedures in the United States in 2019-2020 by surgical category and comorbidity level and to examine the impact of SSIs on LOS and costs. patients. 
PHD contains deidentified clinical and cost data from over 10 million inpatient admissions a year, representing approximately 25% of U.S. inpatient admissions as well as data from outpatient visits to emergency departments and ambulatory surgery centers . This study utilized deidentified data and was exempt from IRB review. Study population. Patients who underwent common open surgical procedures in 2019-2020, including cardiac, general abdominal, obstetrical/gynecological, orthopedic, vascular, and skin/subcutaneous tissue/ breast procedures and had a least one ICD-10 procedure code with the 1st and the 5th character equal to "0", designating an open procedure were included in the analysis. The diagnostic-related group (DRG) assigned at discharge was used to determine the overall surgical category for each patient. As DRGs are used to identify similar patterns of resource use and assigned based on clinical coherence such as a common organ system or clinical specialty and not strictly on surgical approach, using DRGs alone was not sufficient to determine if a patient had an open procedure. Although the descriptions of several included DRGs indicate a closed procedure (i.e., percutaneous intracardiac), patients included in this analysis had at least one open procedure as indicated by an ICD-10 procedure code which are more granular than DRGs. By definition the study excluded patients who only underwent percutaneous or laparoscopic procedures. Measures. SSIs were identified using an ICD-10 diagnosis code (T81. 4) and DRGs for postoperative infections (856-858, 862,863) and were included in the analysis if they were documented during index hospitalization as a secondary diagnosis or were listed as a primary diagnosis for a readmission, outpatient visit, or professional provider's service (Medicare only) within 90 days of discharge. SSIs were categorized as deep, superficial, organ/space SSIs, or other SSIs (e.g., other surgical sites, postoperative sepsis). Patients with multiple types of documented SSIs were considered to have a deep SSI if ≥1 deep SSI was documented, an organ/space SSI if ≥1 organ/space SSI was documented and no deep SSIs were documented, or a superficial SSI if nodeep or organ/space SSIs were documented . Charlson Comorbidity Index (CCI) scores were calculated for each patient based on the comorbidities documented during the index admission or in the two years prior. Levels of comorbidity were classified based on CCI score as none (0), mild (1-2), moderate (3)(4), or severe (≥5). Costs represent the amount paid by Medicare to providers for services rendered or the estimated cost of care provided by Premier facilities calculated with relative value units or cost-to-charge ratios. Statistical analysis. Patient demographics and comorbidities in both the Medicare and Premier populations were examined using counts and percentages. SSI incidence rates were calculated for each surgical category as well as by CCI levels. As the primary study objective was to examine the impact of SSIs on LOS and costs, propensity scoring was used to create a matched set of patients with SSIs and a control group without SSIs during the index admission to determine the incremental effect of an SSI on index LOS and care costs . Propensity score matching allowed for the control of underlying risk factors that may make a patient more susceptible to SSI and also be independently associated with greater health care utilization and costs. 
Controlling for this relationship prevented an overestimation of the impact of SSI on LOS and cost. To perform the propensity score matching, a multi-level random-intercept logistic regression model was used to estimate the effect of SSI predictors on the odds of developing an SSI during the index admission with a binary indicator for SSI as the dependent variable. The hospital facility was included as a random intercept to account for within-hospital clustering. The following SSI risk factors were included in the model: surgical category ; type of admission (elective or non-elective) ; CCI score ; blood transfusion ; alcohol, drug, and/or nicotine use ; and comorbidities including chronic obstructive pulmonary disease (COPD) ; peripheral vascular disease ; congestive heart failure ; rheumatic disease ; hypertension ; obesity ; and blood disorders . A propensity score was calculated from the model for each observation indicating the likelihood of developing an SSI during the index admission and used to match individual patients with SSIs 1:1 to a control group of patients without SSIs in the same surgical category. Matching criterion for the propensity scores was set to ≤0.01 to ensure patients were well matched in terms of observed characteristics and SSI risk. The distributions of the independent variables included in the regression model were compared between groups before and after matching to ensure that covariate balance was achieved. Differences in inpatient LOS and costs for the index hospitalization were calculated between each matched pair, with observed differences attributed to SSIs. The additional LOS and cost due to an SSI readmission were calculated as the mean readmission LOS and cost for all patients who were readmitted with a primary diagnosis of SSI. The additional costs of SSIrelated outpatient visits were calculated as the mean cost of outpatient visits with primary diagnosis of SSI within 90 days after discharge. To determine the overall impact of an SSI on LOS and cost in terms of prolonged hospitalization and SSI-related readmissions and outpatient visits, SSI-related LOS were calculated as the weighted average of incremental LOS due to index SSIs and LOS of SSI readmissions. The SSIrelated cost was calculated as the weighted average of costs from index SSIs, SSI readmissions, and SSI outpatient visits. Percentages of index SSIs, SSI readmissions, and SSI outpatient visits in each cohort were utilized as the weights. Observations with missing data or a cost of $0 were excluded from the calculations. A two-way ANOVA was used to test if the surgical category and CCI had inter-group effects on increased LOS and costs due to SSI. The model indicated a statistically significant (p < 0.0001) effect for surgical category and CCI group, thus SSI-related LOS and costs were calculated by surgical category and CCI group. All statistical analyses were conducted using SAS (9.4; SAS Institute Inc., Cary, NC). A p-value of <0.05 was considered statistically significant. Results The study population included 2,696,986 Medicare patients and 1,846,254 Premier patients who underwent common open surgical procedures from 2019 to 2020 (Table 1). Although there were some similarities between Medicare and Premier patients, the Medicare population was older on average (71.9 vs. 61.7 years), underwent more orthopedic procedures (64.1% vs. 55.5%) and fewer obstetric/gynecological procedures (1.3% vs. 
9.0%), and had more comorbidities compared to the Premier population as reflected by a higher CCI (3.1 vs. 2.2). The percentage of patients undergoing elective surgeries versus emergency, urgent, or trauma procedures was similar between the populations. SSI rates by surgical type and CCI score for Medicare patients and Premier patients are documented in Tables 2-3 and Fig. 1. The overall SSI rate was 2.0% in the Medicare population and 1.8% in the Premier population, and the SSI rate for urgent and emergency surgeries in these populations was 2.6% and 2.2% respectively. Overall SSI rates varied across surgical categories and ranged from 1.4% for orthopedic procedures to 5.7% for abdominal procedures for Medicare patients and from 1.0% for obstetric/gynecological procedures to 5.0% for abdominal procedures for Premier patients. SSI rates increased with increased levels of comorbidities across surgical categories in both populations. Superficial SSIs were more commonly observed than deep and organ/ space SSIs across groups. The effect of surgical category and specific comorbidities on the odds of developing an SSI obtained from the logistic regression model used for propensity scoring are displayed in Table 4. Odds of SSI were highest for Medicare patients undergoing general abdominal or obstetric/gynecologic procedures and for Premier patients undergoing general abdominal, vascular, or skin/subcutaneous tissue/breast surgeries. Comorbid conditions increased the odds of SSI, with blood disorders in Medicare patients (OR = 2.40, 95% CI: 2.35-2.46) and obesity in Premier patients (OR = 1.53, 95% CI: 1.50-1.57) resulting in the largest increases in odds of SSI. The impact of an SSI in terms of additional LOS and costs for Medicare and Premier patients is described in Table 5 Additional details regarding the impact of SSIs on incremental LOS and costs across the continuum of care for Medicare and Premier patients can be found in eTables 1-2. Approximately 20% (n = 10,954) of Medicare patients and 23% (n = 7337) of Premier patients experienced an SSI during the index admission, resulting in an average increased LOS of 11.5 days and 12.0 days and increased costs of $19,661 and $32,771 respectively. Additionally, 65.1% (n = 35,682) of Medicare patients with an SSI and 59.8% (n = 19,345) of Premier patients with an SSI had an SSI-related readmission in the 90 days following the index admission, with an average LOS of 10.8 days and 8.5 days and associated costs of $22,375 and $21,916 respectively. Approximately 19% (n = 10,408) of Medicare patients with an SSI had at least one SSI-related outpatient visit with an associated cost of $629 per SSI. Approximately 24% (n = 7760) of Premier patients with an SSI had at least one SSI-related outpatient visit with an associated cost of $1843 per SSI. Although less common than superficial SSIs, deep SSIs resulted in higher incremental LOS and index costs for the index admission and for SSIrelated readmissions. Discussion This retrospective analysis of patients undergoing open surgical procedures in the United States in 2019-2020 utilizing two large national databases, provides strong and consistent evidence that SSIs result in significant increases in LOS and costs despite low rates of occurrence across most surgical categories. Overall SSI rates were approximately 2% in the both the Medicare and Premier populations. 
SSI rates for urgent and emergency procedures were higher than the overall SSI rate in both the Medicare (2.6%) and Premier (2.2%) population, likely due to inability to mitigate certain risk factors for patients in need of urgent care. Overall SSI rates varied by surgical category but were highest for general abdominal surgeries for both Medicare (5.7%) and Premier (5.0%) patients. Medicare and Premier patients with SSIs experienced a total 9.3 and 7.8 day increase in average LOS and increased costs of $18,626 and $20,979 respectively across the index hospitalization and SSI-related readmissions and outpatient visits. Across all surgical categories, SSI rates, SSI-related LOS, and costs increased as CCI scores increased. Superficial SSIs were more common than deep and organ/ space SSIs for both Medicare and Premier patients, and although these SSIs were not as costly as deep SSIs, they did result in an increased LOS of 9.2 days for Medicare patients and 7.3 days for Premier patients and increased of costs of $18,383 and $18,182 respectively. Extrapolating the average overall impact of SSIs across the 54,779 Medicare and 32,352 Premier patients who experienced an SSI results in an additional 509,444 and 252,345 in hospital days and more than $1.02 billion and $678 million in potentially avoidable costs to the respective health systems over a two year period (including one year impacted by . Given that the PHD represents approximately 25% of annual inpatient admissions in the United States, the impact of SSIs nationally can be estimated as an additional 504,690 hospital days and over $1.3 billion in additional healthcare costs annually. This study provides updated SSI rates for open surgeries and quantifies the impact of SSIs on LOS and costs across the care continuum at a national level in two large patient populations. It is the first large study to stratify risk of SSI by both surgery type and level of comorbidities and examines the impact of SSIs on the index admission as well as readmissions and outpatient visits. This study is also the first to report on the topic for the period when health systems were heavily taxed by COVID-19. Our findings that SSIs have low rates of occurrence across most surgical types but greatly impact LOS and health care costs are similar to other studies examining the impact of SSIs. In a large study utilizing 2005 data from the Healthcare Cost and Utilization Project's Nationwide Inpatient Sample, de Lissovoy et al. found that 1% of patients experienced an SSI and that SSIs extended LOS by 9.7 days while increasing costs by $20,842 per admission . Unlike our study which focused on open procedures, de Lissovoy et al. included common surgical procedures based on DRG codes and did not determine whether patients received open or laparoscopic procedures. The inclusion of laparoscopic procedures likely contributed to the lower observed SSI rate as these minimally invasive surgeries have much smaller incisions that decrease the risk of infections and complications . Our study also accounts for SSI-related outpatient visits and associated costs, a noted limitation of the de Lissovoy study. A 2013 simulation utilizing estimates from a systematic literature review and the National Healthcare Safety Network of the Centers for Disease Control and Prevention, found that SSIs occurred in approximately 2% of patients and were associated with an increase in LOS of 11.2 days and increased costs of $20,785 . 
Similar to the findings of de Lissovoy et al., we observed that abdominal procedures had the highest rate of SSIs. Although reported rates of SSI following general abdominal surgeries vary, they are consistently higher than other surgical specialties . This finding is expected given that these procedures are heavily colonized by digestive flora causing an inherent risk of skin or deep tissue infection for procedures where the colon is opened . Despite this risk, high rates of SSIs in general abdominal surgeries are not inevitable or acceptable given that SSI risk can be reduced or mitigated through various interventions and improved delivery of surgical care . Despite recent efforts to reduce SSIs, these complications continue to be a serious and costly problem in the United States and worldwide, resulting in increased hospital days, outpatient visits, readmissions, mortality, and health care costs . We found that even superficial SSIs had a detrimental impact on patients in terms of increased LOS and costs. Additionally, SSIs have negative impacts on patients beyond increased time spent in hospitals and increased healthcare costs including increased pain and morbidity, prolonged rehabilitation, and delayed return to normal activity and work . As over 50% of reported SSIs are likely preventable with adherence to evidence-based guidelines, SSI is now a designated quality indicator and pay-for-performance metric . In 2021, 774 hospitals were penalized a 1% reduction in Medicare reimbursement for falling in the lowest 25% of hospitals on the Hospital-Acquired Condition score which includes measures for SSIs following colorectal procedures and hysterectomies . Given the increasing number of complex surgeries and the increasing age and comorbidities of patients, the implementation of guidelines and interventions to reduce SSIs is crucial to preventing an increase in SSI-related complications and health care utilization and has been advocated by several major healthcare organizations . These interventions include smoking cessation, preoperative medical nutrition and diabetes management, preoperative chlorhexidine wipes, appropriate antibiotic use, patient warming, meticulous surgical technique, attentive postoperative monitoring, modern wound closure dressings, and closedincision negative pressure therapy . Many of these practices are included in enhanced recovery after surgery (ERAS) protocols which have been associated with reductions in post-operative complications , SSI , LOS , and costs . Despite mounting evidence of their success, there has not been widespread implementation and uptake of these interventions. This study provides rates and impact of SSIs by surgery type and patient comorbidity level and identifies areas where implementation of these strategies should be prioritized and may have the greatest impact on improving patient care and outcomes. While implementation of some of these strategies may be costly, the negative consequences of SSIs identified by this study indicate that they have the potential to reduce costs or be costeffective. This study provides a reference for evaluating evidencebased interventions and modeling cost-effectiveness for open surgeries. Limitations This is a retrospective study with inherent limitations, including the inability to adjust for unobserved covariates. It is limited to selected open surgical procedures. Additionally, claims for services provided by noninstitutional providers were only available for 5% of the Medicare population. 
Outpatient SSI-related visits were limited to outpatient hospital visits for Premier patients. Thus, this study may underestimate the impact of SSIs on costs of outpatient visits. As this study only captures SSIs documented with ICD-10 codes and select DRGs, the rates of SSIs may be underreported. Due to data limitations, we were also unable to determine which SSIs were potentially preventable. While there may be limited opportunities for preoperative risk mitigation in emergency/urgent procedures, which constituted 40% of the surgeries in this analysis, strategies and interventions to reduce SSIs exist, particularly for elective procedures, and have not been implemented broadly. While this study did not investigate the impact of these strategies on patient care and outcomes, it does provide a detailed overview of current SSI rates in open procedures and demonstrates the potential cost savings that may be achieved by reducing SSIs and associated LOS, readmissions, and outpatient visits.

Conclusions

This study demonstrates that SSIs following open surgeries are a substantial burden to the healthcare system in terms of increased LOS and costs and emphasizes the need for increased adoption of evidence-based interventions that have been shown to reduce SSIs and improve patient outcomes. Data generated from this study may provide a benchmark for evaluating the cost-effectiveness of these interventions.

Funding

No external funding was received for this work.

Ethics approval

This study involved de-identified registry data and was therefore exempt from IRB approval.
// deriveClone1 returns a clone of the src parameter.
func deriveClone1(src BuiltInTypes) BuiltInTypes {
	dst := new(BuiltInTypes)
	deriveDeepCopyPtrToBuiltInTypes(dst, &src)
	return *dst
}
# Build the answer lazily: characters added "to the front" or "to the back" are
# buffered separately, and a parity flag tracks how many pending reversals apply.
# Example: s = "a", queries "2 1 p" then "1" -> "pa" reversed -> "ap".
s = input()
q = int(input())
front = []   # characters that currently belong before s
back = []    # characters that currently belong after s
cnt = 0      # parity of reversal operations seen so far
for i in range(q):
    t = input()
    if t[0] == '1':
        # Query type 1: reverse the string (just flip the parity flag).
        cnt ^= 1
    else:
        # Query type 2: "2 f c" adds character c to the front (f == 1) or back (f == 2).
        f = 1 if int(t[2]) == 1 else 0
        c = t[4]
        # If the string is currently (logically) reversed, front and back swap roles.
        if cnt ^ f:
            front.append(c)
        else:
            back.append(c)
# front was appended in arrival order, so reverse it before prepending.
ans = ''.join(front)[::-1] + s + ''.join(back)
print(ans[::-1] if cnt else ans)
/** * @author Soham Chakravarti * */ public class CommandTransactionInterceptor { public MultiExecuteOutput handleResponse(MultiExecuteOutput rawResult) { return rawResult; } public MultiExecuteOutput handleResponse(ExecuteOutput.BehaviorExecute<?> rawResult) { return new MultiExecuteOutput(rawResult); } public MultiExecuteOutput handleResponse(ExecuteOutput<?> rawResult) { // This was done to handle a scenario where the result of the executeOutput had a state = MultiExecuteOutput // in that case, it was wrapping the response into the outer EXECUTE behavior due to this implementation. // so checking if the result is of Type Holder then calling the handleResponse(Holder) method if(rawResult.getResult() instanceof Holder) { return handleResponse(((Holder)rawResult.getResult())); } ExecuteOutput.BehaviorExecute<?> bExec = new ExecuteOutput.BehaviorExecute<>(Behavior.$execute, rawResult.getResult()); bExec.setExecuteException(rawResult.getExecuteException()); bExec.setValidationResult(rawResult.getValidationResult()); return handleResponse(bExec); } public MultiExecuteOutput handleResponse(Holder<?> rawResult) { return handleResponse(rawResult.getState()); } public MultiExecuteOutput handleResponse(Object rawResult) { if(rawResult instanceof MultiExecuteOutput) { return handleResponse((MultiExecuteOutput)rawResult); } if(rawResult instanceof ExecuteOutput.BehaviorExecute) { return handleResponse((ExecuteOutput.BehaviorExecute<?>)rawResult); } if(rawResult instanceof ExecuteOutput) { return handleResponse((ExecuteOutput<?>)rawResult); } if(rawResult instanceof Holder) { return handleResponse((Holder<?>)rawResult); } MultiExecuteOutput mExecOutput = new MultiExecuteOutput(new ExecuteOutput.BehaviorExecute<>(Behavior.$execute, rawResult)); return handleResponse(mExecOutput); } }
<reponame>enternityFan/FakeNewsProject<filename>Module/RNNModel.py<gh_stars>0 # @Time : 2022-02-24 15:36 # @Author : Phalange # @File : RNNModel.py # @Software: PyCharm # C'est la vie,enjoy it! :D import math import torch from torch import nn from torch.nn import functional as F from d2l import torch as d2l def mlp(num_inputs,num_hiddens): net = [] net.append(nn.Dropout(0.2)) net.append(nn.Linear(num_inputs,num_hiddens)) net.append(nn.PReLU()) net.append(nn.Linear(num_hiddens,num_hiddens)) net.append(nn.PReLU()) net.append(nn.Dropout(0.2)) net.append(nn.Linear(num_hiddens,num_hiddens)) net.append(nn.PReLU()) return nn.Sequential(*net) class RNNModel(nn.Module): """循环神经网络模型""" def __init__(self,vocab,embed_size,num_hiddens,mode='train',**kwargs): super(RNNModel,self).__init__(**kwargs) self.mode = mode self.rnn_hpy = nn.RNN(embed_size,num_hiddens,batch_first=True) self.rnn_prem = nn.RNN(embed_size,num_hiddens,batch_first=True) self.mlp = mlp(num_hiddens * 2,200) # 隐藏层是200 self.vocab_size = len(vocab) self.num_hiddens = self.rnn_hpy.hidden_size self.embedding = nn.Embedding(len(vocab),embed_size) # 单向的RNN网络 self.num_directions = 1 self.linear = nn.Linear(200,3) def forward(self,inputs,state): premises,hypotheses = inputs A = self.embedding(premises) B = self.embedding(hypotheses) # (batch_size,A/B的词元数,embed_size)变为(A/B的词元数,batch_size,embed_size) Y_hpy,state_hpy = self.rnn_hpy(A,state) # params'num:4015K Y_prem,state_prem = self.rnn_prem(B,state) #if self.mode == 'predict': # state_hpy = torch.squeeze(state_hpy,dim=1) # state_prem = torch.squeeze(state_prem,dim=1) #elif self.mode == 'train': state_hpy = torch.squeeze(state_hpy) state_prem = torch.squeeze(state_prem) output1 = self.mlp(torch.cat([state_hpy,state_prem],1)) # params'num:160.K output = self.linear(output1) # params'num: 600 # output = F.softmax(output,dim=-1) return output def begin_state(self, device, batch_size=1): if not isinstance(self.rnn, nn.LSTM): # nn.GRU以张量作为隐状态 return torch.zeros((self.num_directions * self.rnn.num_layers, batch_size, self.num_hiddens), device=device) else: # nn.LSTM以元组作为隐状态 return (torch.zeros(( self.num_directions * self.rnn.num_layers, batch_size, self.num_hiddens), device=device), torch.zeros(( self.num_directions * self.rnn.num_layers, batch_size, self.num_hiddens), device=device))
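A minimal usage sketch for the RNNModel above (an illustration, not part of the original repository: the vocabulary stand-in, tensor sizes, and import path are assumptions). Because begin_state refers to a self.rnn attribute the class never defines, the sketch builds the initial hidden state by hand:

import torch
# Assumed import path for the module shown above.
from Module.RNNModel import RNNModel

vocab = list(range(10000))      # stand-in: the model only ever calls len(vocab)
embed_size, num_hiddens, batch, seq_len = 100, 200, 32, 50

net = RNNModel(vocab, embed_size, num_hiddens)

premises = torch.randint(0, len(vocab), (batch, seq_len))     # (batch, seq_len) token ids
hypotheses = torch.randint(0, len(vocab), (batch, seq_len))

# A single-layer, unidirectional nn.RNN expects a hidden state of shape (1, batch, num_hiddens).
state = torch.zeros(1, batch, num_hiddens)

logits = net((premises, hypotheses), state)                   # -> (batch, 3) class scores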
/**
 * Convolves a 1D kernel horizontally and vertically
 */
private static void convolve1D( GrayU8 gray ) {
	var kernel = new Kernel1D_S32(2);
	kernel.offset = 1;
	kernel.data[0] = 1;
	kernel.data[1] = -1;

	var output = new GrayS16(gray.width, gray.height);

	GConvolveImageOps.horizontal(kernel, gray, output, BorderType.EXTENDED);
	panel.addImage(VisualizeImageData.standard(output, null), "1D Horizontal");

	GConvolveImageOps.vertical(kernel, gray, output, BorderType.EXTENDED);
	panel.addImage(VisualizeImageData.standard(output, null), "1D Vertical");
}
def recognize_char(input_img, first_letter_model,first_letter_labels, second_letter_model, second_letter_labels,digit_model, digit_labels, model, labels): blur = cv2.bilateralFilter(input_img,9,95,95) blur = cv2.detailEnhance(blur, 5, 0.95) gray_img = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY) canny_edges = cv2.Canny(gray_img,220, 250, None, 5) cv2.imwrite("PlateImg.png", input_img) lines = cv2.HoughLines(canny_edges, 1, np.pi / 180, 50, None, 0, 0) avg_theta = 0 theta =0 line_count = 0 if lines is not None: for i in range(0, len(lines)): theta = lines[i][0][1] angle = (180*theta/3.1415926 - 90) if -15<=angle<=15: avg_theta += angle line_count += 1 if line_count != 0: avg_theta = avg_theta/line_count img_rotated = rotate_image(input_img, avg_theta) else: img_rotated = rotate_image(input_img, 0) enhanced_img = cv2.detailEnhance(img_rotated, 9, 10, 0.5) enhanced_gray_img = cv2.cvtColor(enhanced_img, cv2.COLOR_BGR2GRAY) blur_enhanced_gray_img = cv2.bilateralFilter(enhanced_gray_img, 9,10,10) binary_image = cv2.threshold(blur_enhanced_gray_img, 100,255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] cont, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) char_segmented = [] char_w = 96 char_h = 96 char_ratio_list = [] sorted_bounding_boxes = sort_contours(cont) for box in sorted_bounding_boxes: (x, y, w, h) = box ratio = h/w if 1<=ratio<=5: char_ratio_list.append(h/img_rotated.shape[0]) mini_char = minimum_character(char_ratio_list) print(mini_char) if mini_char == -1: flag = False else: flag = True if(flag): for box in sorted_bounding_boxes: (x, y, w, h) = box ratio = h/w if 1<=ratio<=5: if 0.3 <= h/img_rotated.shape[0] <= 0.9: curr_num = binary_image[y:y+h,x:x+w] curr_num = cv2.resize(curr_num, dsize=(char_w, char_h)) _, curr_num = cv2.threshold(curr_num, 90, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) char_segmented.append(curr_num) if len(char_segmented) < 5: return "ERROR_CHAR_LEN" final_string = '' title = np.array2string(predict_from_model_128(char_segmented[0],first_letter_model,first_letter_labels)) final_string +=title.strip("'[]") title = np.array2string(predict_from_model_128(char_segmented[1],second_letter_model,second_letter_labels)) final_string +=title.strip("'[]") title = np.array2string(predict_from_model_128(char_segmented[2],digit_model,digit_labels)) final_string +=title.strip("'[]") title = np.array2string(predict_from_model_128(char_segmented[3],digit_model,digit_labels)) final_string +=title.strip("'[]") for i in range(4,len(char_segmented)-4): title = np.array2string(predict_from_model_80(char_segmented[i],model,labels)) final_string+=title.strip("'[]") for i in range(len(char_segmented)-4,len(char_segmented)): title = np.array2string(predict_from_model_128(char_segmented[i],digit_model,digit_labels)) final_string+=title.strip("'[]") return final_string else: return pytesseract.image_to_string(blur_enhanced_gray_img)
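recognize_char above calls a rotate_image helper that is not included in this excerpt. Below is a minimal sketch of what such a helper could look like with OpenCV; it is an assumption about the missing code, not the original implementation:

import cv2

def rotate_image(image, angle_degrees):
    # Rotate around the image centre, keeping the original width and height.
    h, w = image.shape[:2]
    centre = (w / 2, h / 2)
    rotation = cv2.getRotationMatrix2D(centre, angle_degrees, 1.0)
    return cv2.warpAffine(image, rotation, (w, h),
                          flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REPLICATE)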
Last year, as in many previous years, I sat next to my phone on my birthday at eight in the morning, waiting for a call. I kept waiting. The call never came. I get a few calls from friends and family on my birthday. But I could be certain that a call would come at exactly eight on that day. No mistake. It always came. Penny Strong was punctilious on that point. She told me that she had the date and time entered in her calendar. No matter wherever she was in the world – she travelled a lot – and whatever the local hour was, she would call on the dot. I first met Penny in Kathmandu. As the consul in the US Embassy, I had rejected the visa application of a young Nepali woman, Luna, who did not meet our criteria: she had no money, scant education, and could not explain why she wanted to go to the US. The next morning a spirited American woman turned up in my office. She had a foundation that worked for poor children in Nepalese villages, and Luna, a staff member, needed training in Denver. How could I be heartless enough to refuse a visa? I reversed the order. Three months later she was in my office again. She had brought in a large consignment of books, notebooks, pencils and blackboards for village schools and the customs bosses were demanding excise duty. Since I spoke the local language, I could vigorously argue to customs that the entire lot was for charity and the only beneficiaries would be poor Nepali children. It also helped that I was on first name terms with the Home Minister. The levy was withdrawn. Penny ran a foundation that focused on women and children of Nepal. She had come to Nepal first as a tourist, but had seen first-hand the misery of women and the malnutrition of small children. She was kind and sympathetic, but she was also resolute and indefatigable. If I ever made a casual promise to attend a meeting of disabled girls or blind boys, she would make sure I did not renege even if the Heavens fell. She induced me to visit polio-stricken kids and disabled orphans no matter how long my hours were in the consulate. She would call, leave me a million messages, buy me dinner – in short, do anything that would advance the children’s cause an inch. Initially I resented her multiple calls. My secretary and assistants passed me her messages with a sardonic smile. In time she won us all over. Nobody could question her total sincerity or fierce devotion to the poorest and the most disadvantaged. Nobody could doubt that she would go to any length to bring relief to people whose families had no resources or whose government had no capability to bring them education or healthcare. She lived in Colorado but visited Nepal four to six times each year and never came empty handed. She would fight her way to the executive suite of major US companies and persuade cynical but affluent fat cats to make huge gifts of exercise books, ballpoint pens, cereals, vitamins and packaged food, then sweet talk transport companies to ship them free to Kathmandu. She would go to major hospital groups and persuade top doctors and dentists to come to Nepal for a fortnight: a week of splendid vacation and, then, – you guessed it – a week of free treatment for Nepali children. She wangled free medicines, solutions and bandages from pharmaceutical companies. Over time we became friends. We went together on trips, to mountains and monasteries, verdant valleys and towering temples, and also to nightclubs and speakeasies she had spotted while crisscrossing the land. 
She did not drive – and would not let me drive either, saying facetiously, "You wouldn't look at me then" – and engaged a young Sherpa chauffeur who drove pell-mell through cattle and crowds, all the while whistling Bollywood tunes. On my monthly visits to the commissary I always gathered supplies of Campari for me and Bristol Cream sherry for her. They were the fuel for our endless discussions, while candles flickered and cast shadows on her fair face during Kathmandu's usual power outages. And all discussions had to end with the final question: How do we do better for Nepalese children? No more of such discussions. Not even a call on my birthday. Just as Nepal's capital collapsed in one summer's earthquake, her world collapsed the same year with the implacable advance of Alzheimer's. The writer is a Washington-based international development advisor and has worked with the World Bank.
////////////////////////////////////////////////////////////////////
//     Function: PNMImage::do_fill_distance
//       Access: Private
//  Description: Recursively fills in the minimum distance measured
//               from a certain set of points into the gray channel.
////////////////////////////////////////////////////////////////////
void PNMImage::
do_fill_distance(int xi, int yi, int d) {
  if (xi < 0 || xi >= get_x_size() || yi < 0 || yi >= get_y_size()) {
    return;
  }
  if (get_gray_val(xi, yi) <= d) {
    return;
  }
  set_gray_val(xi, yi, d);

  do_fill_distance(xi + 1, yi, d + 1);
  do_fill_distance(xi - 1, yi, d + 1);
  do_fill_distance(xi, yi + 1, d + 1);
  do_fill_distance(xi, yi - 1, d + 1);
}
The conclusion of the 2014 season brings with it word that the San Francisco 49ers will have a little bit more space under the salary cap in 2015. According to the NFLPA salary database, Anthony Davis has seen his 2015 base salary drop from $2,350,000 to $1,850,000. When Davis first signed his contract, we learned that the deal included various de-escalators for weight and workout requirements. Jason from Over the Cap tweeted that the de-escalation was due to the playing time he missed this year because of injury. Hopefully our own Jason can get further clarification as to what the specific requirements were. Davis played in only seven games this season due to a variety of injuries, and his absence was keenly felt. While Davis can be inconsistent at times in pass protection, he is a fantastic run blocker, like much of the 49ers offensive line. When he was in the game, Frank Gore found great success running toward the right. When Jonathan Martin was in the lineup, well, not so much. The 49ers offensive line returns four of the five starters next season, with Mike Iupati potentially departing in free agency. My guess is the 49ers let him walk. They have Brandon Thomas coming off his ACL injury, and he will likely compete with Daniel Kilgore and Marcus Martin for left guard and center. While there will likely be some turnover in 2015, the key will be everybody staying healthy. The changes at center and right tackle were a significant issue in the offense's struggles. They were not the only ones, but they were problematic, nonetheless.
<filename>pampas-ui-backend/src/main/java/com/github/pampas/ui/service/base/ServiceInfoServiceImpl.java package com.github.pampas.ui.service.base; import com.github.pampas.common.tools.AssertTools; import com.github.pampas.common.tools.JsonTools; import com.github.pampas.storage.entity.ServiceCondition; import com.github.pampas.storage.entity.ServiceInstance; import com.github.pampas.storage.entity.ServiceRegistry; import com.github.pampas.storage.mapper.ServiceMapper; import com.github.pampas.ui.base.BusinessException; import com.github.pampas.ui.base.ServiceTypeEnum; import com.github.pampas.ui.base.vo.Result; import com.github.pampas.ui.utils.DiscoveryClientContainer; import com.github.pampas.ui.vo.req.InstanceSaveReq; import org.apache.commons.lang3.StringUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.cloud.client.discovery.DiscoveryClient; import org.springframework.context.annotation.Lazy; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; import org.springframework.util.Assert; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.stream.Collectors; /** * Description: * User: darrenfu * Date: 2018-11-16 */ @Service public class ServiceInfoServiceImpl implements ServiceInfoService { private static final Logger log = LoggerFactory.getLogger(ServiceInfoServiceImpl.class); @SuppressWarnings("SpringJavaInjectionPointsAutowiringInspection") @Autowired private ServiceMapper serviceMapper; @Autowired @Lazy private DiscoveryClientContainer discoveryClientContainer; @Autowired private ServiceRegistryService registryService; @Autowired private ServiceInstanceService serviceInstanceService; @Override public com.github.pampas.storage.entity.Service getService(Integer id) { AssertTools.notNull(id, "ID不能为空"); return serviceMapper.selectByPrimaryKey(id); } @Override public Result<com.github.pampas.storage.entity.Service> getServiceList(String serviceName, String group, Integer pageNum, Integer pageSize) { ServiceCondition condition = new ServiceCondition(); ServiceCondition.Criteria criteria = condition.createCriteria(); if (StringUtils.isNotEmpty(serviceName)) { criteria.andServiceNameLikeInsensitive("%" + serviceName + "%"); } if (StringUtils.isNotEmpty(group)) { criteria.andGroupEqualTo(group); } condition.setPageInfo(pageNum, pageSize); long total = serviceMapper.countByExample(condition); List<com.github.pampas.storage.entity.Service> serviceList = serviceMapper.selectByExample(condition); log.info("查询服务列表:{},{}", total, serviceList); return Result.buildResult(serviceList, (int) total); } @Override public com.github.pampas.storage.entity.Service saveService(com.github.pampas.storage.entity.Service service) { if (service.getId() != null) { int i = serviceMapper.updateByPrimaryKeySelective(service); Assert.isTrue(i == 1, "保存失败"); log.info("保存服务成功"); } else { int i = serviceMapper.insertSelective(service); Assert.isTrue(i == 1, "保存失败"); log.info("新增服务成功"); } return service; } @Override public void deleteService(Integer id) { AssertTools.notNull(this.getService(id), "不存在的服务:" + id); int i = serviceMapper.deleteByPrimaryKey(id); AssertTools.isTrue(i == 1, "删除失败"); log.info("删除服务成功"); } @Override public List<ServiceInstance> getListInRegistry(ServiceTypeEnum type, String service, Integer registryId) { AssertTools.notNull(type, "类型不能为空"); AssertTools.notNull(registryId, "注册中心不能为空"); 
AssertTools.notEmpty(service, "服务名不能为空"); if (type != ServiceTypeEnum.RESTful) { throw new BusinessException("目前只支持RESTful服务获取实例列表"); } // Spring Cloud服务 ConsulClient ServiceRegistry serviceRegistry = registryService.getServiceRegistry(registryId); AssertTools.notNull(serviceRegistry, "注册中心不存在"); DiscoveryClient discoveryClient = discoveryClientContainer.getDiscoveryClient(serviceRegistry.getId()); AssertTools.notNull(discoveryClient, "不支持此注册中心查询列表:" + serviceRegistry.getAddress()); List<String> services = discoveryClient.getServices(); if (!services.contains(service)) { return Collections.EMPTY_LIST; } List<com.github.pampas.storage.entity.ServiceInstance> instanceList = new ArrayList<>(); List<org.springframework.cloud.client.ServiceInstance> scInstanceList = discoveryClient.getInstances(service); for (org.springframework.cloud.client.ServiceInstance scInstance : scInstanceList) { com.github.pampas.storage.entity.ServiceInstance instance = new com.github.pampas.storage.entity.ServiceInstance(); instance.setInstanceId(scInstance.getServiceId()); instance.setHost(scInstance.getUri().getHost()); instance.setPort(scInstance.getUri().getPort()); instance.setProtocol(scInstance.getUri().getScheme()); instance.setStatus(1); instance.setServiceName(service); List<InstanceSaveReq.KeyAndVal> keyAndValList = InstanceSaveReq.KeyAndVal.convertMapToKeyAndVal(scInstance.getMetadata()); instance.setProps(JsonTools.NON_NULL.toJson(keyAndValList)); instanceList.add(instance); } return instanceList; } @Override @Transactional public void updateInstanceInService(Integer serviceId, List<ServiceInstance> instanceList, boolean flushBeforeUpdate) { com.github.pampas.storage.entity.Service service = this.getService(serviceId); AssertTools.notNull(service, "不存在此服务"); for (ServiceInstance instance : instanceList) { instance.setServiceId(service.getId()); instance.setServiceName(service.getServiceName()); instance.setProtocol(service.getProtocol()); } //查找当前已经存在的 List<ServiceInstance> existInstanceList = serviceInstanceService.getServiceInstanceList(serviceId); List<Integer> existIdList = existInstanceList.stream().map(ServiceInstance::getId).collect(Collectors.toList()); for (ServiceInstance serviceInstance : instanceList) { ServiceInstance save = serviceInstanceService.save(serviceInstance); existIdList.remove(save.getId()); } if (flushBeforeUpdate) { for (Integer existId : existIdList) { //删除多余的 serviceInstanceService.delete(existId); } } log.info("更新服务[{}]下的实例完成:删除:{}个,详情:{},保存:{}个,详情:{}", service.getServiceName(), existIdList.size(), existIdList, instanceList.size(), instanceList); } }
/** * Class to hold default, historical and in-simulation travel times for a link * * \author Harish Loganathan */ class LinkTravelTime { private: typedef unsigned int TimeInterval; typedef std::map<unsigned int, double> DownStreamLinkSpecificTT_Map; typedef std::map<unsigned int, TimeAndCount> DownStreamLinkSpecificTimeAndCount_Map; typedef std::map<TimeInterval, DownStreamLinkSpecificTT_Map> TravelTimeStore; typedef std::map<TimeInterval, DownStreamLinkSpecificTimeAndCount_Map> TimeAndCountStore; unsigned int linkId; double defaultTravelTime; TravelTimeStore historicalTT_Map; TimeAndCountStore currentSimulationTT_Map; boost::shared_mutex ttMapMutex; public: LinkTravelTime(); virtual ~LinkTravelTime(); LinkTravelTime& operator=(const LinkTravelTime& rhs); double getDefaultTravelTime() const { return defaultTravelTime; } void setDefaultTravelTime(double defaultTravelTime) { this->defaultTravelTime = defaultTravelTime; } unsigned int getLinkId() const { return linkId; } void setLinkId(unsigned int linkId) { this->linkId = linkId; } void addHistoricalTravelTime(const DailyTime& dt, unsigned int downstreamLinkId, double travelTime); void addInSimulationTravelTime(const LinkTravelStats& stats); double getHistoricalLinkTT(unsigned int downstreamLinkId, const DailyTime& dt) const; double getInSimulationLinkTT(unsigned int downstreamLinkId, const DailyTime& dt) const; double getHistoricalLinkTT(const DailyTime& dt) const; void dumpTravelTimesToFile(const std::string fileName) const; }
<reponame>Trisfald/choices<filename>src/lib.rs //! Easy HTTP configuration library. //! //! `choices` is a library that lets you expose your application's configuration //! over HTTP with a simple struct! //! //! ## Examples //! //! Given the following code: //! //! ```no_run //! use choices::Choices; //! use lazy_static::lazy_static; //! use std::sync::{Arc, Mutex}; //! //! #[derive(Choices)] //! struct Config { //! debug: bool, //! id: Option<i32>, //! log_file: String, //! } //! //! lazy_static! { //! static ref CONFIG: Arc<Mutex<Config>> = { //! Arc::new(Mutex::new(Config { //! debug: false, //! id: Some(3), //! log_file: "log.txt".to_string() //! })) //! }; //! } //! //! #[tokio::main] //! async fn main() { //! CONFIG.run((std::net::Ipv4Addr::LOCALHOST, 8081)).await; //! } //! ``` //! //! You can see all configuration fields at `localhost:8081/config` //! and the individual fields' values at `localhost:8081/config/<field name>`\ //! A field's value can be changed with a `PUT`, for instance //! `curl -X PUT localhost:8081/config/debug -d "true"`. //! //! More examples on [github](https://github.com/Trisfald/choices/blob/master/examples/). //! //! ## Documentation //! Check out the documentation on //! [github](https://github.com/Trisfald/choices/blob/master/documentation.md). #![forbid(unsafe_code)] #![deny(missing_docs)] #[doc(hidden)] pub use choices_derive::*; /// Re-export of `bytes` pub mod bytes { pub use bytes::*; } /// Re-export of `warp` pub mod warp { pub use warp::*; } #[cfg(feature = "json")] /// Re-export of `serde_json` pub mod serde_json { pub use serde_json::*; } #[doc(hidden)] pub use async_trait::*; pub mod error; pub use crate::error::{ChoicesError, ChoicesResult}; pub mod serde; pub use crate::serde::{ChoicesInput, ChoicesOutput}; use std::net::SocketAddr; use std::sync::{Arc, Mutex, RwLock}; /// A trait to manage the http server responsible for the configuration. #[self::async_trait] pub trait Choices { /// Starts the configuration http server on the chosen address. async fn run<T: Into<SocketAddr> + Send>(&'static self, addr: T); #[doc(hidden)] async fn run_mutable<T: Into<SocketAddr> + Send>(_: Arc<Mutex<Self>>, _: T) { unimplemented!() } #[doc(hidden)] async fn run_mutable_rw<T: Into<SocketAddr> + Send>(_: Arc<RwLock<Self>>, _: T) where Self: Sync, { unimplemented!() } } #[self::async_trait] impl<C: Choices + Send> Choices for Arc<Mutex<C>> { async fn run<T: Into<SocketAddr> + Send>(&'static self, addr: T) { C::run_mutable(self.clone(), addr).await; } } #[self::async_trait] impl<C: Choices + Send + Sync> Choices for Arc<RwLock<C>> { async fn run<T: Into<SocketAddr> + Send>(&'static self, addr: T) { C::run_mutable_rw(self.clone(), addr).await; } }
The New York Jets and the Indianapolis Colts distributed huge quantities of Vicodin and the powerful anti-inflammatory drug Toradol to their players, according to court documents unsealed today. The documents are part of an ongoing federal lawsuit filed by several former NFL players who say the teams gave them massive amounts of addictive painkillers and other drugs without warning them about the possibility of addiction and/or long-term health repercussions. The lawsuit discusses all 32 NFL teams, but in certain cases, like the Colts and Jets, it also provides documents that go into the specifics of just how many drugs the teams dished out. Among the unsealed documents were an internal drug audit for the Colts and an evaluation of four years of drug distribution for the Jets. As shown in the chart below, the Jets' use of Toradol and Vicodin increased every year from 2005 to 2008. The team's Vicodin usage more than doubled in two years; its Toradol usage nearly tripled in three years. In 2008, the team dispensed 1,031 doses of Toradol and 1,295 doses of Vicodin. The Colts' internal drug audit over a seven-month period in 2004 and 2005 shows that the team administered 900 doses of Toradol and 585 doses of Vicodin. Emails included in the court records show the NFL's Dr. Lawrence Brown sending a letter to the Colts about their "Prescription Drug Annual Report." The letter itself has "draft" stamped on it, and the emails don't say why. In the letter, Brown noted an uptick in controlled substances over the past year, but said that the audit "did not reveal any concerns." The documents released today were previously filed back in February as part of an amended complaint by the former players, which had several exhibits placed under seal. The plaintiffs asked to keep them under seal, and the judge granted a request to give all parties until Monday to file any agreements or disagreements about keeping the documents under seal. According to the court docket, nobody else filed anything regarding the seal, and on Monday the judge ordered that the entire complaint and its exhibits be made public. They were filed as such today. NFL spokesman Brian McCarthy denied the allegations in a statement to Deadspin, saying that "controlled substances are not stored at any NFL club facility." He also said that the NFL will "continue to put the health and safety of our players first." The NFL players' union also issued a statement, saying it was "alarmed by the revelations in the lawsuit." The full statement is below:
package leetcode.contests.contest_186;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import others.MasterPrinter;

import java.util.Arrays;

public class MaximumPointsFromCards {

    MaximumPointsFromCards maximumPointsFromCards;

    @BeforeEach
    public void init() {
        maximumPointsFromCards = new MaximumPointsFromCards();
    }

    @Test
    public void firstTest() {
        int[] input = new int[]{1, 2, 3, 4, 5, 6, 1};
        int k = 3;
        int res = maximumPointsFromCards.maxScore(input, k);
        Assertions.assertEquals(12, res);
    }

    @Test
    public void secondTest() {
        int[] input = new int[]{2, 2, 2};
        int k = 2;
        int res = maximumPointsFromCards.maxScore(input, k);
        Assertions.assertEquals(4, res);
    }

    @Test
    public void thirdTest() {
        int[] input = new int[]{9, 7, 7, 9, 7, 7, 9};
        int k = 7;
        int res = maximumPointsFromCards.maxScore(input, k);
        Assertions.assertEquals(55, res);
    }

    @Test
    public void fourthTest() {
        int[] input = new int[]{1, 100, 1};
        int k = 1;
        int res = maximumPointsFromCards.maxScore(input, k);
        Assertions.assertEquals(1, res);
    }

    @Test
    public void fifthTest() {
        int[] input = new int[]{1, 79, 80, 1, 1, 1, 200, 1};
        int k = 3;
        int res = maximumPointsFromCards.maxScore(input, k);
        Assertions.assertEquals(202, res);
    }

    @Test
    public void sixthTest() {
        int[] input = new int[]{100, 40, 17, 9, 73, 75};
        int k = 3;
        int res = maximumPointsFromCards.maxScore(input, k);
        Assertions.assertEquals(248, res);
    }

    // Builds the prefix-sum array pre, where pre[i] = arr[0] + ... + arr[i].
    public static void preCompute(int arr[], int n, int pre[]) {
        pre[0] = arr[0];
        for (int i = 1; i < n; i++)
            pre[i] = arr[i] + pre[i - 1];
    }

    // Returns sum of elements in arr[i..j] using the prefix sums.
    // It is assumed that i <= j.
    public static int rangeSum(int i, int j, int pre[]) {
        if (i == 0)
            return pre[j];
        return pre[j] - pre[i - 1];
    }

    public int maxScore(int[] cardPoints, int k) {
        // Cards can only be taken from the two ends, so every valid selection is
        // i cards from the front plus (k - i) cards from the back for some i in [0, k].
        int n = cardPoints.length;
        int[] pre = new int[n];
        preCompute(cardPoints, n, pre);

        int best = 0;
        for (int i = 0; i <= k; i++) {
            int front = (i == 0) ? 0 : rangeSum(0, i - 1, pre);
            int back = (i == k) ? 0 : rangeSum(n - (k - i), n - 1, pre);
            best = Math.max(best, front + back);
        }
        return best;
    }
}
/**
 * A method used to execute our reload task.
 *
 * @param user Entity that executed the command
 * @since 1.1
 */
private void executeReload(@NotNull User user) {
    if (user.hasPermission(Permission.ALL, Permission.RELOAD)) {
        config.reload();
        locale.reload();
        plugin.initializeAudit();
        user.sendMessage("plugin.reload.success");
    } else {
        executeNoAccess(user);
    }
}
/** * Abstract parent implementation of Output Rate Limiting. Output Rate Limiting is used to throttle the output of * Siddhi queries based on various criteria. * * @param <S> current state of the RateLimiter */ public abstract class OutputRateLimiter<S extends State> implements PartitionCreationListener { protected List<QueryCallback> queryCallbacks = new ArrayList<QueryCallback>(); protected OutputCallback outputCallback = null; protected LatencyTracker latencyTracker; protected SiddhiQueryContext siddhiQueryContext; protected LockWrapper lockWrapper; protected StateHolder<S> stateHolder; private boolean hasCallBack = false; public void init(LockWrapper lockWrapper, boolean groupBy, SiddhiQueryContext siddhiQueryContext) { this.siddhiQueryContext = siddhiQueryContext; if (outputCallback != null) { this.lockWrapper = lockWrapper; } latencyTracker = siddhiQueryContext.getLatencyTracker(); stateHolder = siddhiQueryContext.generateStateHolder(this.getClass().getName(), groupBy, init()); } protected abstract StateFactory<S> init(); public void sendToCallBacks(ComplexEventChunk complexEventChunk) { MultiProcessStreamReceiver.ReturnEventHolder returnEventHolder = MultiProcessStreamReceiver.getMultiProcessReturn().get(); if (Level.BASIC.compareTo(siddhiQueryContext.getSiddhiAppContext().getRootMetricsLevel()) <= 0 && latencyTracker != null) { latencyTracker.markOut(); } if (returnEventHolder != null) { returnEventHolder.setReturnEvents(complexEventChunk); return; } else if (lockWrapper != null) { lockWrapper.unlock(); } if (Level.BASIC.compareTo(siddhiQueryContext.getSiddhiAppContext().getRootMetricsLevel()) <= 0 && latencyTracker != null) { latencyTracker.markOut(); } if (lockWrapper != null) { lockWrapper.unlock(); } if (!queryCallbacks.isEmpty()) { for (QueryCallback callback : queryCallbacks) { callback.receiveStreamEvent(complexEventChunk); } } if (outputCallback != null && complexEventChunk.getFirst() != null) { complexEventChunk.reset(); int noOfEvents = 0; while (complexEventChunk.hasNext()) { ComplexEvent complexEvent = complexEventChunk.next(); if (complexEvent.getType() == ComplexEvent.Type.EXPIRED) { complexEvent.setType(ComplexEvent.Type.CURRENT); noOfEvents++; } else if (complexEvent.getType() == ComplexEvent.Type.RESET) { complexEventChunk.remove(); } else { noOfEvents++; } } if (complexEventChunk.getFirst() != null) { outputCallback.send(complexEventChunk, noOfEvents); // 11 } } } public void addQueryCallback(QueryCallback callback) { queryCallbacks.add(callback); hasCallBack = true; } public abstract void process(ComplexEventChunk complexEventChunk); public OutputCallback getOutputCallback() { return outputCallback; } public void setOutputCallback(OutputCallback outputCallback) { this.outputCallback = outputCallback; if (outputCallback != null) { hasCallBack = true; } } public boolean hasCallBack() { return hasCallBack; } }
#ifndef MOUSE_COLLECTOR_H_INCLUDED
#define MOUSE_COLLECTOR_H_INCLUDED

#include <CollectProto.h>

class MouseCollector: public AutoCreateDataCollector<MouseCollector>
{
public:
    MouseCollector() {}
    virtual ~MouseCollector() {}

    virtual long Collect(NVDataItem ** ReturnItem);
};

#endif
def space_init(self, agent_space, body_a, global_nets):
    self.agent_space = agent_space
    self.body_a = body_a
    self.aeb_space = agent_space.aeb_space
    self.nanflat_body_a = util.nanflatten(self.body_a)
    for idx, body in enumerate(self.nanflat_body_a):
        if idx == 0:
            self.body = body
        body.agent = self
        body.nanflat_a_idx = idx
        MemoryClass = getattr(memory, ps.get(self.agent_spec, 'memory.name'))
        body.memory = MemoryClass(self.agent_spec['memory'], body)
    self.body_num = len(self.nanflat_body_a)
    AlgorithmClass = getattr(algorithm, ps.get(self.agent_spec, 'algorithm.name'))
    self.algorithm = AlgorithmClass(self, global_nets)
    for idx, body in enumerate(self.nanflat_body_a):
        for k, v in vars(self.body).items():
            if util.gen_isnan(getattr(body, k, None)):
                setattr(body, k, v)
from .models import Category, Color, Size
from django.db.models import Count
from cache_memoize import cache_memoize


@cache_memoize(3600)
def menu(request):
    categs = Category.objects.annotate(fashion_count=Count('fashionitem')).order_by('-fashion_count')[:20]
    colors = Color.objects.annotate(color_count=Count('fashionitem')).order_by('-color_count')[:20]
    return {'categories': categs,
            'colors': colors,
            }
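A sketch of how a context processor like menu above is typically registered in Django settings (the dotted path shop.context_processors is an assumed placeholder, not taken from the original project):

# settings.py (excerpt): add the processor to the template engine's list.
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                # Hypothetical dotted path to the module shown above.
                'shop.context_processors.menu',
            ],
        },
    },
]

Every template rendered through this engine then receives the categories and colors entries without each view having to pass them explicitly.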
namespace PrototypePattern { export interface Prototype { split(): Prototype; toString(): string; } export class SingleCellOrganism1 implements Prototype { split() : Prototype { return new SingleCellOrganism1(); } toString(): string { return "This is SingleCellOrganism1"; } } export class SingleCellOrganism2 implements Prototype { split() : Prototype { return new SingleCellOrganism2(); } toString(): string { return "This is SingleCellOrganism2"; } } export class SingleCellOrganism3 implements Prototype { split() : Prototype { return new SingleCellOrganism3(); } toString(): string { return "This is SingleCellOrganism3"; } } export class Builder { private prototypeMap: { [s: string]: Prototype; } = {}; constructor() { this.prototypeMap['cell1'] = new SingleCellOrganism1(); this.prototypeMap['cell2'] = new SingleCellOrganism2(); this.prototypeMap['cell3'] = new SingleCellOrganism3(); } createOne(s: string): Prototype { console.log(s); return this.prototypeMap[s].split(); } } }
import logging

from reconcile import queries
from reconcile.utils.gitlab_api import GitLabApi

QONTRACT_INTEGRATION = 'jenkins-webhooks-cleaner'


def run(dry_run):
    instance = queries.get_gitlab_instance()
    settings = queries.get_app_interface_settings()
    gl = GitLabApi(instance, settings=settings)
    previous_urls = queries.get_jenkins_instances_previous_urls()
    repos = queries.get_repos(server=gl.server)
    for repo in repos:
        hooks = gl.get_project_hooks(repo)
        for hook in hooks:
            hook_url = hook.url
            for previous_url in previous_urls:
                if hook_url.startswith(previous_url):
                    logging.info(['delete_hook', repo, hook_url])
                    if not dry_run:
                        hook.delete()
package heller.socket.rpc.demo.provider;

import com.rpc.framework.annotation.RpcService;

import heller.socket.rpc.demo.api.EchoService;

@RpcService
public class EchoServiceImpl implements EchoService {

    @Override
    public String echo(String echo) {
        return " : " + echo;
    }
}
By: J. Schafer and Stephen Koranda At a time when Kansas is facing a serious budget deficit and a court order saying school funding is inadequate, Governor Sam Brownback may be preparing to leave the state for a job in Italy. A former high-ranking government official, speaking on condition of anonymity, tells Kansas Public Radio that Brownback will be named the next U.S. ambassador to the United Nations agencies for food and agriculture in Rome. The last person to hold the job also said he's heard Brownback may be selected for the position. The governor's office did not confirm or deny the appointment, but a source tells Kansas Public Radio that the appointment is "a done deal." If Brownback leaves his post, Lt. Governor Jeff Colyer would become governor. “Governor Brownback is focused on working with the Kansas Legislature to balance the budget and pass a modern school funding system,” said Brownback's Communications Director Melika Willoughby when asked for comment. If appointed and confirmed by the U.S. Senate, Brownback would become the leader of the U.S. Mission to the U.N. Agencies in Rome. That organization is the link between the U.S. government and several international organizations based in Rome, including the U.N. Food and Agriculture Organization, the World Food Programme and the International Fund for Agricultural Development. David Lane held the job as ambassador to the U.N. agencies in Rome from 2012 to 2016. In an interview with KPR, he confirms that a week ago he also heard Brownback may be selected for the position. Lane said the U.S. is a major funder for the international organizations and the ambassador leads the U.S. team working with those groups. “Provides strategic direction to the boards of those agencies...holds them accountable for the U.S. contribution and looks for results,” said Lane. Lane said he met then-Senator Brownback while they were both working on efforts related to malaria. He said the governor’s agriculture background and humanitarian work would make him a good fit for the ambassador job. “His humanitarian work, his work on malaria and some of the other things he was associated with as a senator, would be as valuable or even more than his experience with agriculture,” said Lane. Lane said high-profile global refugee crises add extra importance to the services offered by international food organizations. "It is a hugely important role right now," said Lane. There has been widespread talk since the election that Brownback could take a job in the administration of President Donald Trump, but Brownback has deflected such questions. “I’m just making no comments about anything regarding the Trump administration,” said Brownback in November. Clay Barker, executive director of the Kansas Republican Party, said after the election that the Trump administration is open to hiring Brownback. “Someone on the Trump team told me that there are positions - I have no idea which ones - that if Governor Brownback wanted them, he could have them,” said Barker. The question so far has been whether Brownback wants to stay and work on his Kansas policies or move to the national stage. Brownback would be leaving the state when Kansas is struggling to fill a budget hole of hundreds of millions of dollars. At the same time, the Kansas Supreme Court has said the state isn’t adequately funding schools, potentially requiring hundreds of millions of dollars in additional spending. 
Source: Governor Brownback Leaving Kansas for Ambassador Job in Rome (Kansas Public Radio)