import { OptionSetValueMock } from "../../../src/xrm-mock/optionsetvalue/optionsetvalue.mock";

describe("Xrm.OptionSetValue Mock", () => {
  // use a local variable rather than `this`: arrow functions do not share
  // Jasmine's per-spec `this` context, so assigning to this.optionSetValue
  // in beforeEach would not be visible inside the specs
  let optionSetValue: OptionSetValueMock;

  beforeEach(() => {
    optionSetValue = new OptionSetValueMock("statecode", 0);
  });

  it("should instantiate", () => {
    expect(optionSetValue).toBeDefined();
  });

  it("should have a text property of statecode", () => {
    expect(optionSetValue.text).toBe("statecode");
  });

  it("should have a value property of 0", () => {
    expect(optionSetValue.value).toBe(0);
  });
});
def compat(prev_version: "Version") -> None:
    # "-1.-1" appears to be the sentinel for a fresh install: nothing to migrate.
    if prev_version == "-1.-1":
        return
    elif prev_version < "2.0":
        print("Review Hotmouse: Running v1_compat()")
        v1_compat()
Electronic Cigarettes: a report commissioned by Public Health England by Professor John Britton and Dr Ilze Bogdanovica (University of Nottingham) takes a broad look at the issues relating to e-cigarettes including their role in tobacco harm reduction, potential hazards, potential benefits and regulation. E-cigarette uptake and marketing: a report commissioned by Public Health England by Professor Linda Bauld, Kathryn Angus and Dr Marisa de Andrade (University of Stirling) examines use of e-cigarettes by children and young people, the scale and nature of current marketing and its implications, in particular in relation to its potential appeal to young people. Publication of the evidence papers coincides with a national symposium, ‘Electronic cigarettes and tobacco harm reduction’, being held by PHE in London today. The symposium brings together senior public health leaders to discuss the opportunities and risks presented by the rise of e-cigarettes, and to identify areas of consensus to inform future action.
Last night when I made the Strawberry Rhubarb Jam, I decided I couldn’t wait to enjoy it. So I took some of the jam and made these simple and sweet little hand pies. As the name implies, they use traditional pie dough. We used large cookie cutters to create the shape, created vents on the top (beware if you skip this step… you could have a huge mess on your hands!) and sprinkled them with a bit of sugar. I dare you to eat just one 🙂

The pie dough takes just a few minutes to prepare. I did mine in the food processor. Wrap in plastic wrap and chill for at least 30 minutes before rolling.

Be sure to cut vent holes!! Like any traditional fruit pie, these require venting. Otherwise they will probably burst from the steam created by the filling. As I mentioned above, the filling was created using our Strawberry Rhubarb Jam.

Place the cutout without vent holes on the cookie sheet. Spoon filling in the center, being sure not to go all the way to the edges. Using your fingers, wet the edges with a bit of water and place the cutout with the vent holes on top. Press the edges lightly to seal. We just spritzed the tops with water and sprinkled with sugar prior to baking. Bake until golden.

Some of the jam will likely seep out of the vent holes. This is normal. If you use less filling, you will have more cookie than jam. Kind of too much cookie. I don’t know about you, but I am ok with (and totally prefer) a little seepage of jam 🙂 Don’t try to overfill or you’ll have a great big mess on your hands. Literally. Cool before serving as the filling is VERY hot.

Strawberry Rhubarb Hand Pie {Vegan}
Servings: 8 | Author: aimee

Ingredients
FOR THE CRUST:
1-1/4 C All-Purpose Flour
1/8 tsp Salt
1-1/2 Tbl vegan Sugar, plus more for sprinkling the tops
1/8 C Vegetable Shortening, very cold
6 Tbl vegan Butter, very cold and cubed
1/8 to 1/4 C Ice Water
FOR THE FILLING:
About 1 Tbl per pie of our Strawberry Rhubarb Jam

Instructions
Combine all crust ingredients except the ice water in a food processor and process until coarse crumbs appear. Drizzle in the ice water until a ball of dough is nearly formed. Turn out onto a lightly floured surface and knead until it just comes together, 4-5 turns. Wrap in plastic wrap and refrigerate for at least 30 minutes.
Preheat the oven to 350 degrees.
Once chilled, turn out onto a lightly floured surface and roll to 1/8" thick. Using a 3" cookie cutter, cut out enough for 8 tops and bottoms. Place the bottoms on a prepared cookie sheet. For the tops, using mini cutters or just a sharp knife, create vents.
Spoon about 1 Tbl of the Strawberry Rhubarb Jam into the center of each of the bottoms, taking care not to get too close to the edges. Do not overfill.
Using a bit of water on your fingertips, moisten the edges of the bottom crust. Place the vented tops over the filling and press to lightly seal the edges. This will prevent the filling from oozing out.
Spritz the tops of each pie with water and sprinkle with a bit of sugar.
Bake at 350 degrees for about 20 minutes or until golden.
def register(self, device):
    if not device:
        LOG.error("Called with an invalid device: %r", device)
        return

    LOG.info("Subscribing to events from %r", device)
    self.devices[device.host] = device

    with self._event_thread_cond:
        subscriptions = self._subscriptions[device] = []
        for service in self.subscription_service_names:
            if service in device.services:
                subscription = Subscription(device, self.port, service)
                subscriptions.append(subscription)
                self._schedule(0, subscription)
        self._event_thread_cond.notify()
/**
 * Called when an item is removed from the RecyclerView. Implementors can choose
 * whether and how to animate that change, but must always call
 * {@link #dispatchRemoveFinished(android.support.v7.widget.RecyclerView.ViewHolder)} when done, either
 * immediately (if no animation will occur) or after the animation actually finishes.
 * The return value indicates whether an animation has been set up and whether the
 * ItemAnimator's {@link #runPendingAnimations()} method should be called at the
 * next opportunity. This mechanism allows ItemAnimator to set up individual animations
 * as separate calls to {@link #animateAdd(android.support.v7.widget.RecyclerView.ViewHolder) animateAdd()},
 * {@link #animateMove(android.support.v7.widget.RecyclerView.ViewHolder, int, int, int, int) animateMove()},
 * {@link #animateRemove(android.support.v7.widget.RecyclerView.ViewHolder) animateRemove()}, and
 * {@link #animateChange(android.support.v7.widget.RecyclerView.ViewHolder, android.support.v7.widget.RecyclerView.ViewHolder, int, int, int, int)} come in one by one,
 * then start the animations together in the later call to {@link #runPendingAnimations()}.
 *
 * <p>This method may also be called for disappearing items which continue to exist in the
 * RecyclerView, but for which the system does not have enough information to animate
 * them out of view. In that case, the default animation for removing items is run
 * on those items as well.</p>
 *
 * @param holder The item that is being removed.
 * @return true if a later call to {@link #runPendingAnimations()} is requested,
 * false otherwise.
 */
public boolean animateRemove(RecyclerView.ViewHolder holder) {
    log.trace("#animateRemove");
    if (holder == null || holder.itemView == null) {
        return false;
    }
    final Event event = Event.of(holder, EventType.REMOVE);
    final AnimationTask.AnimationEvent animationEvent = new AnimationTask.AnimationEvent(
            event,
            AnimationTask.ViewPreState.of(holder.itemView),
            Option.none());
    prepareState(animationEvent.getEvent());
    eventQueue.add(animationEvent);
    return true;
}
/**
 * Removes the option with the specified name (if it has been set) from the set
 * of execution options. If the option is not currently set, this method will
 * do nothing.
 *
 * @param optionName the name of the option to remove
 * @return a reference to this object
 *
 * @throws IllegalArgumentException if the given optionName is blank
 */
public ExecutionOptions removeOption(String optionName) {
    if (StringUtils.isBlank(optionName)) {
        throw new IllegalArgumentException("optionName was blank");
    }
    options.remove(optionName);
    return this;
}
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

import * as PQP from "@microsoft/powerquery-parser";

import { Ast, Type } from "@microsoft/powerquery-parser/lib/powerquery-parser/language";
import { TXorNode } from "@microsoft/powerquery-parser/lib/powerquery-parser/parser";

import { AutocompleteItem } from "./autocompleteItem";

export type TriedAutocompleteFieldAccess = PQP.Result<AutocompleteFieldAccess | undefined, PQP.CommonError.CommonError>;

export type TriedAutocompleteKeyword = PQP.Result<ReadonlyArray<AutocompleteItem>, PQP.CommonError.CommonError>;

export type TriedAutocompleteLanguageConstant = PQP.Result<AutocompleteItem | undefined, PQP.CommonError.CommonError>;

export type TriedAutocompletePrimitiveType = PQP.Result<ReadonlyArray<AutocompleteItem>, PQP.CommonError.CommonError>;

export interface Autocomplete {
    readonly triedFieldAccess: TriedAutocompleteFieldAccess;
    readonly triedKeyword: TriedAutocompleteKeyword;
    readonly triedLanguageConstant: TriedAutocompleteLanguageConstant;
    readonly triedPrimitiveType: TriedAutocompletePrimitiveType;
}

export interface AutocompleteFieldAccess {
    readonly field: TXorNode;
    readonly fieldType: Type.TPowerQueryType;
    readonly inspectedFieldAccess: InspectedFieldAccess;
    readonly autocompleteItems: ReadonlyArray<AutocompleteItem>;
}

export interface InspectedFieldAccess {
    readonly isAutocompleteAllowed: boolean;
    readonly maybeIdentifierUnderPosition: Ast.GeneralizedIdentifier | undefined;
    readonly fieldNames: ReadonlyArray<string>;
}

export interface TrailingToken extends PQP.Language.Token.Token {
    readonly isInOrOnPosition: boolean;
}
import Test.Hspec.Megaparsec
import Test.Hspec
import Parser.Lex.Lexer
import Parser.Lex.Stream
import Language.Tokens
import Data.Text ( pack )
import Text.Megaparsec hiding ( Tokens )

testLexer :: String -> Either Error [Tokens]
testLexer = parse (manyTill lexerToken eof) mempty . pack

main :: IO ()
main = hspec $ describe "Lexer Suite Test" $ do
  it "Nothing To Lex" $ testLexer mempty `shouldParse` []

  it "Reserved Names"
    $ testLexer "if match define fn let type auto -> => <~ ~> ~ #[ ]# [ ] ( ) { } ; , := | : v `"
    `shouldParse` [ If, Match, Define, Fn, Let, Type, Auto, ArrowRight, BigArrowRight
                  , CoerceLeft, CoerceRight, Iso, OpenMacroBlock, CloseMacroBlock
                  , OpenBrace, CloseBrace, OpenParent, CloseParent, OpenBracket
                  , CloseBracket, SemiColon, Comma, MorseEqual, Pipe, Colon
                  , TypeSeparator, Splice ]

  it "Operators"
    $ testLexer "+ +_ <~~ ==> _ +_"
    `shouldParse` [IdenOp "+", IdenOp "+_", IdenOp "<~~", IdenOp "==>", Iden "_", IdenOp "+_"]

  it "Identifier"
    $ testLexer "a+ a +a"
    `shouldParse` [Iden "a+", Iden "a", IdenOp "+", Iden "a"]

  it "Numbers (Float And Int)"
    $ testLexer "1 1.0" `shouldParse` [INT 1, FLOAT 1.0]

  it "String and Escape"
    $ testLexer "\"a\\\"b\n\t\"" `shouldParse` [String "a\"b\n\t"]

  it "In a tricky case (entire program)"
    $ testLexer "type Either A B :={Left A>> Ignore This Message\nv Right B{>{><}Also Ignore This Message<}};type A :|:B:={It A v This B};define A :|:B~Either A B :={from :A :|:B ->Either A B;from arg :={let a+ :=a+ +a;a+ + +a- a};to :Either A B-> A :|:B;to arg :={if arg |some =>test;};"
    `shouldParse` [ Type, TypeName "Either", TypeVar "A", TypeVar "B", MorseEqual, OpenBracket
                  , TypeName "Left", TypeVar "A", TypeSeparator, TypeName "Right", TypeVar "B"
                  , CloseBracket, SemiColon
                  , Type, TypeVar "A", IdenOp ":|:", TypeVar "B", MorseEqual, OpenBracket
                  , TypeName "It", TypeVar "A", TypeSeparator, TypeName "This", TypeVar "B"
                  , CloseBracket, SemiColon
                  , Define, TypeVar "A", IdenOp ":|:", TypeVar "B", Iso, TypeName "Either"
                  , TypeVar "A", TypeVar "B", MorseEqual, OpenBracket
                  , Iden "from", Colon, TypeVar "A", IdenOp ":|:", TypeVar "B", ArrowRight
                  , TypeName "Either", TypeVar "A", TypeVar "B", SemiColon
                  , Iden "from", Iden "arg", MorseEqual, OpenBracket
                  , Let, Iden "a+", MorseEqual, Iden "a+", IdenOp "+", Iden "a", SemiColon
                  , Iden "a+", IdenOp "+", IdenOp "+", Iden "a-", Iden "a", CloseBracket, SemiColon
                  , Iden "to", Colon, TypeName "Either", TypeVar "A", TypeVar "B", ArrowRight
                  , TypeVar "A", IdenOp ":|:", TypeVar "B", SemiColon
                  , Iden "to", Iden "arg", MorseEqual, OpenBracket
                  , If, Iden "arg", Pipe, Iden "some", BigArrowRight, Iden "test", SemiColon
                  , CloseBracket, SemiColon ]
India’s goods and services tax collections fell to Rs 83,346 crore in October, from more than Rs 90,000 crore in each of the first three months after the new tax regime was rolled out on July 1. A finance ministry statement attributed the lower collections to the release of state and central GST out of integrated GST (IGST) paid in the first three months, reduction in taxes and payment of GST based on self-declared tax returns.

So far, 95.9 lakh taxpayers have registered under GST, of which 15.1 lakh are composition dealers who are required to file returns every quarter. As many as 50.1 lakh returns were filed for October till November 26, the statement said.

“While the overall collection for the October month is lower, this may not be a cause of immediate concern as it might be due to refunds given to exporters and opening credit claimed by businesses, along with some reduction in the rate in October,” said Pratik Jain, leader-indirect tax, PwC. “The collection for November may also be on the lower side due to substantial rate cuts from the middle of the month. While to a large extent the shortfall is likely to be offset by an increase in demand, the results may take another 2-3 months to become visible.”

States collected Rs 87,238 crore of SGST in July, August, September and October, it said. States get a share in the IGST collection from inter-state trade when IGST collected is used for payment of SGST. By way of such share, states received Rs 31,821 crore for August, September and October, and Rs 13,882 crore for October.

The states are also entitled to compensation for loss of revenue from the rollout of GST. A compensation amount of Rs 10,806 crore has been released to the states for July and August 2017 and Rs 13,695 crore for September and October.

“States’ revenues have thus been fully protected, taking base year revenue as 2015-16 and providing for a projected revenue growth rate of 14 per cent,” the statement said. This adds up to Rs 1.57 lakh crore for states. The Centre’s revenue on account of GST in July, August, September and October added up to Rs 58,556 crore.

In addition to this, the statement said, Rs 16,233 crore had been transferred from the IGST account to CGST for the first three months and Rs 10,145 crore for October. “Taxpayers are using the balance credit available with them in the previous tax regime,” the statement said.

The government has offered three reasons for revenues being lower in October. One, the first-time requirement of paying IGST on transfer of goods from one state to another, even within the same company. This meant an additional cash flow of IGST in the first three months, but the same was not being utilised for paying CGST and SGST when the final transaction of these goods took place. Two, the overall incidence of taxes on most commodities had come down under GST. Three, because GST is now based on self-declared tax returns, the assessee decided on his own how much tax liability he had and claimed input tax credit as per his own calculations.
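(As a consistency check computed from the figures quoted above: Rs 87,238 crore of SGST + Rs 31,821 crore and Rs 13,882 crore of IGST shares + Rs 10,806 crore and Rs 13,695 crore of compensation = Rs 157,442 crore, which matches the statement's figure of roughly Rs 1.57 lakh crore for states.)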
import warnings

from django.apps import AppConfig


class DashboardConfig(AppConfig):
    name = 'dashboard'

    def ready(self):
        # Remove warnings of the bunq sdk module
        warnings.filterwarnings(action='ignore', module='bunq')
def _exponent_handler_factory(ion_type, exp_chars, parse_func, first_char=None):
    def transition(prev, c, ctx, trans):
        if c in _SIGN and prev in exp_chars:
            ctx.value.append(c)
        else:
            _illegal_character(c, ctx)
        return trans

    illegal = exp_chars + _SIGN
    return _numeric_handler_factory(_DIGITS, transition,
                                    lambda c, ctx: c in exp_chars,
                                    illegal, parse_func,
                                    illegal_at_end=illegal,
                                    ion_type=ion_type,
                                    first_char=first_char)
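As a standalone illustration (not library code) of the sign-placement rule the transition above enforces: a '+' or '-' is only legal immediately after one of the exponent characters, so "1e+5" tokenizes while a sign anywhere else is rejected.

# Standalone sketch of the rule enforced by transition() above; the function
# name sign_allowed is invented for illustration only.
_SIGN = '+-'

def sign_allowed(prev, c, exp_chars='eE'):
    # a sign is accepted only when the previous character was the exponent marker
    return c in _SIGN and prev in exp_chars

assert sign_allowed('e', '+')       # "1e+5" -> sign follows 'e', accepted
assert not sign_allowed('5', '-')   # "1e5-" -> sign follows a digit, rejected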
import { Box, Heading, ResponsiveContext } from "grommet"
import { lighten } from "polished"
import { ReactNode } from "react"

import { BaseSectionProps } from "./BaseSectionProps"
import { purple } from "./colors"
import SectionContainer from "./SectionContainer"

export type InfoBlock = {
  title?: string
  body: ReactNode
  icon: ReactNode
}

type BlockProps = InfoBlock & {
  isMobileLayout: boolean
  alignIconHorizontallyOnNarrowWidth?: boolean
}

/* Blocks position their icons relative to the content in a different way on different device widths
 *
 * On phones: above unless alignIconHorizontallyOnNarrowWidth is specified
 * On tablet-sized devices: on the left
 * On desktops and laptops: above
 */
const Block = (props: BlockProps) => (
  <ResponsiveContext.Consumer>
    {(size) => (
      <Box
        direction={
          (size === "small" && props.alignIconHorizontallyOnNarrowWidth) || size === "mediumSmall"
            ? "row"
            : "column"
        }
        gap={props.isMobileLayout ? "small" : "medium"}
        fill="horizontal"
      >
        <Box direction="row" align="start" flex="grow">
          <Box
            flex="shrink"
            pad="small"
            style={{
              borderRadius: "50%",
              border: `1px solid ${lighten(0.3, purple)}`,
              background: "white",
            }}
          >
            {props.icon}
          </Box>
        </Box>
        <Box direction="column" fill="horizontal">
          <Heading level="3" margin="0">
            {props.title}
          </Heading>
          <Box>{props.body}</Box>
        </Box>
      </Box>
    )}
  </ResponsiveContext.Consumer>
)

type InfoBlocksSectionProps = {
  title: string
  subtitle?: string
  blocks: Array<InfoBlock>
  alignIconsHorizontallyOnNarrowWidth?: boolean
} & BaseSectionProps

const InfoBlocksSection = (props: InfoBlocksSectionProps) => (
  <SectionContainer flex="grow">
    <Box pad={{ horizontal: "large" }} style={{ maxWidth: 1600 }} alignSelf="center">
      <Box direction="column" fill="horizontal" align="start">
        <Heading level={2} margin="0">
          {props.title}
        </Heading>
        {props.subtitle ? <Box fill="horizontal">{props.subtitle}</Box> : null}
      </Box>
      <Box height="20px" />
      <Box
        direction={props.isMobileLayout ? "column" : "row"}
        gap="40px"
        pad={{ vertical: "medium" }}
        align="start"
      >
        {/* key lets React track list items across re-renders */}
        {props.blocks.map((block, index) => (
          <Box key={index} fill="horizontal">
            <Block
              {...block}
              isMobileLayout={props.isMobileLayout}
              alignIconHorizontallyOnNarrowWidth={props.alignIconsHorizontallyOnNarrowWidth}
            />
          </Box>
        ))}
      </Box>
    </Box>
  </SectionContainer>
)

export default InfoBlocksSection
def wheelCoronalScroll(self, event):
    direction = np.sign(event.angleDelta().y())
    if event.modifiers() == Qt.ShiftModifier:
        if direction > 0:
            self.canvas.coronalIndex = max(0, self.canvas.coronalIndex - 1)
        else:
            # clamp to the last valid slice index; shape[1] itself would be
            # out of range when used to index the image
            self.canvas.coronalIndex = min(self.canvas.image.shape[1] - 1,
                                           self.canvas.coronalIndex + 1)
        coronalQPixmap = self.canvas.getQPixmap(self.canvas.getImageWithCoronal())
        self.coronalQLabel.setPixmap(coronalQPixmap)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
:Authors: <NAME> & <NAME>

Module encapsulating all the classes required to manipulate weather forecasts.
"""

import netCDF4
import pickle
import numpy as np
import math
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import animation
from mpl_toolkits.basemap import Basemap
from scipy.interpolate import RegularGridInterpolator as rgi


class Weather:
    """
    This class is supposed to be used on GrAD's server files. No warranty however.

    :ivar numpy.array lat: latitude in degree, comprised in [-90 : 90]
    :ivar numpy.array lon: longitude in degree, comprised in [0 : 360]
    :ivar numpy.array time: in days. Time is given in days (GrADS gives 81 time steps
        of 3 hours so it is 10.125 days with time steps of 0.125 days)
    :ivar u: velocity toward east in m/s. Must be of shape (ntime, nlat, nlon)
    :ivar v: velocity toward north.
    """

    def __init__(self, lat=None, lon=None, time=None, u=None, v=None):
        """
        Class constructor; by default sets all attributes to None.
        lat, lon, time, u and v must have the same definition as in a netCDF4 file
        of the GrADS server.
        """
        self.lat = lat
        self.lon = lon
        self.time = time
        self.u = u
        self.v = v

    @classmethod
    def load(cls, path, latBound=[-90, 90], lonBound=[0, 360], timeSteps=[0, 64]):
        """
        Takes a file path where a Weather object is saved and loads it into the script.
        If no lat or lon boundaries are defined, it takes the whole span present in the
        saved object. If no number of time steps is defined it takes the whole span
        present in the saved object (but not more than 81, the value for GrAD files).

        :param str path: path to file of saved Weather object.
        :param latBound: [minlat, maxlat], lat span one wants to consider, the largest span is [-90, 90].
        :type latBound: list of int.
        :param lonBound: [minlon, maxlon], lon span one wants to consider, the largest span is [0, 360].
        :type lonBound: list of int.
        :param timeSteps: time steps of the forecasts one wants to load.
        :type timeSteps: list of int.
        :return: loaded object.
        :rtype: :any:`Weather`
        """
        filehandler = open(path, 'rb')
        obj = pickle.load(filehandler)
        filehandler.close()
        Cropped = obj.crop(latBound, lonBound, timeSteps)
        return Cropped

    @classmethod
    def download(cls, url, path, ens=False, latBound=[-90, 90], lonBound=[0, 360], timeSteps=[0, 64]):
        """
        Downloads a Weather object from a url server and writes it into the path file.

        :param str url: url to server (designed for GrAD server).
        :param str path: path toward where the downloaded object is to be saved.
        :param bool ens: True if the downloaded data corresponds to a GEFS forecast, False for GFS.
        :param latBound: [minlat, maxlat], lat span one wants to consider, the largest span is [-90, 90].
        :type latBound: list of int.
        :param lonBound: [minlon, maxlon], lon span one wants to consider, the largest span is [0, 360].
        :type lonBound: list of int.
        :param timeSteps: time steps of the forecasts one wants to load.
        :type timeSteps: list of int.
        :return: the object corresponding to the downloaded weather.
        :rtype: :any:`Weather`
        """
        file = netCDF4.Dataset(url)
        lat = file.variables['lat'][:]
        lon = file.variables['lon'][:]
        # put time bounds !
        time = file.variables['time'][timeSteps[0]:timeSteps[1]]

        # latitude lower and upper index
        latli = np.argmin(np.abs(lat - latBound[0]))
        latui = np.argmin(np.abs(lat - latBound[1]))

        # longitude lower and upper index
        lonli = np.argmin(np.abs(lon - lonBound[0]))
        lonui = np.argmin(np.abs(lon - lonBound[1]))

        lat = lat[latli:latui]
        lon = lon[lonli:lonui]

        if ens:
            u = file.variables['ugrd10m'][0, timeSteps[0]:timeSteps[1], latli:latui, lonli:lonui]
            v = file.variables['vgrd10m'][0, timeSteps[0]:timeSteps[1], latli:latui, lonli:lonui]
        else:
            u = file.variables['ugrd10m'][timeSteps[0]:timeSteps[1], latli:latui, lonli:lonui]
            v = file.variables['vgrd10m'][timeSteps[0]:timeSteps[1], latli:latui, lonli:lonui]

        toBeSaved = cls(lat, lon, time, u, v)
        file.close()
        filehandler = open(path, 'wb')
        pickle.dump(toBeSaved, filehandler)
        filehandler.close()
        return toBeSaved

    def getPolarVel(self):
        """
        Computes wind magnitude and direction and adds them to the object's attributes
        as self.wMag (magnitude) and self.wAng (direction toward which the wind is blowing).
        """
        self.wMag = np.empty(np.shape(self.u))
        self.wAng = np.empty(np.shape(self.u))
        for t in range(np.size(self.time)):
            self.wMag[t] = (self.u[t] ** 2 + self.v[t] ** 2) ** 0.5
            for i in range(np.size(self.lat)):
                for j in range(np.size(self.lon)):
                    self.wAng[t, i, j] = (180 / math.pi * math.atan2(self.u[t, i, j], self.v[t, i, j])) % 360

    @staticmethod
    def returnPolarVel(u, v):
        """
        Computes wind magnitude and direction from the velocities u and v.

        :param float u: velocity toward east.
        :param float v: velocity toward north.
        :return: magnitude, direction
        :rtype: float, float
        """
        mag = (u ** 2 + v ** 2) ** 0.5
        ang = (180 / math.pi * math.atan2(u, v)) % 360
        return mag, ang

    def crop(self, latBound=[-90, 90], lonBound=[0, 360], timeSteps=[0, 64]):
        """
        Returns a Weather object whose data is cropped to the selected range of lon, lat
        and time steps. If no lat or lon boundaries are defined it takes the whole span
        present in the object. If no number of time steps is defined it takes the whole
        span present in the object (but not more than 81, the value for GrAD files).

        :param latBound: [minlat, maxlat], the largest span is [-90, 90]
        :type latBound: list of int
        :param lonBound: [minlon, maxlon], the largest span is [0, 360]
        :type lonBound: list of int
        :param timeSteps: time steps of the forecasts one wants to keep
        :type timeSteps: list of int
        :return: the cropped object.
        :rtype: :any:`Weather`
        """
        if latBound != [-90, 90] or lonBound != [0, 360]:
            Cropped = Weather()
            lat_inds = np.where((self.lat > latBound[0]) & (self.lat < latBound[1]))
            lon_inds = np.where((self.lon > lonBound[0]) & (self.lon < lonBound[1]))
            Cropped.time = self.time[timeSteps[0]:timeSteps[1]]
            Cropped.lat = self.lat[lat_inds]
            Cropped.lon = self.lon[lon_inds]
            Cropped.u = np.empty((timeSteps[1] - timeSteps[0], np.size(lat_inds), np.size(lon_inds)))
            Cropped.v = np.empty((timeSteps[1] - timeSteps[0], np.size(lat_inds), np.size(lon_inds)))
            for time in range(timeSteps[1] - timeSteps[0]):
                i = 0
                for idlat in lat_inds[0]:
                    j = 0
                    for idlon in lon_inds[0]:
                        Cropped.u[time, i, j] = self.u[timeSteps[0] + time, idlat, idlon]
                        Cropped.v[time, i, j] = self.v[timeSteps[0] + time, idlat, idlon]
                        j = j + 1
                    i = i + 1
        elif latBound == [-90, 90] and lonBound == [0, 360] and timeSteps != [0, 64]:
            Cropped = Weather()
            Cropped.lat = self.lat
            Cropped.lon = self.lon
            Cropped.time = self.time[timeSteps[0]:timeSteps[1]]
            Cropped.u = self.u[timeSteps[0]:timeSteps[1]][:][:]
            Cropped.v = self.v[timeSteps[0]:timeSteps[1]][:][:]
        else:
            Cropped = self

        return Cropped

    def plotQuiver(self, proj='mill', res='i', instant=0, Dline=5, density=1):
        """
        Plots a quiver of the :any:`Weather` object's wind for a given instant, on a
        Basemap projection using the lat/lon limits of the data itself.

        :param str proj: `Basemap <https://matplotlib.org/basemap/api/basemap_api.html#module-mpl_toolkits.basemap>`_ projection method.
        :param str res: `Basemap <https://matplotlib.org/basemap/api/basemap_api.html#module-mpl_toolkits.basemap>`_ resolution.
        :param int instant: time index at which the wind should be displayed.
        :param int Dline: lat and lon steps to plot parallels and meridians.
        :param int density: lat and lon steps to plot quiver.
        :return: plot framework.
        :rtype: `pyplot <https://matplotlib.org/api/pyplot_api.html>`_
        """
        # Start with setting the map projection using the limits of the lat/lon data itself:
        plt.figure()
        m = Basemap(projection=proj, lat_ts=10,
                    llcrnrlon=self.lon.min(), urcrnrlon=self.lon.max(),
                    llcrnrlat=self.lat.min(), urcrnrlat=self.lat.max(),
                    resolution=res)
        x, y = m(*np.meshgrid(self.lon, self.lat))
        m.quiver(x[0::density, 0::density], y[0::density, 0::density],
                 self.u[instant, 0::density, 0::density],
                 self.v[instant, 0::density, 0::density])
        m.drawcoastlines()
        m.fillcontinents()
        m.drawmapboundary()
        m.drawparallels(self.lat[0::Dline], labels=[1, 0, 0, 0])
        m.drawmeridians(self.lon[0::Dline], labels=[0, 0, 0, 1])
        plt.title('Wind amplitude and direction in [m/s] at time : ' + str(self.time[instant]) + ' days')
        plt.show()
        return m

    def plotMultipleQuiver(self, otherWeather, proj='mill', res='i', instant=0, Dline=5, density=1):
        """
        Pretty much the same as :func:`plotQuiver` but superimposes two quivers.

        :param Weather otherWeather: second forecast to be plotted with the one calling the method.
        """
        # Plot the field using Basemap. Start with setting the map projection
        # using the limits of the lat/lon data itself:
        plt.figure()
        m = Basemap(projection=proj, lat_ts=10,
                    llcrnrlon=self.lon.min(), urcrnrlon=self.lon.max(),
                    llcrnrlat=self.lat.min(), urcrnrlat=self.lat.max(),
                    resolution=res)
        x, y = m(*np.meshgrid(self.lon, self.lat))
        m.quiver(x[0::density, 0::density], y[0::density, 0::density],
                 self.u[instant, 0::density, 0::density],
                 self.v[instant, 0::density, 0::density], color='black')
        x, y = m(*np.meshgrid(otherWeather.lon, otherWeather.lat))
        m.quiver(x[0::density, 0::density], y[0::density, 0::density],
                 otherWeather.u[instant, 0::density, 0::density],
                 otherWeather.v[instant, 0::density, 0::density], color='red')
        m.drawcoastlines()
        m.fillcontinents()
        m.drawmapboundary()
        m.drawparallels(self.lat[0::Dline], labels=[1, 0, 0, 0])
        m.drawmeridians(self.lon[0::Dline], labels=[0, 0, 0, 1])
        plt.title('Wind amplitude and direction in [m/s] at time : ' + str(self.time[instant]) + ' days')
        plt.show()
        return plt

    def plotColorQuiver(self, proj='mill', res='i', instant=0, Dline=5, density=1):
        """
        Pretty much the same as :func:`plotQuiver` but on a contour plot of wind magnitude.
        """
        # Plot the field using Basemap. Start with setting the map projection
        # using the limits of the lat/lon data itself:
        font = {'family': 'normal', 'weight': 'bold', 'size': 22}
        matplotlib.rc('font', **font)
        plt.figure()
        m = Basemap(projection=proj, lat_ts=10,
                    llcrnrlon=self.lon.min(), urcrnrlon=self.lon.max(),
                    llcrnrlat=self.lat.min(), urcrnrlat=self.lat.max(),
                    resolution=res)
        x, y = m(*np.meshgrid(self.lon, self.lat))
        m.pcolormesh(x, y, self.wMag[instant], shading='flat', cmap=plt.cm.jet)
        m.quiver(x[0::density, 0::density], y[0::density, 0::density],
                 self.u[instant, 0::density, 0::density],
                 self.v[instant, 0::density, 0::density])
        cbar = m.colorbar(location='right')
        cbar.ax.set_ylabel('wind speed m/s')
        m.drawcoastlines()
        m.fillcontinents()
        m.drawmapboundary()
        m.drawparallels(self.lat[0::Dline], labels=[1, 0, 0, 0])
        m.drawmeridians(self.lon[0::Dline], labels=[0, 0, 0, 1])
        plt.title('Wind amplitude and direction in [m/s] at time : ' + str(self.time[instant]) + ' days')
        plt.show()
        return plt

    def animateQuiver(self, proj='mill', res='i', instant=0, Dline=5, density=1):
        """
        Pretty much the same as :func:`plotQuiver` but animates the quiver over the
        different time steps, starting at instant.
        """
        # Plot the field using Basemap. Start with setting the map projection
        # using the limits of the lat/lon data itself:
        fig = plt.figure()
        m = Basemap(projection=proj, lat_ts=10,
                    llcrnrlon=self.lon.min(), urcrnrlon=self.lon.max(),
                    llcrnrlat=self.lat.min(), urcrnrlat=self.lat.max(),
                    resolution=res)
        x, y = m(*np.meshgrid(self.lon, self.lat))
        plt.C = m.pcolormesh(x, y, self.wMag[instant], shading='flat', cmap=plt.cm.jet)
        plt.Q = m.quiver(x[0::density, 0::density], y[0::density, 0::density],
                         self.u[instant, 0::density, 0::density],
                         self.v[instant, 0::density, 0::density])
        m.colorbar(location='right')
        m.drawcoastlines()
        m.fillcontinents()
        m.drawmapboundary()
        m.drawparallels(self.lat[0::Dline], labels=[1, 0, 0, 0])
        m.drawmeridians(self.lon[0::Dline], labels=[0, 0, 0, 1])

        def update_quiver(t, plt, self):
            """Method required to animate quiver and contour plot."""
            plt.C = m.pcolormesh(x, y, self.wMag[instant + t], shading='flat', cmap=plt.cm.jet)
            plt.Q = m.quiver(x[0::density, 0::density * 2], y[0::density, 0::density * 2],
                             self.u[instant + t, 0::density, 0::density * 2],
                             self.v[instant + t, 0::density, 0::density * 2])
            plt.title('Wind amplitude and direction in [m/s] at time : ' + str(self.time[instant + t]) + ' days')
            return plt

        anim = animation.FuncAnimation(fig, update_quiver,
                                       frames=range(np.size(self.time[instant:])),
                                       fargs=(plt, self), interval=50, blit=False)
        plt.show()
        return anim

    def Interpolators(self):
        """
        Adds the u and v `Interpolator <https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.interpolate.RegularGridInterpolator.html>`_
        objects to the weather object (two new attributes: self.uInterpolator and
        self.vInterpolator). ::

            u = self.uInterpolator([t, lat, lon])  # with u in m/s, t in days, lat and lon in degrees.
        """
        self.uInterpolator = rgi((self.time, self.lat, self.lon), self.u)
        self.vInterpolator = rgi((self.time, self.lat, self.lon), self.v)
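A minimal usage sketch of the class above; the pickle file name and the bounds are hypothetical, everything else follows the API defined by the class:

# load a saved forecast cropped to a small window (file name is an assumption)
wthr = Weather.load('./weather_obj.pkl', latBound=[40, 50], lonBound=[350, 360], timeSteps=[0, 32])
wthr.getPolarVel()     # adds wMag / wAng
wthr.Interpolators()   # adds uInterpolator / vInterpolator
u = wthr.uInterpolator([0.5, 45.0, 355.0])  # eastward wind in m/s at t=0.5 days, 45N, 355E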
Sunday on CNN, veteran journalist Carl Bernstein reacted to the report that Democratic presidential nominee Hillary Clinton is canceling her trip to California after being diagnosed with pneumonia, saying Clinton needs to spend an hour in front of the press with her doctor discussing her medical history. “I think we can hope that some people around her will finally say, ‘Hillary, you’ve got to open up in all kinds of ways here because that’s when you’re really at your best.’ But I think, among other things, she and her doctor need to be in front of the press for an hour with medical records and discussing and open to questions about her medical history, and Donald Trump needs to do the same. And we need to demand it of both of them,” Bernstein stated.
A number of leading browser vendors and other tech companies, including Microsoft, Google, Apple, Adobe, Facebook, HP, Nokia, Mozilla, Opera and the W3C, just announced the launch of the Web Platform Docs project at WebPlatform.org. The project aims to create “a new, authoritative open web standards documentation site,” says Opera Software. The wiki-like site, says Opera, wants to ensure that developers can easily find “accurate, quality information on all the latest HTML5, CSS4 and other standards features across the multitude of available web-based resources.” Currently, the companies behind WebPlatform.org argue, developers struggle to find authoritative answers to their questions about modern web technologies and often, developers have to resort to figuring out the right solutions through trial and error (the Google team describes this as a “scavenger hunt”). The new site, says Adobe, will change this by providing developers a “single, definitive resource to go to.” On the site, users will find API documentation, information on browser compatibility, examples, best practices and the status of the various specifications. The site has been seeded with information from the participating organizations, but anyone will be able to contribute content to the project. The W3C will serve as the site’s convener and curator, but the various participating organizations stress that this is a community effort. “People in the web community — including browser makers, authoring tool makers, and leading-edge developers and designers — have tremendous experience and practical knowledge about the web,” said W3C Director Tim Berners-Lee in a canned statement today. “Web Platform Docs is an ambitious project where all of us who are passionate about the web can share knowledge and help one another.”
Evaluation of the FVTD technique for electromagnetic field problems containing highly inhomogeneous structures

In this paper, the cut-off frequency of the dominant mode of a finned waveguide is computed using the Symmetric Condensed Node Transmission Line Matrix (SCN-TLM) and Finite Volume Time Domain (FVTD) methods. The cut-off frequencies are computed for various fin sizes as well as for fine and coarse meshes, and the resulting values are compared with each other and with a benchmark solution.
package ecs

import (
	"fmt"
	"net/http"

	"github.com/qiniu/stack-go/components/client"
)

// StartInstanceParams holds the parameters for starting an instance.
type StartInstanceParams struct {
	RegionID   string `json:"region_id"`
	InstanceID string `json:"instance_id"`

	// Applies to instance families that include local disks, such as d1, i1 or i2.
	// When a local disk on such an instance fails, this parameter controls whether
	// starting the instance restores it to its initial healthy state. Valid values:
	// - true: restore the instance to its initial healthy state; any data on the
	//   instance's original local disks will be lost.
	// - false (default): do nothing and keep the instance as it is.
	InitLocalDisk bool `json:"init_local_disk"`
}

// StartInstanceResponse is the response returned when starting an instance.
type StartInstanceResponse struct {
	RequestID string `json:"request_id"`
}

// StartInstance starts an instance.
func (s *ECS) StartInstance(p *StartInstanceParams) (resp *StartInstanceResponse, err error) {
	req := client.NewRequest(http.MethodPost, fmt.Sprintf("/v1/vm/instance/%s/start", p.InstanceID)).
		WithRegionID(&p.RegionID).
		WithJSONBody(p)
	err = s.client.Call(req, &resp)
	return
}
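A hypothetical call site, assuming an *ECS value `svc` constructed elsewhere with this package's client; the region and instance IDs are placeholders:

// svc is an assumed, already-configured *ECS client; IDs are placeholders.
resp, err := svc.StartInstance(&StartInstanceParams{
	RegionID:      "region-placeholder",
	InstanceID:    "i-xxxxxxxx",
	InitLocalDisk: false, // keep local-disk data (the default behavior)
})
if err != nil {
	// handle the API error
}
fmt.Println(resp.RequestID)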
Molecular dynamics simulation of anticancer drug delivery from carbon nanotube using metal nanowires

In this study, we have investigated the delivery of cisplatin, an anticancer drug molecule, in different carbon nanotubes (CNTs) in the gas phase using molecular dynamics simulation. We examined the effect of the shape and composition of the releasing agent by using different nanowires and nanoclusters. We also investigated the doping effect on the drug delivery process using N-, Si-, B-, and Fe-doped CNTs. Various thermodynamic, structural, and dynamical properties were studied for the pure and doped CNTs. Our results show that doping of the CNT has a significant effect on the rate of the drug-releasing process regardless of the composition of the releasing agent. © 2019 Wiley Periodicals, Inc.
/**
 * Sleep for some number of seconds.
 *
 * @param seconds The number of seconds to sleep.
 * @see <a href="https://stackoverflow.com/questions/24104313/how-do-i-make-a-delay-in-java">make a delay in java</a>
 */
public static void sleep(int seconds) {
    try {
        // Thread.sleep takes milliseconds, so scale the requested seconds
        Thread.sleep(seconds * 1000L);
    } catch (InterruptedException ex) {
        // restore the interrupt flag so callers can observe the interruption
        Thread.currentThread().interrupt();
    }
}
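A quick usage note: because the scaling uses the long literal 1000L, the multiplication happens in long arithmetic and large second counts do not overflow int. A hypothetical call site:

sleep(3);  // blocks the current thread for roughly three seconds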
A narrow path in the wilderness off the Temple Road in Mcleodganj leads to Jampling Elders’ Home, run by the Central Tibetan Administration. Here, flipping through the pages of a national daily, sits 66-year-old Bhuchung Tsering. As a news item commemorating the 1965 war heroes catches his eye, his mind immediately trails off to another war that he and scores of other Tibetans had fought for India.

Tsering was part of the secret Special Frontier Force – also known as Establishment 22 or “Tutu Army” (Tutu is the colloquial term for Two, Two). The force, composed solely of Tibetan foot-soldiers, was raised by the Indian government and used, most notably, in the 1971 Bangladesh war. However, decades later, thousands of SFF ex-servicemen are living in abject poverty, still waiting for their dues.

Born in U-Tsang province of Tibet in 1950, Tsering left for India with His Holiness the Dalai Lama at the age of nine. In April 1970, while he was still finishing school, he enrolled in Tutu. “It was almost like an unsaid compulsion then. Most young Tibetans about to complete schooling were expected to enrol,” he recalled.

Soon after his training finished, the 1971 war broke out. Almost immediately, he was flown out of the SFF headquarters in Chakrata in Uttarakhand to Sarsawa in Uttar Pradesh. From there, the Tutu soldiers moved to Guwahati, from where convoys carried Tsering and his companions to Lunglei in Mizoram. “We then sneaked into Bangladesh, towards the Burma border in the north.” The images from that war are all too vivid. Tsering lost a number of schoolfellows; many others were injured critically.

His own health started deteriorating some years later. Initial treatment continued at the military hospital. However, after six years of service, he was discharged from Tutu in 1976 – for further medical treatment of “pulmonary tuberculosis”, his discharge papers read.

At the time of his retirement, all Tsering received was Rs 850 – the money that had accumulated in his Service Savings Deposit fund, similar to the Provident Fund for Indian employees. “There were no benefits, no pension. Whatever I received was mostly spent over the next two-and-a-half years on my treatment at the civil hospital in Dehradun.”

Tsering never married. With not much of an education, no skill-development training imparted by Tutu and failing health, he could neither find a job nor a companion – a common story for most Tutu ex-servicemen. Penniless, he now lives in the Jampling old age home with at least 34 other ex-servicemen, many of whom participated in 1971.

The Special Frontier Force, codenamed Establishment 22, was set up by the Indian government in November 1962 after the Indo-China war. Its headquarters were based in Chakrata, Dehradun, and Major General Uban Singh became its first chief. Though SFF was never part of the Indian Army, its seniormost officer would always be an army officer on deputation to SFF as an Inspector General, veteran Tibetologist Claude Arpi told Scroll.

Initially, former Chushi Gangdruk fighters were recruited in SFF. These were traders, students and monks who had organised themselves as a guerrilla force in Tibet in 1956 and were instrumental in staging the Dalai Lama’s escape to India. After the Chinese government’s heavy crackdown, they all had fled to India. Subsequently, young Tibetans began to be recruited in heavy numbers.
Dalai Lama’s younger brother, Tenzin Chogyal aka Ngari Rinpoche, was also one of the officers in 22.

According to UK-based professor Tsering Shakya, at one time, every Tibetan finishing high school in India, particularly those who failed class 10 and were unable to gain admission in college, had to join and go for compulsory military training. “In the past, every Tibetan household in the refugee settlement had at least one member serving in 22. Since the late 1980s, this has declined. But if you look at the Tibetan refugee settlements in Ladakh and Arunachal, you will still find almost all households have at least one member serving in 22,” he wrote in an email.

According to filmmaker Tenzing Sonam, the CIA and Indian intelligence agencies started giving weapons, training and funds to the Tibetan rebels for setting up both the 2,000-strong Mustang Resistance Force in Nepal and the SFF in India.

Sonam’s father Lhamo Tsering was then the chief liaison officer between the CIA and the Tibetan rebels for Mustang. He recalled how a tripartite office was set up in Delhi with a representative each of the CIA, Indian intelligence and the Tibetan rebels. “Tibetans were better suited for mountain warfare, had the language advantage and could be useful against the Chinese in case the need arose. That’s how the Tutu army was formed.”

“When the CIA withdrew its aid to the Tibetan establishment in Mustang, some of those soldiers from Mustang were also recruited in 22,” Shakya added.

Senior journalist Dilip Bobb, who broke the SFF story in India Today magazine in the late 1970s, told Scroll that the main aim was to “create unrest inside Tibet” and spy on China’s nuclear programme. “SFF soldiers were part of a covert operation led by an Indian Army colonel under the garb of a mountaineering expedition to install listening devices at high altitude. They were specifically trained in parachute jumping, high-altitude warfare and surveillance. The Aviation Research Centre was set up under R&AW [Research and Analysis Wing] to aid SFF get airborne. The idea was to airdrop them into Tibet.”

A highly placed former R&AW officer said, on condition of anonymity, that another reason for setting up SFF was that a number of armed rebels were “floating around” in India and “could have become a problem”. “They were extremely unruly; a law unto themselves. So, it was decided to keep them in a camp and pay some money,” the R&AW officer said.

According to members of the Ex-Tibetan Servicemen Association in Dharamshala, nearly 3,000 SFF soldiers participated in the Bangladesh war. Those still alive recollect the same route with photographic clarity: they were summoned back to Chakrata from their postings, flown to Sarsawa Air Force station, taken to Assam, from where they branched out to Lunglei and Demagiri in Mizoram and finally into Bangladesh.

“Many of these were monks. Almost 58 soldiers died and 250 received cash awards depending on injuries and citations,” President Tashi Bhuchung said.

There, however, was no official recognition – no chakras, no medals. Moreover, the ones who had actually helped India fight and win such an important war were also among the majority who never got any monetary benefits.

The initial terms of employment for SFF were very hazy. Till 1985, retiring soldiers got no monetary benefits. Interestingly, a large number of those who participated in the Bangladesh war fell in this group. From 1985 onwards, SFF began giving lump-sum amounts to those retiring. These were actually small amounts, mainly comprising the soldiers’ own savings and gratuity.
According to official records available with Tsering Dhondup, who joined Tutu in December 1962 and is one of the earliest soldiers still alive, almost 4,500 soldiers retired before 1985 and around 2,000 between 1985 and 2008.

From 2009, the SFF soldiers started getting pensions. However, in the last three-four years, servicemen say, things have really improved because the current pay scales and pensions are almost equal to those of the Indian Army.

The former R&AW officer said that though the Indian government is “under debt of gratitude” to SFF troops, “the bureaucracy was rule bound”. “These were foreigners; so the question of paying pensions initially did not arise. It was a very clandestine affair. They were directly under the cabinet secretariat, so these were unaccountable funds. They used to get what R&AW sources would get in the early days, which was not a great deal of money. It was only after the 4th pay commission that things began improving and that some Tibetan soldiers born in India began getting pensions.”

Given the circumstances, many Bangladesh war veterans were then forced to work as chowkidaars, porters or garment businessmen to make ends meet.

72-year-old Sonam, for example, worked as a carpenter. As one enters his small, nondescript house on Jogiwara road, wood dust is scattered all around. Inside, the room is cramped with books and files belonging to his three children, idols of Tibetan deities and a picture of him in uniform proudly displayed on one of the walls.

When he retired in 1982 at a post equivalent to sepoy, he got Rs 11,000. “Almost all of it was spent on my children’s education. Never got a chance to invest or think of the future,” Sonam said.

To earn a living, he began working as a carpenter. The wooden prayer table and bookshelf at his home are fine examples of his skill. His wife still works in the local handicrafts unit, making Rs 800 a month. “The only good thing is that the house is given to her by the handicrafts department and we will get to live here till she is alive. Recently, 22 gave me a smart card to buy rations from the Army canteen.” Sonam’s daughter is relieved that they have now also got medical insurance; last year when he got tuberculosis, they paid for treatment from their own savings.

Similarly, Bhuchung Tsering worked as a second-hand clothes businessman for 28 years, procuring garments from Delhi and selling them in Manali. “In the end, I had become too weak. The municipality would constantly trouble us; make us go away from roads, snatch away our goods. I could no longer cope with it, so I came to this old age home. Here, clothes, food and medical checkups are all free,” he told Scroll.

His friend, Pema Wangden, whose unit was posted in Demagiri during the Bangladesh war and was responsible for supplying rations and reinforcements to troops, retired in 1983 after 19 years of service following a severe back injury. He got Rs 8,000 in all.

Jampling old age home in-charge, Wangchen, himself an ex-serviceman, said most retired Tutu soldiers have neither savings nor a family to take care of them. As such, 20 seats have been reserved for ex-servicemen in the old age home.
Ex-Servicemen Association records state that 349 members in Dharamshala and around 6,000-7,000 across India have got no pensions. Some of them, however, get financial assistance from the CTA and SFF based on how “critical” their needs are.

As of June 2013, SFF was paying Rs 1,000 per month as “financial assistance” to 1,127 soldiers who retired before 1985.

The Department of Home under the Central Tibetan Administration also provides Rs 1,300 monthly to 202 ex-servicemen who retired before 1985, and Rs 1,000 monthly to 200 ex-servicemen who retired between 1985 and 2008, across 41 settlements in the North East, Himachal Pradesh, South India and Nepal, Additional Secretary Tsewang Dolma told Scroll.

Most ex-servicemen, however, laugh at the amounts. “The average monthly expenditure of an elderly person is a minimum of Rs 1,000-2,000 and goes up to Rs 7,000 in case of severe health issues. What will Rs 1,000 do in today’s day and age?”

A case has reportedly been going on in the headquarters in Chakrata for many years, where it is being “considered to pay some financial assistance to the SFF veterans”. Refusing to be identified, a young Tibetan from Darjeeling now based in Delhi, whose father fought during 1971, says his father “waited for dues all his life” before eventually dying of lung cancer last year.

All said and done, what pinches these ex-servicemen most is not even the non-payment of dues but the fact that they were never allowed to fight against the Chinese – which was the real aim behind raising this force.

A 79-year-old monk I met at Namgyal monastery recounted the initial reaction to the orders. “300 monks from my monastery had joined SFF. The feeling did crop up that Pakistan was not our personal enemy. But India was in trouble and we had to obey orders. We take our role as guests seriously,” he said, smiling. After retiring, the soldier went back to his monastic life. “There were no funds to sustain myself. So, I came back.”

He, however, does not feel dejected at being denied recognition or monetary benefits. “Our establishment was a secret and we understand that. Humne Hindustan ka namak khaya, badle mein unki madad ki. Hisaab barabar [We ate India’s salt; in return, we helped them. The account is settled].”
#!/usr/bin/env python
import rospy
import curses
import socket
import time
import sys
import yaml

from avatar_msg.msg import AUlist
from avatar_msg.msg import expresion


def handle_exp(req):
    # load the gestures dictionary from the yaml file
    global pub
    stream = open('../cfg/gestos.yaml')
    data = yaml.load(stream)
    stream.close()

    # publish the requested expression
    options = [data]
    expr = data[req.exp]
    action_units = list(expr['aus']) + list(req.au_ext)
    print action_units
    pub.publish(req.it, req.tt, action_units)


def gestion_gestos():
    global pub
    rospy.init_node('gestion_gestos', anonymous=True)
    print "Gesture management node created"
    pub = rospy.Publisher('topic_au', AUlist, queue_size=1)
    sub = rospy.Subscriber("topic_expresion", expresion, handle_exp, queue_size=1)
    rospy.spin()


if __name__ == '__main__':
    try:
        gestion_gestos()
    except rospy.ROSInterruptException:
        pass
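For context, a hypothetical shape for ../cfg/gestos.yaml that is consistent with the lookups above (data[req.exp] followed by expr['aus']); the expression names and action-unit numbers here are invented, not taken from the project:

# hypothetical gestures dictionary; keys and AU values are illustrative only
happy:
  aus: [1, 6, 12]
sad:
  aus: [1, 4, 15]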
// String will print the book info in a string (for fmt)
func (book Book) String() string {
	return fmt.Sprintf(
		"%s\n Title: %s\n Authors: %s\n Year: %s\n Price: %s",
		book.Category,
		book.Title,
		strings.Join(book.Authors, ", "),
		book.Year,
		book.Price,
	)
}
/**
 * Line (trim) of a vehicle
 */
@Entity
@Table(name = "LINEA")
public class Linea {

    @Id
    @GeneratedValue(generator = "SEQ")
    @Column(name = "ID_LINEA")
    private Long idLinea;

    @Column(name = "NOMBRE")
    private String nombre;

    @Column(name = "CILINDRAJE")
    private int cilindraje;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "MARCA")
    private Marca marca;

    public String getNombre() {
        return nombre;
    }

    public void setNombre(String nombre) {
        this.nombre = nombre;
    }

    public int getCilindraje() {
        return cilindraje;
    }

    public void setCilindraje(int cilindraje) {
        this.cilindraje = cilindraje;
    }

    public Marca getMarca() {
        return marca;
    }

    public void setMarca(Marca marca) {
        this.marca = marca;
    }

    public Long getIdLinea() {
        return idLinea;
    }

    public void setIdLinea(Long idLinea) {
        this.idLinea = idLinea;
    }
}
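One caveat worth noting: @GeneratedValue(generator = "SEQ") only resolves if a generator named "SEQ" is declared somewhere on the persistence unit. A hedged sketch of what that declaration typically looks like; the sequence name and allocation size are assumptions, not taken from this entity:

// hypothetical generator declaration; sequenceName is an assumption
@SequenceGenerator(name = "SEQ", sequenceName = "SEQ_LINEA", allocationSize = 1)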
/** read a stated CSV file from disk */
vector<string> utilCSV::readCSV(string iFileN)
{
    vector<string> theCSV;
    string inFileLine;
    // open the file named by the iFileN parameter, as the doc comment states
    ifstream infile(iFileN.c_str(), ios::in);
    if (!infile) {
        cout << "Could not open file." << endl;
        exit(1);
    }
    // read whole lines: operator>> would split on every whitespace character,
    // breaking CSV rows that contain spaces
    while (getline(infile, inFileLine)) {
        theCSV.push_back(inFileLine + "\n");
    }
    infile.close();
    return theCSV;
}
/**
 * \brief - set default values for element in tm DB.
 * This function is not called during init since the current default is 0.
 * If the default should ever change to a non-zero value, call this function from dnx_algo_port_init()
 */
static shr_error_e
dnx_algo_port_db_tm_init(
    int unit,
    dnx_algo_port_db_2d_handle_t tm_handle)
{
    dnx_algo_port_db_tm_t tm_db;

    SHR_FUNC_INIT_VARS(unit);

    sal_memset(&tm_db, 0, sizeof(dnx_algo_port_db_tm_t));
    tm_db.header_type_in = BCM_SWITCH_PORT_HEADER_TYPE_ETH;
    SHR_IF_ERR_EXIT(dnx_algo_port_db.tm.set(unit, tm_handle.h0, tm_handle.h1, &tm_db));

exit:
    SHR_FUNC_EXIT;
}
///////////////////////////////////////////////////////////////////////////////
/**
Present the current frame by swapping the back buffer, then move to the next back
buffer and also signal the fence for the current queue slot entry.
*/
void D3D12Sample::Present ()
{
    swapChain_->Present (1, 0);

    // Record the fence value for this slot so we can later wait until the GPU
    // has finished with it before reusing the slot.
    const auto fenceValue = currentFenceValue_;
    commandQueue_->Signal (frameFences_ [currentBackBuffer_].Get (), fenceValue);
    fenceValues_ [currentBackBuffer_] = fenceValue;
    ++currentFenceValue_;

    currentBackBuffer_ = (currentBackBuffer_ + 1) % GetQueueSlotCount ();
}
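For context, the consuming side of this protocol typically waits on the recorded fence value before reusing a queue slot. A hedged sketch of that counterpart; the function and parameter names are assumptions, while the D3D12/Win32 calls themselves are the standard ones:

// Hedged sketch (names assumed): block until the GPU has passed completionValue.
void WaitForFence (ID3D12Fence* fence, UINT64 completionValue, HANDLE waitEvent)
{
    if (fence->GetCompletedValue () < completionValue) {
        // ask the fence to signal the event once the value is reached, then block
        fence->SetEventOnCompletion (completionValue, waitEvent);
        WaitForSingleObject (waitEvent, INFINITE);
    }
}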
/**
 * 0215 Add the EmbeddedProcess and TransientEmbeddedProcess entities
 */
private void update0215MoreProcessTypes()
{
    this.archiveBuilder.addEntityDef(addEmbeddedProcessEntity());
    this.archiveBuilder.addEntityDef(addTransientEmbeddedProcessEntity());
}
Inside the world's first transistor I love prototype technology. There's a magic in seeing what the original working model looked like, which makes so many systems instantly more understandable to a lay person. Strip away the perfection and packaging, and you can see what's really going on. In this video, Bill Hammack—professor of engineering at the University of Illinois at Urbana-Champaign and part-time Engineer Guy for the Internet—demonstrates this effect by introducing us to a model of the world's first transistor. An amplifier for electric signals, transistors were at the heart of all the miniaturization and techno-populism that began in the 1960s and 70s. Watch this video, and you'll understand how a transistor works, and why these little things are so very important. Thanks, Engineer Guy! Learn more at PBS's great Transistorized! site
from collections import defaultdict, deque, Counter
from heapq import heappush, heappop, heapify
import math
from bisect import bisect_left, bisect_right
import random
from itertools import permutations, accumulate, combinations
import sys
import string
from copy import deepcopy

INF = float('inf')


def LI(): return list(map(int, sys.stdin.readline().split()))
def I(): return int(sys.stdin.readline())
def LS(): return sys.stdin.readline().split()
def S(): return sys.stdin.readline().strip()
def IR(n): return [I() for i in range(n)]
def LIR(n): return [LI() for i in range(n)]
def SR(n): return [S() for i in range(n)]
def LSR(n): return [LS() for i in range(n)]
def SRL(n): return [list(S()) for i in range(n)]
def MSRL(n): return [[int(j) for j in list(S())] for i in range(n)]


mod = 10 ** 9 + 7

h, w = LI()
sy, sx = LI()
gy, gx = LI()
sy -= 1
sx -= 1
gy -= 1
gx -= 1
grid = SR(h)

# 0-1 BFS: a walk to an adjacent open cell costs 0 (pushed to the front of the
# deque), while a "warp" to any open cell in the surrounding 5x5 block costs 1
# (pushed to the back), so cells are always popped in non-decreasing cost order.
dp = [[INF] * w for _ in range(h)]
q = deque([(sy, sx)])
dp[sy][sx] = 0
while q:
    uy, ux = q.popleft()
    for i, j in ((1, 0), (0, 1), (-1, 0), (0, -1)):
        ny = uy + i
        nx = ux + j
        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == "." and dp[ny][nx] > dp[uy][ux]:
            dp[ny][nx] = dp[uy][ux]
            q.appendleft((ny, nx))
    for i in range(-2, 3):
        for j in range(-2, 3):
            ny = uy + i
            nx = ux + j
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == "." and dp[ny][nx] > dp[uy][ux] + 1:
                dp[ny][nx] = dp[uy][ux] + 1
                q.append((ny, nx))

print(dp[gy][gx] if dp[gy][gx] != INF else -1)
package org.lejos.example;

import lejos.nxt.Button;

/**
 * Example leJOS project with an ant build file
 */
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
        Button.waitForAnyPress();
    }
}
// Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"). You may not
// use this file except in compliance with the License. A copy of the
// License is located at
//
// http://aws.amazon.com/apache2.0/
//
// or in the "license" file accompanying this file. This file is distributed
// on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
// either express or implied. See the License for the specific language governing
// permissions and limitations under the License.

//+build windows

// Package updateec2config implements the UpdateEC2Config plugin.
package updateec2config

const (
	// minimum version for EC2 config service
	minimumVersion = "0"

	// PipelineTestVersion represents fake version for pipeline tests
	PipelineTestVersion = "9999.0.0.0"

	// EC2 config agent constants
	EC2UpdaterPackageName = "aws-ec2windows-ec2configupdater"
	EC2ConfigAgentName    = "aws-ec2windows-ec2config"
	EC2UpdaterFileName    = "EC2ConfigUpdater.zip"
	EC2SetupFileName      = "EC2ConfigSetup.zip"
	Updater               = "EC2ConfigUpdater"

	// redefined here because manifest file has a spelling error which will need to be continued
	PackageVersionHolder = "{PacakgeVersion}"

	// update command arguments
	SetupInstallCmd   = " --setup-installation"
	SourceVersionCmd  = "-current-version"
	SourceLocationCmd = "-current-source"
	SourceHashCmd     = "-current-hash"
	TargetVersionCmd  = "-target-version"
	TargetLocationCmd = "-target-source"
	TargetHashCmd     = "-target-hash"
	MessageIDCmd      = "-message-id"
	HistoryCmd        = "-history"
	InstanceID        = "-instance-id"
	DocumentIDCmd     = "-document-id"
	RegionIDCmd       = "-instance-region"
	UserAgentCmd      = "-user-agent"
	MdsEndpointCmd    = "-mds-endpoint"
	UpdateHealthCmd   = " --health-update"
	UpdateCmd         = " --update"

	// constant num histories
	numHistories = "10"

	// HTTPFormat is the HTTP format for ssmagent
	HTTPFormat = "https://aws-ssm-{Region}.s3.amazonaws.com"

	// S3Format is the S3 format for updater
	S3Format = "https://s3.amazonaws.com/aws-ssm-{Region}"

	// ManifestPath is the manifest path in the S3 bucket
	ManifestPath = "/amazon-ssm-{Region}/manifest.json"

	// CommonManifestURL is the URL for the manifest file in regular regions
	CommonManifestURL = "https://s3.{Region}.amazonaws.com" + ManifestPath

	// ChinaManifestURL is the URL for the manifest in regions in China
	ChinaManifestURL = "https://s3.{Region}.amazonaws.com.cn" + ManifestPath
)

// update context constant strings
const (
	notStarted  = "NotStarted"
	initialized = "Initialized"
	staged      = "Staged"
	installed   = "Installed"
	rollback    = "Rollback"
	rolledBack  = "Rolledback"
	completed   = "Completed"
)

// update state constant strings
const (
	inProgress = "InProgress"
	succeeded  = "Succeeded"
	failed     = "Failed"
)
package com.github.nosrick.crockpot.config;

import com.github.nosrick.crockpot.CrockPotMod;
import com.github.nosrick.crockpot.compat.cloth.ClothConfigManager;

public class ConfigManager {

    protected static boolean clothPresent = false;

    protected static boolean clothPresent() {
        clothPresent = CrockPotMod.MODS_LOADED.contains("cloth");
        return clothPresent;
    }

    public static boolean useCursedStew() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.cursedStew;
        }

        return true;
    }

    public static boolean useItemPositiveEffects() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.itemPositiveEffects;
        }

        return true;
    }

    public static boolean useItemNegativeEffects() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.itemNegativeEffects;
        }

        return true;
    }

    public static int boilTimePerLevel() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.boilSecondsPerLevel * 20;
        }

        return 20 * 60 * 2;
    }

    public static int minCowlLevel() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.cowlCurseLevels;
        }

        return 5;
    }

    public static int maxBonusLevels() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.maxBonusLevels;
        }

        return 5;
    }

    public static int maxPortionsPerPot() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.maxPortions;
        }

        return 64;
    }

    public static int stewMinPositiveLevelsEffect() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.itemMinPositiveBonusLevel;
        }

        return 5;
    }

    public static int stewMinNegativeLevelsEffect() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.itemMinNegativeCurseLevel;
        }

        return 1;
    }

    public static int maxStewNameLength() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.maxStewNameLength;
        }

        return 32;
    }

    public static int baseNauseaDuration() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.baseNauseaDuration;
        }

        return 5;
    }

    public static boolean cappedNauseaDuration() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.cappedNauseaDuration;
        }

        return true;
    }

    public static int maxNauseaDuration() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.maxNauseaDuration * 20;
        }

        return baseNauseaDuration() * 20 * minCowlLevel();
    }

    public static int basePositiveDuration() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.basePositiveDuration;
        }

        return 5;
    }

    public static boolean cappedPositiveDuration() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.cappedPositiveDuration;
        }

        return true;
    }

    public static int maxPositiveDuration() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.maxPositiveDuration * 20;
        }

        return basePositiveDuration() * 20 * maxBonusLevels();
    }

    public static int minSatisfyingLevels() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.minSatisfyingLevel;
        }

        return 1;
    }

    public static int minFillingLevels() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.minFillingLevel;
        }

        return 3;
    }

    public static int minHeartyLevels() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().gameplay.minHeartyLevel;
        }

        return 5;
    }

    public static float soundEffectVolume() {
        if(clothPresent()) {
            return ClothConfigManager.getConfig().sound.soundEffectVolume;
        }

        return 0.3f;
    }

    public static int bubbleSoundChance() {
        if(clothPresent()) {
return ClothConfigManager.getConfig().sound.bubbleSoundChance; } return 100; } public static int boilSoundChance() { if(clothPresent()) { return ClothConfigManager.getConfig().sound.boilSoundChance; } return 100; } public static boolean useBubbleSound() { if(clothPresent()) { return ClothConfigManager.getConfig().sound.useBubbleSound; } return true; } public static boolean useBoilSound() { if(clothPresent()) { return ClothConfigManager.getConfig().sound.useBoilSound; } return true; } public static int bubbleParticleChance() { if(clothPresent()) { return ClothConfigManager.getConfig().graphics.bubbleParticleChance; } return 50; } public static int boilParticleChance() { if(clothPresent()) { return ClothConfigManager.getConfig().graphics.boilParticleChance; } return 50; } public static boolean useBoilParticles() { if(clothPresent()) { return ClothConfigManager.getConfig().graphics.useBoilParticles; } return true; } public static boolean useBubbleParticles() { if(clothPresent()) { return ClothConfigManager.getConfig().graphics.useBubbleParticles; } return true; } public static boolean animateBoilingLid() { if(clothPresent()) { return ClothConfigManager.getConfig().graphics.animateBoilingLid; } return true; } public static boolean redstoneNeedsPower() { if(clothPresent()) { return ClothConfigManager.getConfig().gameplay.redstoneNeedsPower; } return false; } public static int redstonePowerThreshold() { if(clothPresent()) { return ClothConfigManager.getConfig().gameplay.redstonePowerThreshold; } return 8; } public static boolean canFillWithWaterBottle() { if(clothPresent()) { return ClothConfigManager.getConfig().gameplay.canFillWithWaterBottle; } return true; } }
The psychosis risk timeline: can we improve our preventive strategies? Part 2: adolescence and adulthood

SUMMARY
Current understanding of psychosis development is relevant to patients' clinical outcomes in mental health services as a whole, given that psychotic symptoms can be a feature of many different diagnoses at different stages of life. Understanding the risk factors helps clinicians to contemplate primary, secondary and tertiary preventive strategies that it may be possible to implement. In this second article of a three-part series, the psychosis risk timeline is again considered, here focusing on risk factors more likely to be encountered during later childhood, adolescence and adulthood. These include environmental factors, substance misuse, and social and psychopathological aspects.

LEARNING OBJECTIVES
After reading this article you will be able to:
• understand the range of risk factors for the development of psychotic symptoms in young people and adults
• understand in particular the association between trauma/abuse and subsequent psychosis
• appreciate current evidence for the nature and strength of the link between substance misuse and psychosis.

DECLARATION OF INTEREST: None.
import networkx as nx


def gen_route(positions, connections):
    """Build an undirected graph with a 'pos' attribute on every node."""
    G = nx.Graph()
    for n, p in positions.items():   # dict.iteritems() was Python 2 only
        G.add_node(n, pos=p)
    for connection in connections:
        G.add_edge(*connection)
    return G
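A minimal usage sketch (the node ids, coordinates, and edges below are made up for illustration):

```python
positions = {"a": (0, 0), "b": (1, 0), "c": (1, 1)}
connections = [("a", "b"), ("b", "c")]

G = gen_route(positions, connections)
print(G.number_of_nodes(), G.number_of_edges())   # 3 2
print(nx.get_node_attributes(G, "pos"))           # recovers the stored positions
```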
// NewHelmRepoRepository returns a HelmRepoRepository whose methods will
// return errors if canQuery is false.
func NewHelmRepoRepository(canQuery bool) repository.HelmRepoRepository {
	return &HelmRepoRepository{
		canQuery,
		[]*models.HelmRepo{},
	}
}
// this.ctx is cached only for the duration of a single exec call
// (note that this makes Algorithms nonreentrant!)
@Override
public final void exec(State state, ExecutionContext ctx)
throws DecisionException, ContradictionException, ThreadStackEmptyException,
ClasspathException, CannotManageStateException, FailureException, InterruptException {
    cleanup();
    this.ctx = ctx;
    try {
        doExec(state);
    } catch (InvalidInputException e) {
        onInvalidInputException(state, e);
    }
}
/*
** Copyright (C) 1999-2011 <NAME> <<EMAIL>>
**
** This program is free software; you can redistribute it and/or modify
** it under the terms of the GNU Lesser General Public License as published by
** the Free Software Foundation; either version 2.1 of the License, or
** (at your option) any later version.
**
** This program is distributed in the hope that it will be useful,
** but WITHOUT ANY WARRANTY; without even the implied warranty of
** MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
** GNU Lesser General Public License for more details.
**
** You should have received a copy of the GNU Lesser General Public License
** along with this program; if not, write to the Free Software
** Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/

#include "sfconfig.h"

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

#include "sndfile.h"
#include "sfendian.h"
#include "common.h"
#include "wav_w64.h"

/* These required here because we write the header in this file. */

#define RIFF_MARKER	(MAKE_MARKER ('R', 'I', 'F', 'F'))
#define WAVE_MARKER	(MAKE_MARKER ('W', 'A', 'V', 'E'))
#define fmt_MARKER	(MAKE_MARKER ('f', 'm', 't', ' '))
#define fact_MARKER	(MAKE_MARKER ('f', 'a', 'c', 't'))
#define data_MARKER	(MAKE_MARKER ('d', 'a', 't', 'a'))

#define WAVE_FORMAT_MS_ADPCM	0x0002

typedef struct
{	int		channels, blocksize, samplesperblock, blocks, dataremaining ;
	int		blockcount ;
	sf_count_t	samplecount ;
	short		*samples ;
	unsigned char	*block ;
#if HAVE_FLEXIBLE_ARRAY
	short	dummydata [] ;	/* ISO C99 struct flexible array. */
#else
	short	dummydata [0] ;	/* This is a hack and might not work. */
#endif
} MSADPCM_PRIVATE ;

/*============================================================================================
** MS ADPCM static data and functions.
*/

static int AdaptationTable [] =
{	230, 230, 230, 230, 307, 409, 512, 614,
	768, 614, 512, 409, 307, 230, 230, 230
} ;

/* TODO : The first 7 coefficient pairs are always hardcoded and must appear in the actual
** WAVE file. They should be read in, in case a sound program added extras to the list.
*/

static int AdaptCoeff1 [MSADPCM_ADAPT_COEFF_COUNT] =
{	256, 512, 0, 192, 240, 460, 392
} ;

static int AdaptCoeff2 [MSADPCM_ADAPT_COEFF_COUNT] =
{	0, -256, 0, 64, 0, -208, -232
} ;

/*============================================================================================
**	MS ADPCM Block Layout.
**	======================
**	Block is usually 256, 512 or 1024 bytes depending on sample rate.
**	For a mono file, the block is laid out as follows:
**		byte	purpose
**		0		block predictor [0..6]
**		1,2		initial idelta (positive)
**		3,4		sample 1
**		5,6		sample 0
**		7..n	packed bytecodes
**
**	For a stereo file, the block is laid out as follows:
**		byte	purpose
**		0		block predictor [0..6] for left channel
**		1		block predictor [0..6] for right channel
**		2,3		initial idelta (positive) for left channel
**		4,5		initial idelta (positive) for right channel
**		6,7		sample 1 for left channel
**		8,9		sample 1 for right channel
**		10,11	sample 0 for left channel
**		12,13	sample 0 for right channel
**		14..n	packed bytecodes
*/

/*============================================================================================
** Static functions.
*/ static int msadpcm_decode_block (SF_PRIVATE *psf, MSADPCM_PRIVATE *pms) ; static sf_count_t msadpcm_read_block (SF_PRIVATE *psf, MSADPCM_PRIVATE *pms, short *ptr, int len) ; static int msadpcm_encode_block (SF_PRIVATE *psf, MSADPCM_PRIVATE *pms) ; static sf_count_t msadpcm_write_block (SF_PRIVATE *psf, MSADPCM_PRIVATE *pms, const short *ptr, int len) ; static sf_count_t msadpcm_read_s (SF_PRIVATE *psf, short *ptr, sf_count_t len) ; static sf_count_t msadpcm_read_i (SF_PRIVATE *psf, int *ptr, sf_count_t len) ; static sf_count_t msadpcm_read_f (SF_PRIVATE *psf, float *ptr, sf_count_t len) ; static sf_count_t msadpcm_read_d (SF_PRIVATE *psf, double *ptr, sf_count_t len) ; static sf_count_t msadpcm_write_s (SF_PRIVATE *psf, const short *ptr, sf_count_t len) ; static sf_count_t msadpcm_write_i (SF_PRIVATE *psf, const int *ptr, sf_count_t len) ; static sf_count_t msadpcm_write_f (SF_PRIVATE *psf, const float *ptr, sf_count_t len) ; static sf_count_t msadpcm_write_d (SF_PRIVATE *psf, const double *ptr, sf_count_t len) ; static sf_count_t msadpcm_seek (SF_PRIVATE *psf, int mode, sf_count_t offset) ; static int msadpcm_close (SF_PRIVATE *psf) ; static void choose_predictor (unsigned int channels, short *data, int *bpred, int *idelta) ; /*============================================================================================ ** MS ADPCM Read Functions. */ int wav_w64_msadpcm_init (SF_PRIVATE *psf, int blockalign, int samplesperblock) { MSADPCM_PRIVATE *pms ; unsigned int pmssize ; int count ; if (psf->codec_data != NULL) { psf_log_printf (psf, "*** psf->codec_data is not NULL.\n") ; return SFE_INTERNAL ; } ; if (psf->file.mode == SFM_WRITE) samplesperblock = 2 + 2 * (blockalign - 7 * psf->sf.channels) / psf->sf.channels ; pmssize = sizeof (MSADPCM_PRIVATE) + blockalign + 3 * psf->sf.channels * samplesperblock ; if (! 
(psf->codec_data = malloc (pmssize))) return SFE_MALLOC_FAILED ; pms = (MSADPCM_PRIVATE*) psf->codec_data ; memset (pms, 0, pmssize) ; pms->samples = pms->dummydata ; pms->block = (unsigned char*) (pms->dummydata + psf->sf.channels * samplesperblock) ; pms->channels = psf->sf.channels ; pms->blocksize = blockalign ; pms->samplesperblock = samplesperblock ; if (pms->blocksize == 0) { psf_log_printf (psf, "*** Error : pms->blocksize should not be zero.\n") ; return SFE_INTERNAL ; } ; if (psf->file.mode == SFM_READ) { pms->dataremaining = psf->datalength ; if (psf->datalength % pms->blocksize) pms->blocks = psf->datalength / pms->blocksize + 1 ; else pms->blocks = psf->datalength / pms->blocksize ; count = 2 * (pms->blocksize - 6 * pms->channels) / pms->channels ; if (pms->samplesperblock != count) { psf_log_printf (psf, "*** Error : samplesperblock should be %d.\n", count) ; return SFE_INTERNAL ; } ; psf->sf.frames = (psf->datalength / pms->blocksize) * pms->samplesperblock ; psf_log_printf (psf, " bpred idelta\n") ; msadpcm_decode_block (psf, pms) ; psf->read_short = msadpcm_read_s ; psf->read_int = msadpcm_read_i ; psf->read_float = msadpcm_read_f ; psf->read_double = msadpcm_read_d ; } ; if (psf->file.mode == SFM_WRITE) { pms->samples = pms->dummydata ; pms->samplecount = 0 ; psf->write_short = msadpcm_write_s ; psf->write_int = msadpcm_write_i ; psf->write_float = msadpcm_write_f ; psf->write_double = msadpcm_write_d ; } ; psf->codec_close = msadpcm_close ; psf->seek = msadpcm_seek ; return 0 ; } /* wav_w64_msadpcm_init */ static int msadpcm_decode_block (SF_PRIVATE *psf, MSADPCM_PRIVATE *pms) { int chan, k, blockindx, sampleindx ; short bytecode, bpred [2], chan_idelta [2] ; int predict ; int current ; int idelta ; pms->blockcount ++ ; pms->samplecount = 0 ; if (pms->blockcount > pms->blocks) { memset (pms->samples, 0, pms->samplesperblock * pms->channels) ; return 1 ; } ; if ((k = psf_fread (pms->block, 1, pms->blocksize, psf)) != pms->blocksize) psf_log_printf (psf, "*** Warning : short read (%d != %d).\n", k, pms->blocksize) ; /* Read and check the block header. */ if (pms->channels == 1) { bpred [0] = pms->block [0] ; if (bpred [0] >= 7) psf_log_printf (psf, "MS ADPCM synchronisation error (%d).\n", bpred [0]) ; chan_idelta [0] = pms->block [1] | (pms->block [2] << 8) ; chan_idelta [1] = 0 ; psf_log_printf (psf, "(%d) (%d)\n", bpred [0], chan_idelta [0]) ; pms->samples [1] = pms->block [3] | (pms->block [4] << 8) ; pms->samples [0] = pms->block [5] | (pms->block [6] << 8) ; blockindx = 7 ; } else { bpred [0] = pms->block [0] ; bpred [1] = pms->block [1] ; if (bpred [0] >= 7 || bpred [1] >= 7) psf_log_printf (psf, "MS ADPCM synchronisation error (%d %d).\n", bpred [0], bpred [1]) ; chan_idelta [0] = pms->block [2] | (pms->block [3] << 8) ; chan_idelta [1] = pms->block [4] | (pms->block [5] << 8) ; psf_log_printf (psf, "(%d, %d) (%d, %d)\n", bpred [0], bpred [1], chan_idelta [0], chan_idelta [1]) ; pms->samples [2] = pms->block [6] | (pms->block [7] << 8) ; pms->samples [3] = pms->block [8] | (pms->block [9] << 8) ; pms->samples [0] = pms->block [10] | (pms->block [11] << 8) ; pms->samples [1] = pms->block [12] | (pms->block [13] << 8) ; blockindx = 14 ; } ; /*-------------------------------------------------------- This was left over from a time when calculations were done as ints rather than shorts. Keep this around as a reminder in case I ever find a file which decodes incorrectly. 
if (chan_idelta [0] & 0x8000) chan_idelta [0] -= 0x10000 ; if (chan_idelta [1] & 0x8000) chan_idelta [1] -= 0x10000 ; --------------------------------------------------------*/ /* Pull apart the packed 4 bit samples and store them in their ** correct sample positions. */ sampleindx = 2 * pms->channels ; while (blockindx < pms->blocksize) { bytecode = pms->block [blockindx++] ; pms->samples [sampleindx++] = (bytecode >> 4) & 0x0F ; pms->samples [sampleindx++] = bytecode & 0x0F ; } ; /* Decode the encoded 4 bit samples. */ for (k = 2 * pms->channels ; k < (pms->samplesperblock * pms->channels) ; k ++) { chan = (pms->channels > 1) ? (k % 2) : 0 ; bytecode = pms->samples [k] & 0xF ; /* Compute next Adaptive Scale Factor (ASF) */ idelta = chan_idelta [chan] ; chan_idelta [chan] = (AdaptationTable [bytecode] * idelta) >> 8 ; /* => / 256 => FIXED_POINT_ADAPTATION_BASE == 256 */ if (chan_idelta [chan] < 16) chan_idelta [chan] = 16 ; if (bytecode & 0x8) bytecode -= 0x10 ; predict = ((pms->samples [k - pms->channels] * AdaptCoeff1 [bpred [chan]]) + (pms->samples [k - 2 * pms->channels] * AdaptCoeff2 [bpred [chan]])) >> 8 ; /* => / 256 => FIXED_POINT_COEFF_BASE == 256 */ current = (bytecode * idelta) + predict ; if (current > 32767) current = 32767 ; else if (current < -32768) current = -32768 ; pms->samples [k] = current ; } ; return 1 ; } /* msadpcm_decode_block */ static sf_count_t msadpcm_read_block (SF_PRIVATE *psf, MSADPCM_PRIVATE *pms, short *ptr, int len) { int count, total = 0, indx = 0 ; while (indx < len) { if (pms->blockcount >= pms->blocks && pms->samplecount >= pms->samplesperblock) { memset (&(ptr [indx]), 0, (size_t) ((len - indx) * sizeof (short))) ; return total ; } ; if (pms->samplecount >= pms->samplesperblock) msadpcm_decode_block (psf, pms) ; count = (pms->samplesperblock - pms->samplecount) * pms->channels ; count = (len - indx > count) ? count : len - indx ; memcpy (&(ptr [indx]), &(pms->samples [pms->samplecount * pms->channels]), count * sizeof (short)) ; indx += count ; pms->samplecount += count / pms->channels ; total = indx ; } ; return total ; } /* msadpcm_read_block */ static sf_count_t msadpcm_read_s (SF_PRIVATE *psf, short *ptr, sf_count_t len) { MSADPCM_PRIVATE *pms ; int readcount, count ; sf_count_t total = 0 ; if (! psf->codec_data) return 0 ; pms = (MSADPCM_PRIVATE*) psf->codec_data ; while (len > 0) { readcount = (len > 0x10000000) ? 0x10000000 : (int) len ; count = msadpcm_read_block (psf, pms, ptr, readcount) ; total += count ; len -= count ; if (count != readcount) break ; } ; return total ; } /* msadpcm_read_s */ static sf_count_t msadpcm_read_i (SF_PRIVATE *psf, int *ptr, sf_count_t len) { MSADPCM_PRIVATE *pms ; short *sptr ; int k, bufferlen, readcount = 0, count ; sf_count_t total = 0 ; if (! psf->codec_data) return 0 ; pms = (MSADPCM_PRIVATE*) psf->codec_data ; sptr = psf->u.sbuf ; bufferlen = ARRAY_LEN (psf->u.sbuf) ; while (len > 0) { readcount = (len >= bufferlen) ? bufferlen : len ; count = msadpcm_read_block (psf, pms, sptr, readcount) ; for (k = 0 ; k < readcount ; k++) ptr [total + k] = sptr [k] << 16 ; total += count ; len -= readcount ; if (count != readcount) break ; } ; return total ; } /* msadpcm_read_i */ static sf_count_t msadpcm_read_f (SF_PRIVATE *psf, float *ptr, sf_count_t len) { MSADPCM_PRIVATE *pms ; short *sptr ; int k, bufferlen, readcount = 0, count ; sf_count_t total = 0 ; float normfact ; if (! psf->codec_data) return 0 ; pms = (MSADPCM_PRIVATE*) psf->codec_data ; normfact = (psf->norm_float == SF_TRUE) ? 
1.0 / ((float) 0x8000) : 1.0 ; sptr = psf->u.sbuf ; bufferlen = ARRAY_LEN (psf->u.sbuf) ; while (len > 0) { readcount = (len >= bufferlen) ? bufferlen : len ; count = msadpcm_read_block (psf, pms, sptr, readcount) ; for (k = 0 ; k < readcount ; k++) ptr [total + k] = normfact * (float) (sptr [k]) ; total += count ; len -= readcount ; if (count != readcount) break ; } ; return total ; } /* msadpcm_read_f */ static sf_count_t msadpcm_read_d (SF_PRIVATE *psf, double *ptr, sf_count_t len) { MSADPCM_PRIVATE *pms ; short *sptr ; int k, bufferlen, readcount = 0, count ; sf_count_t total = 0 ; double normfact ; normfact = (psf->norm_double == SF_TRUE) ? 1.0 / ((double) 0x8000) : 1.0 ; if (! psf->codec_data) return 0 ; pms = (MSADPCM_PRIVATE*) psf->codec_data ; sptr = psf->u.sbuf ; bufferlen = ARRAY_LEN (psf->u.sbuf) ; while (len > 0) { readcount = (len >= bufferlen) ? bufferlen : len ; count = msadpcm_read_block (psf, pms, sptr, readcount) ; for (k = 0 ; k < readcount ; k++) ptr [total + k] = normfact * (double) (sptr [k]) ; total += count ; len -= readcount ; if (count != readcount) break ; } ; return total ; } /* msadpcm_read_d */ static sf_count_t msadpcm_seek (SF_PRIVATE *psf, int mode, sf_count_t offset) { MSADPCM_PRIVATE *pms ; int newblock, newsample ; if (! psf->codec_data) return 0 ; pms = (MSADPCM_PRIVATE*) psf->codec_data ; if (psf->datalength < 0 || psf->dataoffset < 0) { psf->error = SFE_BAD_SEEK ; return PSF_SEEK_ERROR ; } ; if (offset == 0) { psf_fseek (psf, psf->dataoffset, SEEK_SET) ; pms->blockcount = 0 ; msadpcm_decode_block (psf, pms) ; pms->samplecount = 0 ; return 0 ; } ; if (offset < 0 || offset > pms->blocks * pms->samplesperblock) { psf->error = SFE_BAD_SEEK ; return PSF_SEEK_ERROR ; } ; newblock = offset / pms->samplesperblock ; newsample = offset % pms->samplesperblock ; if (mode == SFM_READ) { psf_fseek (psf, psf->dataoffset + newblock * pms->blocksize, SEEK_SET) ; pms->blockcount = newblock ; msadpcm_decode_block (psf, pms) ; pms->samplecount = newsample ; } else { /* What to do about write??? */ psf->error = SFE_BAD_SEEK ; return PSF_SEEK_ERROR ; } ; return newblock * pms->samplesperblock + newsample ; } /* msadpcm_seek */ /*========================================================================================== ** MS ADPCM Write Functions. */ void msadpcm_write_adapt_coeffs (SF_PRIVATE *psf) { int k ; for (k = 0 ; k < MSADPCM_ADAPT_COEFF_COUNT ; k++) psf_binheader_writef (psf, "22", AdaptCoeff1 [k], AdaptCoeff2 [k]) ; } /* msadpcm_write_adapt_coeffs */ /*========================================================================================== */ static int msadpcm_encode_block (SF_PRIVATE *psf, MSADPCM_PRIVATE *pms) { unsigned int blockindx ; unsigned char byte ; int chan, k, predict, bpred [2], idelta [2], errordelta, newsamp ; choose_predictor (pms->channels, pms->samples, bpred, idelta) ; /* Write the block header. */ if (pms->channels == 1) { pms->block [0] = bpred [0] ; pms->block [1] = idelta [0] & 0xFF ; pms->block [2] = idelta [0] >> 8 ; pms->block [3] = pms->samples [1] & 0xFF ; pms->block [4] = pms->samples [1] >> 8 ; pms->block [5] = pms->samples [0] & 0xFF ; pms->block [6] = pms->samples [0] >> 8 ; blockindx = 7 ; byte = 0 ; /* Encode the samples as 4 bit. 
*/ for (k = 2 ; k < pms->samplesperblock ; k++) { predict = (pms->samples [k-1] * AdaptCoeff1 [bpred [0]] + pms->samples [k-2] * AdaptCoeff2 [bpred [0]]) >> 8 ; errordelta = (pms->samples [k] - predict) / idelta [0] ; if (errordelta < -8) errordelta = -8 ; else if (errordelta > 7) errordelta = 7 ; newsamp = predict + (idelta [0] * errordelta) ; if (newsamp > 32767) newsamp = 32767 ; else if (newsamp < -32768) newsamp = -32768 ; if (errordelta < 0) errordelta += 0x10 ; byte = (byte << 4) | (errordelta & 0xF) ; if (k % 2) { pms->block [blockindx++] = byte ; byte = 0 ; } ; idelta [0] = (idelta [0] * AdaptationTable [errordelta]) >> 8 ; if (idelta [0] < 16) idelta [0] = 16 ; pms->samples [k] = newsamp ; } ; } else { /* Stereo file. */ pms->block [0] = bpred [0] ; pms->block [1] = bpred [1] ; pms->block [2] = idelta [0] & 0xFF ; pms->block [3] = idelta [0] >> 8 ; pms->block [4] = idelta [1] & 0xFF ; pms->block [5] = idelta [1] >> 8 ; pms->block [6] = pms->samples [2] & 0xFF ; pms->block [7] = pms->samples [2] >> 8 ; pms->block [8] = pms->samples [3] & 0xFF ; pms->block [9] = pms->samples [3] >> 8 ; pms->block [10] = pms->samples [0] & 0xFF ; pms->block [11] = pms->samples [0] >> 8 ; pms->block [12] = pms->samples [1] & 0xFF ; pms->block [13] = pms->samples [1] >> 8 ; blockindx = 14 ; byte = 0 ; chan = 1 ; for (k = 4 ; k < 2 * pms->samplesperblock ; k++) { chan = k & 1 ; predict = (pms->samples [k-2] * AdaptCoeff1 [bpred [chan]] + pms->samples [k-4] * AdaptCoeff2 [bpred [chan]]) >> 8 ; errordelta = (pms->samples [k] - predict) / idelta [chan] ; if (errordelta < -8) errordelta = -8 ; else if (errordelta > 7) errordelta = 7 ; newsamp = predict + (idelta [chan] * errordelta) ; if (newsamp > 32767) newsamp = 32767 ; else if (newsamp < -32768) newsamp = -32768 ; if (errordelta < 0) errordelta += 0x10 ; byte = (byte << 4) | (errordelta & 0xF) ; if (chan) { pms->block [blockindx++] = byte ; byte = 0 ; } ; idelta [chan] = (idelta [chan] * AdaptationTable [errordelta]) >> 8 ; if (idelta [chan] < 16) idelta [chan] = 16 ; pms->samples [k] = newsamp ; } ; } ; /* Write the block to disk. */ if ((k = psf_fwrite (pms->block, 1, pms->blocksize, psf)) != pms->blocksize) psf_log_printf (psf, "*** Warning : short write (%d != %d).\n", k, pms->blocksize) ; memset (pms->samples, 0, pms->samplesperblock * sizeof (short)) ; pms->blockcount ++ ; pms->samplecount = 0 ; return 1 ; } /* msadpcm_encode_block */ static sf_count_t msadpcm_write_block (SF_PRIVATE *psf, MSADPCM_PRIVATE *pms, const short *ptr, int len) { int count, total = 0, indx = 0 ; while (indx < len) { count = (pms->samplesperblock - pms->samplecount) * pms->channels ; if (count > len - indx) count = len - indx ; memcpy (&(pms->samples [pms->samplecount * pms->channels]), &(ptr [total]), count * sizeof (short)) ; indx += count ; pms->samplecount += count / pms->channels ; total = indx ; if (pms->samplecount >= pms->samplesperblock) msadpcm_encode_block (psf, pms) ; } ; return total ; } /* msadpcm_write_block */ static sf_count_t msadpcm_write_s (SF_PRIVATE *psf, const short *ptr, sf_count_t len) { MSADPCM_PRIVATE *pms ; int writecount, count ; sf_count_t total = 0 ; if (! psf->codec_data) return 0 ; pms = (MSADPCM_PRIVATE*) psf->codec_data ; while (len > 0) { writecount = (len > 0x10000000) ? 
0x10000000 : (int) len ;
		count = msadpcm_write_block (psf, pms, ptr, writecount) ;

		total += count ;
		len -= count ;
		if (count != writecount)
			break ;
		} ;

	return total ;
} /* msadpcm_write_s */

static sf_count_t
msadpcm_write_i (SF_PRIVATE *psf, const int *ptr, sf_count_t len)
{	MSADPCM_PRIVATE *pms ;
	short		*sptr ;
	int			k, bufferlen, writecount, count ;
	sf_count_t	total = 0 ;

	if (! psf->codec_data)
		return 0 ;
	pms = (MSADPCM_PRIVATE*) psf->codec_data ;

	sptr = psf->u.sbuf ;
	bufferlen = ARRAY_LEN (psf->u.sbuf) ;
	while (len > 0)
	{	writecount = (len >= bufferlen) ? bufferlen : len ;
		for (k = 0 ; k < writecount ; k++)
			sptr [k] = ptr [total + k] >> 16 ;
		count = msadpcm_write_block (psf, pms, sptr, writecount) ;

		total += count ;
		len -= writecount ;
		if (count != writecount)
			break ;
		} ;

	return total ;
} /* msadpcm_write_i */

static sf_count_t
msadpcm_write_f (SF_PRIVATE *psf, const float *ptr, sf_count_t len)
{	MSADPCM_PRIVATE *pms ;
	short		*sptr ;
	int			k, bufferlen, writecount, count ;
	sf_count_t	total = 0 ;
	float		normfact ;

	if (! psf->codec_data)
		return 0 ;
	pms = (MSADPCM_PRIVATE*) psf->codec_data ;

	normfact = (psf->norm_float == SF_TRUE) ? (1.0 * 0x7FFF) : 1.0 ;

	sptr = psf->u.sbuf ;
	bufferlen = ARRAY_LEN (psf->u.sbuf) ;
	while (len > 0)
	{	writecount = (len >= bufferlen) ? bufferlen : len ;
		for (k = 0 ; k < writecount ; k++)
			sptr [k] = lrintf (normfact * ptr [total + k]) ;
		count = msadpcm_write_block (psf, pms, sptr, writecount) ;

		total += count ;
		len -= writecount ;
		if (count != writecount)
			break ;
		} ;

	return total ;
} /* msadpcm_write_f */

static sf_count_t
msadpcm_write_d (SF_PRIVATE *psf, const double *ptr, sf_count_t len)
{	MSADPCM_PRIVATE *pms ;
	short		*sptr ;
	int			k, bufferlen, writecount, count ;
	sf_count_t	total = 0 ;
	double		normfact ;

	normfact = (psf->norm_double == SF_TRUE) ? (1.0 * 0x7FFF) : 1.0 ;

	if (! psf->codec_data)
		return 0 ;
	pms = (MSADPCM_PRIVATE*) psf->codec_data ;

	sptr = psf->u.sbuf ;
	bufferlen = ARRAY_LEN (psf->u.sbuf) ;
	while (len > 0)
	{	writecount = (len >= bufferlen) ? bufferlen : len ;
		for (k = 0 ; k < writecount ; k++)
			sptr [k] = lrint (normfact * ptr [total + k]) ;
		count = msadpcm_write_block (psf, pms, sptr, writecount) ;

		total += count ;
		len -= writecount ;
		if (count != writecount)
			break ;
		} ;

	return total ;
} /* msadpcm_write_d */

/*========================================================================================
*/

static int
msadpcm_close (SF_PRIVATE *psf)
{	MSADPCM_PRIVATE *pms ;

	pms = (MSADPCM_PRIVATE*) psf->codec_data ;

	if (psf->file.mode == SFM_WRITE)
	{	/*	Now we know for certain the length of the file we can
		**	re-write the header.
		*/
		if (pms->samplecount && pms->samplecount < pms->samplesperblock)
			msadpcm_encode_block (psf, pms) ;
		} ;

	return 0 ;
} /* msadpcm_close */

/*========================================================================================
** Static functions.
*/

/*----------------------------------------------------------------------------------------
**	Choosing the block predictor.
**	Each block requires a predictor and an idelta for each channel.
**	The predictor is in the range [0..6] which is an index into the two AdaptCoeff tables.
**	The predictor is chosen by trying all of the possible predictors on a small set of
**	samples at the beginning of the block. The predictor with the smallest average
**	abs (idelta) is chosen as the best predictor for this block.
**	The value of idelta is chosen to give a 4 bit code value of +/- 4 (approx. half the
**	max. code value).
**	If the average abs (idelta) is zero, the sixth predictor is chosen.
**	If the value of idelta is less than 16 it is set to 16.
**
**	Microsoft uses an IDELTA_COUNT (number of sample pairs used to choose best predictor)
**	value of 3. The best possible results would be obtained by using all the samples to
**	choose the predictor.
*/

#define IDELTA_COUNT	3

static void
choose_predictor (unsigned int channels, short *data, int *block_pred, int *idelta)
{	unsigned int	chan, k, bpred, idelta_sum, best_bpred, best_idelta ;

	for (chan = 0 ; chan < channels ; chan++)
	{	best_bpred = best_idelta = 0 ;

		for (bpred = 0 ; bpred < 7 ; bpred++)
		{	idelta_sum = 0 ;
			for (k = 2 ; k < 2 + IDELTA_COUNT ; k++)
				idelta_sum += abs (data [k * channels] - ((data [(k - 1) * channels] * AdaptCoeff1 [bpred] + data [(k - 2) * channels] * AdaptCoeff2 [bpred]) >> 8)) ;
			idelta_sum /= (4 * IDELTA_COUNT) ;

			if (bpred == 0 || idelta_sum < best_idelta)
			{	best_bpred = bpred ;
				best_idelta = idelta_sum ;
				} ;

			if (! idelta_sum)
			{	best_bpred = bpred ;
				best_idelta = 16 ;
				break ;
				} ;
			} ; /* for bpred ... */

		if (best_idelta < 16)
			best_idelta = 16 ;

		block_pred [chan] = best_bpred ;
		idelta [chan] = best_idelta ;
		} ;

	return ;
} /* choose_predictor */
/** * * @author David Salter <[email protected]> */ public class VariableFormatters { /** * @param args the command line arguments */ public static void main(String[] args) { ComplexNumber number = new ComplexNumber(1.0d, 2.0d); System.out.println(number); } }
Sporting KC Academy defender Jaylin Lindsey appeared in three matches for the U.S. U-17 national team at the recent Vaclav Jezek Tournament in the Czech Republic. The United States finished third overall as they continue preparation for the 2017 FIFA U-17 World Cup in India.

Lindsey played all three matches in the group stage for the U.S. U-17s. After starting the team's 4-1 win over Hungary, Lindsey came off the bench to close out a 3-1 victory over Russia. He was back in the lineup for the final group stage match, a 4-0 loss to Japan.

The U.S. closed out the tournament with a 4-0 win over Iceland to take third place overall. Japan won the tournament with a 4-2 victory over host Czech Republic in the final.

The U.S. U-17s will have their final training camp before the World Cup next week in Bradenton, Florida. The U-17 World Cup kicks off October 6. The United States will face Colombia, Ghana and hosts India in Group A.

Lindsey has appeared in more than 20 matches for the U.S. U-17s since 2016, recording four assists. He previously represented his country at the U-15 and U-16 levels.
/** Each key except the last must find another KeyTable object */
Php::Value KeyTable::path(Php::Parameters& param)
{
    if (param.size() >= 1) {
        Php::Value& v = param[0];
        if (v.isString()) {
            std::string target((const char*) v, v.size());
            StringList keys = pun::explode(".", target);
            auto ct = keys.size();
            auto kit = keys.begin();
            auto table = this;
            if (ct == 0) {
                return Php::Value();
            }
            // Walk every key but the last; each one must resolve to a nested KeyTable.
            while (ct > 1) {
                auto fit = table->_store.find(*kit);
                if (fit == table->_store.end())
                    return Php::Value();
                table = castKeyTable(fit->second);
                if (table == nullptr) {
                    return Php::Value();
                }
                ct--;
                kit++;
            }
            // The final key may map to any kind of value.
            auto fit = table->_store.find(*kit);
            return (fit == table->_store.end() ? Php::Value() : fit->second);
        }
    }
    throw Php::Exception("No path argument");
}
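For intuition, here is a rough Python analogue of the same dotted-path walk; the function name and the dict-based store are illustrative, not part of the extension's API:

```python
def path(store: dict, dotted: str):
    """Resolve 'a.b.c': every key except the last must find another table (dict)."""
    keys = dotted.split(".")
    node = store
    for key in keys[:-1]:
        node = node.get(key) if isinstance(node, dict) else None
        if not isinstance(node, dict):
            return None           # intermediate key missing or not a nested table
    return node.get(keys[-1]) if isinstance(node, dict) else None

print(path({"a": {"b": {"c": 42}}}, "a.b.c"))   # 42
print(path({"a": {"b": 1}}, "a.b.c"))           # None: 'b' is not a table
```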
import AuthCheck from "../../components/AuthCheck";
import PostList from "../../components/PostList";
import CreateNewPost from "../../components/CreateNewPost";

export default function AdminPage() {
  return (
    <main>
      <AuthCheck>
        <PostList />
        <CreateNewPost />
      </AuthCheck>
    </main>
  );
}
n = int(input().split()[0])
grid = []
for i in range(n):
    grid.append(['.'] * n)
if n < 6:
    for i in range(n):
        if i % 2 == 0:
            grid[i][0] = '>'
            grid[i][n - 1] = 'v'
        else:
            grid[i][0] = 'v'
            grid[i][n - 1] = '<'
else:
    for i in range(100):
        grid.append(['.'] * 100)
    for j in range(25):
        for i in range(50):
            grid[j * 4 + 0][i] = '>'
            grid[j * 4 + 1][50 + i] = '<'
            grid[j * 4 + 2][99 - i] = '<'
            grid[j * 4 + 3][49 - i] = '>'
        for i in range(0, 50, 2):
            grid[j * 4 + 0][50 + i] = '>'
            grid[j * 4 + 1][49 - i] = '<'
            grid[j * 4 + 2][49 - i] = '<'
            grid[j * 4 + 3][50 + i] = '>'
        grid[j * 4 + 0][51] = '>'
        grid[j * 4 + 0][99] = 'v'
        grid[j * 4 + 1][0] = '^'
        grid[j * 4 + 2][48] = '<'
        grid[j * 4 + 2][0] = 'v'
        grid[j * 4 + 3][99] = '^'
for i in range(n):
    print("".join(grid[i]))
print("1 1")
package org.sadtech.bot.vcs.bitbucketbot.local.service;

import lombok.NonNull;
import lombok.RequiredArgsConstructor;
import org.sadtech.bot.vcs.bitbucketbot.local.config.property.BitbucketUserProperty;
import org.sadtech.bot.vsc.bitbucketbot.context.domain.entity.NotifySetting;
import org.sadtech.bot.vsc.bitbucketbot.context.domain.notify.Notify;
import org.sadtech.bot.vsc.bitbucketbot.context.service.MessageSendService;
import org.sadtech.bot.vsc.bitbucketbot.context.service.NotifyService;
import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Service;

import java.util.Optional;

/**
 * // TODO: 26.10.2020 Add a description.
 *
 * @author upagge 26.10.2020
 */
@Primary
@Service
@RequiredArgsConstructor
public class NotifyLocalServiceImpl implements NotifyService {

    private final MessageSendService messageSendService;
    private final BitbucketUserProperty bitbucketUserProperty;

    @Override
    public <T extends Notify> void send(T notify) {
        if (notify.getRecipients().contains(bitbucketUserProperty.getLogin())) {
            messageSendService.send(notify);
        }
    }

    @Override
    public void saveSettings(@NonNull NotifySetting setting) {
        throw new IllegalStateException();
    }

    @Override
    public Optional<NotifySetting> getSetting(@NonNull String login) {
        return Optional.empty();
    }
}
/**
 * Handle the trajectory and return true if we go to a location, false otherwise.
 * @private
 */
bool handleNextMoveActionOrAction(GameStrategyContext* gameStrategyContext) {
    if (isLoggerTraceEnabled()) {
        appendStringLN(getTraceOutputStreamLogger(), "handleNextMoveActionOrAction");
    }
    GameTarget* currentTarget = gameStrategyContext->currentTarget;
    if (currentTarget == NULL) {
        if (isLoggerTraceEnabled()) {
            appendStringLN(getTraceOutputStreamLogger(), "currentTarget=NULL");
        }
        return false;
    }
    Location* nearestLocation = gameStrategyContext->nearestLocation;
    Navigation* navigation = gameStrategyContext->navigation;
    if (nearestLocation == NULL && TRAJECTORY_TYPE_NONE == gameStrategyContext->trajectoryType) {
        gameStrategyCreateOutsideTemporaryPaths(gameStrategyContext);
        nearestLocation = updateGameStrategyContextNearestLocation(gameStrategyContext);
    }
    else {
        gameStrategyClearOusideTemporaryPathsAndLocations(gameStrategyContext);
    }
    GameTargetActionList* actionList = &(currentTarget->actionList);
    GameTargetAction* targetAction = getNextGameTargetActionTodoByPriority(actionList);
    if (targetAction == NULL) {
        return false;
    }
    // Already at the action's end location (or the action has none): execute it now.
    if (targetAction->endLocation == NULL || targetAction->endLocation == nearestLocation) {
        executeTargetAction(gameStrategyContext, targetAction);
        updateTargetStatusAndScore(gameStrategyContext);
        return true;
    }
    // Otherwise compute and follow a path toward the action's end location.
    else {
        if (targetAction->endLocation != NULL) {
            computeBestPath(navigation, nearestLocation, targetAction->endLocation);
            return followComputedNextPath(gameStrategyContext);
        }
    }
    return true;
}
def quantities(self, quantities): self._quantities = quantities
Contribution of spicules to solar coronal emission

SUMMARY
Recent high-resolution imaging and spectroscopic observations have generated renewed interest in spicules' role in explaining the hot corona. Some studies suggest that some spicules, often classified as type II, may provide significant mass and energy to the corona. Here we use numerical simulations to investigate whether such spicules can produce the observed coronal emission without any additional coronal heating agent. Model spicules consisting of a cold body and hot tip are injected into the base of a warm (0.5 MK) equilibrium loop with different tip temperatures and injection velocities. Both piston- and pressure-driven shocks are produced. We find that the hot tip cools rapidly and disappears from coronal emission lines such as Fe XII 195 Å and Fe XIV 274 Å. Prolonged hot emission is produced by pre-existing loop material heated by the shock and by thermal conduction from the shock. However, the shapes and Doppler shifts of synthetic line profiles show significant discrepancies with observations. Furthermore, spatially and temporally averaged intensities are extremely low, suggesting that if the observed intensities from the quiet Sun and active regions were solely due to type II spicules, one to several orders of magnitude more spicules would be required than have been reported in the literature. This conclusion applies strictly to the ejected spicular material. We make no claims about emissions connected with waves or coronal currents that may be generated during the ejection process and heat the surrounding area.

INTRODUCTION

Defying decades of continued efforts, many aspects of coronal heating remain unanswered (Klimchuk 2006, 2015; Viall et al. 2021). Even the basic mechanism is a matter of debate. Despite the fact that all the coronal mass is sourced at the chromosphere, agreement on how the chromospheric mass is heated and transported up to the corona has not been reached. An early observation of the solar chromosphere reported the existence of several small jet-like features (Secchi 1877). They were later named spicules (Roberts 1945). With improved observations, these spicules were seen to propagate upwards (Pneuman & Kopp 1977, 1978) with speeds of 20-50 km s⁻¹. They were also seen to survive for about 5 to 10 minutes and carry almost 100 times the mass needed to balance the mass loss in the solar corona due to the solar wind. Further studies of the spicules (Athay & Holzer 1982) suggested a pivotal role in transferring energy from the inner layers of the solar atmosphere to the lower solar corona. However, the proposal was not pursued further because these traditional spicules lack emission in the transition region (TR) and coronal lines (Withbroe 1983).

About a decade ago, using high-resolution imaging and spectroscopic observations from the Hinode and Solar Dynamics Observatory missions, De Pontieu et al. (2007, 2011) further discovered jet-like features traveling from the chromosphere to the corona. These features appear all over the Sun with a lifetime between 10 and 150 s and a velocity of 50-150 km s⁻¹. De Pontieu et al. (2007) termed them type II spicules and suggested that they are capable of connecting the relatively cooler solar chromosphere with the hot corona. Since their discovery, multiple observations have identified type II spicules and reported on their characteristics. However, nothing conclusive has yet been established about their origin. Only recently, Samanta et al.
(2019) have identified the near-simultaneous origin of spicules and the emergence of photospheric magnetic bipoles. The tips of the originated spicules eventually appear in the coronal passband, suggesting that the plasma is heated to coronal temperatures. A 2.5D radiative MHD simulation of type II spicules (Martínez-Sykora et al. 2017) has reproduced many of their observed features. This simulation also suggests that ambipolar diffusion in the partially ionized chromosphere may play a crucial role in the origin of type II spicules. On the other hand, a recent work (Dey et al. 2022) based on radiative MHD simulation and laboratory experiment suggests that quasi-periodic photospheric driving in the presence of vertical magnetic fields can readily generate spicules in the solar atmosphere. Their work, devoid of any chromospheric physics, can still account for the abundance of the wide varieties of spicules seen in the observations.

The evolution of spicules during their propagation is understood through multi-wavelength studies (e.g., Skogsrud et al. 2015). Observations of De Pontieu et al. (2011) suggest that spicule plasma emanating from the chromosphere gets heated to TR temperatures and even up to coronal temperatures. Such heating may happen for two reasons: (a) spicule propagation can produce shocks, compressing the material lying ahead of it. In such a scenario, it is not the ejected spicule material but the pre-existing coronal material in front of it that gets compressed by the shock to contribute to the hot emission (Klimchuk 2012; Petralia et al. 2014); (b) the tip of the spicule may get heated during the ejection process, on-site, through impulsive heating, and produce emission in the coronal lines. In the latter scenario, the emission indeed comes from the ejected spicule material (De Pontieu et al. 2007). The radiative MHD simulations of Martínez-Sykora et al. (2018) suggest that spicules and surrounding areas get heated by ohmic dissipation of newly created currents and by waves. Note, however, that the currents in the simulations are relatively large-scale volume currents and would not be dissipated efficiently at the many orders of magnitude smaller resistivity of the real corona. Heating in the real corona involves magnetic reconnection at thin current sheets, of which there are at least 100,000 in a single active region (Klimchuk 2015). It is not known whether the ohmic heating in the simulations is a good proxy for the actual reconnection-based heating.

Klimchuk (2012) considered a simple analytical model for the evolution of spicules with a hot tip. He argued that if a majority of the observed coronal emission were from such hot tips, it would be inconsistent with several observational features (see also Tripathi & Klimchuk 2013; Patsourakos et al. 2014). The result was further supported by hydrodynamic simulations (Klimchuk & Bradshaw 2014). Using these simulations, they studied the response of a static loop to impulsive heating in the upper chromosphere, which produces localized hot material that rapidly expands upward and might represent the hot tip of a spicule. Noticing the inability of a single hot spicule tip to explain the observations, Bradshaw & Klimchuk (2015) further explored the role of frequently recurring chromospheric nanoflares. The study was motivated by the suggestion that rapidly repeating type II spicules might accumulate enough hot plasma to explain the coronal observations.
However, the simulations were still inconsistent with observations. In both the analytical model and the simulations, the dynamics of the hot material is due entirely to an explosive expansion from the locally enhanced pressure. There is no additional imposed force to bodily eject the material. The consequences of such a force were investigated by Petralia et al. (2014). Their study involves injecting cold and dense chromospheric material into the corona with an initial velocity. The result indicates that the production of a shock can give rise to coronal emission. However, the emission is from the pre-existing coronal material rather than the spicule itself. The injected material has no hot component.

The studies mentioned above have investigated the dynamics of either the hot tip of a spicule without any initial velocity or a spicule with a cold tip and finite injection velocity. Our work combines these two effects. The spicule is now injected in a stratified flux tube with high velocity and consists of both a hot tip and a cold body (T = 2 × 10⁴ K). We further investigate the possibility that most of the observed hot emission from the corona can be explained by such spicules. Through forward modelling, we quantitatively compare the simulations with observations to answer this question. The rest of this paper is organized as follows. The numerical setup is described in Section 2. We report on the simulation results in Section 3. Finally we summarize and discuss our results in Section 4.

NUMERICAL SETUP

Spicules are seen to follow magnetic field lines. To simulate their dynamics, we consider a straight 2D magnetic flux tube consisting of a uniform 10 G magnetic field. We impose a gravity corresponding to a semi-circular loop such that the vertical component of the gravitational force is maximum at both ends and smoothly becomes zero in the middle of the tube. Two ends of the tube are embedded in the chromosphere. The loop is symmetric about the center, which corresponds to the apex. We use Cartesian coordinates, and therefore the loop actually corresponds to an infinite slab. This is a reasonable approximation because we are interested in how the plasma evolves within an effectively rigid magnetic field appropriate to the low-β corona. The slab dimension corresponding to the loop length is 100 Mm. The other dimension is 0.42 Mm, but this is not relevant. Rigid wall boundary conditions are imposed at the sides, and the evolution is essentially equivalent to 1D hydrodynamics, as discussed below. The first 2 Mm of both ends of the loop are resolved with a fine uniform grid with 10 km cells, while the coronal part is resolved with a stretched grid containing 1500 cells on each side. The fine grid close to both footpoints allows us to resolve the steep transition region more accurately.

The spicule simulation begins with an initial static equilibrium atmosphere obtained with the double relaxation method described in Appendix A. We choose a relatively low temperature and low density loop because we wish to test the hypothesis that the observed coronal emission comes primarily from spicules. The apex temperature of the loop is 0.5 MK. Figure 1 shows the background loop profile that is used in most of our simulations. The chromosphere is 470 km deep, approximately half a gravitational scale height. It merely acts as a mass reservoir.
Detailed chromospheric physics like partial ionization and optically thick radiation are not implemented in the code, as we are solely interested in coronal emission. We use a modified radiative loss function to maintain a chromospheric temperature near 2 × 10⁴ K, as described in Appendix A. The propagation of a spicule in the loop is emulated through an injection of dense material from the left footpoint. The injected material follows specified density, velocity and temperature profiles in time, which are described below. At this injection boundary, all plasma parameters, except the density and pressure, are set to their initial values once the injection is over. The density is set to have the prescribed value at the end of the injection phase, and the pressure is determined from the ideal gas equation of state. On the other hand, at the right footpoint, all the plasma parameters maintain the initially prescribed values throughout the entire simulation.

We solve the compressible MHD equations inside our simulation domain using the PLUTO code (Mignone et al. 2007) with an ideal gas environment. Plasma inside the domain is cooled by radiation and field-aligned thermal conduction. The CHIANTI (Landi et al. 2013) radiative loss function for coronal abundances is used to model the radiative cooling. For anisotropic conduction, the thermal conductivity $\kappa_\parallel = 5.6 \times 10^{-7}\, T^{5/2}$ erg s⁻¹ K⁻¹ cm⁻¹ is considered along the magnetic field lines, whereas $\kappa_\perp$ is taken to be zero. Also, for the saturated conductive flux used in PLUTO, $F_{\rm sat} = 5 \phi \rho C_s^3$, where we have considered the value of the free parameter φ to be 0.9, which represents effective thermal conduction in the system, and $C_s$ is the isothermal sound speed. The MHD equations are solved in Cartesian coordinates.

The photospheric magnetic flux is found to be localized and clumpy, whereas in the corona it fills out space uniformly. Such behaviour of the magnetic flux at different layers of the solar atmosphere dictates that flux tubes expand laterally at the junction of the chromosphere and corona, where the plasma β changes from being greater than one to less than one. This type of expansion of flux tubes is realized in 2D MHD simulations of coronal loops (e.g., Guarrasi et al. 2014). Through an area expansion factor, this has also been incorporated in 1D or 0D models (Mikić et al. 2013; Cargill et al. 2022). We do not include expansion in our model because we are interested in the spicule dynamics in the corona, and the simplification should not affect our results significantly. We note that the plasma β is less than unity throughout the evolution, so no expansion from the spicule injection would be expected. Additionally, the initial atmosphere and injection profile are uniform along the horizontal (cross-field) axis. Hence the plasma remains nearly uniform in the lateral direction, effectively making our simulations similar to 1D hydrodynamic simulations. Nevertheless, we ran all our computations using the 2D MHD setup because of our familiarity with the powerful PLUTO code. The limited number of grid points in the cross-field direction keeps the computational demands relatively low.

Two main components of our simulations are: (a) a background loop in hydrostatic and energy equilibrium representing a tenuous coronal atmosphere, and (b) the propagation of injected material resembling spicule propagation along the loop. Our experimental spicule consists of a hot dense tip followed by cold dense material injected from the base of the model.
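To make the conduction model concrete, the sketch below evaluates the field-aligned Spitzer flux and the saturation limit quoted above; the density, temperature, and gradient values are illustrative, and µ = 0.67 is taken from the setup described below:

```python
import numpy as np

def spitzer_flux(T, dT_ds):
    """Classical field-aligned flux F = -kappa(T) * dT/ds, with
    kappa = 5.6e-7 * T**(5/2) erg s^-1 K^-1 cm^-1 (cgs units)."""
    return -5.6e-7 * T**2.5 * dT_ds

def saturated_flux(rho, T, phi=0.9, mu=0.67):
    """Saturation limit F_sat = 5 * phi * rho * C_s**3, where C_s is the
    isothermal sound speed sqrt(kB * T / (mu * mH))."""
    kB, mH = 1.380649e-16, 1.6726e-24          # erg/K, g
    cs = np.sqrt(kB * T / (mu * mH))           # cm/s
    return 5.0 * phi * rho * cs**3

# Illustrative transition-region values: T = 1e5 K, n = 1e9 cm^-3,
# temperature dropping over a ~1 Mm length scale.
T, rho = 1.0e5, 1.0e9 * 1.6726e-24
print(f"classical: {spitzer_flux(T, -T / 1e8):.2e} erg cm^-2 s^-1")
print(f"saturated: {saturated_flux(rho, T):.2e} erg cm^-2 s^-1")
```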
Here we investigate how changing the temperature of the hot tip and the injection speed can alter the intensities and profiles of the Fe XII (195 Å) and Fe XIV (274 Å) coronal spectral lines. We have performed six sets of simulations where the spicule tip temperatures are considered to be at 2, 1, and 0.02 MK, followed by cold material with a temperature of 0.02 MK. All the runs are performed with two injection velocities: 50 and 150 km s⁻¹ (see Table 1). Since we assume that spicules might have been generated deep inside the chromosphere, we inject a high-density material into the loop to emulate the spicule. The density scale height of the spicule is chosen to be six times the gravitational scale height at the base of the equilibrium loop. To impose such conditions on the ejected spicule, its density follows a prescribed ramp profile in time, where ρ(t) and ρ₀ are the injected density at time t and the base density of the equilibrium loop, respectively. The injection velocity follows a similar time profile, where v_inj corresponds to 50 or 150 km s⁻¹ (depending on the simulation). H represents the gravitational scale height, given by $H = k_B T_{\rm base} / (\mu m_H g)$, where T_base = 0.02 MK is the base temperature of the loop, k_B is the Boltzmann constant, m_H and g represent the mass of the hydrogen atom and the solar surface gravity, respectively, and µ = 0.67 denotes the mean molecular weight of the plasma. The temperature of the ejected spicule also follows a prescribed time profile. In all the above profiles, the times t₁, t₂, t₃, t₄ and t₅ are chosen to be 2, 10, 12, 90 and 100 s, respectively. Times are chosen so that the top 10% of the spicule's body emits in coronal lines, as is generally observed. The total injection duration is also motivated by the observed lifetime of type II spicules. The ramping up of velocity, density, and temperature ensures a smooth entry of the spicule into the simulation domain. Similarly, the ramping down at the end of the injection avoids any spurious effects. Figure 2 shows one such example of velocity, density, and temperature profiles when the spicule is ejected with velocity 150 km s⁻¹ and its hot tip is at 2 MK. Likewise, different injection time profiles have been used for other injection velocities and temperatures. The initial equilibrium loop remains the same in all cases, unless specified.

RESULTS

The large velocity of the spicule and its high pressure compared to the ambient medium give rise to a shock, which propagates along the loop and heats the material ahead of it. Depending on the sound speed of the ambient medium (i.e., the pre-existing loop plasma) and the temperature of the injected spicule material, the generated shock turns out to be of different kinds: (a) a piston-driven shock, in which case the shock speed is nearly equal to the injection speed (e.g., the simulation with T_tip = 0.02 MK), and (b) a pressure-driven shock, in which case the shock speed exceeds the injection speed (e.g., when T_tip = 2 and 1 MK). Emission from the shock-heated plasma differs depending on the nature of the shock. We compare different simulations to understand the coronal response to spicules with different injection parameters. Our discussion starts with the results from Run1, where the hot tip of the injected spicule has a temperature T_tip = 2 MK and injection velocity v = 150 km s⁻¹. The injection profiles are those already shown in Figure 2.
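The exact functional forms of the injection ramps are not reproduced here; the sketch below assumes simple piecewise-linear ramps between the quoted switch times, which is enough to illustrate the smooth-entry behaviour for the Run1-style case (v_inj = 150 km s⁻¹, T_tip = 2 MK):

```python
import numpy as np

# Switch times (s) quoted in the text.
t1, t2, t3, t4, t5 = 2, 10, 12, 90, 100

def ramp(t, rise_end, hold_end, fall_end, lo, hi):
    """Piecewise-linear ramp: lo -> hi by rise_end, hold to hold_end,
    back to lo by fall_end. The linear shape is an assumption; the paper
    only states the switch times."""
    return np.interp(t, [0.0, rise_end, hold_end, fall_end], [lo, hi, hi, lo])

t = np.linspace(0.0, 110.0, 1101)
v = ramp(t, t1, t4, t5, 0.0, 150.0)       # injection velocity (km/s)
T_tip = ramp(t, t1, t2, t3, 0.02, 2.0)    # injected temperature (MK): hot tip for the first ~10 s

for tq in (5.0, 50.0, 95.0):
    i = np.searchsorted(t, tq)
    print(f"t = {tq:5.1f} s   v = {v[i]:6.1f} km/s   T = {T_tip[i]:5.2f} MK")
```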
Dynamics and heating

Insertion of dense, high-temperature plasma (T_tip = 2 MK) with high velocity (v = 150 km s⁻¹, Run1) into the warm loop produces a shock. Figure 3 shows the temperature, density and plasma velocity along the loop at t = 70 s. The dashed lines mark the location of the shock front. It is evident from the figure that the high compression ratio exceeds the ratio of an adiabatic shock. The compression ratio of an adiabatic shock should always be ≤ 4. To understand the nature of the shock, we perform a shock test with the Rankine-Hugoniot (RH) conditions, which read

$\rho_1 v_1 = \rho_2 v_2$,  (5)

$\rho_2 / \rho_1 = v_1 / v_2 = (\gamma + 1) M^2 / [(\gamma - 1) M^2 + 2]$.  (6)

Here ρ₁ and ρ₂ are the pre- and post-shock plasma mass densities, respectively, and v₁ and v₂ are likewise the pre- and post-shock plasma velocities in the shock rest frame. Furthermore, γ is the ratio of the specific heats, $c_s = \sqrt{\gamma P_1 / \rho_1}$ is the upstream sound speed, where P₁ is the upstream pressure, and finally M = v₁/c_s is the upstream Mach number in the shock reference frame.

Injection of high-temperature plasma accelerates the shock to a speed much larger than the injection speed of the spicule material, giving rise to a pressure-driven shock front. Figure 3 demonstrates an abrupt change in plasma variables at the shock. The shock speed at this instant is 562 km s⁻¹. It also shows that at the discontinuity location (s = 36.9 Mm, s being the coordinate along the loop), the density and velocity ratios are 10.7 and 0.094, respectively, in the shock rest frame. The inverse relationship of these ratios indicates a constant mass flux across the shock front, in accordance with equation (5). The Mach number in the shock frame at the same location is 3.37. With this Mach number, the RH condition gives density and velocity ratios in accordance with those in the simulation (10.5 and 0.095) when γ = 1.015. In other words, consistency is achieved with this value of γ. Being close to unity, it implies a nearly isothermal shock (a short numerical check of this jump relation is sketched below). Efficient thermal conduction carries a large heat flux from the shock front to its surroundings, giving rise to the locally smooth, near-isothermal temperature profile in Figure 3. It is worth mentioning here that the RH jump conditions do not consider any heat loss or gain, such as thermal conduction or radiative loss. However, our system includes these sink terms in the energy equation. It is because of these loss functions that the shock jump is larger. Limited thermal conduction would bring the jump condition closer to the adiabatic approximation but would also affect the thermal profile ahead of and behind the shock. Our result is consistent with that of Petralia et al. (2014), where the signature of shocks in front of the spicule has been reported. As we show later, the initially hot material in the spicule tip cools dramatically. Only ambient material heated by the shock is hot enough to produce significant coronal emission.

Interestingly, the high compression ratio at the shock front depends more on the temperature difference, and corresponding pressure difference, between the injected and ambient plasma material than on the velocity with which it is injected. Table 1 shows a study of how the compression ratio (or the shock strength) varies when the tips of the spicules are at different temperatures and are injected with different velocities. As mentioned earlier, the injection conditions give rise to two different types of shocks. When the injected plasma temperature is high (e.g., spicule tips with temperatures 2 and 1 MK), the excess pressure gives rise to a pressure-driven shock.
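As a quick arithmetic check of the numbers quoted above, the jump relation can be evaluated directly (this is an illustration of the consistency argument, not part of the simulation code):

```python
def rh_compression(M, gamma):
    """Rankine-Hugoniot density jump rho2/rho1 (= v1/v2 in the shock frame)
    for upstream Mach number M and ratio of specific heats gamma."""
    return (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)

print(rh_compression(3.37, 1.015))    # ~10.5, matching the simulated ratio
print(rh_compression(3.37, 5.0/3.0))  # ~3.2, the adiabatic value (always <= 4)
```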
On the other hand, injection of cold material (tip temperature equal to that at the loop footpoint, i.e., 0.02 MK) produces a piston-driven shock. Our test runs identify both kinds of shocks. For example, when we inject spicules with a fixed injection velocity of 150 km s^-1 but with different tip temperatures (viz. 2, 1 and 0.02 MK), the average shock speeds are 520, 400 and 210 km s^-1, respectively (see Figure 11). The first two shocks are pressure-driven, as the average shock speeds exceed the injection speed by a wide margin. The third shock maintains a speed close to the injection speed and can be categorized as a piston-driven shock. The shock speed depends not only on the injected tip temperature, but also on the properties of the ambient material through which it propagates, which vary along the loop. This is discussed further in Appendix B.

Loop emission

Thermally conducted energy from the shock front heats the material lying ahead of it. Therefore, a magnetic flux tube subjected to spicule activity could produce hot emission from newly ejected material at the spicule's hot tip and from pre-existing coronal material in both the pre- and post-shock regions. We now examine the contributions from these three different sources. We identify the leading edge of the hot spicule tip by finding the location in the loop where the column mass integrated from the right footpoint equals the initial column mass of the loop. Recall that the spicule is injected from the left footpoint. The spicule compresses the material in the loop, but does not change its column mass. We identify the trailing edge of the hot material in a similar manner, but using the column mass at time t = 10 s, when the injection of hot material ceases and the injection of cold material begins. Figure 4 shows the emission along the loop in the Fe XII and Fe XIV lines at t = 10 and 70 s, evaluated from Run1. The orange region is the hot spicule tip, while the red region is the shock-heated material ahead of it. The shock front is the dot-dashed black vertical line. The dark orange curve is the temperature in units of 10^5 K, with the scale on the left. The blue curve is the logarithm of density, with the scale on the right. The yellow and green curves are the logarithms of the Fe XII and Fe XIV intensities, respectively, with the scale on the left. The variation of intensity is enormous; a difference of 10 on this logarithmic scale corresponds to 10 orders of magnitude. The intensity is what would be observed by the Extreme-ultraviolet Imaging Spectrometer (EIS; Culhane et al. 2007) onboard Hinode (Kosugi et al. 2007) if the emitting plasma had a line-of-sight depth equal to the EIS pixel dimension, i.e., if observing an EIS pixel cube. This can be interpreted as a normalized emissivity. At t = 10 s, the emission in both lines comes primarily from the injected hot plasma (orange region). On the other hand, at t = 70 s it comes primarily from the shock-heated plasma (red region). The transition happens very early on: shortly after the injection of the hot material stops (t = 10 s), emission from the shock-heated material starts dominating the total emission from the loop. This is evident in the time evolution of the loop-integrated emission in Figure 5. Shown are the intensities that would be observed by EIS, assuming that the loop has a cross section equal to the pixel area and that all of the loop plasma is contained within a single pixel.
This corresponds to a loop that has been straightened along the line of sight and crudely represents a line of sight passing through an arcade of similar, out-of-phase loops. The black curve shows the evolution of the total emission contributed by the spicule and pre-existing plasma. Subtracting the spicule component (red curve) from the total gives the evolution of the emission coming solely from the pre-existing (non-spicule) loop material (green curve). Soon after the hot tip of the spicule completes its entry into the loop (at t = 10 s), the emission from the spicule falls off rapidly. This is because the hot material at the spicule tip cools rapidly as it expands in the absence of any external heating. It is far too faint to make a significant contribution to the observed coronal emission, as emphasized earlier by Klimchuk (2012) and Klimchuk & Bradshaw (2014). For a better comparison with observations, we construct synthetic spectral line profiles; the methodology is explained in Appendix C. To construct these profiles, we imagine that the loop lies in a vertical plane and is observed from above. We account for the semi-circular shape when converting velocities to Doppler shifts. We then integrate the emission over the entire loop and distribute it uniformly along the projection of the loop onto the solar surface. We assume a cross section corresponding to an EIS pixel, and thereby obtain a spatially averaged EIS line profile for the loop. Finally, a temporal average is taken over the time required for the shock to travel to the other end of the loop (≈ 190 s in this case). Such a spatially and temporally averaged line profile from a single loop (e.g., Figure 6) is equivalent to an observation of many unresolved loops of a similar nature but at different stages of their evolution (Patsourakos & Klimchuk 2006; Klimchuk & Bradshaw 2014). Asymmetric coronal line profiles with blue-wing enhancement are manifestations of mass transport in the solar corona, and type II spicules are often suggested to be associated with such a mass transport mechanism (De Pontieu et al. 2009; Martínez-Sykora et al. 2017). However, the extreme non-Gaussian shapes of the simulated Fe XII and Fe XIV line profiles (Figure 6) are significantly different from observed shapes (De Pontieu et al. 2009; Tian et al. 2011; Tripathi & Klimchuk 2013). The very large blue shifts are also inconsistent with observations: observed Doppler shifts of coronal lines tend to be smaller than 5 km s^-1 in both active regions (Doschek 2012; Tripathi et al. 2012) and the quiet Sun (Chae et al. 1998; Peter & Judge 1999), whereas a shift of 150 km s^-1 is evident in the simulated spectral lines (Figure 6). Our simulation is not reliable after the shock reaches the right boundary of the model, where, because of the rigid-wall boundary conditions, it reflects in an unphysical manner. One might question whether the emission after this time could dramatically alter the predicted line profiles. We estimate the brightness of this neglected emission using the loop temperature profile shortly before the shock reaches the chromosphere at t = 190 s. The temperature peaks at the shock, and there is strong cooling from thermal conduction both to the left (up the loop leg) and, especially, to the right (down the loop leg).
We estimate the conductive cooling timescale according to

τ_cond = 3 n_e k_B l^2 / (κ_0 T^{5/2}),

where k_B is the Boltzmann constant, κ_0 is the coefficient of thermal conductivity along the field lines, T is the temperature at the shock, n_e is the electron number density behind the shock, and l is the temperature scale length. We do this separately using the scale lengths on both sides of the shock, obtaining τ_cond = 1290 s and 7 s for the left and right sides, respectively. Radiative cooling is much weaker and can be safely ignored. We estimate the integrated emission after t = 190 s by multiplying the count rate at that time by the longer of the two timescales, thereby obtaining an upper limit on the neglected emission in our synthetic line profiles. The result is 10565 DN pix^-1 for Fe XII and 2206 DN pix^-1 for Fe XIV. These are about 0.97 and 2.76 times the temporally integrated emission before this time for Fe XII and Fe XIV, respectively. The factors are much smaller using the shorter cooling timescale. Even the larger factors do not qualitatively alter our conclusions: the profile shapes and Doppler shifts would still be much different from those observed. The conclusions we draw below are likewise unaffected by neglecting the emission after the shock reaches the right footpoint.

Comparison with observations

We now estimate the spicule occurrence rate that would be required to explain the observed coronal intensities from active regions and the quiet Sun. We have already seen that, in the absence of any external (coronal) heating, the hot material at the tip of the spicule cools down rapidly. However, we are concerned here with the total emission, including that from pre-existing material that is heated as the spicule propagates along the loop. The black curve in Figure 5 is the sum of both the spicule and non-spicule intensities; time-integrated intensities over the 190 s required for the shock to reach the right footpoint are indicated. Even though the emission from the injected spicule material decreases rapidly, it is so much brighter than the shock-heated pre-existing material that it contributes more to the time integration. Consider a region of area A_reg on the solar surface, large enough to include many spicules. If the spatially averaged occurrence rate of spicules in this region is R (cm^-2 s^-1), then one may expect N_reg = R τ A_reg spicules to be present at any moment, where τ is the typical spicule lifetime. Since we are averaging over large areas, the orientations of the spicule loops do not matter, and we can treat the loops as straightened along the line of sight, as done for Figure 5. If I_sp (DN s^-1 pix^-1) is the temporally averaged intensity of such a loop (the full loop intensity divided by 190 s in Fig. 5), then the expected intensity from a corona that contains only spicule loops is I_obs = N_reg I_sp A_sp/A_reg = I_sp R τ A_sp, where A_sp is the cross-sectional area of the loop. The typical intensities (I_obs) observed by EIS in active regions and the quiet Sun are, respectively, 162 and 34 DN s^-1 pix^-1 in Fe XII (195 Å) and 35 and 4 DN s^-1 pix^-1 in Fe XIV (274 Å) (Brown et al. 2008). On the other hand, the temporally averaged intensities from our simulation (I_sp) are 56.36 and 4.22 DN s^-1 pix^-1 for Fe XII and Fe XIV, respectively. Taking τ to be 190 s, the time it takes for the shock to travel across the loop, we derive the required occurrence rate of spicules (R) as a function of their cross-sectional area (A_sp). Results are shown in Figure 7 for the two lines.
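The bookkeeping in this estimate is simple enough to reproduce. The sketch below evaluates R = I_obs/(I_sp τ A_sp) and the corresponding full-disk number N = R τ A_disk (anticipating the argument that follows) for the quoted Run1 intensities. The assumed circular spicule cross section of 300 km diameter is an illustrative choice within the observed 200-400 km width range.

import numpy as np

R_SUN = 6.96e10                      # cm
A_DISK = np.pi * R_SUN**2            # visible-disk area, cm^2
TAU = 190.0                          # s, shock travel time used as the lifetime

# Time-averaged loop intensities from Run1 (DN s^-1 pix^-1) and typical
# observed EIS intensities (Brown et al. 2008).
I_SP = {"FeXII": 56.36, "FeXIV": 4.22}
I_OBS = {("FeXII", "AR"): 162.0, ("FeXII", "QS"): 34.0,
         ("FeXIV", "AR"): 35.0,  ("FeXIV", "QS"): 4.0}

def required_numbers(width_km=300.0):
    a_sp = np.pi * (0.5 * width_km * 1e5) ** 2   # assumed circular cross section
    for (line, region), i_obs in I_OBS.items():
        rate = i_obs / (I_SP[line] * TAU * a_sp)   # R, cm^-2 s^-1
        n_disk = rate * TAU * A_DISK               # N = R tau A_disk
        print(f"{line} {region}: R = {rate:.2e} cm^-2 s^-1, N = {n_disk:.2e}")

required_numbers()   # N comes out above ~1e7 (QS) and ~1e8 (AR); cf. Figure 8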
Following our earlier logic, we may also argue that at any given time there are N = R τ A_sun spicules on the solar disk, where A_sun is the area of the solar disk. Using the estimated occurrence rate of spicules (R), and taking τ to be 190 s as before, the number of spicules on the disk is related to the other quantities by N = (I_obs/I_sp)(A_sun/A_sp). This formula gives N as a function of the spicule cross-sectional area A_sp (Figure 8). Considering that the typical observed widths of spicules lie between 200 and 400 km (Pereira et al. 2011), we find that the full-disk equivalent number of spicules required to explain the observed intensities exceeds 10^7 in the quiet Sun and 10^8 in active regions, as indicated by the green shaded region in Figure 8. However, observational estimates of the number of spicules on the disk vary between 10^5 (Sterling & Moore 2017) and 2 × 10^7 (Judge & Carlsson 2010). There is a large discrepancy: far more spicules than observed would be required to produce all the observed coronal emission. For the quiet Sun, 100 times more spicules would be needed, while for active regions, 10-10^3 times more would be needed. These are lower limits based on Run1; our other simulations imply even greater numbers of spicules (see Table 2). We should mention here that the greater the height to which the spicule rises, the longer it compresses the ambient material, and thus the brighter the time-averaged emission. The spicules in our simulations with 150 km s^-1 injection speed reach a height of about 23 Mm, which is much larger than the typically observed spicule height (∼10 Mm). Therefore, we are likely to overestimate the emission coming from spicule loops, and the discrepancy between the required and observed number of spicules is even greater. Analysis of our simulated observations thus suggests that spicules contribute a relatively minor amount to the emission and thermal energy of the corona. Through the generation of shocks, they may heat the local plasma, but that too cools down rapidly due to expansion and thermal conduction. Consequently, synthetic spectra derived from our simulations show large discrepancies with observed spectra. However, this does not rule out the possibility of spicules contributing significantly to the coronal mass. The ejected spicule material may still get heated in the corona through some other heating mechanism - a source that exceeds the initial thermal and kinetic energy of the spicule. However, observational evidence of such a process is still lacking. Analyzing the excess blue-wing emission of multiple spectral lines hotter than 0.6 MK, Tripathi & Klimchuk (2013) concluded that the upward mass flux is too small to explain the mass of the active region corona; their observations indicate that spicules hotter than 0.6 MK are not capable of providing sufficient mass to the corona. So far, we have allowed our spicules to propagate within a warm (T = 0.5 MK), relatively low density loop in order to determine whether they, by themselves, can explain the observed hot emission. Our simulations indicate that this is not viable. Therefore, there must be some other heating mechanism at play that produces the hot, dense plasma. Setting aside the issue of heating mechanisms, in the following section we simply test the response of a spicule in a hot and dense flux tube.
Spicule propagation in a hot loop

We have considered a static equilibrium loop with apex and footpoint temperatures of approximately 2 and 0.02 MK, respectively. A spicule with a tip temperature of 2 MK, followed by cold, dense material at 0.02 MK, is injected with a velocity of 150 km s^-1 from the bottom boundary, similar to our previous spicules. The velocity profile of the injected spicule is the same as shown in Figure 2. The injected spicule generates a shock that takes about 180 s to traverse the loop. The spatio-temporally averaged spectral line profiles are obtained following the method described in Appendix C. However, in this case, because of the high background temperature, the loop itself emits significantly in the Fe XII and Fe XIV coronal lines. We consider the situation where the line of sight passes through many loops, some containing spicules and some maintained in the hot equilibrium state. We adjust the relative proportions to determine what combination is able to reproduce the observed red-blue (RB) profile asymmetries, which are generally < 0.05 (De Pontieu et al. 2009; Tian et al. 2011). For an asymmetry of ≈ 0.04, we find that the ratios of spicule to non-spicule strands are 1 : 150 for the Fe XII line and 1 : 72 for the Fe XIV line. Again the conclusion is that spicules are a relatively minor contributor to the corona overall, though they are important for the loops in which they occur.

SUMMARY AND DISCUSSION

Table 2 (partially recovered; the first three columns are the run number, the tip temperature T_tip in MK, and the injection velocity in km s^-1; the remaining columns are reproduced as printed in the source):

3  1     150  6.7 × 10^-2   1.3 × 10^-3    3.86 × 10^8   2.26 × 10^9   1.84 × 10^9   1.98 × 10^10
4  1     50   5.9 × 10^-3   7.81 × 10^-6   4.3 × 10^9    3.8 × 10^11   2.06 × 10^10  3.4 × 10^12
5  0.02  150  4.4 × 10^-4   1.5 × 10^-7    5.89 × 10^10  2.02 × 10^13  2.8 × 10^11   1.77 × 10^14
6  0.02  50   1.02 × 10^-7  1.1 × 10^-13   2.5 × 10^14   2.7 × 10^19   1.2 × 10^15   2.4 × 10^20

The solar atmosphere displays a wide variety of spicules with different temperatures and velocities. It has been suggested that type II spicules are a major source of coronal mass and energy (De Pontieu et al. 2007, 2009). In this work, we numerically investigate the role of spicules in producing the observed coronal emission. In particular, we examine whether, in the absence of any external heating, the hot tips of the spicules and the shock-heated ambient plasma can explain the observed coronal emission. For this, we inject spicules with different temperatures and velocities into a coronal loop in static equilibrium. We choose a relatively cool equilibrium so that the loop does not itself produce appreciable emission in the absence of a spicule. Each of our injected spicules consists of a hot tip followed by a cold body. We consider three different temperatures for the hot tips, viz. 2, 1 and 0.02 MK, while the cold, dense chromospheric plasma that follows the tip has a temperature of 0.02 MK. Six different simulations are run by injecting each of these spicules with an initial velocity of either 50 km s^-1 or 150 km s^-1 (see Table 1). We have also constructed spectral line profiles and estimated the spicule occurrence rate required to explain the observed intensities from the quiet Sun and active regions. Our main results are summarized as follows.

Shock formation during spicule propagation - All six runs described above suggest the formation of shocks due to the injection of spicule material into the coronal flux tubes. The shocks are stronger when the temperature differences, and therefore pressure differences, with the ambient plasma are higher.
Table 1 shows the variation of the compression ratio (a measure of shock strength) with changing temperature of the spicule tip. The nature of the shock depends on the tip temperature: spicules with a hotter tip produce a pressure-driven shock that propagates with a speed larger than the injection speed, while spicules with a cold tip (i.e., T_tip = 0.02 MK) produce a piston-driven shock that propagates with a speed close to the injection speed. The intensities and shapes of the spectral line profiles depend on the nature of the shock. The formation of shocks during spicule injection agrees well with previous studies (Petralia et al. 2014; Martínez-Sykora et al. 2018).

Rapid cooling of the hot spicule tip - Our simulations show that, in the absence of any external heating, the hot tip of the spicule cools rapidly before reaching a substantial coronal height. Consequently, the tip emission in coronal lines like Fe XII (195 Å) and Fe XIV (274 Å) is short-lived (Figure 5) and confined to low altitudes. This result is consistent with the earlier studies of Klimchuk (2012) and Klimchuk & Bradshaw (2014).

Relative emission contributions of hot tip and shock-heated plasma - Our simulations show that the pre-existing material in the loop is heated through shock compression and thermal conduction. However, the time-integrated emission from this heated pre-existing material is less than that from the hot tip, as shown in Figure 5. The tip plasma is hot for a much shorter time, but it is inherently much brighter because of its greater density (it is injected in a dense state).

Line profile discrepancies - The shapes of our synthetic spectral line profiles show significant discrepancies with observations. The simulated profiles are highly non-Gaussian and far more asymmetric than observed. The strong blue shift (∼150 km s^-1) of the synthetic lines is also inconsistent with the mild Doppler shifts (< 5 km s^-1) observed in the quiet Sun and active regions.

Excessive number of spicules required to explain observed intensities - The spatially and temporally averaged intensities from our simulations (Figure 6) imply that far more spicules are required to reproduce the observed emission from the solar disk than are observed (Figure 8). The discrepancies are up to a factor of 100 for the quiet Sun and factors of 10-10^3 for active regions. These factors apply specifically to Run1, where a spicule with a 2 MK tip is ejected at a velocity of 150 km s^-1. As listed in Table 2, the loops in our other simulations, with different combinations of tip temperature and ejection velocity, are fainter, and therefore more of them would be required to reproduce the observed disk emission, exacerbating the discrepancy.

Ratio of loops with and without spicules - Under the assumption that the corona is comprised of hot loops with and without spicule ejections, red-blue spectral line asymmetries similar to those observed (0.04) require far more loops without spicules than with them. The spicule to non-spicule loop number ratio is 1 : 150 for the Fe XII line and 1 : 72 for the Fe XIV line.

Our simulations indicate that spicules contribute a relatively minor amount to the mass and energy of the corona. Such a claim had already been made by Klimchuk (2012), where it was shown that hot tip material rapidly expanding into the corona is unable to explain the observed coronal emission.
However, a bodily ejection of the spicule was not considered there, and the emission from ambient material affected by the ejection was not rigorously investigated (though see Appendix B in that paper). Later, Petralia et al. (2014) argued that the shock-heated material in front of an ejected cold spicule might be erroneously interpreted as ejected hot material, but they did not compare the brightness of the shock-heated material with coronal observations. Our numerical simulations improve on both of these studies. We show that neither the expanding hot tip nor the shock-heated ambient material of a bodily ejected spicule can reproduce coronal observations; a number of discrepancies exist. The existence of some coronal heating mechanism - operating in the corona itself - is required to explain the hot corona. It is not sufficient to eject hot (or cold) material into the corona from below. We emphasize that our conclusion does not rule out the possibility that waves may be launched into the corona as part of the spicule ejection process, or that new coronal currents may be created outside the flux tube in which the ejected material resides, as suggested by Martínez-Sykora et al. (2018). Such waves and currents would lead to coronal heating and could explain at least some non-spicule loops. It seems doubtful, however, that this could explain the many non-spicule loops implied by the observed line profile asymmetries. It seems that some type of heating unrelated to spicules must play the primary role in explaining hot coronal plasma.

We thank the anonymous referee for her/his comments, which improved the clarity of the paper. SSM & AS thank Dr. Jishnu Bhattacharyya for many useful discussions. Computations were carried out on the Physical Research Laboratory's VIKRAM cluster. JAK was supported by the Internal Scientist Funding Model (competed work package program) at Goddard Space Flight Center.

A. INITIAL EQUILIBRIUM

We inject spicules into a magnetic structure that is in static equilibrium. Such an equilibrium is achieved recursively: the final equilibrium profile is obtained through two stages of relaxation. First, we obtain the density and temperature profiles by solving the hydrostatic and energy balance equations (Aschwanden & Schrijver 2002), assuming a steady and uniform background heating Q_bg. The CHIANTI radiative loss function Λ(T) is used to describe the loop's radiation in the energy balance equation. The desired loop-top temperature is achieved by adjusting the value of Q_bg. However, due to the lack of exact energy balance, the temperature and density profiles derived in this way do not achieve a perfect equilibrium state. Rather, these derived profiles are then used to calculate the final equilibrium loop profile, such that the resulting temperature never drops below the chromospheric temperature T_ch (2 × 10^4 K) and the system does not generate any spurious velocity. In the following, we explain these two stages in detail.

A.1. Heating and cooling in Relaxation-I:

Starting with the initial profiles described above, the loop is allowed to relax under gravity with the constant background heating Q_bg. To avoid numerical artifacts, from this stage onward we smoothly reduce the radiative cooling of the chromosphere to zero over a narrow temperature range between T_ch and T_min, where T_min = 1.95 × 10^4 K is a conveniently chosen temperature slightly less than T_ch.
This is achieved with a modified radiative loss function λ(T), in which the optically thin CHIANTI loss function Λ(T) is smoothly ramped to zero between T_ch and T_min. The modified function λ(T) is plotted in Figure 10. As the loop relaxes, material drains from the corona and accumulates at the footpoints. The resulting high density at the loop footpoints gives rise to excessive cooling, bringing the footpoint temperatures below T_min and generating short-lived velocities. However, the loop eventually achieves a steady state, and we use the enhanced footpoint density at that time (n_base) to estimate the additional heating required to keep the chromospheric temperature above T_min. This is carried out in the next relaxation stage.

A.2. Heating and cooling in Relaxation-II:

Once again, we start with the initial density and temperature profiles from the beginning of the first stage. However, this time we apply additional heating in the chromosphere, above the constant background heating Q_bg. This prevents the plasma from cooling below T_min and instead lets it hover between T_ch and T_min. The total heating function Q consists of Q_bg plus an additional chromospheric term, where Q_ch = n_ch^2 Λ(T_ch) is the heat required to balance the radiative losses from the footpoint plasma of the initial loop profile at temperature T_ch and density n_ch. Figure 10 graphically depicts the radiative loss and heating functions that are maintained throughout the simulation.

B. VARIATION OF SHOCK SPEED WITH HEIGHT

For a pressure-driven shock, the shock's speed primarily depends on the pressure difference between the spicule's tip and the ambient medium through which it propagates. The lower pressure close to the loop apex provides less resistance to the shock propagation, and hence the shock speed increases there. Conversely, the higher pressure close to the footpoints provides greater resistance, and the shock speed decreases. For a better understanding, we track the shock front along the loop and derive its speed during its propagation. The shock front at any instant can be identified from the density jump moving ahead of the injected spicule material; we locate this jump at each time, and its position also coincides with the maximum temperature in the loop. Once the locations of the shock front along the loop are identified, differentiating them with respect to time gives the instantaneous shock speed as a function of the loop coordinate.

Figure 10. Radiative loss function and excess heating implemented to prevent cooling of plasma below T_min. When the temperature of the loop is above T_ch, the radiative loss is given by the CHIANTI radiative loss function Λ(T) (Equation A1). The only heating applied to the loop at these temperatures is the uniform background heating Q_bg. The radiative cooling smoothly goes to zero as the temperature approaches T_min from above. In the temperature range T_min to T_ch, depending on the plasma density and temperature, additional heating is provided to the loop to balance the energy lost through radiative cooling. Below the temperature T_min, an additional heating Q_ch, proportional to n_ch^2, is applied to bring the plasma back to T_ch.

Figure 11. Variation of the shock speed as a function of the loop coordinate for an injection speed of 150 km s^-1 and tip temperatures of 2, 1 and 0.02 MK.

Figure 11 shows the variation of shock speed as a function of the loop coordinate for three different shocks, all ejected with velocity 150 km s^-1 but with three different tip temperatures, viz. 2, 1 and 0.02 MK.
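The tracking procedure just described is straightforward to sketch. Assuming the simulation output is available as arrays of density versus loop coordinate at successive times (the array names below are placeholders for whatever the code actually writes out), the shock position can be taken as the location of the steepest density drop ahead of the compressed spicule material, and the shock speed follows from differencing those positions in time.

import numpy as np

def track_shock(s, rho_st, t):
    """Locate the shock front at each output time and difference to get its speed.

    s      : 1-D array of loop coordinates (cm)
    rho_st : 2-D array, rho_st[i] = density along the loop at time t[i]
    t      : 1-D array of output times (s)
    """
    # The shock front is taken as the steepest (most negative) density
    # gradient, i.e. the drop from the compressed post-shock material to the
    # undisturbed upstream plasma.
    fronts = np.array([s[np.argmin(np.gradient(rho, s))] for rho in rho_st])
    speeds = np.gradient(fronts, t)   # instantaneous shock speed along the loop
    return fronts, speeds

Differencing the tracked positions in this way is essentially the procedure behind Figure 11.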
Though the shock speeds increase at the loop apex for all three shocks, the velocity amplitudes depend on the injection temperatures and thus on the pressures: the larger the tip temperature, the higher the spicule tip pressure, and hence the larger the shock speed.

C. FORWARD MODELLING

Spectral profiles provide a wealth of information about the plasma dynamics along the line of sight (LOS). Adapting the method outlined in Patsourakos & Klimchuk (2006), synthetic spectral line profiles are constructed at each numerical grid cell using the cell's density, velocity and temperature. At any given time t and location along the loop s, the line profile is a Gaussian,

I(v; s, t) = I_0 exp[−(v − v_shift)^2 / (2 v_width^2)],

where I_0 is the amplitude, v_shift is the Doppler shift, and v_width is the thermal line width. The amplitude is given by

I_0(s, t) = n_e^2 G(T) ds,

where n_e, T and ds denote the electron number density, temperature, and length of the cell. The contribution function G(T) for the line is taken from the CHIANTI atomic database (Landi et al. 2013). The Doppler shift equals the line-of-sight velocity of the cell,

v_shift = v_los, (C5)

in wavelength units, and the thermal width is given by

v_width = (2 k_B T / m_ion)^{1/2},

where m_ion is the mass of the ion. Once the line profile at each grid point is constructed, spatial averaging is performed by summing the profiles along the loop and dividing by the number of pixels spanned by its projected length, assuming that the loop lies in a vertical plane and is viewed from above, where L is the loop length and d is the pixel dimension. The loop is assumed to have a cross section of d^2. Finally, the spatially averaged line profiles are temporally averaged over a time τ, taken to be the travel time of the shock along the loop:

⟨I⟩_{spatial, temporal} = (1/τ) Σ_t ⟨I(t)⟩_{spatial} Δt. (C8)
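The whole Appendix C pipeline reduces to a few array operations. The sketch below builds the per-cell Gaussian profiles on a common velocity grid, sums them along the loop, and applies the spatial and temporal normalizations. The contribution-function lookup g_of_T is a placeholder for a CHIANTI table interpolation, and the LOS projection of the cell velocities is assumed to have been done upstream.

import numpy as np

K_B = 1.380649e-16           # erg K^-1
M_FE = 56.0 * 1.6726e-24     # g, approximate Fe ion mass

def cell_profile(v_grid, n_e, T, v_los, ds, g_of_T):
    """Gaussian emission profile of one grid cell on a velocity grid (cm/s)."""
    amp = n_e**2 * g_of_T(T) * ds              # I_0 = n_e^2 G(T) ds
    v_width = np.sqrt(2.0 * K_B * T / M_FE)    # thermal line width
    return amp * np.exp(-((v_grid - v_los) ** 2) / (2.0 * v_width**2))

def loop_profile(v_grid, cells, L, d, g_of_T):
    """Spatially averaged profile: sum over cells, divide by pixels spanned.

    cells : iterable of (n_e, T, v_los, ds) tuples along the loop.
    A semicircular loop of length L seen from above projects to 2L/pi.
    """
    total = np.zeros_like(v_grid)
    for n_e, T, v_los, ds in cells:
        total += cell_profile(v_grid, n_e, T, v_los, ds, g_of_T)
    n_pix = (2.0 * L / np.pi) / d
    return total / n_pix

def averaged_profile(profiles, dt, tau):
    """Temporal average over the shock travel time tau (equation C8)."""
    return sum(p * dt for p in profiles) / tau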
/*-------------------------------------------------------------------------------

	lightSphere

	Adds a circle of light to the lightmap at x and y with the supplied
	radius and intensity; casts no shadows

	intensity can be from -255 to 255

-------------------------------------------------------------------------------*/

light_t* lightSphere(Sint32 x, Sint32 y, Sint32 radius, Sint32 intensity)
{
	light_t* light;
	Sint32 u, v;
	Sint32 dx, dy;

	if ( intensity == 0 )
	{
		return NULL;
	}

	// clamp the intensity to the documented range before it is used anywhere
	intensity = std::min(std::max(-255, intensity), 255);
	light = newLight(x, y, radius, intensity);

	for ( v = y - radius; v <= y + radius; v++ )
	{
		for ( u = x - radius; u <= x + radius; u++ )
		{
			if ( u >= 0 && v >= 0 && u < map.width && v < map.height )
			{
				dx = u - x;
				dy = v - y;
				// linear falloff: full intensity at the center, zero at the edge of the radius
				light->tiles[(dy + radius) + (dx + radius) * (radius * 2 + 1)] = intensity - intensity * std::min<float>(sqrtf(dx * dx + dy * dy) / radius, 1.0f);
				lightmap[v + u * map.height] += light->tiles[(dy + radius) + (dx + radius) * (radius * 2 + 1)];
			}
		}
	}

	return light;
}
#include <cstdio>
#include <cstdlib>
#include <cassert>
#include <vector>

#include "all-procedures.cpp"

class Test {

    std::vector<int32_t> input;
    size_t max_size;
    bool ok;

public:
    Test(size_t size_)
        : max_size(size_)
        , ok(true) {

        fill_ascending();
    }

    bool all_ok() const {
        return ok;
    }

    template <typename FUNCTION>
    void test(const char* name, FUNCTION fun);

private:
    template <typename FUNCTION>
    bool run(FUNCTION fun);

    void fill_ascending() {
        input.clear();
        input.resize(max_size); // resize (not reserve) so the writes below stay in bounds
        for (size_t i=0; i < max_size; i++) {
            input[i] = i;
        }
    }
};

template <typename FUNCTION>
bool Test::run(FUNCTION fun) {

    // case 1: every prefix of an ascending array is sorted
    for (size_t i=0; i < max_size; i++) {
        const bool expected = true;
        const bool ret = fun(&input[0], i);
        if (ret != expected) {
            printf("case 1, size = %lu failed\n", i);
            return false;
        }
    }

    // case 2: corrupting a single element must make the array unsorted
    for (size_t size=3; size < max_size; size++) {
        for (size_t i=0; i < size; i++) {
            const int32_t prev = input[i];
            input[i] = (i == 0) ? prev + 2 : -1;

            const bool expected = false;
            const bool ret = fun(&input[0], size);
            if (ret != expected) {
                printf("case 2, size = %lu, position = %lu failed\n", size, i);
                for (size_t j=0; j < size; j++) {
                    printf(" %d", input[j]);
                }
                putchar('\n');
                return false;
            }

            input[i] = prev;
        }
    }

    return true;
}

template <typename FUNCTION>
void Test::test(const char* name, FUNCTION fun) {
    printf("testing %s", name);
    fflush(stdout);
    if (run(fun)) {
        puts(" OK");
    } else {
        ok = false;
    }
}

int main() {

    Test test(1024);

    test.test("scalar", is_sorted);
    test.test("SSE (generic)", is_sorted_sse_generic);
    test.test("SSE (generic, unrolled 4 times)", is_sorted_sse_generic_unrolled4);
    test.test("SSE", is_sorted_sse);
    test.test("SSE (unrolled 4 times)", is_sorted_sse_unrolled4);
#ifdef HAVE_AVX2
    test.test("AVX2 (generic)", is_sorted_avx2_generic);
    test.test("AVX2 (generic, unrolled 4 times)", is_sorted_avx2_generic_unrolled4);
    test.test("AVX2", is_sorted_avx2);
    test.test("AVX2 (unrolled 4 times)", is_sorted_avx2_unrolled4);
#endif // HAVE_AVX2
#ifdef HAVE_AVX512
    test.test("AVX512", is_sorted_avx512);
    test.test("AVX512 (generic)", is_sorted_avx512_generic);
#endif // HAVE_AVX512

    if (test.all_ok()) {
        puts("All OK");
        return EXIT_SUCCESS;
    } else {
        return EXIT_FAILURE;
    }
}
{-# LANGUAGE DataKinds #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE TypeOperators #-} module Main where import Network.Wai.Handler.Warp (run) import Network.Wai.Middleware.RequestLogger (logStdout) import Reviews import Reviews.Types.Common main :: IO () main = do ctx <- createContext "./config.dhall" putStrLn $ "Starting webserver on port: " ++ show (port' ctx) run (port' ctx) $ application ctx where port' = fromIntegral . port . config application = logStdout . app
import caffe2.python.hypothesis_test_util as hu import hypothesis.strategies as st import numpy as np import numpy.testing as npt from caffe2.python import core, workspace from hypothesis import given class TestEnsureClipped(hu.HypothesisTestCase): @given( X=hu.arrays(dims=[5, 10], elements=hu.floats(min_value=-1.0, max_value=1.0)), in_place=st.booleans(), sparse=st.booleans(), indices=hu.arrays(dims=[5], elements=st.booleans()), **hu.gcs_cpu_only ) def test_ensure_clipped(self, X, in_place, sparse, indices, gc, dc): if (not in_place) and sparse: return param = X.astype(np.float32) m, n = param.shape indices = np.array(np.nonzero(indices)[0], dtype=np.int64) grad = np.random.rand(len(indices), n) workspace.FeedBlob("indices", indices) workspace.FeedBlob("grad", grad) workspace.FeedBlob("param", param) input = ["param", "indices", "grad"] if sparse else ["param"] output = "param" if in_place else "output" op = core.CreateOperator("EnsureClipped", input, output, min=0.0) workspace.RunOperatorOnce(op) def ref(): return ( np.array( [np.clip(X[i], 0, None) if i in indices else X[i] for i in range(m)] ) if sparse else np.clip(X, 0, None) ) npt.assert_allclose(workspace.blobs[output], ref(), rtol=1e-3)
/*
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package upgrade

import (
	"fmt"
	"os"
	"strings"
	"time"

	"k8s.io/apimachinery/pkg/util/version"
	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
	"k8s.io/kubernetes/cmd/kubeadm/app/constants"
	certsphase "k8s.io/kubernetes/cmd/kubeadm/app/phases/certs"
	"k8s.io/kubernetes/cmd/kubeadm/app/phases/certs/renewal"
	controlplanephase "k8s.io/kubernetes/cmd/kubeadm/app/phases/controlplane"
	etcdphase "k8s.io/kubernetes/cmd/kubeadm/app/phases/etcd"
	"k8s.io/kubernetes/cmd/kubeadm/app/util"
	"k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient"
	etcdutil "k8s.io/kubernetes/cmd/kubeadm/app/util/etcd"
	"k8s.io/kubernetes/cmd/kubeadm/app/util/staticpod"
)

const (
	// UpgradeManifestTimeout is the timeout for upgrading the static pod manifest
	UpgradeManifestTimeout = 5 * time.Minute
)

// StaticPodPathManager is responsible for tracking the directories used in the static pod upgrade transition
type StaticPodPathManager interface {
	// MoveFile should move a file from oldPath to newPath
	MoveFile(oldPath, newPath string) error
	// RealManifestPath gets the file path for the component in the "real" static pod manifest directory used by the kubelet
	RealManifestPath(component string) string
	// RealManifestDir should point to the static pod manifest directory used by the kubelet
	RealManifestDir() string
	// TempManifestPath gets the file path for the component in the temporary directory created for generating new manifests for the upgrade
	TempManifestPath(component string) string
	// TempManifestDir should point to the temporary directory created for generating new manifests for the upgrade
	TempManifestDir() string
	// BackupManifestPath gets the file path for the component in the backup directory used for backing up manifests during the transition
	BackupManifestPath(component string) string
	// BackupManifestDir should point to the backup directory used for backing up manifests during the transition
	BackupManifestDir() string
	// BackupEtcdDir should point to the backup directory used for backing up etcd during the transition
	BackupEtcdDir() string
	// CleanupDirs cleans up all temporary directories
	CleanupDirs() error
}

// KubeStaticPodPathManager is a real implementation of StaticPodPathManager that is used when upgrading a static pod cluster
type KubeStaticPodPathManager struct {
	realManifestDir   string
	tempManifestDir   string
	backupManifestDir string
	backupEtcdDir     string

	keepManifestDir bool
	keepEtcdDir     bool
}

// NewKubeStaticPodPathManager creates a new instance of KubeStaticPodPathManager
func NewKubeStaticPodPathManager(realDir, tempDir, backupDir, backupEtcdDir string, keepManifestDir, keepEtcdDir bool) StaticPodPathManager {
	return &KubeStaticPodPathManager{
		realManifestDir:   realDir,
		tempManifestDir:   tempDir,
		backupManifestDir: backupDir,
		backupEtcdDir:     backupEtcdDir,
		keepManifestDir:   keepManifestDir,
		keepEtcdDir:       keepEtcdDir,
	}
}

// NewKubeStaticPodPathManagerUsingTempDirs creates a new instance of KubeStaticPodPathManager with temporary directories backing it
func NewKubeStaticPodPathManagerUsingTempDirs(realManifestDir string, saveManifestsDir, saveEtcdDir bool) (StaticPodPathManager, error) {
	upgradedManifestsDir, err := constants.CreateTempDirForKubeadm("kubeadm-upgraded-manifests")
	if err != nil {
		return nil, err
	}
	backupManifestsDir, err := constants.CreateTimestampDirForKubeadm("kubeadm-backup-manifests")
	if err != nil {
		return nil, err
	}
	backupEtcdDir, err := constants.CreateTimestampDirForKubeadm("kubeadm-backup-etcd")
	if err != nil {
		return nil, err
	}

	return NewKubeStaticPodPathManager(realManifestDir, upgradedManifestsDir, backupManifestsDir, backupEtcdDir, saveManifestsDir, saveEtcdDir), nil
}

// MoveFile should move a file from oldPath to newPath
func (spm *KubeStaticPodPathManager) MoveFile(oldPath, newPath string) error {
	return os.Rename(oldPath, newPath)
}

// RealManifestPath gets the file path for the component in the "real" static pod manifest directory used by the kubelet
func (spm *KubeStaticPodPathManager) RealManifestPath(component string) string {
	return constants.GetStaticPodFilepath(component, spm.realManifestDir)
}

// RealManifestDir should point to the static pod manifest directory used by the kubelet
func (spm *KubeStaticPodPathManager) RealManifestDir() string {
	return spm.realManifestDir
}

// TempManifestPath gets the file path for the component in the temporary directory created for generating new manifests for the upgrade
func (spm *KubeStaticPodPathManager) TempManifestPath(component string) string {
	return constants.GetStaticPodFilepath(component, spm.tempManifestDir)
}

// TempManifestDir should point to the temporary directory created for generating new manifests for the upgrade
func (spm *KubeStaticPodPathManager) TempManifestDir() string {
	return spm.tempManifestDir
}

// BackupManifestPath gets the file path for the component in the backup directory used for backing up manifests during the transition
func (spm *KubeStaticPodPathManager) BackupManifestPath(component string) string {
	return constants.GetStaticPodFilepath(component, spm.backupManifestDir)
}

// BackupManifestDir should point to the backup directory used for backing up manifests during the transition
func (spm *KubeStaticPodPathManager) BackupManifestDir() string {
	return spm.backupManifestDir
}

// BackupEtcdDir should point to the backup directory used for backing up etcd during the transition
func (spm *KubeStaticPodPathManager) BackupEtcdDir() string {
	return spm.backupEtcdDir
}

// CleanupDirs cleans up all temporary directories except those the user has requested to keep around
func (spm *KubeStaticPodPathManager) CleanupDirs() error {
	if err := os.RemoveAll(spm.TempManifestDir()); err != nil {
		return err
	}
	if !spm.keepManifestDir {
		if err := os.RemoveAll(spm.BackupManifestDir()); err != nil {
			return err
		}
	}

	if !spm.keepEtcdDir {
		if err := os.RemoveAll(spm.BackupEtcdDir()); err != nil {
			return err
		}
	}

	return nil
}

func upgradeComponent(component string, waiter apiclient.Waiter, pathMgr StaticPodPathManager, cfg *kubeadmapi.InitConfiguration, beforePodHash string, recoverManifests map[string]string, isTLSUpgrade bool) error {
	// Special treatment is required for etcd case, when rollbackOldManifests should roll back etcd
	// manifests only for the case when component is Etcd
	recoverEtcd := false
	waitForComponentRestart := true
	if component == constants.Etcd {
		recoverEtcd = true
	}
	if isTLSUpgrade {
		// We currently depend on getting the Etcd mirror Pod hash from the KubeAPIServer;
		// Upgrading the Etcd protocol takes down the apiserver, so we can't
		// verify component restarts if we restart Etcd independently.
		// Skip waiting for Etcd to restart and immediately move on to updating the apiserver.
		if component == constants.Etcd {
			waitForComponentRestart = false
		}
		// Normally, if an Etcd upgrade is successful, but the apiserver upgrade fails, Etcd is not rolled back.
		// In the case of a TLS upgrade, the old KubeAPIServer config is incompatible with the new Etcd config, so we roll back Etcd
		// if the APIServer upgrade fails.
		if component == constants.KubeAPIServer {
			recoverEtcd = true
			fmt.Printf("[upgrade/staticpods] The %s manifest will be restored if component %q fails to upgrade\n", constants.Etcd, component)
		}
	}

	if err := renewCerts(cfg, component); err != nil {
		return fmt.Errorf("failed to renew certificates for component %q: %v", component, err)
	}

	// The old manifest is here, in the /etc/kubernetes/manifests/ directory
	currentManifestPath := pathMgr.RealManifestPath(component)
	// The new, upgraded manifest will be written here
	newManifestPath := pathMgr.TempManifestPath(component)
	// The old manifest will be moved here, into a subfolder of the temporary directory
	// If a rollback is needed, these manifests will be put back to where they were initially
	backupManifestPath := pathMgr.BackupManifestPath(component)

	// Store the backup path in the recover list. If something goes wrong now, this component will be rolled back.
	recoverManifests[component] = backupManifestPath

	// Skip upgrade if current and new manifests are equal
	equal, err := staticpod.ManifestFilesAreEqual(currentManifestPath, newManifestPath)
	if err != nil {
		return err
	}
	if equal {
		fmt.Printf("[upgrade/staticpods] current and new manifests of %s are equal, skipping upgrade\n", component)
		return nil
	}

	// Move the old manifest into the old-manifests directory
	if err := pathMgr.MoveFile(currentManifestPath, backupManifestPath); err != nil {
		return rollbackOldManifests(recoverManifests, err, pathMgr, recoverEtcd)
	}

	// Move the new manifest into the manifests directory
	if err := pathMgr.MoveFile(newManifestPath, currentManifestPath); err != nil {
		return rollbackOldManifests(recoverManifests, err, pathMgr, recoverEtcd)
	}

	fmt.Printf("[upgrade/staticpods] Moved new manifest to %q and backed up old manifest to %q\n", currentManifestPath, backupManifestPath)

	if waitForComponentRestart {
		fmt.Println("[upgrade/staticpods] Waiting for the kubelet to restart the component")
		fmt.Printf("[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout %v)\n", UpgradeManifestTimeout)

		// Wait for the mirror Pod hash to change; otherwise we'll run into race conditions here when the kubelet hasn't had time to
		// notice the removal of the Static Pod, leading to a false positive below where we check that the API endpoint is healthy.
		// If we don't do this, there is a case where we remove the Static Pod manifest, kubelet is slow to react, kubeadm checks the
		// API endpoint below of the OLD Static Pod component and proceeds quickly enough, which might lead to unexpected results.
		if err := waiter.WaitForStaticPodHashChange(cfg.NodeRegistration.Name, component, beforePodHash); err != nil {
			return rollbackOldManifests(recoverManifests, err, pathMgr, recoverEtcd)
		}

		// Wait for the static pod component to come up and register itself as a mirror pod
		if err := waiter.WaitForPodsWithLabel("component=" + component); err != nil {
			return rollbackOldManifests(recoverManifests, err, pathMgr, recoverEtcd)
		}

		fmt.Printf("[upgrade/staticpods] Component %q upgraded successfully!\n", component)
	} else {
		fmt.Printf("[upgrade/staticpods] Not waiting for pod-hash change for component %q\n", component)
	}

	return nil
}

// performEtcdStaticPodUpgrade performs the upgrade of etcd. It returns a bool indicating whether the error is fatal, along with the actual error.
func performEtcdStaticPodUpgrade(waiter apiclient.Waiter, pathMgr StaticPodPathManager, cfg *kubeadmapi.InitConfiguration, recoverManifests map[string]string, isTLSUpgrade bool, oldEtcdClient, newEtcdClient etcdutil.ClusterInterrogator) (bool, error) {
	// Add etcd static pod spec only if external etcd is not configured
	if cfg.Etcd.External != nil {
		return false, fmt.Errorf("external etcd detected, won't try to change any etcd state")
	}

	// Checking health state of etcd before proceeding with the upgrade
	_, err := oldEtcdClient.GetClusterStatus()
	if err != nil {
		return true, fmt.Errorf("etcd cluster is not healthy: %v", err)
	}

	// Backing up etcd data store
	backupEtcdDir := pathMgr.BackupEtcdDir()
	runningEtcdDir := cfg.Etcd.Local.DataDir
	if err := util.CopyDir(runningEtcdDir, backupEtcdDir); err != nil {
		return true, fmt.Errorf("failed to back up etcd data: %v", err)
	}

	// Check the currently used version against the version from constants; if they differ, upgrade
	desiredEtcdVersion, err := constants.EtcdSupportedVersion(cfg.KubernetesVersion)
	if err != nil {
		return true, fmt.Errorf("failed to retrieve an etcd version for the target kubernetes version: %v", err)
	}
	currentEtcdVersionStr, err := oldEtcdClient.GetVersion()
	if err != nil {
		return true, fmt.Errorf("failed to retrieve the current etcd version: %v", err)
	}
	currentEtcdVersion, err := version.ParseSemantic(currentEtcdVersionStr)
	if err != nil {
		return true, fmt.Errorf("failed to parse the current etcd version(%s): %v", currentEtcdVersionStr, err)
	}

	// Compare the current etcd version with the desired one, to catch a same-version or downgrade condition and fail on it.
	if desiredEtcdVersion.LessThan(currentEtcdVersion) {
		return false, fmt.Errorf("the desired etcd version for this Kubernetes version %q is %q, but the current etcd version is %q. Won't downgrade etcd, instead just continue", cfg.KubernetesVersion, desiredEtcdVersion.String(), currentEtcdVersion.String())
	}
	// For the case when desired etcd version is the same as current etcd version
	if strings.Compare(desiredEtcdVersion.String(), currentEtcdVersion.String()) == 0 {
		return false, nil
	}

	beforeEtcdPodHash, err := waiter.WaitForStaticPodSingleHash(cfg.NodeRegistration.Name, constants.Etcd)
	if err != nil {
		return true, fmt.Errorf("failed to get etcd pod's hash: %v", err)
	}

	// Write the updated etcd static Pod manifest into the temporary directory; at this point no etcd change
	// has occurred in any aspect.
	if err := etcdphase.CreateLocalEtcdStaticPodManifestFile(pathMgr.TempManifestDir(), cfg); err != nil {
		return true, fmt.Errorf("error creating local etcd static pod manifest file: %v", err)
	}

	// Waiter configurations for checking etcd status
	noDelay := 0 * time.Second
	podRestartDelay := noDelay
	if isTLSUpgrade {
		// If we are upgrading TLS we need to wait for old static pod to be removed.
		// This is needed because we are not able to currently verify that the static pod
		// has been updated through the apiserver across an etcd TLS upgrade.
		// This value is arbitrary but seems to be long enough in manual testing.
		podRestartDelay = 30 * time.Second
	}
	retries := 10
	retryInterval := 15 * time.Second

	// Perform the etcd upgrade using the function common to all control plane components
	if err := upgradeComponent(constants.Etcd, waiter, pathMgr, cfg, beforeEtcdPodHash, recoverManifests, isTLSUpgrade); err != nil {
		fmt.Printf("[upgrade/etcd] Failed to upgrade etcd: %v\n", err)
		// Since the component upgrade failed, the old etcd manifest has either been restored or was never touched.
		// Now we need to check whether the etcd cluster is healthy and up with the old manifest.
		fmt.Println("[upgrade/etcd] Waiting for previous etcd to become available")
		if _, err := oldEtcdClient.WaitForClusterAvailable(noDelay, retries, retryInterval); err != nil {
			fmt.Printf("[upgrade/etcd] Failed to healthcheck previous etcd: %v\n", err)

			// At this point we know that the etcd cluster is dead and it is safe to copy the backup datastore and to roll back the old etcd manifest
			fmt.Println("[upgrade/etcd] Rolling back etcd data")
			if err := rollbackEtcdData(cfg, pathMgr); err != nil {
				// Even copying back datastore failed, no options for recovery left, bailing out
				return true, fmt.Errorf("fatal error rolling back local etcd cluster datadir: %v, the backup of etcd database is stored here:(%s)", err, backupEtcdDir)
			}
			fmt.Println("[upgrade/etcd] Etcd data rollback successful")

			// Now that we've rolled back the data, let's check if the cluster comes up
			fmt.Println("[upgrade/etcd] Waiting for previous etcd to become available")
			if _, err := oldEtcdClient.WaitForClusterAvailable(noDelay, retries, retryInterval); err != nil {
				fmt.Printf("[upgrade/etcd] Failed to healthcheck previous etcd: %v\n", err)
				// Nothing else left to try to recover etcd cluster
				return true, fmt.Errorf("fatal error rolling back local etcd cluster manifest: %v, the backup of etcd database is stored here:(%s)", err, backupEtcdDir)
			}

			// We've recovered the previous etcd in this case
		}
		fmt.Println("[upgrade/etcd] Etcd was rolled back and is now available")

		// Since etcd cluster came back up with the old manifest
		return true, fmt.Errorf("fatal error when trying to upgrade the etcd cluster: %v, rolled the state back to pre-upgrade state", err)
	}

	// Initialize the new etcd client if it wasn't pre-initialized
	if newEtcdClient == nil {
		client, err := etcdutil.NewFromStaticPod(
			[]string{fmt.Sprintf("localhost:%d", constants.EtcdListenClientPort)},
			constants.GetStaticPodDirectory(),
			cfg.CertificatesDir,
		)
		if err != nil {
			return true, fmt.Errorf("fatal error creating etcd client: %v", err)
		}
		newEtcdClient = client
	}

	// Checking health state of etcd after the upgrade
	fmt.Println("[upgrade/etcd] Waiting for etcd to become available")
	if _, err = newEtcdClient.WaitForClusterAvailable(podRestartDelay, retries, retryInterval); err != nil {
		fmt.Printf("[upgrade/etcd] Failed to healthcheck etcd: %v\n", err)
		// Despite the fact that upgradeComponent was successful, there is something wrong with the etcd cluster
		// The first step is to restore the backup of the datastore
		fmt.Println("[upgrade/etcd] Rolling back etcd data")
		if err := rollbackEtcdData(cfg, pathMgr); err != nil {
			// Even copying back datastore failed, no options for recovery left, bailing out
			return true, fmt.Errorf("fatal error rolling back local etcd cluster datadir: %v, the backup of etcd database is stored here:(%s)", err, backupEtcdDir)
		}
		fmt.Println("[upgrade/etcd] Etcd data rollback successful")

		// Old datastore has been copied, rolling back old manifests
		fmt.Println("[upgrade/etcd] Rolling back etcd manifest")
		rollbackOldManifests(recoverManifests, err, pathMgr, true)
		// rollbackOldManifests() always returns an error -- ignore it and continue

		// Assuming rollback of the old etcd manifest was successful, check the status of etcd cluster again
		fmt.Println("[upgrade/etcd] Waiting for previous etcd to become available")
		if _, err := oldEtcdClient.WaitForClusterAvailable(noDelay, retries, retryInterval); err != nil {
			fmt.Printf("[upgrade/etcd] Failed to healthcheck previous etcd: %v\n", err)
			// Nothing else left to try to recover etcd cluster
			return true, fmt.Errorf("fatal error rolling back local etcd cluster manifest: %v, the backup of etcd database is stored here:(%s)", err, backupEtcdDir)
		}
		fmt.Println("[upgrade/etcd] Etcd was rolled back and is now available")

		// We've successfully rolled back etcd, and now return an error describing that the upgrade failed
		return true, fmt.Errorf("fatal error upgrading local etcd cluster: %v, rolled the state back to pre-upgrade state", err)
	}

	return false, nil
}

// StaticPodControlPlane upgrades a static pod-hosted control plane
func StaticPodControlPlane(waiter apiclient.Waiter, pathMgr StaticPodPathManager, cfg *kubeadmapi.InitConfiguration, etcdUpgrade bool, oldEtcdClient, newEtcdClient etcdutil.ClusterInterrogator) error {
	recoverManifests := map[string]string{}
	var isTLSUpgrade bool
	var isExternalEtcd bool

	beforePodHashMap, err := waiter.WaitForStaticPodControlPlaneHashes(cfg.NodeRegistration.Name)
	if err != nil {
		return err
	}

	if oldEtcdClient == nil {
		if cfg.Etcd.External != nil {
			// External etcd
			isExternalEtcd = true
			client, err := etcdutil.New(
				cfg.Etcd.External.Endpoints,
				cfg.Etcd.External.CAFile,
				cfg.Etcd.External.CertFile,
				cfg.Etcd.External.KeyFile,
			)
			if err != nil {
				return fmt.Errorf("failed to create etcd client for external etcd: %v", err)
			}
			oldEtcdClient = client
			// Since etcd is managed externally, the new etcd client will be the same as the old client
			if newEtcdClient == nil {
				newEtcdClient = client
			}
		} else {
			// etcd Static Pod
			client, err := etcdutil.NewFromStaticPod(
				[]string{fmt.Sprintf("localhost:%d", constants.EtcdListenClientPort)},
				constants.GetStaticPodDirectory(),
				cfg.CertificatesDir,
			)
			if err != nil {
				return fmt.Errorf("failed to create etcd client: %v", err)
			}
			oldEtcdClient = client
		}
	}

	// etcd upgrade is done prior to other control plane components
	if !isExternalEtcd && etcdUpgrade {
		previousEtcdHasTLS := oldEtcdClient.HasTLS()

		// set the TLS upgrade flag for all components
		isTLSUpgrade = !previousEtcdHasTLS
		if isTLSUpgrade {
			fmt.Printf("[upgrade/etcd] Upgrading to TLS for %s\n", constants.Etcd)
		}

		// Perform the etcd upgrade using the function common to all control plane components
		fatal, err := performEtcdStaticPodUpgrade(waiter, pathMgr, cfg, recoverManifests, isTLSUpgrade, oldEtcdClient, newEtcdClient)
		if err != nil {
			if fatal {
				return err
			}
			fmt.Printf("[upgrade/etcd] non fatal issue encountered during upgrade: %v\n", err)
		}
	}

	// Write the updated static Pod manifests into the temporary directory
	fmt.Printf("[upgrade/staticpods] Writing new Static Pod manifests to %q\n", pathMgr.TempManifestDir())
	err = controlplanephase.CreateInitStaticPodManifestFiles(pathMgr.TempManifestDir(), cfg)
	if err != nil {
		return fmt.Errorf("error creating init static pod manifest files: %v", err)
	}

	for _, component := range constants.MasterComponents {
		if err = upgradeComponent(component, waiter, pathMgr, cfg, beforePodHashMap[component], recoverManifests, isTLSUpgrade); err != nil {
			return err
		}
	}

	// Remove the temporary directories used on a best-effort basis (don't fail if the calls error out)
	// The calls are set here by design; we should _not_ use "defer" above as that would remove the directories
	// even in the "fail and rollback" case, where we want the directories preserved for the user.
	return pathMgr.CleanupDirs()
}

// rollbackOldManifests rolls back the backed-up manifests if something went wrong.
// It always returns an error to the caller.
func rollbackOldManifests(oldManifests map[string]string, origErr error, pathMgr StaticPodPathManager, restoreEtcd bool) error {
	errs := []error{origErr}
	for component, backupPath := range oldManifests {
		// Will restore etcd manifest only if it was explicitly requested by setting restoreEtcd to true
		if component == constants.Etcd && !restoreEtcd {
			continue
		}
		// Where we should put back the backed up manifest
		realManifestPath := pathMgr.RealManifestPath(component)

		// Move the backup manifest back into the manifests directory
		err := pathMgr.MoveFile(backupPath, realManifestPath)
		if err != nil {
			errs = append(errs, err)
		}
	}
	// Let the user know there were problems, but we tried to recover
	return fmt.Errorf("couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: %v", errs)
}

// rollbackEtcdData rolls back the content of etcd folder if something went wrong.
// When the folder contents are successfully rolled back, nil is returned, otherwise an error is returned.
func rollbackEtcdData(cfg *kubeadmapi.InitConfiguration, pathMgr StaticPodPathManager) error {
	backupEtcdDir := pathMgr.BackupEtcdDir()
	runningEtcdDir := cfg.Etcd.Local.DataDir

	if err := util.CopyDir(backupEtcdDir, runningEtcdDir); err != nil {
		// Let the user know there were problems, but we tried to recover
		return fmt.Errorf("couldn't recover etcd database with error: %v, the location of etcd backup: %s ", err, backupEtcdDir)
	}

	return nil
}

func renewCerts(cfg *kubeadmapi.InitConfiguration, component string) error {
	if cfg.Etcd.Local != nil {
		// ensure etcd certs are loaded for etcd and kube-apiserver
		if component == constants.Etcd || component == constants.KubeAPIServer {
			caCert, caKey, err := certsphase.LoadCertificateAuthority(cfg.CertificatesDir, certsphase.KubeadmCertEtcdCA.BaseName)
			if err != nil {
				return fmt.Errorf("failed to upgrade the %s CA certificate and key: %v", constants.Etcd, err)
			}
			renewer := renewal.NewFileRenewal(caCert, caKey)

			if component == constants.Etcd {
				for _, cert := range []*certsphase.KubeadmCert{
					&certsphase.KubeadmCertEtcdServer,
					&certsphase.KubeadmCertEtcdPeer,
					&certsphase.KubeadmCertEtcdHealthcheck,
				} {
					if err := renewal.RenewExistingCert(cfg.CertificatesDir, cert.BaseName, renewer); err != nil {
						return fmt.Errorf("failed to renew %s certificate and key: %v", cert.Name, err)
					}
				}
			}
			if component == constants.KubeAPIServer {
				cert := certsphase.KubeadmCertEtcdAPIClient
				if err := renewal.RenewExistingCert(cfg.CertificatesDir, cert.BaseName, renewer); err != nil {
					return fmt.Errorf("failed to renew %s certificate and key: %v", cert.Name, err)
				}
			}
		}
	}
	return nil
}
package main

import (
	"fmt"
)

func main() {
	var n, k int
	fmt.Scanf("%d %d", &n, &k)
	a := make([]int, n)
	for i := 0; i < n; i++ {
		fmt.Scanf("%d", &a[i])
		a[i]-- // convert to 0-indexed towns
	}
	answer := finalTown(n, k, a) + 1 // back to 1-indexed for output
	fmt.Println(answer)
}

// finalTown returns the 0-indexed town reached after k teleports starting
// from town 0. The walk is "rho"-shaped: a tail of beforeLoop towns followed
// by a cycle of loopLength towns, so only k mod loopLength steps inside the
// cycle ever need to be simulated.
func finalTown(n, k int, a []int) int {
	beforeLoop := 1
	loopStart := 0
	loopLength := 1
	next := 0
	visited := make([]bool, n)
	visited[0] = true
	for {
		next = a[next]
		if visited[next] {
			// the first revisited town marks the start of the cycle
			loopStart = next
			for t, length := a[next], 2; t != loopStart; t, length = a[t], length+1 {
				loopLength = length
			}
			break
		}
		visited[next] = true
		beforeLoop++
	}
	beforeLoop = beforeLoop - loopLength
	if beforeLoop < 0 {
		panic("beforeLoop must be greater than or equal to 0")
	}

	if k-beforeLoop < 0 {
		// k steps end inside the tail; walk them directly
		town := a[0]
		if k == 1 {
			return town
		}
		for i := 0; i < k-1; i++ {
			town = a[town]
		}
		return town
	}

	// k steps end inside the cycle; reduce the remainder modulo the cycle length
	m := (k - beforeLoop) % loopLength
	town := loopStart
	for i := 0; i < m; i++ {
		town = a[town]
	}
	return town
}
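A quick way to gain confidence in the tail-and-cycle bookkeeping above is to cross-check it against a brute-force walk. The sketch below is an illustrative addition, not part of the original submission; it assumes it sits in the same file (where fmt is already imported) and uses the path 0 -> 1 -> 2 -> 3 -> 4 -> 2 -> ..., which has a tail of length 2 and a cycle of length 3, so both branches of finalTown get exercised.

// naiveFinalTown walks the k teleports one at a time; fine for small k.
func naiveFinalTown(n, k int, a []int) int {
	town := 0
	for i := 0; i < k; i++ {
		town = a[town]
	}
	return town
}

// checkFinalTown cross-checks the cycle-based answer against the naive walk.
func checkFinalTown() {
	a := []int{1, 2, 3, 4, 2} // tail: 0,1  cycle: 2,3,4
	for k := 1; k <= 12; k++ {
		if got, want := finalTown(5, k, a), naiveFinalTown(5, k, a); got != want {
			panic(fmt.Sprintf("k=%d: got %d, want %d", k, got, want))
		}
	}
}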
/* Function called from the ARP keventd tasklet in response to
 * a data message from the net driver
 *
 * NOTE: the current implementation discards data when it
 * cannot get a lock.  As the mechanism is for protocols that
 * do not guarantee data delivery this is considered acceptable.
 */
int efab_handle_ipp_pkt_task(int thr_id, ci_ifid_t ifindex,
                             const void* in_data, int len)
{
  tcp_helper_resource_t* thr;
  const ci_ip4_hdr* in_ip;
  efab_ipp_addr addr;

  in_ip = (const ci_ip4_hdr*) in_data;

  if( !efab_ipp_icmp_parse( in_ip, len, &addr, 0))
    goto exit_handler;

  addr.ifindex = ifindex;

  if( (thr = efab_ipp_get_locked_thr_from_tcp_handle(thr_id))) {
    ci_sock_cmn* s;
    CI_ICMP_IN_STATS_COLLECT( &thr->netif,
                              (ci_icmp_hdr*)((char*)in_ip + CI_IP4_IHL(in_ip)) );
    CI_ICMP_STATS_INC_IN_MSGS( &thr->netif );
    s = efab_ipp_icmp_for_thr( thr, &addr );
    if( s )
      efab_ipp_icmp_qpkt( thr, s, &addr );
    efab_tcp_helper_netif_unlock( thr, 1 );
    efab_thr_release(thr);
  }

 exit_handler:
  return 0;
}
#pythran export solve(int)
#runas solve(3)
def solve(digit):
    '''
    A palindromic number reads the same both ways. The largest palindrome
    made from the product of two 2-digit numbers is 9009 = 91 x 99.

    Find the largest palindrome made from the product of two 3-digit numbers.
    '''
    n = 0
    for a in range(10 ** digit - 1, 10 ** (digit - 1), -1):
        for b in range(a, 10 ** (digit - 1), -1):
            x = a * b
            if x > n:
                s = str(a * b)
                if s == s[::-1]:
                    n = a * b
    return n
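The quadratic scan above checks every pair even after the answer can no longer change. One standard pruning is to abandon a row as soon as its best possible product cannot beat the current record, and to cut the inner loop once products drop below it. The following is a standalone Go sketch of that idea; it is an illustrative port, not part of the pythran file above.

package main

import (
	"fmt"
	"strconv"
)

// isPalindrome reports whether the decimal representation of x
// reads the same forwards and backwards.
func isPalindrome(x int) bool {
	s := strconv.Itoa(x)
	for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
		if s[i] != s[j] {
			return false
		}
	}
	return true
}

// largestPalindromeProduct mirrors solve() but breaks early: once a*hi
// cannot beat the best palindrome found, no later row can either.
func largestPalindromeProduct(digit int) int {
	lo, hi := pow(10, digit-1), pow(10, digit)-1
	best := 0
	for a := hi; a > lo; a-- {
		if a*hi <= best {
			break // every remaining product is too small
		}
		for b := a; b > lo; b-- {
			x := a * b
			if x <= best {
				break // products only shrink as b decreases
			}
			if isPalindrome(x) {
				best = x
			}
		}
	}
	return best
}

// pow computes base**exp for small non-negative integer exponents.
func pow(base, exp int) int {
	r := 1
	for i := 0; i < exp; i++ {
		r *= base
	}
	return r
}

func main() {
	fmt.Println(largestPalindromeProduct(3)) // 906609
}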
In a huge relief to National Public School students in Karnataka, CBSE has reportedly restored affiliation to six schools of the NPS group that were accused of forging and producing fake minority certificates. CBSE had withdrawn the affiliation of the six schools belonging to the NPS group in September, alleging that they had forged papers to escape the Right to Education Act.

The Times of India reported that Gopalakrishna, chairman of the NPS Group of Schools, wrote to the parents of the students saying that the schools would follow the CBSE syllabus.

The six schools are: National Public School in Rajajinagar, National Public School in Indiranagar, National Public School in Koramangala, National Public School in H.S.R. Layout, National Public School in Mysuru and National Academy for Learning in Basaveshwara Nagar.

“CBSE has restored affiliation to the schools with immediate effect. Our students will continue to follow the CBSE syllabus, and appear for the board exams as scheduled.”

“We are continuing to pursue all legal processes, challenging the allegations made against NPS,” Gopalakrishna stated.

Gopalakrishna did not respond to The News Minute's calls.

The issue started in August, when the Department of Public Instruction wrote to the CBSE urging the board to withdraw affiliation, as some schools had evaded the mandatory 25 per cent reservation under the RTE quota by resorting to malpractices.

The letter had stated: “The managements of these schools have violated the RTE Act and are indulging in fraudulent means in order to circumvent the provisions of the RTE Act and are involved in criminal activities. Hence, this is a request to withdraw the CBSE affiliation of the schools and to take any further action as per extant rules.”

The alleged fraud came to light when the National Commission for Minority Educational Institutions wrote to the Commissioner for Public Instruction alleging that the minority certificates produced by the schools under the group were forged.

Then what about the case?

A case was filed against Gopalakrishna by the DPI, but there is no clarity on its progress so far. Sowjanya, Commissioner for Public Instruction, had earlier told TNM, “The DPI had filed a complaint in August 2016. However, the police have not filed an FIR yet.” She refused to comment on why there could have been a delay.

Responding to her claims, city police Commissioner NS Megharik told TNM that there was nothing pending from the police’s side.
def __logEntities(self, entities, force=False): for entity in entities: self.__logEntity(entity, force)
/* * Copyright (c) 2007 Oracle. All rights reserved. * * This software is available to you under a choice of one of two * licenses. You may choose to be licensed under the terms of the GNU * General Public License (GPL) Version 2, available from the file * COPYING in the main directory of this source tree, or the * OpenIB.org BSD license below: * * Redistribution and use in source and binary forms, with or * without modification, are permitted provided that the following * conditions are met: * * - Redistributions of source code must retain the above * copyright notice, this list of conditions and the following * disclaimer. * * - Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials * provided with the distribution. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * */ #include <linux/pagemap.h> #include <linux/slab.h> #include <linux/rbtree.h> #include <linux/dma-mapping.h> /* for DMA_*_DEVICE */ #include "rds.h" /* * XXX * - build with sparse * - should we limit the size of a mr region? let transport return failure? * - should we detect duplicate keys on a socket? hmm. * - an rdma is an mlock, apply rlimit? */ /* * get the number of pages by looking at the page indices that the start and * end addresses fall in. * * Returns 0 if the vec is invalid. It is invalid if the number of bytes * causes the address to wrap or overflows an unsigned int. This comes * from being stored in the 'length' member of 'struct scatterlist'. */ static unsigned int rds_pages_in_vec(struct rds_iovec *vec) { if ((vec->addr + vec->bytes <= vec->addr) || (vec->bytes > (u64)UINT_MAX)) return 0; return ((vec->addr + vec->bytes + PAGE_SIZE - 1) >> PAGE_SHIFT) - (vec->addr >> PAGE_SHIFT); } static struct rds_mr *rds_mr_tree_walk(struct rb_root *root, u64 key, struct rds_mr *insert) { struct rb_node **p = &root->rb_node; struct rb_node *parent = NULL; struct rds_mr *mr; while (*p) { parent = *p; mr = rb_entry(parent, struct rds_mr, r_rb_node); if (key < mr->r_key) p = &(*p)->rb_left; else if (key > mr->r_key) p = &(*p)->rb_right; else return mr; } if (insert) { rb_link_node(&insert->r_rb_node, parent, p); rb_insert_color(&insert->r_rb_node, root); atomic_inc(&insert->r_refcount); } return NULL; } /* * Destroy the transport-specific part of a MR. 
*/ static void rds_destroy_mr(struct rds_mr *mr) { struct rds_sock *rs = mr->r_sock; void *trans_private = NULL; unsigned long flags; rdsdebug("RDS: destroy mr key is %x refcnt %u\n", mr->r_key, atomic_read(&mr->r_refcount)); if (test_and_set_bit(RDS_MR_DEAD, &mr->r_state)) return; spin_lock_irqsave(&rs->rs_rdma_lock, flags); if (!RB_EMPTY_NODE(&mr->r_rb_node)) rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys); trans_private = mr->r_trans_private; mr->r_trans_private = NULL; spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); if (trans_private) mr->r_trans->free_mr(trans_private, mr->r_invalidate); } void __rds_put_mr_final(struct rds_mr *mr) { rds_destroy_mr(mr); kfree(mr); } /* * By the time this is called we can't have any more ioctls called on * the socket so we don't need to worry about racing with others. */ void rds_rdma_drop_keys(struct rds_sock *rs) { struct rds_mr *mr; struct rb_node *node; unsigned long flags; /* Release any MRs associated with this socket */ spin_lock_irqsave(&rs->rs_rdma_lock, flags); while ((node = rb_first(&rs->rs_rdma_keys))) { mr = container_of(node, struct rds_mr, r_rb_node); if (mr->r_trans == rs->rs_transport) mr->r_invalidate = 0; rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys); RB_CLEAR_NODE(&mr->r_rb_node); spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); rds_destroy_mr(mr); rds_mr_put(mr); spin_lock_irqsave(&rs->rs_rdma_lock, flags); } spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); if (rs->rs_transport && rs->rs_transport->flush_mrs) rs->rs_transport->flush_mrs(); } /* * Helper function to pin user pages. */ static int rds_pin_pages(unsigned long user_addr, unsigned int nr_pages, struct page **pages, int write) { int ret; ret = get_user_pages_fast(user_addr, nr_pages, write, pages); if (ret >= 0 && ret < nr_pages) { while (ret--) put_page(pages[ret]); ret = -EFAULT; } return ret; } static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args, u64 *cookie_ret, struct rds_mr **mr_ret) { struct rds_mr *mr = NULL, *found; unsigned int nr_pages; struct page **pages = NULL; struct scatterlist *sg; void *trans_private; unsigned long flags; rds_rdma_cookie_t cookie; unsigned int nents; long i; int ret; if (rs->rs_bound_addr == 0) { ret = -ENOTCONN; /* XXX not a great errno */ goto out; } if (!rs->rs_transport->get_mr) { ret = -EOPNOTSUPP; goto out; } nr_pages = rds_pages_in_vec(&args->vec); if (nr_pages == 0) { ret = -EINVAL; goto out; } rdsdebug("RDS: get_mr addr %llx len %llu nr_pages %u\n", args->vec.addr, args->vec.bytes, nr_pages); /* XXX clamp nr_pages to limit the size of this alloc? */ pages = kcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL); if (!pages) { ret = -ENOMEM; goto out; } mr = kzalloc(sizeof(struct rds_mr), GFP_KERNEL); if (!mr) { ret = -ENOMEM; goto out; } atomic_set(&mr->r_refcount, 1); RB_CLEAR_NODE(&mr->r_rb_node); mr->r_trans = rs->rs_transport; mr->r_sock = rs; if (args->flags & RDS_RDMA_USE_ONCE) mr->r_use_once = 1; if (args->flags & RDS_RDMA_INVALIDATE) mr->r_invalidate = 1; if (args->flags & RDS_RDMA_READWRITE) mr->r_write = 1; /* * Pin the pages that make up the user buffer and transfer the page * pointers to the mr's sg array. We check to see if we've mapped * the whole region after transferring the partial page references * to the sg array so that we can have one page ref cleanup path. * * For now we have no flag that tells us whether the mapping is * r/o or r/w. We need to assume r/w, or we'll do a lot of RDMA to * the zero page. 
*/ ret = rds_pin_pages(args->vec.addr, nr_pages, pages, 1); if (ret < 0) goto out; nents = ret; sg = kcalloc(nents, sizeof(*sg), GFP_KERNEL); if (!sg) { ret = -ENOMEM; goto out; } WARN_ON(!nents); sg_init_table(sg, nents); /* Stick all pages into the scatterlist */ for (i = 0 ; i < nents; i++) sg_set_page(&sg[i], pages[i], PAGE_SIZE, 0); rdsdebug("RDS: trans_private nents is %u\n", nents); /* Obtain a transport specific MR. If this succeeds, the * s/g list is now owned by the MR. * Note that dma_map() implies that pending writes are * flushed to RAM, so no dma_sync is needed here. */ trans_private = rs->rs_transport->get_mr(sg, nents, rs, &mr->r_key); if (IS_ERR(trans_private)) { for (i = 0 ; i < nents; i++) put_page(sg_page(&sg[i])); kfree(sg); ret = PTR_ERR(trans_private); goto out; } mr->r_trans_private = trans_private; rdsdebug("RDS: get_mr put_user key is %x cookie_addr %p\n", mr->r_key, (void *)(unsigned long) args->cookie_addr); /* The user may pass us an unaligned address, but we can only * map page aligned regions. So we keep the offset, and build * a 64bit cookie containing <R_Key, offset> and pass that * around. */ cookie = rds_rdma_make_cookie(mr->r_key, args->vec.addr & ~PAGE_MASK); if (cookie_ret) *cookie_ret = cookie; if (args->cookie_addr && put_user(cookie, (u64 __user *)(unsigned long) args->cookie_addr)) { ret = -EFAULT; goto out; } /* Inserting the new MR into the rbtree bumps its * reference count. */ spin_lock_irqsave(&rs->rs_rdma_lock, flags); found = rds_mr_tree_walk(&rs->rs_rdma_keys, mr->r_key, mr); spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); BUG_ON(found && found != mr); rdsdebug("RDS: get_mr key is %x\n", mr->r_key); if (mr_ret) { atomic_inc(&mr->r_refcount); *mr_ret = mr; } ret = 0; out: kfree(pages); if (mr) rds_mr_put(mr); return ret; } int rds_get_mr(struct rds_sock *rs, char __user *optval, int optlen) { struct rds_get_mr_args args; if (optlen != sizeof(struct rds_get_mr_args)) return -EINVAL; if (copy_from_user(&args, (struct rds_get_mr_args __user *)optval, sizeof(struct rds_get_mr_args))) return -EFAULT; return __rds_rdma_map(rs, &args, NULL, NULL); } int rds_get_mr_for_dest(struct rds_sock *rs, char __user *optval, int optlen) { struct rds_get_mr_for_dest_args args; struct rds_get_mr_args new_args; if (optlen != sizeof(struct rds_get_mr_for_dest_args)) return -EINVAL; if (copy_from_user(&args, (struct rds_get_mr_for_dest_args __user *)optval, sizeof(struct rds_get_mr_for_dest_args))) return -EFAULT; /* * Initially, just behave like get_mr(). * TODO: Implement get_mr as wrapper around this * and deprecate it. */ new_args.vec = args.vec; new_args.cookie_addr = args.cookie_addr; new_args.flags = args.flags; return __rds_rdma_map(rs, &new_args, NULL, NULL); } /* * Free the MR indicated by the given R_Key */ int rds_free_mr(struct rds_sock *rs, char __user *optval, int optlen) { struct rds_free_mr_args args; struct rds_mr *mr; unsigned long flags; if (optlen != sizeof(struct rds_free_mr_args)) return -EINVAL; if (copy_from_user(&args, (struct rds_free_mr_args __user *)optval, sizeof(struct rds_free_mr_args))) return -EFAULT; /* Special case - a null cookie means flush all unused MRs */ if (args.cookie == 0) { if (!rs->rs_transport || !rs->rs_transport->flush_mrs) return -EINVAL; rs->rs_transport->flush_mrs(); return 0; } /* Look up the MR given its R_key and remove it from the rbtree * so nobody else finds it. * This should also prevent races with rds_rdma_unuse. 
*/ spin_lock_irqsave(&rs->rs_rdma_lock, flags); mr = rds_mr_tree_walk(&rs->rs_rdma_keys, rds_rdma_cookie_key(args.cookie), NULL); if (mr) { rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys); RB_CLEAR_NODE(&mr->r_rb_node); if (args.flags & RDS_RDMA_INVALIDATE) mr->r_invalidate = 1; } spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); if (!mr) return -EINVAL; /* * call rds_destroy_mr() ourselves so that we're sure it's done by the time * we return. If we let rds_mr_put() do it it might not happen until * someone else drops their ref. */ rds_destroy_mr(mr); rds_mr_put(mr); return 0; } /* * This is called when we receive an extension header that * tells us this MR was used. It allows us to implement * use_once semantics */ void rds_rdma_unuse(struct rds_sock *rs, u32 r_key, int force) { struct rds_mr *mr; unsigned long flags; int zot_me = 0; spin_lock_irqsave(&rs->rs_rdma_lock, flags); mr = rds_mr_tree_walk(&rs->rs_rdma_keys, r_key, NULL); if (!mr) { printk(KERN_ERR "rds: trying to unuse MR with unknown r_key %u!\n", r_key); spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); return; } if (mr->r_use_once || force) { rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys); RB_CLEAR_NODE(&mr->r_rb_node); zot_me = 1; } spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); /* May have to issue a dma_sync on this memory region. * Note we could avoid this if the operation was a RDMA READ, * but at this point we can't tell. */ if (mr->r_trans->sync_mr) mr->r_trans->sync_mr(mr->r_trans_private, DMA_FROM_DEVICE); /* If the MR was marked as invalidate, this will * trigger an async flush. */ if (zot_me) rds_destroy_mr(mr); rds_mr_put(mr); } void rds_rdma_free_op(struct rm_rdma_op *ro) { unsigned int i; for (i = 0; i < ro->op_nents; i++) { struct page *page = sg_page(&ro->op_sg[i]); /* Mark page dirty if it was possibly modified, which * is the case for a RDMA_READ which copies from remote * to local memory */ if (!ro->op_write) { BUG_ON(irqs_disabled()); set_page_dirty(page); } put_page(page); } kfree(ro->op_notifier); ro->op_notifier = NULL; ro->op_active = 0; } void rds_atomic_free_op(struct rm_atomic_op *ao) { struct page *page = sg_page(ao->op_sg); /* Mark page dirty if it was possibly modified, which * is the case for a RDMA_READ which copies from remote * to local memory */ set_page_dirty(page); put_page(page); kfree(ao->op_notifier); ao->op_notifier = NULL; ao->op_active = 0; } /* * Count the number of pages needed to describe an incoming iovec array. */ static int rds_rdma_pages(struct rds_iovec iov[], int nr_iovecs) { int tot_pages = 0; unsigned int nr_pages; unsigned int i; /* figure out the number of pages in the vector */ for (i = 0; i < nr_iovecs; i++) { nr_pages = rds_pages_in_vec(&iov[i]); if (nr_pages == 0) return -EINVAL; tot_pages += nr_pages; /* * nr_pages for one entry is limited to (UINT_MAX>>PAGE_SHIFT)+1, * so tot_pages cannot overflow without first going negative. 
*/ if (tot_pages < 0) return -EINVAL; } return tot_pages; } int rds_rdma_extra_size(struct rds_rdma_args *args) { struct rds_iovec vec; struct rds_iovec __user *local_vec; int tot_pages = 0; unsigned int nr_pages; unsigned int i; local_vec = (struct rds_iovec __user *)(unsigned long) args->local_vec_addr; /* figure out the number of pages in the vector */ for (i = 0; i < args->nr_local; i++) { if (copy_from_user(&vec, &local_vec[i], sizeof(struct rds_iovec))) return -EFAULT; nr_pages = rds_pages_in_vec(&vec); if (nr_pages == 0) return -EINVAL; tot_pages += nr_pages; /* * nr_pages for one entry is limited to (UINT_MAX>>PAGE_SHIFT)+1, * so tot_pages cannot overflow without first going negative. */ if (tot_pages < 0) return -EINVAL; } return tot_pages * sizeof(struct scatterlist); } /* * The application asks for a RDMA transfer. * Extract all arguments and set up the rdma_op */ int rds_cmsg_rdma_args(struct rds_sock *rs, struct rds_message *rm, struct cmsghdr *cmsg) { struct rds_rdma_args *args; struct rm_rdma_op *op = &rm->rdma; int nr_pages; unsigned int nr_bytes; struct page **pages = NULL; struct rds_iovec iovstack[UIO_FASTIOV], *iovs = iovstack; int iov_size; unsigned int i, j; int ret = 0; if (cmsg->cmsg_len < CMSG_LEN(sizeof(struct rds_rdma_args)) || rm->rdma.op_active) return -EINVAL; args = CMSG_DATA(cmsg); if (rs->rs_bound_addr == 0) { ret = -ENOTCONN; /* XXX not a great errno */ goto out; } if (args->nr_local > UIO_MAXIOV) { ret = -EMSGSIZE; goto out; } /* Check whether to allocate the iovec area */ iov_size = args->nr_local * sizeof(struct rds_iovec); if (args->nr_local > UIO_FASTIOV) { iovs = sock_kmalloc(rds_rs_to_sk(rs), iov_size, GFP_KERNEL); if (!iovs) { ret = -ENOMEM; goto out; } } if (copy_from_user(iovs, (struct rds_iovec __user *)(unsigned long) args->local_vec_addr, iov_size)) { ret = -EFAULT; goto out; } nr_pages = rds_rdma_pages(iovs, args->nr_local); if (nr_pages < 0) { ret = -EINVAL; goto out; } pages = kcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL); if (!pages) { ret = -ENOMEM; goto out; } op->op_write = !!(args->flags & RDS_RDMA_READWRITE); op->op_fence = !!(args->flags & RDS_RDMA_FENCE); op->op_notify = !!(args->flags & RDS_RDMA_NOTIFY_ME); op->op_silent = !!(args->flags & RDS_RDMA_SILENT); op->op_active = 1; op->op_recverr = rs->rs_recverr; WARN_ON(!nr_pages); op->op_sg = rds_message_alloc_sgs(rm, nr_pages); if (!op->op_sg) { ret = -ENOMEM; goto out; } if (op->op_notify || op->op_recverr) { /* We allocate an uninitialized notifier here, because * we don't want to do that in the completion handler. We * would have to use GFP_ATOMIC there, and don't want to deal * with failed allocations. */ op->op_notifier = kmalloc(sizeof(struct rds_notifier), GFP_KERNEL); if (!op->op_notifier) { ret = -ENOMEM; goto out; } op->op_notifier->n_user_token = args->user_token; op->op_notifier->n_status = RDS_RDMA_SUCCESS; } /* The cookie contains the R_Key of the remote memory region, and * optionally an offset into it. This is how we implement RDMA into * unaligned memory. 
* When setting up the RDMA, we need to add that offset to the * destination address (which is really an offset into the MR) * FIXME: We may want to move this into ib_rdma.c */ op->op_rkey = rds_rdma_cookie_key(args->cookie); op->op_remote_addr = args->remote_vec.addr + rds_rdma_cookie_offset(args->cookie); nr_bytes = 0; rdsdebug("RDS: rdma prepare nr_local %llu rva %llx rkey %x\n", (unsigned long long)args->nr_local, (unsigned long long)args->remote_vec.addr, op->op_rkey); for (i = 0; i < args->nr_local; i++) { struct rds_iovec *iov = &iovs[i]; /* don't need to check, rds_rdma_pages() verified nr will be +nonzero */ unsigned int nr = rds_pages_in_vec(iov); rs->rs_user_addr = iov->addr; rs->rs_user_bytes = iov->bytes; /* If it's a WRITE operation, we want to pin the pages for reading. * If it's a READ operation, we need to pin the pages for writing. */ ret = rds_pin_pages(iov->addr, nr, pages, !op->op_write); if (ret < 0) goto out; rdsdebug("RDS: nr_bytes %u nr %u iov->bytes %llu iov->addr %llx\n", nr_bytes, nr, iov->bytes, iov->addr); nr_bytes += iov->bytes; for (j = 0; j < nr; j++) { unsigned int offset = iov->addr & ~PAGE_MASK; struct scatterlist *sg; sg = &op->op_sg[op->op_nents + j]; sg_set_page(sg, pages[j], min_t(unsigned int, iov->bytes, PAGE_SIZE - offset), offset); rdsdebug("RDS: sg->offset %x sg->len %x iov->addr %llx iov->bytes %llu\n", sg->offset, sg->length, iov->addr, iov->bytes); iov->addr += sg->length; iov->bytes -= sg->length; } op->op_nents += nr; } if (nr_bytes > args->remote_vec.bytes) { rdsdebug("RDS nr_bytes %u remote_bytes %u do not match\n", nr_bytes, (unsigned int) args->remote_vec.bytes); ret = -EINVAL; goto out; } op->op_bytes = nr_bytes; out: if (iovs != iovstack) sock_kfree_s(rds_rs_to_sk(rs), iovs, iov_size); kfree(pages); if (ret) rds_rdma_free_op(op); else rds_stats_inc(s_send_rdma); return ret; } /* * The application wants us to pass an RDMA destination (aka MR) * to the remote */ int rds_cmsg_rdma_dest(struct rds_sock *rs, struct rds_message *rm, struct cmsghdr *cmsg) { unsigned long flags; struct rds_mr *mr; u32 r_key; int err = 0; if (cmsg->cmsg_len < CMSG_LEN(sizeof(rds_rdma_cookie_t)) || rm->m_rdma_cookie != 0) return -EINVAL; memcpy(&rm->m_rdma_cookie, CMSG_DATA(cmsg), sizeof(rm->m_rdma_cookie)); /* We are reusing a previously mapped MR here. Most likely, the * application has written to the buffer, so we need to explicitly * flush those writes to RAM. Otherwise the HCA may not see them * when doing a DMA from that buffer. */ r_key = rds_rdma_cookie_key(rm->m_rdma_cookie); spin_lock_irqsave(&rs->rs_rdma_lock, flags); mr = rds_mr_tree_walk(&rs->rs_rdma_keys, r_key, NULL); if (!mr) err = -EINVAL; /* invalid r_key */ else atomic_inc(&mr->r_refcount); spin_unlock_irqrestore(&rs->rs_rdma_lock, flags); if (mr) { mr->r_trans->sync_mr(mr->r_trans_private, DMA_TO_DEVICE); rm->rdma.op_rdma_mr = mr; } return err; } /* * The application passes us an address range it wants to enable RDMA * to/from. We map the area, and save the <R_Key,offset> pair * in rm->m_rdma_cookie. This causes it to be sent along to the peer * in an extension header. */ int rds_cmsg_rdma_map(struct rds_sock *rs, struct rds_message *rm, struct cmsghdr *cmsg) { if (cmsg->cmsg_len < CMSG_LEN(sizeof(struct rds_get_mr_args)) || rm->m_rdma_cookie != 0) return -EINVAL; return __rds_rdma_map(rs, CMSG_DATA(cmsg), &rm->m_rdma_cookie, &rm->rdma.op_rdma_mr); } /* * Fill in rds_message for an atomic request. 
*/ int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message *rm, struct cmsghdr *cmsg) { struct page *page = NULL; struct rds_atomic_args *args; int ret = 0; if (cmsg->cmsg_len < CMSG_LEN(sizeof(struct rds_atomic_args)) || rm->atomic.op_active) return -EINVAL; args = CMSG_DATA(cmsg); /* Nonmasked & masked cmsg ops converted to masked hw ops */ switch (cmsg->cmsg_type) { case RDS_CMSG_ATOMIC_FADD: rm->atomic.op_type = RDS_ATOMIC_TYPE_FADD; rm->atomic.op_m_fadd.add = args->fadd.add; rm->atomic.op_m_fadd.nocarry_mask = 0; break; case RDS_CMSG_MASKED_ATOMIC_FADD: rm->atomic.op_type = RDS_ATOMIC_TYPE_FADD; rm->atomic.op_m_fadd.add = args->m_fadd.add; rm->atomic.op_m_fadd.nocarry_mask = args->m_fadd.nocarry_mask; break; case RDS_CMSG_ATOMIC_CSWP: rm->atomic.op_type = RDS_ATOMIC_TYPE_CSWP; rm->atomic.op_m_cswp.compare = args->cswp.compare; rm->atomic.op_m_cswp.swap = args->cswp.swap; rm->atomic.op_m_cswp.compare_mask = ~0; rm->atomic.op_m_cswp.swap_mask = ~0; break; case RDS_CMSG_MASKED_ATOMIC_CSWP: rm->atomic.op_type = RDS_ATOMIC_TYPE_CSWP; rm->atomic.op_m_cswp.compare = args->m_cswp.compare; rm->atomic.op_m_cswp.swap = args->m_cswp.swap; rm->atomic.op_m_cswp.compare_mask = args->m_cswp.compare_mask; rm->atomic.op_m_cswp.swap_mask = args->m_cswp.swap_mask; break; default: BUG(); /* should never happen */ } rm->atomic.op_notify = !!(args->flags & RDS_RDMA_NOTIFY_ME); rm->atomic.op_silent = !!(args->flags & RDS_RDMA_SILENT); rm->atomic.op_active = 1; rm->atomic.op_recverr = rs->rs_recverr; rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1); if (!rm->atomic.op_sg) { ret = -ENOMEM; goto err; } /* verify 8 byte-aligned */ if (args->local_addr & 0x7) { ret = -EFAULT; goto err; } ret = rds_pin_pages(args->local_addr, 1, &page, 1); if (ret != 1) goto err; ret = 0; sg_set_page(rm->atomic.op_sg, page, 8, offset_in_page(args->local_addr)); if (rm->atomic.op_notify || rm->atomic.op_recverr) { /* We allocate an uninitialized notifier here, because * we don't want to do that in the completion handler. We * would have to use GFP_ATOMIC there, and don't want to deal * with failed allocations. */ rm->atomic.op_notifier = kmalloc(sizeof(*rm->atomic.op_notifier), GFP_KERNEL); if (!rm->atomic.op_notifier) { ret = -ENOMEM; goto err; } rm->atomic.op_notifier->n_user_token = args->user_token; rm->atomic.op_notifier->n_status = RDS_RDMA_SUCCESS; } rm->atomic.op_rkey = rds_rdma_cookie_key(args->cookie); rm->atomic.op_remote_addr = args->remote_addr + rds_rdma_cookie_offset(args->cookie); return ret; err: if (page) put_page(page); kfree(rm->atomic.op_notifier); return ret; }
/**
 * Patches Groovy's method dispatch tables so that they point to {@link CpsDefaultGroovyMethods} instead of
 * {@link DefaultGroovyMethods}.
 *
 * <p>
 * To be able to correctly execute code like {@code list.each ...} in CPS, we need to tweak Groovy
 * so that it dispatches methods like 'each' to {@link CpsDefaultGroovyMethods} instead of {@link DefaultGroovyMethods}.
 * Groovy has some fairly involved internal data structures to determine which method to dispatch, but
 * at a high level, this comes down to the following:
 *
 * <ol>
 *     <li>{@link ClassInfoSet} holds references to {@link ClassInfo}, where one instance exists for each {@link Class}
 *     <li>{@link ClassInfo} holds a reference to {@link MetaClassImpl}
 *     <li>{@link MetaClassImpl} holds a whole lot of {@link MetaMethod}s for methods that belong to the class
 *     <li>Some of those {@link MetaMethod}s are {@link GeneratedMetaMethod}s that point to methods defined on {@link DefaultGroovyMethods}
 * </ol>
 *
 * <p>
 * Many of these objects are created lazily, and various caching is involved at various layers (such as
 * {@link MetaClassImpl#metaMethodIndex}), presumably to make the method dispatching more efficient.
 *
 * <p>
 * Our strategy here is to locate {@link GeneratedMetaMethod}s that point to {@link DefaultGroovyMethods}
 * and replace them with another {@link MetaMethod} that points to {@link CpsDefaultGroovyMethods}. Given
 * the elaborate data structure Groovy builds, we liberally walk the data structure and patch references
 * wherever we find them, instead of being precise & surgical about what we replace. This logic
 * is implemented in {@link #patch(Object)}.
 *
 *
 * <h2>How does Groovy register methods from {@link DefaultGroovyMethods}?</h2>
 * <p>
 * (This is a memo I took during this investigation, in case it becomes useful again in the future.)
 * <p>
 * {@link DefaultGroovyMethods} is processed at build time (where?) into a series of "dgm" classes, and these get
 * loaded into {@link MetaClass} structures in {@link GeneratedMetaMethod.DgmMethodRecord#loadDgmInfo()}.
 * <p>
 * The code above is called from {@link MetaClassRegistryImpl#MetaClassRegistryImpl(int,boolean)}, which
 * uses {@link CachedClass#setNewMopMethods(List)} to install method definitions from DGM.
 * {@link CachedClass#setNewMopMethods(List)} internally calls into {@link CachedClass#updateSetNewMopMethods(List)},
 * which simply updates {@link ClassInfo#newMetaMethods}. This is where the method definitions stay for a while.
 * <p>
 * The only read usage of {@link ClassInfo#newMetaMethods} is in {@link CachedClass#getNewMetaMethods()}, and
 * this method builds its return value from all the super types. This method is then further used by
 * {@link MetaClassImpl} when it is instantiated and builds its own index.
* * @author Kohsuke Kawaguchi */ class DGMPatcher { // we need to traverse various internal fields of the objects private final Field MetaClassImpl_myNewMetaMethods = field(MetaClassImpl.class,"myNewMetaMethods"); private final Field MetaClassImpl_newGroovyMethodsSet = field(MetaClassImpl.class,"newGroovyMethodsSet"); private final Field MetaClassImpl_metaMethodIndex = field(MetaClassImpl.class,"metaMethodIndex"); private final Field ClassInfo_dgmMetaMethods = field(ClassInfo.class,"dgmMetaMethods"); private final Field ClassInfo_newMetaMethods = field(ClassInfo.class,"newMetaMethods"); private final Field ClassInfo_globalClassSet = field(ClassInfo.class,"globalClassSet"); private final Field ClassInfoSet_segments = field(AbstractConcurrentMapBase.class,"segments"); private final Field Segment_table = field(Segment.class,"table"); /** * Used to compare two {@link MetaMethod} by their signatures. */ static final class Key { /** * Receiver type. */ final Class declaringClass; /** * Method name */ final String name; /** * Method signature */ final Class[] nativeParamTypes; Key(Class declaringClass, String name, Class[] nativeParamTypes) { this.declaringClass = declaringClass; this.name = name; this.nativeParamTypes = nativeParamTypes; } Key(MetaMethod m) { this(m.getDeclaringClass().getTheClass(), m.getName(), m.getNativeParameterTypes()); } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; Key key = (Key) o; return Objects.equal(declaringClass, key.declaringClass) && Objects.equal(name, key.name) && Arrays.equals(nativeParamTypes, key.nativeParamTypes); } @Override public int hashCode() { return Objects.hashCode(declaringClass, name, Arrays.hashCode(nativeParamTypes)); } @Override public String toString() { return declaringClass.getName() + "." + name + Arrays.toString(nativeParamTypes); } } /** * Methods defined in {@link CpsDefaultGroovyMethods} to override definitions in {@link DefaultGroovyMethods}. */ private final Map<Key,MetaMethod> overrides = new HashMap<Key, MetaMethod>(); /** * @param methods * List of methods to overwrite {@link DefaultGroovyMethods} */ DGMPatcher(List<MetaMethod> methods) { for (MetaMethod m : methods) { MetaMethod old = overrides.put(new Key(m),m); if (old != null) { throw new IllegalStateException("duplication between " + m + " and " + old); } } } /** * Visits Groovy data structure and install methods given in the constructor. */ void patch() { MetaClassRegistry r = GroovySystem.getMetaClassRegistry(); // this never seems to iterate anything // Iterator<MetaClass> itr = r.iterator(); // while (itr.hasNext()) { // MetaClass mc = itr.next(); // patch(mc); // } patch(r); try { patch(ClassInfo_globalClassSet.get(null)); } catch (IllegalAccessException e) { throw new AssertionError(e); } } /** * Walks the given object recursively, patch references, and return the replacement object. 
* * <p> * Key data structure we visit is {@link MetaClassImpl}, */ private Object patch(Object o) { if (o instanceof MetaClassRegistryImpl) { MetaClassRegistryImpl r = (MetaClassRegistryImpl) o; patch(r.getInstanceMethods()); patch(r.getStaticMethods()); } else if (o instanceof ClassInfoSet) { // discover all ClassInfo in ClassInfoSet via Segment -> table -> ClassInfo ClassInfoSet cis = (ClassInfoSet)o; patch(cis,ClassInfoSet_segments); } else if (o instanceof Segment) { Segment s = (Segment) o; patch(s,Segment_table); } else if (o instanceof ClassInfo) { ClassInfo ci = (ClassInfo) o; patch(ci,ClassInfo_dgmMetaMethods); patch(ci,ClassInfo_newMetaMethods); // ClassInfo -> MetaClass patch(ci.getStrongMetaClass()); patch(ci.getWeakMetaClass()); // patch(ci.getCachedClass()); } else // doesn't look like we need to visit this // if (o instanceof CachedClass) { // CachedClass cc = (CachedClass) o; // patch(cc.classInfo); // } else if (o instanceof MetaClassImpl) { MetaClassImpl mc = (MetaClassImpl) o; patch(mc,MetaClassImpl_myNewMetaMethods); patch(mc.getMethods()); // this directly returns mc.allMethods patch(mc,MetaClassImpl_newGroovyMethodsSet); patch(mc,MetaClassImpl_metaMethodIndex); } else if (o instanceof MetaMethodIndex) { MetaMethodIndex mmi = (MetaMethodIndex) o; for (Entry e : mmi.getTable()) { if (e!=null) { e.methods = patch(e.methods); e.methodsForSuper = patch(e.methodsForSuper); e.staticMethods = patch(e.staticMethods); } } mmi.clearCaches(); // in case anything was actually modified } else if (o instanceof GeneratedMetaMethod) { // the actual patch logic. GeneratedMetaMethod gm = (GeneratedMetaMethod) o; MetaMethod replace = overrides.get(new Key(gm)); if (replace!=null) { // we found a GeneratedMetaMethod that points to DGM that needs to be replaced! return replace; } } else // other collection structure that needs to be recursively visited if (o instanceof Object[]) { Object[] a = (Object[])o; for (int i=0; i<a.length; i++) { a[i] = patch(a[i]); } } else if (o instanceof List) { List l = (List)o; ListIterator i = l.listIterator(); while (i.hasNext()) { Object x = i.next(); Object y = patch(x); if (x!=y) i.set(y); } } else if (o instanceof FastArray) { FastArray a = (FastArray) o; for (int i=0; i<a.size(); i++) { Object x = a.get(i); Object y = patch(x); if (x!=y) a.set(i,y); } } else if (o instanceof Set) { Set s = (Set)o; for (Object x : s.toArray()) { Object y = patch(x); if (x!=y) { s.remove(x); s.add(y); } } } return o; } /** * Patch a field of an object that's not directly accessible. */ private void patch(Object o, Field f) { try { Object x = f.get(o); Object y = patch(x); if (x!=y) f.set(o,y); } catch (IllegalAccessException e) { throw new AssertionError(e); // we make this field accessible } } private Field field(Class<?> owner, String field) { try { Field f = owner.getDeclaredField(field); f.setAccessible(true); return f; } catch (NoSuchFieldException e) { // TODO: warn return null; } } static { List<MetaMethod> methods = new ArrayList<MetaMethod>(); for (CachedMethod m : ReflectionCache.getCachedClass(CpsDefaultGroovyMethods.class).getMethods()) { if (m.isStatic() && m.isPublic()) { CachedClass[] paramTypes = m.getParameterTypes(); if (paramTypes.length > 0) { methods.add(new NewInstanceMetaMethod(m)); } } } new DGMPatcher(methods).patch(); } /** * No-op method to ensure the static initializer has run. */ public static void init() {} }
/** * State of the domain; to be subclassed and filled by respective domains. * * @author Jimmy */ public abstract class PDDLState implements ICloneable { @Override public abstract PDDLState clone(); public void dump() { dump(false); } public void dump(boolean includeStatic) { dump(includeStatic, false, 80); } public abstract boolean isSet(PDDLPredicate predicate); public abstract void setDynamic(StateCompact dynamicPartOfTheState); public abstract IStorage[] getStorages(); /** * NOT A CLONE, if you want to persist the state, you have to {@link StateCompact#clone()} it! * @return */ public abstract StateCompact getDynamic(); public abstract void dump(boolean includeStatic, boolean includeEmpty, int maxLineLength); public static <T extends PDDLPredicate> void dumpStorage(IStorage<T> storage, boolean includeEmpty, int maxLineLength) { List<T> predicates = new ArrayList<T>(); storage.getAll(predicates); if (!includeEmpty && predicates.size() == 0) return; System.out.println(storage.getName() + ":"); predicates.sort(PDDLPredicate.COMPARATOR); StringBuffer predicatesDump = new StringBuffer(); PDDLPredicate.dumpPredicates(predicatesDump, predicates, maxLineLength, " "); System.out.println(predicatesDump.toString()); } public static <T extends PDDLPredicate> void diff(IStorage<T> source, IStorage<T> diffFrom, Collection<T> resultAdded, Collection<T> resultRemoved) { List<T> predicates1 = new ArrayList<T>(); List<T> predicates2 = new ArrayList<T>(); source.getAll(predicates1); diffFrom.getAll(predicates2); List<T> added = new ArrayList<T>(predicates1); added.removeAll(predicates2); List<T> removed = new ArrayList<T>(predicates2); removed.removeAll(predicates1); resultAdded.addAll(added); resultRemoved.addAll(removed); } public static void dumpDiff(IStorage[] sources, IStorage[] diffFroms, int maxLineLength) { List added = new ArrayList(); List removed = new ArrayList(); for (int i = 0; i < sources.length; ++i) { if (i < diffFroms.length) { diff(sources[i], diffFroms[i], added, removed); } } if (added.size() > 0) { System.out.println("+++ ADDED:"); added.sort(PDDLPredicate.COMPARATOR); StringBuffer result = new StringBuffer(); PDDLPredicate.dumpPredicates(result, added, maxLineLength, " "); System.out.println(result.toString()); } if (removed.size() > 0) { System.out.println("--- REMOVED:"); removed.sort(PDDLPredicate.COMPARATOR); StringBuffer result = new StringBuffer(); PDDLPredicate.dumpPredicates(result, removed, maxLineLength, " "); System.out.println(result.toString()); } } }
#include <stdio.h>
#include <stdlib.h>

#define max(A,B) ((A)>(B)?(A):(B))

int main()
{
    int passager, x, i, n, entre, sortie;

    /* number of stops */
    scanf("%d", &n);

    x = 0;        /* maximum passenger count seen so far */
    passager = 0; /* current passenger count */
    for (i = 1; i <= n; i++) {
        /* at each stop: how many passengers get off, then how many get on */
        scanf("%d %d", &sortie, &entre);
        passager += entre - sortie;
        x = max(x, passager);
    }
    printf("%d", x);
    return 0;
}
/** * * common base class that contains code to track the * source for * this instance (USER|GLOBAL) * . * * @version $Revision$ $Date$ */ @SuppressWarnings( "all" ) public class TrackableBase implements java.io.Serializable, java.lang.Cloneable { //-----------/ //- Methods -/ //-----------/ /** * Method clone. * * @return TrackableBase */ public TrackableBase clone() { try { TrackableBase copy = (TrackableBase) super.clone(); return copy; } catch ( java.lang.Exception ex ) { throw (java.lang.RuntimeException) new java.lang.UnsupportedOperationException( getClass().getName() + " does not support clone()" ).initCause( ex ); } } //-- TrackableBase clone() public static final String USER_LEVEL = "user-level"; public static final String GLOBAL_LEVEL = "global-level"; private String sourceLevel = USER_LEVEL; private boolean sourceLevelSet = false; public void setSourceLevel( String sourceLevel ) { if ( sourceLevelSet ) { throw new IllegalStateException( "Cannot reset sourceLevel attribute; it is already set to: " + sourceLevel ); } else if ( !( USER_LEVEL.equals( sourceLevel ) || GLOBAL_LEVEL.equals( sourceLevel ) ) ) { throw new IllegalArgumentException( "sourceLevel must be one of: {" + USER_LEVEL + "," + GLOBAL_LEVEL + "}" ); } else { this.sourceLevel = sourceLevel; this.sourceLevelSet = true; } } public String getSourceLevel() { return sourceLevel; } }
package mysql

import (
	"os"
	"path"
	"path/filepath"
	"strings"
	"time"

	"github.com/wal-g/storages/storage"
	"github.com/wal-g/tracelog"
	"github.com/wal-g/wal-g/internal"
	"github.com/wal-g/wal-g/utility"
)

type BinlogFetchSettings struct {
	startTs   time.Time
	endTS     *time.Time
	needApply bool
}

func (settings BinlogFetchSettings) GetLogsFetchInterval() (time.Time, *time.Time) {
	return settings.startTs, settings.endTS
}

func (settings BinlogFetchSettings) GetDestFolderPath() (string, error) {
	return internal.GetLogsDstSettings(internal.MysqlBinlogDstSetting)
}

func (settings BinlogFetchSettings) GetLogFolderPath() string {
	return BinlogPath
}

type BinlogFetchHandler struct {
	settings     BinlogFetchSettings
	applier      Applier
	afterFetch   func([]storage.Object) error
	abortHandler func(string) error
}

func (handler BinlogFetchHandler) AfterFetch(logs []storage.Object) error {
	return handler.afterFetch(logs)
}

func (handler BinlogFetchHandler) HandleAbortFetch(logFileName string) error {
	tracelog.InfoLogger.Printf("handling abort fetch over %s", logFileName)
	return handler.abortHandler(logFileName)
}

func (handler BinlogFetchHandler) FetchLog(logFolder storage.Folder, logName string) (bool, error) {
	tracelog.InfoLogger.Printf("fetching log file %s", logName)
	return handler.applier(logFolder, logName, handler.settings)
}

var indexFileCreator = func(logsFolderPath string, logs []storage.Object) error {
	tracelog.InfoLogger.Printf("creating index file %s", logsFolderPath)
	return createIndexFile(logsFolderPath, logs)
}

func NewBinlogFetchHandler(settings BinlogFetchSettings) BinlogFetchHandler {
	if settings.needApply {
		return BinlogFetchHandler{
			settings:     settings,
			applier:      StreamApplier,
			afterFetch:   func(objects []storage.Object) error { return nil },
			abortHandler: func(s string) error { return nil },
		}
	}
	return BinlogFetchHandler{
		settings: settings,
		applier:  FSDownloadApplier,
		afterFetch: func(objects []storage.Object) error {
			destLogFolderPath, err := settings.GetDestFolderPath()
			if err != nil {
				return err
			}
			return indexFileCreator(destLogFolderPath, objects)
		},
		abortHandler: func(logName string) error {
			dstPathFolder, err := settings.GetDestFolderPath()
			if err != nil {
				return err
			}
			return os.Remove(path.Join(dstPathFolder, logName))
		},
	}
}

func configureEndTs(untilDT string) (*time.Time, error) {
	if untilDT != "" {
		dt, err := time.Parse(time.RFC3339, untilDT)
		if err != nil {
			return nil, err
		}
		return &dt, nil
	}
	return internal.ParseTS(internal.MysqlBinlogEndTsSetting)
}

func FetchLogs(folder storage.Folder, backupUploadTime time.Time, untilDT string, needApply bool) error {
	endTS, err := configureEndTs(untilDT)
	if err != nil {
		return err
	}
	settings := BinlogFetchSettings{
		startTs:   backupUploadTime,
		endTS:     endTS,
		needApply: needApply,
	}
	handler := NewBinlogFetchHandler(settings)
	_, err = internal.FetchLogs(folder, settings, handler)
	return err
}

func getBackupUploadTime(folder storage.Folder, backup *internal.Backup) (time.Time, error) {
	var streamSentinel StreamSentinelDto
	err := internal.FetchStreamSentinel(backup, &streamSentinel)
	if err != nil {
		return time.Time{}, err
	}
	binlogs, _, err := folder.GetSubFolder(BinlogPath).ListFolder()
	if err != nil {
		return time.Time{}, err
	}
	var backupUploadTime time.Time
	for _, binlog := range binlogs {
		if strings.HasPrefix(binlog.GetName(), streamSentinel.BinLogStart) {
			backupUploadTime = binlog.GetLastModified()
		}
	}
	return backupUploadTime, nil
}

func isBinlogCreatedAfterEndTs(binlogTimestamp time.Time, endTS
*time.Time) bool { return endTS != nil && binlogTimestamp.After(*endTS) } func createIndexFile(logsFolder string, fetchedBinlogs []storage.Object) error { indexFile, err := os.Create(filepath.Join(logsFolder, "binlogs_order")) if err != nil { return err } for _, binlogObject := range fetchedBinlogs { _, err = indexFile.WriteString(utility.TrimFileExtension(binlogObject.GetName()) + "\n") if err != nil { return err } } return indexFile.Close() } func HandleBinlogFetch(folder storage.Folder, backupName string, untilDT string, needApply bool) error { backup, err := internal.GetBackupByName(backupName, utility.BaseBackupPath, folder) tracelog.ErrorLogger.FatalfOnError("Unable to get backup %+v\n", err) backupUploadTime, err := getBackupUploadTime(folder, backup) if err != nil { return err } return FetchLogs(folder, backupUploadTime, untilDT, needApply) }
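For context, a caller would wire HandleBinlogFetch to a configured storage folder roughly as follows. This is a hypothetical sketch: internal.ConfigureFolder and the backup name below are assumptions for illustration, based on wal-g's usual entry points, and are not part of this file.

package mysql

import (
	"github.com/wal-g/tracelog"
	"github.com/wal-g/wal-g/internal"
)

// runBinlogFetch is a hypothetical wiring example: it resolves the storage
// folder from the environment and downloads (needApply == false) every
// binlog uploaded after the named backup, up to the RFC3339 cutoff.
func runBinlogFetch() {
	folder, err := internal.ConfigureFolder()
	tracelog.ErrorLogger.FatalOnError(err)

	err = HandleBinlogFetch(folder, "stream_20210102T150405Z", "2021-01-03T00:00:00Z", false)
	tracelog.ErrorLogger.FatalOnError(err)
}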
Recently, troubled teenagers in residential treatment programs throughout America have been brutally marched to death, sexually abused, raped, electrocuted over 30 times a day (warning: upsetting video), restrained until nearly dead, and more. However, the gory details of several cases aren’t the worst part. The worst part, according to testimony before Congress and much evidence, is that the abuse is ongoing. The suggested scope of the matter is rarely addressed by current media or government. This oversight prompted Congressman McKeon to lament, “I don’t understand.” Another, George Miller, Senior Democrat on the House Committee on Education and the Workforce, says, “The last time this country witnessed somebody with a bag over their head and a noose around their neck, the world was horrified, the nation was embarrassed and it was at Abu-ghraib, to be told by this committee that this is a valid therapy, I guess, or a practice by somebody in the care of somebody else’s child . . . that this was acceptable, would horrify this nation again.” Miller incredulously spoke these iconic words after testimony by investigator Gregory Kutz. Kutz was head of the Government Accountability Office special investigations unit responsible for the 2007 report Residential Treatment Programs: Concerns Regarding Abuse and Death in Certain Programs for Troubled Youth. The report detailed tremendous abuse in private facilities for troubled kids, an issue acknowledged by few but desperately in need of attention. In one example, a 15-year-old girl lay dead on the road for 18 hours after vomiting water for two days, but the camp’s advertised “survival experts” did nothing and were not equipped to offer or obtain aid. Congressman Miller was tired of hearing these stories. The government report came on the heels of a groundbreaking investigative book by current Time magazine neuroscience journalist Maia Szalavitz. The book, entitled Help at Any Cost: How the Troubled-Teen Industry Cons Parents and Hurts Kids, offers an in-depth look at rampant abuse at private facilities for troubled teens. She traced the industry’s roots to a bizarre militant anti-drug cult called Synanon, where addicts were “brutally confronting one another” (a practice now known as “attack therapy”) to keep each other clean. Humiliation, including demeaning outfits such as diapers for “crybabies” and hooker skirts for “promiscuous” young ladies, has been employed by many relatively modern programs (such as the Elan School and the Bain Capital-owned CRC Health facility Mount Bachelor Academy), but the practice finds its roots in Synanon. The “humiliating, degrading and traumatizing . . . sexualized role play” within Mount Bachelor, as described by the Oregon Department of Human Services in 2009, mimicked Synanon and Straight, Inc. The cult ultimately degenerated into forced partner swapping/divorce, forced abortions, and forced vasectomies. In 1980, after the organization lost a $300,000 lawsuit for kidnapping, its paramilitary arm (the Imperial Marines) placed a de-rattled rattlesnake in the mailbox of the opposing lawyer. This action ultimately led to the group’s decline. Before the world was even rid of Synanon, its abusive practices were adopted by Straight, Inc., which ran from 1976 to 1993. Although Straight closed its last facility long ago, many may have heard of the organization’s new name: the Drug Free America Foundation, a prominent conservative anti-drug “think tank” to this day.
Straight was founded by Mel Sembler, a powerful Republican who served as Finance Chairman of the Republican National Committee from 1997 to 2000, as well as Mitt Romney’s national finance co-chair. Although Synanon was made up of families, and consequently youth, Straight was explicitly a youth drug rehab program. The abuse in Straight facilities was, perhaps, second to none. The mistreatment recorded is eerily similar to Synanon’s: The abuse included kidnapping, false imprisonment, beatings, sexual humiliation (boys were called “fags,” girls, “whores”), punitive use of isolation and restraint and bizarre incidents like teens being gagged with Kotex and held on the floor for hours until they wet or even soiled themselves. In every state where Straight had a facility, regulators and/or lawsuits eventually documented serious abuse. But after Straight closed in 1993, things did not much improve, and many other programs took up the black flag. The most notorious of these programs was the World Wide Association of Specialty Programs (WWASP). The organization has operated many facilities from 1998 until the present day. Although, allegedly, the organization now exists only to address “ongoing lawsuits,” at least one of the owners (Narvin Lichfield) still runs facilities such as Seneca Ranch. The list of abuses is nearly identical to that of its two predecessors, with at least one of the owners (Robert Lichfield, Utah Finance co-chair for Romney) borrowing pages from the Straight-influenced Provo Canyon School in Provo and Orem, Utah. Lichfield was employed there starting in 1977, and according to the courts, the facility demonstrated an ongoing history of abuse: Regardless of origin, condition or motivation, once arrived, each person during the beginning phases of the school program was locked in, isolated from the outside world, and whether anti-social, crippled or learning disabled, was subject to mandated physical standing day after day after day to promote “right thinking” and “social conformity.” Mail was censored. Visitors were discouraged. Disparaging remarks concerning the institution were prohibited and punished. To “graduate” from confinement to a more liberated phase, one had to “pass” a lie detector test relating to “attitude,” “truthfulness” and “future conduct.” Some failed to pass and remained in confinement for extended periods of time. PCS lost this lawsuit. More lawsuits emerged, including one in 2014 alleging “cruel and unusual punishment” and wrongful death. Allegations abounded against WWASP, including at its many international facilities. At Tranquility Bay, kids say they spent “13 hours a day, for weeks or months on end, lying on their stomachs in an isolation room, their arms repeatedly twisted to the breaking point.” At High Impact, authorities videotaped kids locked in dog cages and tied spread-eagle in the Mexican sun. In some cases, survivors report they were tied in their underwear, with no protection from fire ants, and they were threatened with a cattle prod if they moved. At a Samoan facility, Paradise Cove, the punishments involved turning the kids against each other. Of course, staff-to-student abuse flourished, including, “Beatings by staff. . . . But the worst consequence was ‘the Box,’ a three-foot-square windowless, wooden hut with a concrete floor, where teens were made to stay for days to months, subsisting on rice and water.
Sometimes, they were thrown in hog-tied and left.” The same article points out murders committed by former attendees, and speculates that the facility is to blame. I concur. A more damaged bunch I have never met than the poor now-adults who attended this wretched place. Finally, one notable lawsuit against WWASP had 357 plaintiffs alleging abuse, including (and this is only a small portion; the list goes on for pages): • Denial of proper medical and dental care and treatment • Denial of an even minimally sufficient education • Kicked, beaten, thrown and slammed to the ground • Forced to lie in, or wear, urine and feces as one method of punishment • Sexual abuse, which included forced sexual relations and acts of fondling and masturbation performed on them • Forced to eat their own vomit (Author’s note: this is a weirdly common program practice) • Threatened severe punishment, including death, if they told anyone of their abuses and poor living conditions • And much more WWASP ultimately ebbed due to bad press and legal battles over alleged wrongful death, and the owners distanced themselves as much as they could. Although WWASP businessmen still run facilities, they are thankfully few. After hearing this for the first time, one might wonder if I cherry-picked the worst of the worst, or suspect the abuse to be mostly in the past. Unfortunately, although data is limited, what we know indicates the opposite. Concerning a pattern of widespread abuse, if we circle back to the Government Accountability Office reports, we learn that the 2007 reports (there are at least four of them: Abuse/Death, Abuse/Death/Marketing, Oversight Gaps, Seclusion and Restraint) focused on two objectives: • Verify whether allegations of abuse and death at residential treatment programs are widespread • Examine the facts and circumstances surrounding selected closed cases where a teenager died while enrolled in a private program. The results are upsetting. GAO found thousands of allegations of abuse. GAO could not identify a more concrete number of allegations because it could not locate a single website, federal agency, or other entity that collects comprehensive nationwide data. Later, they address the second question. During 2005 alone, 33 states reported 1,619 staff members involved in incidents of abuse in residential programs. Furthermore, it’s worth noting that 1,619 staff members, not incidents, were recorded. We can assume staff members participated in more than one instance of abuse before the state caught them. Also, the 2009 reports demonstrated that states don’t have adequate laws and aren’t keeping track. It becomes abundantly clear why seventeen states are not represented in the above statistics. GAO found no federal laws restricting the use of seclusion and restraints. . . . Nineteen states have no laws or regulations related to the use of seclusions or restraints in schools. Seven states place some restrictions of the use of restraints, but do not regulate seclusions . . . while nineteen require parents to be notified after restraints have been used. Two states require annual reporting on the use of restraints. The combined GAO investigations demonstrate core components of the abuse pandemic.
Those advocating for youth locked in programs already knew, and the 2009 report confirms, that improper restraint by under-trained staff, and unnecessary punitive restraint in non-emergencies, are significant causes of death. (Click to access example one, example two, and example three.) Poorly trained staff, poor conditions, improper nutrition, a possible profit motive, a lack of evidence-based treatment, and other factors predictably result in abuse of our most vulnerable and impressionable. Regrettably, some information is dated, but as the GAO tirelessly points out, nobody is keeping track. Although congressmen expressed outrage, no law was passed, despite the valiant efforts by George Miller and others to pass H.R. 922 (111th): Stop Child Abuse in Residential Programs for Teens Act of 2009. Currently, a third iteration of the failed bill is in Congress, but certain committee members are blocking it. To solve the problem and get more data, we need to pass a national law. So, what can we do today to effect change? We can educate ourselves. (Check out the bill texts: Senate, House.) We can support the bills by contacting our local congresspeople. (To support the bills, click here: Senate, House). We can support efforts to shut down the Judge Rotenberg Center, the only facility in the United States, maybe the world, that openly uses electric shock (documented: 31 shocks in one sitting) to punish misbehavior; you know, kids are punished for bad stuff, like refusing to remove their jackets. Next, we can work to correct the idea that kids in facilities are “bad kids” or deserve “tough love.” Many are in facilities because they are gay, or often simply because they’re not getting along with their parents. Although not a lot of facilities advertise it anymore, “correcting” homosexuality is a common practice in religious facilities (especially in Utah and Missouri) and in non-religious facilities (or at least they aren’t advertised as religious; this is particularly true in Utah). Further, kids in many facilities often have misdiagnosed or undiagnosed mental health problems, and diagnosed or not, many facilities do little to help kids with mental health issues. “Faith-based” programs in particular often ignore diagnosable problems with documented mitigations and solutions. It would also help if we could demonstrate to the conservative community that this isn’t a states’ rights issue falling under the purview of education. Rather, it’s an interstate commerce and human rights issue. Further, when states violate the rights of other states’ citizens, we have a states’ rights issue the conservative community should firmly stand behind. We aren’t trying to stifle the states’ ability to manage education. Instead, we are trying to make sure children’s rights and dignities are maintained. I think most political parties will agree on the basics at a federal level, if only we correctly frame the problem. Finally, education is our best tool. Check back in to learn more about the particulars: What open facilities and networks are abusive? How does the abuse continue? Who is responsible? What parties stand to gain? I will do my best to answer these questions in upcoming articles so we are informed and prepared to fight the good fight.

***

Ricky Linder is a writer from the Colorado Springs area.
package org.onehippo.forge.externalresource.reports.plugins.statistics;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import nl.uva.mediamosa.model.StatsPopularcollectionsType;

/**
 * @version $Id$
 */
public class MMPopularCollectionsStatisticsProvider
        extends MediaMosaStatisticsProvider<StatsPopularcollectionsType> {

    protected enum ColumnName {collId, ownerId, title, description, created, count}

    public MMPopularCollectionsStatisticsProvider(final Map<String, String> statisticsServiceParameters) {
        super(statisticsServiceParameters);

        itemColumnMap.put(ColumnName.collId.name(), new StringPropertyColumn("collId"));
        itemColumnMap.put(ColumnName.ownerId.name(), new StringPropertyColumn("ownerId"));
        itemColumnMap.put(ColumnName.title.name(), new StringPropertyColumn("title"));
        itemColumnMap.put(ColumnName.description.name(), new StringPropertyColumn("description"));
        itemColumnMap.put(ColumnName.created.name(), new DatePropertyColumn("created"));
        itemColumnMap.put(ColumnName.count.name(), new StringPropertyColumn("count"));
    }

    // We override this to let super deal with all MediaMosa labels.
    // If we don't, we must provide a properties file named after this class.
    @Override
    protected String getResourceValue(String key) {
        return super.getResourceValue(key);
    }

    @Override
    public List<StatsPopularcollectionsType> getListData() {
        try {
            return service.getStatsPopularCollections();
        } catch (Exception e) {
            log.error("Error invoking MediaMosa service.", e);
        }
        return new ArrayList<StatsPopularcollectionsType>();
    }

    @Override
    public Map<String, Long> getChartData() {
        return null;
    }
}
import numpy as np

# `r` is assumed to be the project's random-number helper module
# (e.g. opytimizer.math.random), which provides generate_uniform_random_number().
import opytimizer.math.random as r


def weighted_wheel_selection(weights):
    """Roulette-wheel selection: returns an index with probability
    proportional to its weight, or None if no segment is hit."""
    # The cumulative sums partition the wheel into segments, one per weight.
    cumulative_sum = np.cumsum(weights)

    # Spin the wheel: a uniform random point in [0, total weight).
    prob = r.generate_uniform_random_number() * cumulative_sum[-1]

    # The first segment whose cumulative sum exceeds the point is selected.
    for i, c_sum in enumerate(cumulative_sum):
        if c_sum > prob:
            return i

    return None
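# A minimal usage sketch (assuming the imports above resolve): with weights
# [0.1, 0.2, 0.7], index 2 should come back roughly 70% of the time.
if __name__ == "__main__":
    counts = [0, 0, 0]
    for _ in range(1000):
        counts[weighted_wheel_selection([0.1, 0.2, 0.7])] += 1
    print(counts)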
package cl.tiocomegfas.bitarray;

/**
 * ALBA research group,
 * Project 2030 ...
 *
 * @author <NAME>, <NAME> and <NAME>
 * Based on BitSequenceRG from libcds, by <NAME>
 * https://github.com/fclaude/libcds/blob/master/src/static/bitsequence/BitSequenceRG.cpp
 */
public class RankSelect {

    private static final int WORD_SIZE = 64;

    /** mask for obtaining the first 6 bits (x & mask63 == x % 64) */
    private static final long mask63 = 63L;

    private final long length;
    private final int factor;
    private final int s;
    private final long ones;
    private final long[] bits;
    private long[] Rs; // superblock array

    /**
     * Creates a static bit array with support for the rank and select operations.
     *
     * @param ba bit array to clone and operate on
     */
    public RankSelect(BitArray ba) {
        this(ba, 20);
    }

    /**
     * Creates a static bit array with support for the rank and select operations.
     *
     * @param ba     bit array to clone and operate on
     * @param factor determines the redundancy of the structure:
     *               factor=2 gives 50% redundancy,
     *               factor=3 gives 33%,
     *               factor=4 gives 25%,
     *               factor=20 gives 5%
     */
    public RankSelect(BitArray ba, int factor) {
        this.length = ba.length();
        bits = ba.bits.clone();
        if (factor == 0) factor = 20; // apply the default before storing the field,
        this.factor = factor;         // since buildRank() below relies on this.factor
        s = WORD_SIZE * factor;
        buildRank();
        ones = rank1(length - 1);
    }

    private void buildRank() {
        int num_sblock = (int) (length / s); // +1 because position zero is included
        Rs = new long[num_sblock + 5];
        int j;
        Rs[0] = 0;
        for (j = 1; j <= num_sblock; j++) {
            Rs[j] = Rs[j - 1];
            Rs[j] += BuildRankSub((j - 1) * factor, factor);
        }
    }

    private long BuildRankSub(int ini, int bloques) {
        long rank = 0, aux;
        for (int i = ini; i < ini + bloques; i++) {
            if (i < bits.length) {
                aux = bits[i];
                rank += Long.bitCount(aux);
            }
        }
        return rank; // returns the number of 1s in the interval
    }

    public long numberOfOnes() {
        return ones;
    }

    /**
     * Returns the 0/1 value of the i-th position of the bit array.
     */
    public boolean access(long pos) {
        if (pos < 0) throw new IndexOutOfBoundsException("pos < 0: " + pos);
        if (pos >= length) throw new IndexOutOfBoundsException("pos >= length():" + pos);
        return (bits[(int) (pos / WORD_SIZE)] & (1L << (pos % WORD_SIZE))) != 0;
    }

    /**
     * Returns the number of 1s up to and including pos.
     *
     * @throws IndexOutOfBoundsException if pos is outside the range [0..length)
     */
    public long rank1(long pos) {
        if (pos < 0) throw new IndexOutOfBoundsException("pos < 0: " + pos);
        if (pos >= length) throw new IndexOutOfBoundsException("pos >= length():" + pos);
        long i = pos + 1;
        int p = (int) (i / s);
        long resp = Rs[p];
        int aux = p * factor;
        for (int a = aux; a < i / WORD_SIZE; a++) resp += Long.bitCount(bits[a]);
        resp += Long.bitCount(bits[(int) (i / WORD_SIZE)] & ((1L << (i & mask63)) - 1L));
        return resp;
    }

    /**
     * Returns the position in the bit array at which the i-th 1 occurs.
     *
     * @param i
     * @return
     */
    public long select1(long i) {
        long x = i;
        // returns i such that x=rank(i) && rank(i-1)<x or n if that i not exist
        // first binary search over first level rank structure
        // then sequential search using popcount over a word
        // then sequential search using popcount over a byte
        // then sequential search bit by bit
        if (i <= 0) throw new IndexOutOfBoundsException("i <= 0: " + i);
        if (i > ones) throw new IndexOutOfBoundsException("i > amount of ones:" + i);

        // binary search over first level rank structure
        int l = 0, r = (int) (length / s);
        int mid = (l + r) / 2;
        long rankmid = Rs[mid];
        while (l <= r) {
            if (rankmid < x) l = mid + 1;
            else r = mid - 1;
            mid = (l + r) / 2;
            rankmid = Rs[mid];
        }

        // sequential search using popcount over a word
        int left = mid * factor;
        x -= rankmid;
        long j = bits[left];
        int onesJ = Long.bitCount(j);
        while (onesJ < x) {
            x -= onesJ;
            left++;
            if (left > bits.length) return length;
            j = bits[left];
            onesJ = Long.bitCount(j);
        }

        // sequential search using popcount over a byte
        left = left * WORD_SIZE;
        rankmid = Long.bitCount(j);
        if (rankmid < x) {
            j = j >>> 8;
            x -= rankmid;
            left += 8;
            rankmid = Long.bitCount(j);
            if (rankmid < x) {
                j = j >>> 8;
                x -= rankmid;
                left += 8;
                rankmid = Long.bitCount(j);
                if (rankmid < x) {
                    j = j >>> 8;
                    x -= rankmid;
                    left += 8;
                }
            }
        }

        // then sequential search bit by bit
        while (x > 0) {
            if ((j & 1) > 0) x--;
            j = j >>> 1;
            left++;
        }
        return left - 1;
    }

    /**
     * Returns the number of 0s up to and including pos.
     *
     * @param pos position up to which zeros are counted
     * @return total number of zeros up to position pos
     */
    public long rank0(long pos) {
        return pos - rank1(pos) + 1;
    }

    /**
     * Returns the position in the bit array at which the i-th 0 occurs.
     *
     * @param i ordinal number of the zero sought
     * @return position of the i-th zero
     */
    public long select0(long i) {
        long x = i;
        // returns i such that x=rank_0(i) && rank_0(i-1)<x or exception if that i not exist
        // first binary search over first level rank structure
        // then sequential search using popcount over a word
        // then sequential search using popcount over a byte
        // then sequential search bit by bit
        if (i <= 0) throw new IndexOutOfBoundsException("i < 1: " + i);
        if (i > length - ones) throw new IndexOutOfBoundsException("i > amount of 0:" + i);

        // binary search over first level rank structure
        if (x == 0) return 0;
        int l = 0, r = (int) (length / s);
        int mid = (l + r) / 2;
        long rankmid = mid * factor * WORD_SIZE - Rs[mid];
        while (l <= r) {
            if (rankmid < x) l = mid + 1;
            else r = mid - 1;
            mid = (l + r) / 2;
            rankmid = mid * factor * WORD_SIZE - Rs[mid];
        }

        // sequential search using popcount over a word
        int left = mid * factor;
        x -= rankmid;
        long j = bits[left];
        int zeros = WORD_SIZE - Long.bitCount(j);
        while (zeros < x) {
            x -= zeros;
            left++;
            if (left > bits.length) return length;
            j = bits[left];
            zeros = WORD_SIZE - Long.bitCount(j);
        }

        // sequential search using popcount over a byte
        left = left * WORD_SIZE;
        rankmid = WORD_SIZE - Long.bitCount(j);
        if (rankmid < x) {
            j = j >> 8;
            x -= rankmid;
            left += 8;
            rankmid = WORD_SIZE - Long.bitCount(j);
            if (rankmid < x) {
                j = j >> 8;
                x -= rankmid;
                left += 8;
                rankmid = WORD_SIZE - Long.bitCount(j);
                if (rankmid < x) {
                    j = j >> 8;
                    x -= rankmid;
                    left += 8;
                }
            }
        }

        // then sequential search bit by bit
        while (x > 0) {
            if (j % 2 == 0) x--;
            j = j >> 1;
            left++;
        }
        left--;
        if (left > length) return length;
        return left;
    }

    /**
     * Returns the smallest index i >= start such that access(i) is true.
     *
     * @param start
     * @return position in the bit array of the next 1 at or after start;
     *         if no such 1 exists, returns the length in bits of the sequence
     */
    public long selectNext1(long start) {
        if (start < 0) throw new IndexOutOfBoundsException("start < 0: " + start);
        if (start >= length) throw new IndexOutOfBoundsException("start >= length:" + start);
        long count = start;
        long des;
        long aux2;
        des = (int) (count % WORD_SIZE);
        aux2 = bits[(int) (count / WORD_SIZE)] >>> des;
        if (aux2 != 0) {
            return count + Long.numberOfTrailingZeros(aux2);
        }
        for (int i = (int) (count / WORD_SIZE) + 1; i < bits.length; i++) {
            aux2 = bits[i];
            if (aux2 != 0) {
                return i * WORD_SIZE + Long.numberOfTrailingZeros(aux2);
            }
        }
        return length;
    }

    /**
     * Returns the largest index i strictly before start such that access(i) is true.
     *
     * @param start
     * @return the position of the 1 preceding start; -1 if there is none
     */
    public long selectPrev1(long start) {
        // returns the position of the previous 1 bit before start.
        if (start < 0) throw new IndexOutOfBoundsException("start < 0: " + start);
        if (start >= length) throw new IndexOutOfBoundsException("start >= length:" + start);
        if (start == 0) return -1;
        int i = (int) (start / WORD_SIZE);
        int offset = (int) (start % WORD_SIZE);
        long mask = 0xffffffffffffffffL; // 64 ones
        long aux2 = bits[i] & (mask >>> (WORD_SIZE - offset));
        if (aux2 != 0) {
            return i * WORD_SIZE + 63 - Long.numberOfLeadingZeros(aux2);
        }
        for (int k = i - 1; k >= 0; k--) {
            aux2 = bits[k];
            if (aux2 != 0) {
                return k * WORD_SIZE + 63 - Long.numberOfLeadingZeros(aux2);
            }
        }
        return -1;
    }

    /**
     * Returns the smallest index i >= start such that access(i) is false.
     *
     * @param start
     * @return position of the next 0 at or after start; if no such 0 exists,
     *         returns the length in bits of the sequence
     */
    public long selectNext0(long start) {
        if (start < 0) throw new IndexOutOfBoundsException("start < 0: " + start);
        if (start >= length) throw new IndexOutOfBoundsException("start >= length:" + start);
        long count = start;
        long des;
        long aux2;
        des = (int) (count % WORD_SIZE);
        aux2 = ~bits[(int) (count / WORD_SIZE)] >>> des;
        if (aux2 != 0) {
            return count + Long.numberOfTrailingZeros(aux2);
        }
        for (int i = (int) (count / WORD_SIZE) + 1; i < bits.length; i++) {
            aux2 = ~bits[i];
            if (aux2 != 0) {
                return i * WORD_SIZE + Long.numberOfTrailingZeros(aux2);
            }
        }
        return length;
    }

    /**
     * Returns the largest index i strictly before start such that access(i) is false.
     *
     * @param start
     * @return the position of the 0 preceding start; -1 if there is none
     */
    public long selectPrev0(long start) {
        // returns the position of the previous 0 bit before start.
        if (start < 0) throw new IndexOutOfBoundsException("start < 0: " + start);
        if (start >= length) throw new IndexOutOfBoundsException("start >= length:" + start);
        if (start == 0) return -1;
        int i = (int) (start / WORD_SIZE);
        long offset = (start % WORD_SIZE);
        long mask = 0xffffffffffffffffL; // 64 ones
        long aux2 = ~bits[i] & (mask >>> (WORD_SIZE - offset));
        if (aux2 != 0) {
            return i * WORD_SIZE + 63 - Long.numberOfLeadingZeros(aux2);
        }
        for (int k = i - 1; k >= 0; k--) {
            aux2 = ~bits[k];
            if (aux2 != 0) {
                return k * WORD_SIZE + 63 - Long.numberOfLeadingZeros(aux2);
            }
        }
        return -1;
    }

    /**
     * Returns the number of bits in the bit array.
     */
    public long length() {
        return length;
    }

    /**
     * Returns the size of the structure in bytes.
     */
    public long size() {
        long bitmapSize = (bits.length * WORD_SIZE) / 8 + 4;
        long sbSize = Rs.length * WORD_SIZE / 8 + 4;
        // fields: long length and ones = 2*8 bytes;
        // int factor and s = 2*4 bytes;
        // references (pointers) to the arrays Rs and bits = 2*8 bytes (64-bit word RAM)
        long otros = 8 + 8 + 4 + 4 + 8 + 8;
        return bitmapSize + sbSize + otros;
    }

    @Override
    public String toString() {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < length; i++) {
            out.append(access(i) ? "1" : "0");
        }
        return out.toString();
    }

    public long numberOfZeroes() {
        return length - ones;
    }
}
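// A minimal usage sketch (not part of the original file). BitArray is the
// companion class in this repository; its constructor signature and bit-setting
// method name below are assumed, not confirmed by the code above.
class RankSelectDemo {
    public static void main(String[] args) {
        BitArray ba = new BitArray(128);   // hypothetical: 128-bit array, all zeros
        ba.setBit(3);                      // hypothetical setter
        ba.setBit(64);

        RankSelect rs = new RankSelect(ba);
        System.out.println(rs.rank1(64));   // 2: two 1s at or before position 64
        System.out.println(rs.select1(2));  // 64: position of the second 1
        System.out.println(rs.access(3));   // true
    }
}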
from collections import namedtuple


async def urls(self):
    """Fetches the view and edit URLs for this page, returned as a named tuple."""
    url_tuple = namedtuple("WikiURLs", ["view", "edit"])
    urls = await self.wiki.http.get_urls(self.title)
    return url_tuple(urls[0], urls[1])
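# A hypothetical call site, assuming `page` is an instance of the owning class:
#
#     urls = await page.urls()
#     print(urls.view, urls.edit)   # attribute access thanks to the named tuple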
import java.util.ArrayList;
import java.util.concurrent.Executor;

/**
 * Executor for tests.
 * @author Roman Mazur - Stanfy (http://stanfy.com)
 */
final class ControlledExecutor implements Executor {

  ArrayList<Runnable> commands = new ArrayList<>();

  @Override
  public void execute(Runnable command) {
    // Commands are recorded instead of run, so a test can trigger them at a chosen moment.
    commands.add(command);
  }

  void runAllAndClean() {
    for (Runnable r : commands) {
      r.run();
    }
    commands.clear();
  }
}
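// A sketch of how a test might drive the executor: work submitted via execute()
// is only queued, so the test decides exactly when it runs.
class ControlledExecutorDemo {
  public static void main(String[] args) {
    ControlledExecutor executor = new ControlledExecutor();
    executor.execute(() -> System.out.println("deferred work"));
    // Nothing has been printed yet; the Runnable is merely queued.
    executor.runAllAndClean();  // now "deferred work" is printed
  }
}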
/**
 * <code>required .RBLMessage.Condition condition = 2;</code>
 */
public Builder mergeCondition(protobuf.RblProto.RBLMessage.Condition value) {
  if (conditionBuilder_ == null) {
    if (((bitField0_ & 0x00000002) == 0x00000002) &&
        condition_ != protobuf.RblProto.RBLMessage.Condition.getDefaultInstance()) {
      // A condition is already set: merge the new value into it field by field.
      condition_ = protobuf.RblProto.RBLMessage.Condition.newBuilder(condition_)
          .mergeFrom(value).buildPartial();
    } else {
      condition_ = value;
    }
    onChanged();
  } else {
    conditionBuilder_.mergeFrom(value);
  }
  bitField0_ |= 0x00000002;
  return this;
}
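// A hypothetical call site for this generated builder method. Merging twice
// combines the set fields of both messages instead of overwriting the first:
//
//     RBLMessage.Builder builder = RBLMessage.newBuilder();
//     builder.mergeCondition(conditionA);  // conditionA, conditionB built elsewhere
//     builder.mergeCondition(conditionB);  // set fields of B are merged into A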
package com.trkj.framework.vo;

import com.baomidou.mybatisplus.annotation.TableField;
import com.baomidou.mybatisplus.annotation.TableId;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.io.Serializable;
import java.util.Date;

/**
 * Leave-approval view object (VO).
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class LeaveVo implements Serializable {

    private static final long serialVersionUID = 1L;

    // Only the primary key column is mapped with @TableId; all other columns
    // use @TableField, as MyBatis-Plus expects a single @TableId per entity.
    @ApiModelProperty(value = "Approval ID")
    @TableId("AUDITFLOW_ID")
    private Long auditflowId;

    @ApiModelProperty(value = "Title")
    @TableField("AUDITFLOW_TITLE")
    private String auditflowTitle;

    @ApiModelProperty(value = "Approval type")
    @TableField("AUDITFLOW_TYPE")
    private String auditflowType;

    @ApiModelProperty(value = "Applicant")
    @TableField("STAFF_NAME")
    private String staffName;

    @ApiModelProperty(value = "Application status")
    @TableField("AUDITFLOW_STATE")
    private Long auditflowState;

    @ApiModelProperty(value = "Approval detail ID")
    @TableField("AUDITFLOWDETAIL_ID")
    private Long auditflowdetailId;

    @ApiModelProperty(value = "Reviewer 1")
    @TableField("STAFF_NAME")
    private String staffName1;

    @ApiModelProperty(value = "Reviewer 2")
    @TableField("STAFF_NAME")
    private String staffName2;

    @ApiModelProperty(value = "Reviewer 3")
    @TableField("STAFF_NAME")
    private String staffName3;

    @ApiModelProperty(value = "Review remarks")
    @TableField("AUDITFLOWDETAI_REMARKS")
    private String auditflowdetaiRemarks;

    @ApiModelProperty(value = "Review date")
    @TableField("AUDITFLOWDETAI_DATE")
    private String auditflowdetaiDate;

    @ApiModelProperty(value = "Review status")
    @TableField("AUDITFLOWDETAI_STATE")
    private Long auditflowdeatistate;

    @ApiModelProperty(value = "Leave request ID")
    @TableField("LEAVE_ID")
    private Integer leaveId;

    @ApiModelProperty(value = "Department name")
    @TableField("DEPT_NAME")
    private String deptname;

    @ApiModelProperty(value = "Leave type")
    @TableField("LEAVE_TYPE")
    private String leaveType;

    @ApiModelProperty(value = "Leave reason")
    @TableField("LEAVE_MATTER")
    private String leaveMatter;

    @ApiModelProperty(value = "Remarks")
    @TableField("LEAVE_REMARKS")
    private String leaveRemarks;

    @ApiModelProperty(value = "Leave start time")
    @TableField("LEAVE_S_DATE")
    private Date leaveSDate;

    @ApiModelProperty(value = "Leave end time")
    @TableField("LEAVE_E_DATE")
    private Date leaveEDate;

    @ApiModelProperty(value = "Total leave hours")
    @TableField("LEAVE_TOTAL_DATE")
    private Integer leaveTotalDate;
}
def region_code(self, tidx: TileIdx_xy) -> str:
    """Renders an (x, y) tile index into a region-code string."""
    x, y = tidx
    return self.region_code_format.format(x=x, y=y)
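# A self-contained sketch of the same call, with a hypothetical format string
# (real instances take region_code_format from the owning class's configuration):
region_code_format = "x{x:+04d}y{y:+04d}"
x, y = (17, -4)
print(region_code_format.format(x=x, y=y))  # -> x+017y-004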
Minimally invasive strabismus surgery for horizontal rectus muscle reoperations

Aims: To study if minimally invasive strabismus surgery (MISS) is suitable for rectus muscle reoperations. Methods: The study presents a series of consecutive patients operated on by the same surgeon at Kantonsspital St Gallen, Switzerland, with a novel MISS rectus muscle reoperation technique. Surgery is done by applying two small radial cuts along the muscle insertion. Through the tunnel obtained after muscle separation from surrounding tissue, a recession, advancement or plication is performed. Results: In 62 eyes of 51 patients (age 35.4 (SD 16.3) years) a total of 86 horizontal rectus muscles were reoperated. On average, the patients had 2.1 strabismus surgeries previously. Preoperative logMAR visual acuity was 0.38 (0.82) compared with 0.37 (0.83) at 6 months (p>0.1). On the first postoperative day, in the primary gaze position conjunctival and lid swelling and redness was hardly visible in 11 eyes, discrete in 15 eyes, moderate in 11 eyes and severe in 15 eyes. One corneal dellen and one corneal erosion occurred, which both quickly resolved. The preoperative deviation at distance for esodeviations (n = 15) of 12.5 (8.5)° decreased to 2.6 (7.8)° at 6 months (p<0.001). For near, a decrease from 12.0 (10.1)° to 2.9 (1.6)° was observed (p<0.001). The preoperative deviation at distance for exodeviations (n = 35) of −16.4 (8.5)° decreased to −7.9 (6.5)° at 6 months (p<0.005). For near, a decrease from −16.5 (11.4)° to −2.9 (1.5)° was observed (p<0.005). Within the first 6 months, only one patient had a reoperation. At month 6, in four patients a reoperation was planned or suggested by us because of unsatisfactory alignment. No patient experienced persistent diplopia or necessitated a reoperation because of double vision. Stereovision improved at month 6 compared with preoperatively (p<0.01). Conclusions: The study demonstrates that a small-cut, minimal dissection technique allows rectus muscle reoperations to be performed. The MISS technique seems to reduce conjunctival and lid swelling in the direct postoperative period.

Minimally invasive surgical procedures reduce tissue trauma, postoperative patient discomfort, hospital stay, working disability, and the economic impact of surgery. 1 2 They are now routine in many fields of surgery. In ophthalmology, the following minimally invasive procedures are in use: phacoemulsification for cataracts, 3 non-penetrating techniques 4 and miniature drainage implants 5 for glaucoma, transconjunctival approaches 6 and minimal buckling 7 for vitreoretinal surgery, endoscopic techniques for the lacrimal system, 8 and small incisions for lids. 9 In strabismus surgery, a smaller conjunctival incision improves the postoperative quality of life, cosmesis, and the function of the operated muscle. The opening size also influences the ease of performing revision surgery. The majority of surgeons use the limbal approach first described by Harms in 1949 10 and later popularised by von Noorden. 11 This approach allows one to easily perform primary or revision surgery in horizontal rectus muscles 11 12 (fig 1A,B). Several other conjunctival openings have been proposed, by Swan and Talbott, 13 Parks, 14 Velez 15 and Santiago et al. 16 In a previous study, I described a novel minimally invasive strabismus surgery (MISS) technique for rectus muscle recessions and plications and compared it with the usual limbal opening.
17 The MISS operation is performed by applying two small radial cuts along the muscle insertion. After the muscle is separated from its surrounding tissue, a recession or a plication is done through the resulting tunnel. MISS patients had better visual acuities and less lid swelling the day after surgery, indicating that the technique is superior in the direct postoperative period. A conjunctival opening situated at a reasonable distance from the limbus might decrease the incidence of corneal dellen formation and avoid a prolapse of the Tenon capsule. There is also evidence that non-limbal strabismus surgery affects the perilimbal blood supply less and may safeguard against anterior segment ischaemia in high-risk patients. 18 This study describes how the previously published MISS technique for primary rectus muscle surgery 17 can be adapted to perform rectus muscle reoperations with minimal anatomical disruption, and presents the results of the first series of patients.

Patients undergoing MISS horizontal reoperations

Inclusion criteria: All consecutive patients needing horizontal rectus muscle reoperations by the author between May 2003 and June 2007 were included. Exclusion criteria: Patients with excessive conjunctival scarring from previous surgery necessitating simultaneous conjunctival grafting, need for retroequatorial fixation sutures or muscle transpositions, simultaneous vertical rectus or obliquus muscle surgery, or strongly restricted passive motility. All patients had at least one complete orthoptic examination 5 days before the surgical procedure, on the first postoperative day and after 6 months (range 5-7 months) at the Department of Strabismology and Neuro-Ophthalmology, Kantonsspital, St Gallen, Switzerland. Between day 1 and month 6, only patients harbouring a complication or not referred by an ophthalmologist were seen at our department. The referring ophthalmologist followed the other patients. The schedule of follow-up visits in between was at day 10 (range 1-2 weeks) and week 4 (range 3-5 weeks).

Figure 1: Recession: (C) A limbal traction suture is applied to rotate the eyeball away from the field of surgery. Two small radial cuts are performed, one along the superior and one along the inferior muscle margin. With blunt Wescott scissors using the two cuts for access, the episcleral tissue is separated from the muscle sheath and the sclera, and the muscle is hooked. A meticulous dissection of the check ligaments and intramuscular membrane is performed. Two sutures are applied to the superior and inferior border of the muscle tendon as close as possible to the insertion. The tendon is detached using scissors. (D) After measurement of the amount of recession, the tendon is reattached with the two sutures to the sclera. (M, N) The surgical procedure is finished by applying two sutures to each of the two small cuts. Plication: (E) After applying a limbal traction suture and performing the two small cuts, two sutures are applied to the upper and lower borders of the muscle at the distance from the tendon insertion site corresponding to the plication amount. The sutures are passed at the superior and inferior tendon insertions respectively. (F) An iris spatula is inserted between the tendon and the sutures and the muscle is plicated. (M, N) The surgical procedure ends by applying two sutures to each of the two small cuts. Advancement: (G) After applying a limbal traction suture, the two small radial openings are created. The anterior margins of the cuts are at the level of the actual tendon insertion. (H) With blunt Wescott scissors using the two cuts for access, the episcleral tissue is separated from the muscle sheath and the sclera.

Outcome measures

The following parameters were registered: final alignment, binocular single vision, variations in vision, refraction, and number and types of complications and retreatments required during the first 6 months after surgery or at the 6-month postoperative visit. In patients with central fixation, squint angles were always measured with the alternating cover test. Otherwise, angles were determined by centralising the corneal reflex using prisms in front of the fixating eye (Krimsky test). On the first postoperative day, conjunctival and lid swelling and redness were determined in primary gaze position, when the cuts were covered by the eyelids. In eight eyes (12.9%) the quality of the slit-lamp photographs or the chart documentation was insufficient to classify the lid and conjunctival status. None of these eight patients had a conjunctival or lid abnormality at month 6. The following ordinal scale was used: redness and swelling of eyelid and conjunctiva not visible from 1 m = "hardly visible"; ptosis of not more than 1 mm and only minimal redness or swelling of conjunctiva visible from 1 m = "discrete"; immediate visibility of redness from 1 m or ptosis of more than 1 mm = "moderate"; conjunctival chemosis or subconjunctival haemorrhage, ptosis of more than 3 mm or lid haemorrhage = "severe".

Principle of revision surgery

If the actual deviation was secondary to a previous recession, a muscle advancement of the recessed muscle was always performed. Otherwise muscle reinforcements were performed by plications. All horizontal muscle procedures were combined; a recession was combined either with an ipsilateral plication or with an ipsilateral advancement. If necessary, usually in angles >25°, an additional recession was performed in the contralateral eye. Preoperatively, the planned intraoperative placement of the keyhole openings was usually determined using information about the type and amount of previous surgery. If not available, the site was established either preoperatively by locating the muscle insertion at the slit-lamp or intraoperatively by moving the eye using the traction suture. Apart from eyes with excessive scarring or with abundant Tenon, moving the eye frequently allows one to distinguish which vessels are conjunctival and which belong to the muscle, thus permitting the actual insertion site to be determined.

Schematic representation of the surgical technique for MISS rectus muscle reoperations

Surgery is performed with the operating microscope under general anaesthesia. There is no need for an assistant, since all surgical steps can be performed alone.

Recession

The technique is similar to the technique described in the previous article about MISS. 17 Therefore, only a brief description is given. First, a limbal traction suture (Silkam 6-0, B. Braun Medical, Switzerland) is applied to rotate the eyeball away from the field of surgery (fig 1C). At any time, direct contact of the traction suture with the cornea has to be avoided. Then, two small radial cuts are performed, one along the superior and one along the inferior muscle margin (fig 1C). The anterior margin of the cut is at the level of the actual tendon insertion.
The size of the cuts should be 1 mm less than the amount of the planned muscle displacement. With blunt Wescott scissors, using the two cuts for access, the episcleral tissue is separated from the muscle sheath and the sclera. When the borders of the muscles have been identified, the muscle is hooked. Now, a meticulous dissection of the check ligaments and intramuscular membrane is performed 6-7 mm back from the insertion. The resulting tunnel allows one to perform the recession. Two sutures (Vicryl 7-0, Ethicon, Switzerland) are applied to the superior and inferior border of the muscle tendon as close as possible to the insertion. Then, the tendon is detached using Wescott scissors (fig 1C). If necessary, haemostasis is performed. After measurement of the amount of recession, the tendon is reattached with the two sutures to the sclera (fig 1D). The surgical procedure is finished by applying two sutures (Vicryl Rapid 8-0, Ethicon, Switzerland) to each of the two small cuts (fig 1M,N).

Plication

The technique is similar to the technique already described in a previous MISS article. 17 Therefore, only a brief description is given. After applying a limbal traction suture (Silkam 6-0, B. Braun Medical, Switzerland) to rotate the eyeball away from the field of surgery and performing the two small cuts, two sutures (Vicryl 7-0, Ethicon, Switzerland) are applied to the upper and lower borders of the muscle at the distance from the tendon insertion site corresponding to the plication amount (fig 1E). Then, the sutures are passed at the superior and inferior tendon insertions respectively (fig 1E). An iris spatula is inserted between the tendon and the sutures and the muscle is plicated (fig 1F). The surgical procedure ends by applying two sutures (Vicryl Rapid 8-0, Ethicon, Switzerland) to each of the two small cuts (fig 1M,N).

Advancement

After applying a limbal traction suture (Silkam 6-0, B. Braun Medical, Switzerland) to rotate the eyeball away from the field of surgery, two small radial cuts are performed, one along the superior and one along the inferior muscle margin (fig 1G). The anterior margin of the cut is at the level of the actual tendon insertion. With blunt Wescott scissors, using the two cuts for access, the episcleral tissue is separated from the muscle sheath and the sclera (fig 1H). When the borders of the muscles have been identified, the muscle is hooked. Now, a meticulous dissection of the check ligaments and intramuscular membrane is performed 6-7 mm back from the insertion. The resulting tunnel allows one to perform the advancement (fig 1I). Two sutures (Vicryl 7-0, Ethicon, Switzerland) are applied to the superior and inferior border of the muscle tendon as close as possible to the insertion (fig 1J). Then, the tendon is detached using Wescott scissors (fig 1K). If necessary, haemostasis is performed. After measurement of the amount of advancement, the tendon is reattached with the two sutures to the sclera (fig 1L). In order to perform the reattachment without enlarging the opening size, the cut has to be displaced anteriorly using a forceps. The surgical procedure is finished by applying two sutures (Vicryl Rapid 8-0, Ethicon, Switzerland) to each of the two small cuts (fig 1M,N). If better visualisation of the operating site becomes necessary, the two cuts can be prolonged and joined at the limbus (fig 1O).
Postoperative management

At the end of surgery, TobraDex ointment (1 mg of dexamethasone and 3 mg of tobramycin per gram, 0.5% chlorobutanol) or Maxitrol ointment (polymyxin B sulfate 6000 units, neomycin sulfate 3500 units, dexamethasone 1.0 mg, methylparaben 0.05%, and propylparaben 0.01%) was applied. There was no need for an eye patch, apart from in the eye with a corneal erosion. For the first 2 weeks after surgery the following treatment was prescribed: TobraDex suspension (1 mg of dexamethasone and 3 mg of tobramycin per ml, 0.01% benzalkonium chloride) tid and TobraDex ointment in the evening, or Maxitrol suspension (polymyxin B sulfate 6000 units, neomycin sulfate 3500 units, dexamethasone 1.0 mg, and benzalkonium chloride 0.004%) tid and Maxitrol ointment in the evening.

Statistical methods

All comparisons were performed between preoperative and postoperative month 6. Binocular vision was compared using the Wilcoxon signed-rank test. Final alignment was determined with the t test. LogMAR visual acuities were analysed with the paired t test. Confidence intervals correspond to the 95% confidence level.

RESULTS

Table 1 shows the preoperative characteristics of MISS patients. Fifty-one out of 56 (90.9%) consecutive patients could be included in this study. Five (9.1%) patients were lost to follow-up. None of the lost patients had an adverse outcome in the first four postoperative weeks. In 62 eyes of 51 patients (age 35.4 (16.3) years, range 4-72 years) a total of 75 horizontal rectus muscles were reoperated. On average, the patients had 2.1 strabismus surgeries previously. Twelve patients had an esotropia, two an esophoria with asthenopic complaints, 35 an exotropia and two an exophoria with asthenopic symptoms. No scleral penetration or other serious complication occurred. In one eye, the two small cuts had to be enlarged to a limbal opening in order to better visualise the operating site. This conversion was not associated with an adverse outcome. On the first postoperative day, in primary gaze position conjunctival and lid swelling and redness were hardly visible in 11 eyes (fig 2A), discrete in 15 eyes (fig 2D), moderate in 11 eyes (fig 2E), and severe in 15 eyes (fig 2F). In one patient with severe swelling and pain on eye movements, an infection was suspected, and oral antibiotics were administered. The swelling and pain resolved within 1 day. One corneal dellen and one corneal erosion occurred, which both resolved quickly. The corneal erosion was secondary to the contact of the traction suture with the cornea. The preoperative deviation at distance for esodeviations (n = 15) of 12.5 (8.5)° decreased to 2.6 (7.8)° at 6 months (p<0.001). For near, a decrease from 12.0 (10.1)° to 2.9 (1.6)° was observed (p<0.001). The preoperative deviation at distance for exodeviations (n = 35) of −16.4 (8.5)° decreased to −7.9 (6.5)° at 6 months (p<0.005). For near, a decrease from −16.5 (11.4)° to −2.9 (1.5)° was observed (p<0.005). Preoperatively, best-corrected logMAR visual acuity was 0.38 (0.82) compared with 0.37 (0.83) at 6 months (p>0.1). Within the first 6 months, only one patient had a reoperation. At month 6, in four patients a reoperation was planned or suggested by us because of unsatisfactory alignment. No patient experienced persistent diplopia or necessitated a reoperation because of double vision. Two patients had an increase in conjunctival redness compared with preoperatively. Stereovision improved at month 6 compared with preoperatively (p<0.01).
In the majority of patients, the corneal astigmatism remained unchanged at month 6. In nine patients, the following changes were seen: 0.25 D in three patients, 0.5 D in five patients, and 0.75 D in one patient. The average dose-response relationship for distance angles at month 6 was 1.12° (CI 1.00 to 1.23°) for esodeviations and 1.15° (CI 1.03 to 1.27°) for exodeviations per millimetre of muscle displacement.

DISCUSSION

Minimally invasive techniques are becoming important in almost every field of surgery, including ophthalmic surgery. Instrument miniaturisation, endoillumination and optical improvements have changed and will continue to strongly influence the way in which surgery is performed. In this study, the results of horizontal rectus muscle revision surgeries with a novel MISS technique in 51 patients have been presented. Squint surgery is performed through two small radial cuts along the superior and inferior muscle margin (fig 1). Despite restricted openings, scarring surrounding the insertions could be adequately detached and, if necessary, resected. Since patients with previous posterior fixation sutures were excluded in this patient series, no patient harboured excessive posterior scarring. For scars lying further back, a considerable enlargement of the small cuts is necessary. Postoperatively, these openings remain covered by the eyelids apart from during excessive upgaze and excessive lateral gaze, which minimises the postoperative visibility of the surgical procedure. If better visibility of the operative site is necessary, this type of cut can be prolonged anteriorly or even joined with a limbal cut (fig 1O). In this patient series, this was necessary for one eye. Conversion was not associated with an adverse course. The whole surgical procedure can be performed with the same instruments used for the usual limbal approach. There is no need for an assistant. Despite a strongly restricted opening, the MISS technique allowed adequate muscle exposure to perform recessions, plications and advancements, thus minimising anatomical disruption. Two to three weeks after surgery, the eyes often looked normal or nearly normal in the primary gaze position. In a few eyes, this was already achieved on the first postoperative day. A conjunctival opening situated far away from the cornea should decrease the incidence of corneal dellen formation, avoid a prolapse of the Tenon capsule, and minimise postoperative discomfort. There is also increasing evidence that non-limbal strabismus surgery affects the perilimbal blood supply less and may safeguard against anterior-segment ischaemia in high-risk patients. However, because of the low incidence of such complications, only larger studies will be able to show if such complications will be less frequent with the new technique. MISS revision surgery also avoids further traumatising already scarred perilimbal conjunctiva and, in such cases, considerably shortens operating times. Although not objectively measured, we had the impression that in the immediate postoperative period, patient discomfort was reduced. This is supported by the fact that only one patient needed eye patching after surgery. At 6 months, only minimal scarring was found along the incision lines, and only in two patients was an increase in conjunctival redness observed. It could be assumed that this minimal cicatrisation might facilitate further reoperations. Parks's fornix opening 14 also avoids corneal complications.
The advantage of Parks's technique is the better visualisation of the surgical site, while MISS can be performed without an assistant and also in older patients with inelastic conjunctiva. Although these results are promising, definitive superiority of MISS over the traditional limbal approach or Parks's fornix opening has to be proven by other reports, since increased incidences of rare complications could have been missed in this study, for example the frequency of endophthalmitis. 19 In summary, a new surgical technique for horizontal rectus muscle surgery has been presented, which seems to be safe and more rational than previous openings. Placing the incision where the surgical procedure on the muscle occurs allows one to minimise the total opening size, to reduce postoperative discomfort, and possibly also to reduce hospital stay, working disability and complications related to limbal approaches.
from __future__ import division
import iotbx.cif
from libtbx import easy_pickle, smart_open
from libtbx.str_utils import show_string
import sys, os
op = os.path

def run(args):
  for f in args:
    try:
      file_object = smart_open.for_reading(file_name=f)
      miller_arrays = iotbx.cif.reader(file_object=file_object).as_miller_arrays()
    except KeyboardInterrupt:
      raise
    except Exception, e:
      print "Error extracting miller arrays from file: %s:" % (
        show_string(f))
      print " ", str(e)
      continue
    for miller_array in miller_arrays:
      miller_array.show_comprehensive_summary()
      print
    r, _ = op.splitext(op.basename(f))
    easy_pickle.dump(file_name=r+'_miller_arrays.pickle', obj=miller_arrays)

if (__name__ == "__main__"):
  run(args=sys.argv[1:])
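# A hypothetical invocation (the script and CIF file names are assumed); run
# under Python 2, since the script uses Python 2 print statements:
#
#     python extract_miller_arrays.py quartz.cif
#
# This prints a comprehensive summary per Miller array and writes the arrays
# to quartz_miller_arrays.pickle in the current directory.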
#ifndef WSPECTRUM_H
#define WSPECTRUM_H

#include <QtWidgets>
#include "SimpleFft.h"
#include "DataTypes.h"
#include <QImage>
#include <QMutex>

// the purpose of this class is to draw the nice spectrum
// and waterfall diagram people are used to from other SDR
// software packages.

#define WATERFALLWIDTH 32768
#define WATERFALLHEIGHT 512
#define WATERFALLNUANCES 384

class WSpectrum: public QWidget {
    Q_OBJECT
public:
    WSpectrum(QWidget* parent = nullptr);
    void setFFTsize(int fftsize);
    void onNewSamples(tIQSamplesBlock *pIqSamples);
    QSize sizeHint() const;
    int getLastFreq() { return mLastFreq; }
    void setDemodParams(int demodFreq, int demodBW, int raster, bool demodOn) {
        mDemodFreq = demodFreq;
        mDemodBW = demodBW;
        mRaster = raster;
        mDemodOn = demodOn;
    }

protected:
    void paintEvent(QPaintEvent *event) override;
    void resizeEvent(QResizeEvent *event) override;
    void mousePressEvent(QMouseEvent *event) override;
    void mouseReleaseEvent(QMouseEvent *event) override;
    void mouseMoveEvent(QMouseEvent *event) override;
    void wheelEvent(QWheelEvent *event) override;

private:
    tSComplex *mSampleBuf = nullptr;
    tSComplex *mSampleBufAccu = nullptr;
    double *mSpectrum = nullptr;
    double *mSpectrumPlot = nullptr;
    int mSampleBufLevel = 0;
    void updateWaterfall();
    int mFftSize = 32768;
    QImage *mSpectrumImage = nullptr;
    QImage *mWaterfallImage1 = nullptr;
    QImage *mWaterfallImage2 = nullptr;
    int mWaterfallLine = 0;
    SimpleFft* mFft = nullptr;
    int mFftAvg = 20;       // average the spectrum out over 20 fft calls
    int mFftCnt = mFftAvg;
    QColor mPalette[WATERFALLNUANCES];
    int mLeft = 0;
    int mRight = mFftSize;
    int mSampleRate = 0;
    int mCenterFreq = 0;
    int mGain = 0;
    int mLastFreq = 0;
    int mDemodFreq = 0;
    int mDemodBW = 0;
    int mRaster = 0;
    bool mDemodOn = false;
    QMutex mMutex;
};

#endif
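// A minimal usage sketch (not part of the header). It assumes a Qt application
// and that this header is saved as "WSpectrum.h"; tIQSamplesBlock's members are
// not shown above, so the sample-feeding step is left as a comment.
#include <QApplication>
#include "WSpectrum.h"   // assumed header file name

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    WSpectrum spectrum;
    spectrum.setFFTsize(32768);
    spectrum.setDemodParams(14205000, 2700, 100, true);  // freq, bandwidth, raster, demod on
    spectrum.show();
    // A sample source would call spectrum.onNewSamples(&block) as IQ blocks arrive.
    return app.exec();
}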
After Josh Hadley, a film-maker and journalist, referred to shooting the president on Twitter, he lost a job amid reported Secret Service scrutiny

Crackdown on man who trolled Trump went too far, free speech experts say

Donald Trump may be this country’s Twitter-troll-in-chief, but those tempted to troll him back, beware. A writer who has addressed about 30 tweets to Trump – some of them including off-color jokes about assassination – was investigated by the Secret Service under circumstances that press freedom experts are calling “troubling”. Josh Hadley, a 41-year-old freelance film journalist and podcaster from Sturgeon Bay, Wisconsin, told the Guardian he had been “trolling” Trump on Twitter for about a year. “I’ve been kind of mean, but I don’t think any of them can be construed as threats,” he said. “I’m trying to be funny. I was trying to get a reaction out of him.” All of his tweets to Trump are critical, but none appear to contain any credible threats. Hadley marked Trump’s inauguration day by tweeting a slightly altered quote from the Oliver Stone movie U Turn: “Thousands of people die every day … why can’t you be one of them?” On New Year’s Day, he wrote: “Do you think if I shot @RealDonaldTrump Jodie Foster would love me?” (John Hinckley Jr, who shot Ronald Reagan, wrote to the actor before the attack.) A single Twitter account with about 430 followers may seem insignificant on Twitter, where threads often become cesspools of harassment and abuse, but on Wednesday, Hadley received shocking news from one of his employers, the Grindhouse Channel. Hadley had a regular gig writing about 1,000 words a week for the Roku channel’s website. “It has come to our attention that you have been sending more than one electronic message or ‘Tweet’ to @RealDonaldTrump that can be construed as threatening over the past few months,” Hadley’s boss Darrin Uzynski wrote in an email. Sending threats to a sitting president or presidential candidate was a criminal offense, Uzynski added, and the company was severing its contract with him “upon advice of counsel”. Uzynski later told Hadley that the company had been contacted by the Secret Service, prompting the firing. While free speech experts say that it is not unusual for the Secret Service to contact people over online threats, the scope of the investigation into Hadley raises red flags. Uzynski said he had been called by a Secret Service agent, who told him about Hadley’s tweets and asked whether his drafts had been edited to remove “inflammatory” statements. “The word ‘inflammatory’ was used a lot,” Uzynski said. “They wanted to know if we cut anything out … if he had been attempting to publish things that were inflammatory.” Gregg Leslie, the legal defense director for the Reporters Committee for Freedom of the Press, said: “They have no business asking about what’s in a journalist’s work product or the earlier draft, when they have no reason to believe that there is a real threat ... It’s a significant leap too far.” The Secret Service refused to comment, citing a policy not to “confirm, deny, or comment on any investigation or whether one exists”. Aaron Mackey, a legal fellow at the Electronic Frontier Foundation, agreed that expanding an investigation into a tweet to cover “all other expressive activity” was “troubling”.
“In this case, we have an inquiry that is spinning far beyond the original speech, trying to get access to a journalistic work product that has not been published,” he said. “That raises a very clear potential first amendment issue when it comes to journalist’s ability to protect their own work product.” “Does this mean that anyone who makes a joke that works at a publication, that the publication then becomes a target?” Mackey asked. The question is of particular importance now, given the Trump administration’s open hostility to the press. Last Wednesday, chief White House strategist Steve Bannon railed against the media, which he said should “keep its mouth shut”, in an interview with the New York Times. For Hadley, the experience has been disquieting. “I’m afraid that any knock on the door is going to be ‘Agent Friendly’ waiting to take me for an interrogation.” Hadley maintained that his tweets were just jokes, not threats. He said he had not deleted any tweets, a statement which the Guardian has not been able to verify. He does acknowledge that he made one other attempt to communicate directly with Trump, however: an email sent to Trump’s campaign staff last year, requesting an interview with the then candidate for his podcast. He never heard back.
Funimation announced on Wednesday that it will provide English broadcast dubs for the D.Gray-man Hallow, Servamp, Tales of Zestiria the X, Danganronpa 3: The End of Hope's Peak High School: Future Arc, Danganronpa 3: The End of Hope's Peak High School: Despair Arc, The Heroic Legend of Arslan: Dust Storm Dance, Love Live! Sunshine!!, planetarian, Cheer Boys!!, First Love Monster, The Disastrous Life of Saiki K., Handa-kun, Show By Rock!! Short!!, and Puzzle & Dragons X anime. In addition, it will continue its broadcast dub for the Endride anime. The dub for D.Gray-man Hallow will premiere on August 3 in the United States and Canada, and on August 4 in the United Kingdom and Ireland. Funimation is streaming the series with English subtitles. Yoshiharu Ashino (Tweeny Witches, Cross Ange, First Squad) is directing the anime at TMS Entertainment, with scripts by Michiko Yokote (Shirobako, Prison School, Dagashi Kashi), Tatsuto Higuchi (My-Otome, Bakumatsu Gijinden Roman, Schwarzes Marken), and Kenichi Yamashita (Koutetsu Sangokushi, Ishida and Asakura, Actually, I Am…). Yousuke Kabashima (The Girl Who Leapt Through Space, Lord Marksman and Vanadis) is adapting the character designs for animation, and is also serving as chief animation director. Yasuhiro Moriki (Ninja Robots, Crest of the Stars, Bakuon!!) is credited for design. Kaoru Wada is returning from the previous D.Gray-Man anime to compose the music. The dub for Servamp will premiere on July 20. Funimation is streaming the series with English subtitles. The production reunites the main staff and studio of the Aoharu x Machinegun anime. Itto Sara, who served in a supervisory position in Aoharu x Machinegun, now serves as chief director. Hideaki Nakano is directing the anime at Brain's Base. Kenji Konuta is in charge of series composition, and Junko Yamanaka is designing the characters for animation. The dub for Tales of Zestiria the X will premiere on August 3 in the United States and Canada, and on August 4 for the United Kingdom and Ireland. Funimation is streaming the series with English subtitles. Haruo Sotozaki (Tales of Zestiria: Dōshi no Yoake, Tales of Symphonia the Animation) is directing the series at ufotable, and ufotable is also writing the script. Akira Matsushima (Tales of Symphonia the Animation, Tales of Zestiria: Dōshi no Yoake) is adapting the character designs from Mutsumi Inomata, Kousuke Fujishima, Daigo Okumura, and Minoru Iwamoto for animation, and Motoi Sakuraba and Gō Shiina are composing the music. The dub for Danganronpa 3: The End of Hope's Peak High School: Future Arc will premiere on August 10 for the United States and Canada, and August 11 for the United Kingdom and Ireland, while the dub for Danganronpa 3: The End of Hope's Peak High School: Despair Arc will premiere on August 11. The television anime series will be the conclusion of the "Hope's Peak Academy" series' original story. The anime will premiere both its "Side: Future" and "Side: Despair" parts in July. As a result, two new Danganronpa 3 episodes will premiere every week. "Future Arc" will focus on the characters from the first game installment, while "Despair Arc" will tell the story of the characters of the Danganronpa 2: Goodbye Despair game. The anime will illustrate what happened to the characters before the events of the game. For "Despair Arc," Kazutaka Kodaka is in charge of the original scenario concepts and overall supervision.
Rui Komatsuzaki is credited with the original character designs, and Seiji Kishi is the chief director. Norimitsu Kaihō (School-Live!, Gunslinger Stratos: The Animation) is in charge of the scenario on the project, and animation studio Lerche is returning to animate the project. Kazuaki Morita and Ryoko Amisaki are designing the anime's characters. The Danganronpa Design Team is credited with the original background designs. Masafumi Takada is composing the music, and Yoshinori Terasawa is the "Otasuke Producer." The dub for The Heroic Legend of Arslan: Dust Storm Dance will premiere on July 19. Funimation is streaming the series with English subtitles. The new season will be eight episodes long. The staff and cast are returning to the new series, but Kyō Yamashita is the new CG director, replacing Daisuke Suzuki, and Tatsuya Shimano is replacing Hiroshi Adachi as the modeling director. Felix Film is replacing SANZIGEN with the 3DCGI. Eir Aoi (Sword Art Online, Fate/Zero) will perform the new season's opening theme song "Tsubasa" (Wings), and Kalafina (Madoka Magica, first season of The Heroic Legend of Arslan) will perform the ending theme song "blaze." The dub for Love Live! Sunshine!! will premiere on July 30. Funimation is streaming the series with English subtitles. Kazuo Sakai (Mushi-Uta director, episode director for Love Live! School idol project, Gundam Build Fighters) is directing the series at Sunrise, and Jukki Hanada (all previous Love Live! School idol project anime) is handling the series composition. Yūhei Murota returned to design the characters, and Tatsuya Katou (Free! - Iwatobi Swim Club, Food Wars! Shokugeki no Soma) is composing the music at Lantis. The Love Live! Sunshine!! project was first announced in February 2015. The project's three key phrases are "Reader Participation," "Inspired by μ's," and "Seaside Town Setting." The group's name Aqours was chosen by fans by popular vote. The dub for planetarian will premiere on August 4. Funimation is streaming the anime with English subtitles. Naokatsu Tsuda (JoJo's Bizarre Adventure series, Inu X Boku Secret Service) is directing and writing both projects at David Production (JoJo's Bizarre Adventure series). Shogo Yasukawa (Terraformars, Food Wars! Shokugeki no Soma, JoJo's Bizarre Adventure) is writing the scripts alongside Tsuda. Katsuichi Nakayama (Nishi no Yoki Majo - Astraea Testament) and Shunsuke Machitani (episode director for Oreimo, Kurokami The Animation, JoJo's Bizarre Adventure: Stardust Crusaders) are the series directors. Hitomi Takechi (Hyperdimension Neptunia) is adapting Eeji Komatsu's original character designs for animation. Sayaka Sasaki is performing the net anime's ending theme song "Twinkle Starlight," Ceui is performing an image song titled "Worlds Pain," and Lia is performing the film's theme song "Hoshi no Fune" (Starship). The dub for Cheer Boys!! will premiere on August 2. Funimation is streaming the series with English subtitles. Tegami Bachi: Letter Bee manga creator Hiroyuki Asada is providing the original character designs for the series. Ai Yoshimura (Blue Spring Ride, Dance with Devils) is directing the series at Brains Base, and Reiko Yoshida (Girls und Panzer, Haruchika – Haruta & Chika) is handling the series composition. Hitomi Tsuruta is adapting Asada's designs for animation and is also serving as chief animation director. Rock band Luck Life is performing the opening theme song "Hajime no Ippo" (First Step).
The seven main cast members of the anime are performing the ending theme song "LIMIT BREAKERS" under the unit name BREAKERS. The dub for First Love Monster will premiere on Saturday, July 16. Funimation is streaming the series with English subtitles. Yen Press is releasing Akira Hiyoshimaru's original manga in North America, and it describes the series: When fifteen-year-old Kaho Nikaidou leaves her sheltered home to start life anew in a Tokyo high school dormitory, the last thing she expects is to nearly get hit by a truck! Saved in the nick of time by a handsome stranger, Kaho falls head over heels for him and, after finally tracking him down, boldly confesses her feelings. Turns out Kaho's mystery savior, Kanade, is the son of Kaho's new landlord! The handsome object of Kaho's affection agrees to go out with her, but her newfound bliss is short-lived when it turns out that her new boyfriend...is a fifth-grader?! Takayuki Inagaki (Rosario + Vampire, Sky Wizards Academy) is directing the anime at Studio DEEN, and Mariko Oka (Nura: Rise of the Yokai Clan, Hell Girl) is designing the characters. The dub for The Disastrous Life of Saiki K. premieres on August 7. Funimation is streaming the series with English subtitles. The Disastrous Life of Saiki K. adapts Shūichi Asō's manga from Shueisha's Weekly Shonen Jump. Hiroaki Sakurai (Di Gi Charat, Maid Sama!) is directing the anime at Egg Firm and J.C. Staff, and Michiko Yokote (Shirobako, Kyōkai no Rinne) is in charge of series composition. Masayuki Onji (Kimi to Boku., Aoi Hana) is in charge of character design. The new special musical unit Saikic Lover will perform background music for the anime. The group consists of vocalist Yoffy, guitarist Imajo, and arranger Kenichirō Ōishi. Natsuki Hanae will perform the late-night broadcast's opening theme song, and Dempagumi.inc will perform the late-night broadcast's ending theme song. The dub for Handa-kun premieres on August 5 in the United States and Canada, and August 6 in the United Kingdom and Ireland. Funimation is streaming the series with English subtitles. Yoshitaka Koyama (Rune Soldier, Noramimi, Kero Kero Chime) is directing the anime at diomedea, while Michiko Yokote (Shirobako, Yamada-kun and the Seven Witches, Dagashikashi) is supervising and penning the series' scripts alongside scriptwriters Mariko Kunisawa (Hatsukoi Limited, Magimoji Rurumo) and Miharu Hirami (Cookin' Idol Ai! Mai! Main!, Blue Spring Ride). Mayuko Matsumoto (Gingitsune, KanColle) is designing the characters. The dub for Show By Rock!! Short!! will premiere on August 1. Funimation is streaming the series with English subtitles. Takahiro Ikezoe is returning from the original Show By Rock!! series to again direct Show By Rock!! Short!! at BONES. Touko Machida, who wrote the scripts for the first eight episodes of the original anime, is handling the series composition. Geechs is credited with production cooperation. The anime centers on the band members of Plasmagica, ShinganCrimsonz, and other bands from the original anime series. Each episode will introduce and feature a different band. The anime will be a "carefree enjoyable" series that shares the worldview of the original TV anime series. The dub for Puzzle & Dragons X premieres on July 19 for the United States and Canada, and on July 20 for the United Kingdom and Ireland. Funimation is streaming the series with English subtitles.
Funimation describes the story: From the studio Pierrot that brought you Kingdom and Tokyo Ghoul comes a series based on the hit game Puzzle & Dragons. Dorogoza Island is rich in “Drop Energy” that players can use on friendly monsters. Once strong enough, they can face enemies in puzzle wars. Winners reap rewards, but what happens to the losers? Only the brave can take on these monstrous challenges!
package org.apache.batik.svggen.font.table;

import java.io.IOException;
import java.io.RandomAccessFile;

public class Lookup {

    // Lookup flags defined by the OpenType specification.
    public static final int IGNORE_BASE_GLYPHS = 2;
    public static final int IGNORE_BASE_LIGATURES = 4;
    public static final int IGNORE_BASE_MARKS = 8;
    public static final int MARK_ATTACHMENT_TYPE = 0xFF00;

    private int type;
    private int flag;
    private int subTableCount;
    private int[] subTableOffsets;
    private LookupSubtable[] subTables;

    public Lookup(LookupSubtableFactory factory, RandomAccessFile raf, int offset)
            throws IOException {
        raf.seek(offset);

        // Lookup table header: lookup type, lookup flags, and number of subtables.
        type = raf.readUnsignedShort();
        flag = raf.readUnsignedShort();
        subTableCount = raf.readUnsignedShort();

        // Read the subtable offsets (relative to the start of this lookup) ...
        subTableOffsets = new int[subTableCount];
        subTables = new LookupSubtable[subTableCount];
        for (int i = 0; i < subTableCount; i++) {
            subTableOffsets[i] = raf.readUnsignedShort();
        }

        // ... then let the factory parse each subtable for this lookup type.
        for (int i = 0; i < subTableCount; i++) {
            subTables[i] = factory.read(type, raf, offset + subTableOffsets[i]);
        }
    }

    public int getType() {
        return type;
    }

    public int getSubtableCount() {
        return subTableCount;
    }

    public LookupSubtable getSubtable(int i) {
        return subTables[i];
    }
}
// Topics/2014_12_21/dini.cpp
#include <iostream>
using namespace std;

int main() {
    int dni, posadeni[15], kg, pari, n, den, kakvo_pravim, kolko;

    cin >> n;   // number of events to process
    pari = 0;   // money earned so far
    kolko = 0;  // how many plants are currently in the ground (stack depth)

    for (den = 1; den <= n; den = den + 1) {
        cin >> kakvo_pravim;
        if (kakvo_pravim == 1) {
            // Plant: remember the day this one was planted.
            posadeni[kolko] = den;
            kolko = kolko + 1;
        } else {
            // Harvest the most recently planted one: its weight in kg grows
            // with the square of the days it spent in the ground, and the
            // earnings increase by that weight.
            kolko = kolko - 1;
            dni = den - posadeni[kolko];
            kg = dni * dni;
            pari = pari + kg;
        }
    }

    cout << "Spechelihme: " << pari << endl;  // "Spechelihme" is Bulgarian for "We earned"
    return 0;
}
The bucks. They are the not-so-secret key to success at this and other top dog shows held every year. On Monday, when Madison Square Garden in Manhattan hosts the 2010 Westminster Dog Show, the most prestigious event on the thoroughbred canine calendar, money will quietly play a role in determining the winner, just as money quietly shaped the field of contenders — and just as money shapes almost every nook and cranny of the dog show business. Among breeders, owners and handlers, it’s understood: you can’t just turn up with the paradigm of the breed, if such an animal exists, and expect a best-in-show ribbon. To seriously vie for victory, a dog needs what is known as a campaign: an exhausting, time-consuming and very expensive gantlet of dog show wins, buttressed by ads in publications like Dog News and The Canine Chronicle. Seriously, ads. Lots and lots of them. They usually hype recent victories at local shows, with the hope of influencing judges at future competitions. “A top 10 toy dog!” reads a recent full-pager for Bon Bon the Pomeranian, listing an assortment of triumphs under a picture of the animal panting atop some logs. The cost of a campaign can add up fast. You need a professional handler and cash for plane tickets and road trips to roughly 150 dog shows a year. (Yes, about three shows a week.) And you need to spend as much as $100,000 annually on ads. Altogether, a top-notch campaign can easily cost more than $300,000 a year, and because it takes time to build momentum and a reputation, a typical campaign lasts for two or three years. Kathy Kirk, who handled Rufus, a colored bull terrier who won best in show at Westminster in 2006, estimates that the dog’s three-year campaign cost about $700,000. “Money is important in everything,” says Ms. Kirk. “The Olympics, auto racing, everything. The big bucks wins.” Most A-list dogs are owned by well-off patrons — groups of them, in many cases — who often leave pets with handlers for years at a time. Sloan, for instance, is in Year 2 of her campaign and lives with Mr. Pittman at PaRay Kennel in Orangevale, Calif. Sloan’s owners are a married couple, Laura Rosio and Martin Winston. Ms. Rosio, who describes herself as a groupie for her dog, sits ringside at the Portland shows and happily explains what owning a marquee dog is all about. “She’s incorporated,” says Ms. Rosio, nodding toward Sloan and beaming. Then she reaches into her purse and hands over the dog’s business card. AMERICANS spend about $330 million each year traveling to and competing at dog shows, according to the American Kennel Club. The shows support a huge network of kennel clubs and exhibitors, and many are sponsored by pet food manufacturers like Eukanuba and Pedigree. To those companies, the shows are a way to connect with elite handlers, an important demographic known in the industry as “pet influentials.” Westminster is the Olympics of this sport — or hobby, or whatever — attracting an audience of three million viewers on the Animal Planet channel in Canada and on the USA network and CNBC in the United States. It is the culmination of some 1,500 dog shows in the previous year, a race that begins in January with shows like the Rose City Classic, held in the immense Portland Metropolitan Exposition Center. Here, recreational vehicles and trailers pack the parking lot. Exhibitors include fine-art dog portrait painters and the International Canine Semen Bank.
Most of the dogs here are handled by weekend hobbyists, known as owner-handlers, and among them you detect a certain fatalism about their chances. “It’s political” is the euphemism you hear time and again. That can sometimes mean that a certain judge is known to have specific prejudices for or against a certain dog, usually based on aesthetics but occasionally based on considerations that seem — sorry about this — more catty. One handler said that an owner had refused to send him and his much-garlanded charge to Westminster this year because the owner was feuding with the judge who would appraise the breed. Most of the time, though, “it’s political” refers to a widely perceived bias in favor of professional handlers and campaigning dogs, known to insiders as “specials.” Nobody thinks the outcomes are rigged. But it’s assumed that the playing field is far from even, especially at major events. Suffice it to say, nobody can remember an uncampaigned dog prevailing at Westminster. About the closest thing to a surprise was last year’s winner, Stump, a Sussex spaniel who had been retired for four years. But Stump was far from unknown. Before his retirement, his handler showed him at more than 100 shows in one year. “We didn’t come here expecting to win,” says Chris Jones, who is standing beside his wife, Glenda, and a Newfoundland, preparing for the Portland competition. Like all owner-handlers, the couple think their dog is stunning, but she’s young and her rivals include some specials. “It’s because the professional is in front of judges all the time and they’ll say, ‘Oh, if Andy is showing that dog, the dog must be really good.’ ” That sentiment highlights how tricky it is to pinpoint the influence of money at this dog show and others. Only promising dogs are campaigned, so it’s hard to know whether their success is a cause or an effect of the cash spent promoting them onto winners’ stands. And because prominent handlers have their pick of dogs and wouldn’t want to risk their reputations with a stinker, it would make sense for a judge to assume that these handlers have brought standouts. In addition, the pros are generally better at presenting a dog. “You hear from owner-handlers often that there is a supposed advantage for professionals at shows,” says Mr. Pittman, whose lifelong passion for dogs began with his first word, puppy. “But I think that’s an excuse. The professionals know what they’re doing. They groom well, present well, manage the ring well. There’s a reason that they became professionals.” Judges deny any kind of favoritism, though they acknowledge just how subjective their choices are. This show, like Westminster, is a conformation competition, which means the winner is the dog that most closely embodies the breed standard as defined by the American Kennel Club. The standards are highly specific. The one for the basset hound, for instance, is more than 900 words long and includes guidance on size, coat, gait and head. (“The lips are darkly pigmented and are pendulous, falling squarely in front and, toward the back, in loose hanging flews. The dewlap is very pronounced.”) Still, deciding which basset hound is the basset houndiest isn’t easy because every judge brings his or her own priorities and preferences to the task. To the lay person at a dog show, distinctions seem impossible because a group of, say, golden retrievers all look alike.
But judges and professional handlers say that once you know the breed, the problem is that all the dogs look different. The hard part isn’t telling them apart. It’s figuring out which version of excellence to favor. “There’s 95 golden retrievers here today,” says Tracy Tuff, a professional breeder and handler from Canada, who was preparing several dogs for Rose City. “They all have different colors, different size, different bone structures. With 95 of them, that’s three hours’ worth of judging just golden retrievers. A judge can get a little lost in that. They start to go golden blind.” The presence of a pro, she says, offers a cue that many judges find invaluable. “By showing up, judges seem to say, ‘Thank God you’re here because I don’t know what to pick,’ ” says Ms. Tuff. The owner-handlers, of course, are less excited to see her. “I hear a lot of four-letter words. A lot of ‘oh, you’re here,’ ” she says, imitating a crestfallen rival. “Yeah. Sorry.” IN the lead-up to the bichon frisé competition, the owner-handlers Jerry Pound and Gay Culpepper are standing in one of two hangar-size rooms where all handlers prep. Their operation is little more than two small tables and a blow dryer, plus their dogs, Apollo and Dreamer. At moments, they sound mildly irked about the perceived advantage that professionals take into the ring. Mr. Pound has measured his dogs and found they fit the standard almost to the letter, whereas he and his wife find the PaRay Kennel dogs — Sloan included — a little on the square side. “These guys are supposed to be more rectangular,” Ms. Culpepper says, pointing to Apollo. “But a standard is very subjective.” Then again, the couple marvel at the skills of the PaRay professionals, particularly when it comes to presentation. Mr. Pound calls Paul Flores, who grooms Sloan, an artist, saying “he’d be a sculptor” if he wasn’t working with dogs. You have to wonder: Why do the thousands of owner-handlers compete if they believe that the fight isn’t totally fair? “It gets us out of the house on the weekends,” says Ms. Culpepper. “We don’t sit in front of the TV. We travel and we get to socialize with people who care about the same things we do.” “And we win just enough to keep our interest,” adds Mr. Pound. “We have beaten PaRay in the past.” The more time you spend at the Rose City Classic, the more unpredictable the results seem. The universe of winners is dominated by specials, but it is one random universe. Dogs have just as many quirks as judges. On some days, they’re engaged and alert. On others, they cower from judges, a major no-no. So somehow, elite dog shows seem both overdetermined and surprisingly arbitrary. And it’s the sense that anything can happen that explains the otherwise perplexing tradition of doggy advertising.
The ads are a bit like those “for your consideration” campaigns for Oscar nominees, and they’re bought for essentially the same reason: to sway decision makers in a realm in which there is debate about what is “the best.” Lobbying for a St. Bernard, for instance, wouldn’t work if everyone agreed about what constitutes a great St. Bernard. And if St. Bernard greatness were the sort of thing that could be measured with a ruler and calipers, you wouldn’t need judges. A computer would suffice. But there is no unanimity about St. Bernards or any other breed, and judges are human. So at magazines like Dog News, the ads keep pouring in. Often called the bible of the dog show world, Dog News is a weekly published by Harris Publications out of an office on Broadway in Manhattan. Other titles in Harris’s eclectic stable include Guns and Weapons, the hip-hop title XXL and the comic book Vampirella. Most magazines are struggling with a downturn in ads. Not Dog News. It’s about 75 percent ads and runs as long as 600 pages in issues coinciding with big shows. Prices vary from $250 for a full-page black-and-white ad to $4,000 for the cover. Yes, the cover is an ad. “I don’t have a single staffer to solicit ads,” says Matthew Stander, publisher of Dog News. “They come to us unsolicited.” Judges are the main target — they are sent the magazine gratis — and they star along with the dogs in most of the ads. There’s a tradition at shows of taking a photograph of winning dogs along with the judges who selected them, and most of the ads are little more than that photo and a cutesy tag line. “Don’t hate me because I’m beautiful,” reads a recent ad for Prissy the dachshund, “Love me because I’m a weinner!” The judge usually gets a shout-out, too. (“Thank you Judge Mrs. Bonnie Threlfall.”) Not surprisingly, it’s hard to find a judge who says the ads work. One said she browses out of vanity, to see if her outfit looks good enough to wear again. One Portland judge, Betty-Anne Stenmark, is slightly more generous. “Do the magazines influence some judges? I’m sure they do,” she says. “Do they influence everybody? No. Do I see a dog who looks great in the magazines and think I’d love to judge that dog? Yes.” Professional handlers and owners say they wouldn’t write the checks if the ads didn’t get results. There are thousands of specials in any given year, and in a realm this competitive, the ads elevate you above the pack, they say. Just by buying them, you announce that you’re playing to win. WHAT do owners get back for their rather substantial investments in these dogs? Not money, and woe unto the foolish reporter who suggests that money might be a perfectly reasonable reward. (Only indie rockers and physicians are more sensitive to questions about profits.) By every account, a show dog is a sinkhole. Even for a Westminster champ, the stud fee is a few grand. Rufus will die before he makes a dent in the sum spent on him. Pet food companies like to brag about the number of Westminster group winners who eat their product. But Nike they are not. The best handlers are courted, but with nothing more valuable than the occasional hat, tote bag and coupons for discounted chow. When Uno the beagle won best in show at Westminster two years ago, his owners weren’t paid even when Purina featured him in a full-page USA Today ad. No, the strange and inescapable truth is that people drop hundreds of thousands of dollars in this realm for one reason: they love dogs.
Or, rather, they love a specific breed or dog and they are willing to part with a small fortune proving that their breed or dog is better than yours. “In this building alone, I can name you three millionaires who don’t breed dogs; they’ve never bred a litter in their life,” says Tracy Tuff, the handler from Canada. “They just like to throw money at people like us to show good dogs.” The owners come from so many different backgrounds and professions that they are hard to categorize. Mr. Winston, Sloan’s co-owner, is in nuclear medicine, his wife, Ms. Rosio, said. This is their first campaign, and their reasons for competing are very personal. “It’s like having a child in middle school and you realize that kid can play baseball,” says Ms. Rosio, “and for the next two or three years you do everything you can for the kid to play ball. It’s the same thing. We have four kids and they’re grown now. This is our new baby.” The role of money doesn’t seem to bother anyone other than the owner-handlers, perhaps because campaigns have been extremely pricey since the ’70s. David Frei, the public face of the Westminster Dog Show, sounds mostly unbothered by the sums. Well, he is disturbed by rare reports of people mortgaging their homes to show their dogs. And now that so many dogs have multiple owners, he is done trying to read all of their names during the telecast. “People say to me, ‘Why didn’t you read off the names of all the owners?’ ” he says. “Well, if the dog has six different owners, that’s the only thing I’d get to say about the dog.” With luck and a stellar performance, Sloan might be a name that Mr. Frei utters when it’s time to announce the winners. She trotted to a rather quick victory over Apollo in Portland, padding around the ring with a champion’s poise, a tiny snowbank on paws. When the show begins in Madison Square Garden, she’ll have everything she needs to take home top honors: wealthy patrons, an esteemed handler and an expensively won reputation — to put it in dog-fancier terms — as a terrific little bitch.
import time
from datetime import datetime, timedelta

import typer
from sqlmodel import Session, select
from sqlalchemy import func
from sqlalchemy.exc import NoResultFound

# Imports reconstructed for a self-contained snippet; `engine`, the
# `Timer`/`ToDo` models, `stop()` and `pop_up_msg()` are assumed to be
# defined elsewhere in this module.


def start(task_id: int, duration: int = typer.Option(None, '--duration', '-d', help='Duration in minutes')):
    with Session(engine) as session:
        # Refuse to start if another timer is still running (no end timestamp).
        try:
            session.exec(select(Timer).where(Timer.end == None)).one()
            typer.secho('\nThe Timer must be stopped first\n', fg=typer.colors.RED)
            raise typer.Exit(code=1)
        except NoResultFound:
            pass
        try:
            query = session.get(ToDo, task_id)
            if not query.status == 'done':
                if query.status == 'to do':
                    query.status = 'doing'
                    session.add(query)
                if duration is not None:
                    duration = timedelta(minutes=duration)
                    if duration <= timedelta(minutes=0):
                        typer.secho('\nDuration must be greater than 0\n', fg=typer.colors.RED)
                        raise typer.Exit(code=1)
                    total_seconds = int(duration.total_seconds())
                    session.add(Timer(id_todo=task_id))
                    session.commit()
                    new_id = session.exec(select(func.max(Timer.id))).one()
                    typer.secho(f'\nStart task {task_id}. Timer id: {new_id}\n', fg=typer.colors.GREEN)
                    # Count down in one-second steps, driving the progress bar.
                    with typer.progressbar(length=total_seconds) as progress:
                        end = datetime.utcnow() + duration
                        while datetime.utcnow() < end:
                            time.sleep(1)
                            progress.update(1)
                        else:
                            typer.secho('\n\nYour Time is over! Well done!\n', blink=True, fg=typer.colors.BRIGHT_GREEN)
                            pop_up_msg()
                            remark = typer.confirm('Any remark?')
                            if remark:
                                remark = typer.prompt('Enter your remarks.')
                            else:
                                remark = None
                            stop(remarks=remark)
                            raise typer.Exit()
                else:
                    session.add(Timer(id_todo=task_id))
                    session.commit()
                    new_id = session.exec(select(func.max(Timer.id))).one()
                    typer.secho(f'\nStart task {task_id}. Timer id: {new_id}\n', fg=typer.colors.GREEN)
            else:
                typer.secho('\nTask already done\n', fg=typer.colors.RED)
                raise typer.Exit(code=1)
        except AttributeError:
            typer.secho('\nInvalid task id\n', fg=typer.colors.RED)
            raise typer.Exit(code=1)
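For context, a sketch of how this command could be wired into a Typer application; the file name and invocation below are assumptions for illustration, not taken from the original project:

```python
# Hypothetical wiring; module and app names are illustrative only.
import typer

app = typer.Typer()
app.command(name='start')(start)  # register the function defined above

if __name__ == '__main__':
    app()  # e.g. `python cli.py start 3 --duration 25`
```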
Management of incision failure during small incision lenticule extraction because of conjunctivochalasis Purpose We report a case of incision failure during small incision lenticule extraction (SMILE) and its management. Observations The incision could not be made using the femtosecond laser because of a redundant conjunctiva, so it was instead done manually using a diamond knife. The lenticule was successfully separated and extracted. Three months after the procedure, the uncorrected distance visual acuity was 20/20 and no complication was observed. Conclusions and importance This case demonstrates that the conjunctiva should be carefully examined before SMILE. If a complication occurs because of conjunctivochalasis, it can be resolved with proper management without compromising the patient's visual acuity. Introduction Small incision lenticule extraction (SMILE) is a new, flapless, minimally invasive procedure that can correct myopia and myopic astigmatism using a VisuMax® (Carl Zeiss Meditec, Jena, Germany) femtosecond laser. 1 Numerous studies have reported that SMILE is effective, safe, and yields predictable results, so this procedure has gained wide acceptance. 1–3 However, there are several possible complications specific to the femtosecond laser including suction loss, interface haze, anterior chamber bubbles, black spots during the creation of the lenticule, and blockage of the laser by an opaque bubble layer. 4–6 Although these potential complications occur infrequently, they present a challenge to the surgeon. Conjunctivochalasis is characterized by the existence of an excess fold of conjunctiva located between the globe and the lid margin. 7 It usually does not affect corneal refractive procedures because they only involve the cornea. However, the peripheral cornea is sometimes covered by redundant conjunctiva that affects SMILE because it relies on a suction system. We report a case involving blockage of SMILE by redundant conjunctiva that required manual incision. Case report A 37-year-old female visited our clinic for treatment of visual problems. She had a history of dry eye and had used soft contact lenses for 10 years. Her preoperative manifest refractions were sphere, −2.25; cylinder, −0.75; and axis, 80 in the right eye, and sphere, −2.5; cylinder, −0.5; and axis, 90 in the left eye. Both eyes were treated on the same day by the same surgeon (CYT) using a VisuMax® 500 kHz femtosecond laser (Carl Zeiss Meditec). The laser settings included a cut energy of 180 nJ and a spacing of 4.5 μm. The lenticule diameter was 6.6 mm, the cap diameter was 7.5 mm, and the intended cap thickness was 110 μm in both eyes. The intended lenticule thicknesses were 66 μm and 67 μm, and the expected residual corneal beds were 388 μm and 389 μm in the right and left eyes, respectively. The incision was 2.0 mm long in both eyes. The target refractive corrections were −2.25 −0.75 × 80 and −2.5 −0.5 × 90 for the right and left eyes, respectively. SMILE was performed as previously described. 1 Although the SMILE procedure in the left eye was performed without any difficulty, the conjunctiva blocked the laser in the right eye. After confirming that the laser failed to cut the cornea, the incision was performed manually using a diamond knife. The surgeon created an incision to 1/4 the depth of the cornea. The lenticule was separated and successfully extracted through the incision (Fig. 1).
After the procedure, the patient was treated with 0.5% moxifloxacin (Vigamox; Alcon, Hünenberg, Switzerland) for 7 days, and 0.1% fluorometholone (Oculmetholone; Samil Pharmaceutical Co., Ltd., Seoul, Republic of Korea) and preservative-free hyaluronic acid lubricating drops (0.1% Hyalein Mini; Santen Pharmaceutical Co., Ltd., Osaka, Japan) for four weeks. One week after the procedure, the UDVA was 20/25 and 20/20 in the right and left eyes, respectively. After three months, the UDVA and CDVA were 20/20 and 20/18, with 0 −0.25 × 90 in the right eye, and 20/18 and 20/18, with +0.25 −0.25 × 90 in the left eye, respectively. The patient complained of eye pain and injection of the right eye on the first day after the procedure, but there were no complaints after several days. Fig. 2 shows dual Scheimpflug images of the right eye taken preoperatively and three months after surgery. No complications such as keratitis, ectasia, or opacification were observed during the follow-up period. The patient was satisfied with her uncorrected vision, and did not complain of dryness or pain. Discussion Conjunctivochalasis is an ocular condition in which loose, redundant conjunctiva becomes more prominent on downgaze. 7 Reported causative factors include aging, ocular movement, ocular surface inflammation, and delayed tear clearance. 7–10 Mechanical and inflammatory factors are suspected, but it is not yet clear what causes conjunctivochalasis. 7,9 Most patients with conjunctivochalasis are asymptomatic, especially if the condition is mild. In symptomatic patients, the symptoms include dryness, a foreign body sensation, injection, and eye pain. 7–10 Conjunctivochalasis is rarely assessed when examining patients who require refractive correction, because almost all patients who undergo refractive procedures are young and the procedures are performed on the cornea. However, there are some patients with conjunctivochalasis who want refractive correction. Wearing contact lenses is an important risk factor for conjunctivochalasis, and conjunctivochalasis-induced dry eye or foreign body sensations can be a cause of contact lens intolerance, which is a possible reason for patients to request refractive correction. 11 In our case, the patient's mild conjunctivochalasis was not detected preoperatively. It was noticed on slit-lamp examination during the follow-up, and the surgeon did not pay close attention to the conjunctiva covering the peripheral cornea where the incision was planned, so the surgery proceeded. If the surgeon had noticed the conjunctivochalasis preoperatively, closer attention could have been paid during the surgery, for example by releasing the suction and then reapplying it. Although the rest of the procedure was successful after performing the manual incision, it took much longer than usual, so the patient experienced delayed visual recovery, eye pain, and injection. In addition, manual incision is considered to be more susceptible to infection because of increased epithelial damage. However, no postoperative complication, including keratitis, was observed. Ramirez-Miranda et al. 4 reported that 26.9% of eyes treated with SMILE had complications, including epithelial defects, suction loss, an opaque bubble layer, a cap rupture, or lenticule rupture. In one patient, the small incision could not be made because an opaque bubble layer blocked the laser, and the incision was performed manually.
They reported that most of the complications had a favorable resolution, with no permanent effects on the patient's final visual acuity. In our case, the patient's UDVA was 20/20 at three months after the procedure, and she did not complain of dryness or pain. Conclusion Surgeons should carefully inspect the conjunctiva before SMILE, even if the patient is young, and if the conjunctiva is suspected to cover the cornea during the procedure, the surgeon can release the suction and start again. If the incision cannot be made using SMILE, it can be made manually using a knife. With proper management, a failed incision during SMILE can therefore be successfully managed without compromising the patient's visual acuity. Patient consent Consent to publish this report was obtained in writing.
// include/cxxdasp/filter/tsvf/general_tsvf_core_operator.hpp
//
// Copyright (C) 2014 <NAME>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//

#ifndef CXXDASP_FILTER_TSVF_GENERAL_TSVF_CORE_OPERATOR_HPP_
#define CXXDASP_FILTER_TSVF_GENERAL_TSVF_CORE_OPERATOR_HPP_

#include <cxxporthelper/type_traits>
#include <cxxporthelper/compiler.hpp>
#include <cxxdasp/datatype/audio_frame.hpp>

namespace cxxdasp {
namespace filter {

/**
 * General Linear Trapezoidal Integrated State Variable Filter core operator
 *
 * @tparam TFrame audio frame type
 */
template <typename TFrame>
class general_tsvf_core_operator {
    /// @cond INTERNAL_FIELD
    general_tsvf_core_operator(const general_tsvf_core_operator &) = delete;
    general_tsvf_core_operator &operator=(const general_tsvf_core_operator &) = delete;
    /// @endcond

public:
    /**
     * Audio frame type.
     */
    typedef TFrame frame_type;

    /**
     * Value type.
     */
    typedef typename frame_type::data_type value_type;

    /**
     * Check this operator class is available.
     * @return whether the class is available
     */
    CXXPH_OPTIONAL_CONSTEXPR static bool is_supported() CXXPH_NOEXCEPT { return true; }

    /**
     * Constructor.
     */
    general_tsvf_core_operator();

    /**
     * Destructor.
     */
    ~general_tsvf_core_operator();

    /**
     * Set parameters.
     * @param a1 [in] a1 filter parameter
     * @param a2 [in] a2 filter parameter
     * @param a3 [in] a3 filter parameter
     * @param m0 [in] m0 filter parameter
     * @param m1 [in] m1 filter parameter
     * @param m2 [in] m2 filter parameter
     */
    void set_params(double a1, double a2, double a3, double m0, double m1, double m2) CXXPH_NOEXCEPT;

    /**
     * Reset state.
     */
    void reset() CXXPH_NOEXCEPT;

    /**
     * Perform filtering.
     * @param src_dest [in/out] data buffer (overwrite)
     * @param n [in] count of samples
     */
    void perform(frame_type *src_dest, int n) CXXPH_NOEXCEPT;

    /**
     * Perform filtering.
     * @param src [in] source data buffer
     * @param dest [out] destination data buffer
     * @param n [in] count of samples
     */
    void perform(const frame_type *CXXPH_RESTRICT src, frame_type *CXXPH_RESTRICT dest, int n) CXXPH_NOEXCEPT;

private:
    static_assert(std::is_floating_point<value_type>::value, "floating point type is required");

    /// @cond INTERNAL_FIELD
    value_type a1_;
    value_type a2_;
    value_type a3_;
    value_type m0_;
    value_type m1_;
    value_type m2_;
    frame_type ic1eq_;
    frame_type ic2eq_;
    /// @endcond
};

template <typename TFrame>
inline general_tsvf_core_operator<TFrame>::general_tsvf_core_operator()
{
}

template <typename TFrame>
inline general_tsvf_core_operator<TFrame>::~general_tsvf_core_operator()
{
}

template <typename TFrame>
inline void general_tsvf_core_operator<TFrame>::set_params(double a1, double a2, double a3, double m0, double m1,
                                                           double m2) CXXPH_NOEXCEPT
{
    a1_ = static_cast<value_type>(a1);
    a2_ = static_cast<value_type>(a2);
    a3_ = static_cast<value_type>(a3);
    m0_ = static_cast<value_type>(m0);
    m1_ = static_cast<value_type>(m1);
    m2_ = static_cast<value_type>(m2);
}

template <typename TFrame>
inline void general_tsvf_core_operator<TFrame>::reset() CXXPH_NOEXCEPT
{
    ic1eq_ = 0;
    ic2eq_ = 0;
}

template <typename TFrame>
inline void general_tsvf_core_operator<TFrame>::perform(frame_type *src_dest, int n) CXXPH_NOEXCEPT
{
    const value_type a1 = a1_;
    const value_type a2 = a2_;
    const value_type a3 = a3_;
    const value_type m0 = m0_;
    const value_type m1 = m1_;
    const value_type m2 = m2_;
    frame_type ic1eq = ic1eq_;
    frame_type ic2eq = ic2eq_;

    for (int i = 0; i < n; ++i) {
        const frame_type v0 = src_dest[i];
        const frame_type v3 = v0 - ic2eq;
        const frame_type v1 = a1 * ic1eq + a2 * v3;
        const frame_type v2 = ic2eq + a2 * ic1eq + a3 * v3;
        ic1eq = 2 * v1 - ic1eq;
        ic2eq = 2 * v2 - ic2eq;
        src_dest[i] = m0 * v0 + m1 * v1 + m2 * v2;
    }

    ic1eq_ = ic1eq;
    ic2eq_ = ic2eq;
}

template <typename TFrame>
inline void general_tsvf_core_operator<TFrame>::perform(const frame_type *CXXPH_RESTRICT src,
                                                        frame_type *CXXPH_RESTRICT dest, int n) CXXPH_NOEXCEPT
{
    const value_type a1 = a1_;
    const value_type a2 = a2_;
    const value_type a3 = a3_;
    const value_type m0 = m0_;
    const value_type m1 = m1_;
    const value_type m2 = m2_;
    frame_type ic1eq = ic1eq_;
    frame_type ic2eq = ic2eq_;

    for (int i = 0; i < n; ++i) {
        const frame_type v0 = src[i];
        const frame_type v3 = v0 - ic2eq;
        const frame_type v1 = a1 * ic1eq + a2 * v3;
        const frame_type v2 = ic2eq + a2 * ic1eq + a3 * v3;
        ic1eq = 2 * v1 - ic1eq;
        ic2eq = 2 * v2 - ic2eq;
        dest[i] = m0 * v0 + m1 * v1 + m2 * v2;
    }

    ic1eq_ = ic1eq;
    ic2eq_ = ic2eq;
}

} // namespace filter
} // namespace cxxdasp

#endif // CXXDASP_FILTER_TSVF_GENERAL_TSVF_CORE_OPERATOR_HPP_
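The `a1..a3`/`m0..m2` parameters follow Andrew Simper's trapezoidal-integration state variable filter formulation (the `v1`/`v2`/`ic1eq`/`ic2eq` update above matches it). As a hedged sketch of how a caller might derive low-pass coefficients for `set_params()`; the formulas below are the commonly published ones and should be checked against the project's own coefficient code, which is not shown here:

```python
import math

def lowpass_svf_coeffs(cutoff_hz, sample_rate_hz, q=0.70710678):
    """Coefficients for set_params(); trapezoidal-integration SVF, low-pass mix."""
    g = math.tan(math.pi * cutoff_hz / sample_rate_hz)  # prewarped integrator gain
    k = 1.0 / q
    a1 = 1.0 / (1.0 + g * (g + k))
    a2 = g * a1
    a3 = g * a2
    m0, m1, m2 = 0.0, 0.0, 1.0  # low-pass: output is the second integrator state v2
    return a1, a2, a3, m0, m1, m2
```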
def skipline(self):
    """Skip one Fortran unformatted record, returning (position, length).

    Each record is framed by identical byte-count markers; reading the
    leading marker, seeking past the payload, and re-reading the trailing
    marker verifies that the record is intact.
    """
    position = self.tell()
    prefix = self._fix()   # leading record-length marker
    self.seek(prefix, 1)   # skip the payload (seek relative to current position)
    suffix = self._fix()   # trailing marker must match the leading one
    if prefix != suffix:
        raise IOError(_FIX_ERROR)
    return position, prefix
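`skipline` assumes Fortran unformatted sequential records, where each record is wrapped in identical byte-count markers; a sketch of what `_fix` is presumably doing (the 4-byte little-endian marker is an assumption; real files vary by compiler and build):

```python
import struct

def _fix(f):
    """Read one record-length marker; assumes a 4-byte little-endian header."""
    raw = f.read(4)
    if len(raw) < 4:
        raise IOError('truncated record marker')
    return struct.unpack('<i', raw)[0]

# A record on disk looks like: [length][...length bytes of payload...][length]
# skipline() reads the leading marker, seeks past the payload, then checks
# that the trailing marker agrees -- a cheap consistency test.
```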
/*
 * Copyright 2016,2017 falcon Author. All rights reserved.
 * Use of this source code is governed by a BSD-style
 * license that can be found in the LICENSE file.
 */
package service

import (
	"net"
	"time"

	"github.com/golang/glog"
	"github.com/yubo/falcon/lib/core"
	"golang.org/x/net/context"
	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

type apiModule struct {
	disable bool
	ctx     context.Context
	cancel  context.CancelFunc
	address string
	service *Service
}

func (p *apiModule) Get(ctx context.Context, in *GetRequest) (res *GetResponse, err error) {
	glog.V(3).Infof("%s rx get len(Keys) %d", MODULE_NAME, len(in.Keys))
	res, err = p.service.tsdb.get(in)
	statsInc(ST_RX_GET_ITERS, 1)
	statsInc(ST_RX_GET_ITEMS, len(res.Data))
	return
}

func (p *apiModule) Put(ctx context.Context, in *PutRequest) (res *PutResponse, err error) {
	// TODO
	glog.V(4).Infof("%s rx put %v", MODULE_NAME, len(in.Data))
	res, err = p.service.tsdb.put(in)
	if err != nil {
		glog.V(4).Infof("%s rx put err %v", MODULE_NAME, err)
	}
	for i := int32(0); i < res.N; i++ {
		p.service.cache.put(in.Data[i])
	}
	statsInc(ST_RX_PUT_ITERS, 1)
	statsInc(ST_RX_PUT_ITEMS, int(res.N))
	statsInc(ST_RX_PUT_ERR_ITEMS, len(in.Data)-int(res.N))
	return
}

func (p *apiModule) GetStats(ctx context.Context, in *Empty) (*Stats, error) {
	return &Stats{Counter: statsGets()}, nil
}

func (p *apiModule) GetStatsName(ctx context.Context, in *Empty) (*StatsName, error) {
	return &StatsName{CounterName: statsCounterName}, nil
}

func (p *apiModule) prestart(service *Service) error {
	p.address = service.Conf.ApiAddr
	p.disable = core.AddrIsDisable(p.address)
	p.service = service
	return nil
}

func (p *apiModule) start(service *Service) error {
	if p.disable {
		glog.Info(MODULE_NAME + " api disable")
		return nil
	}

	p.ctx, p.cancel = context.WithCancel(context.Background())

	ln, err := net.Listen(core.CleanSockFile(core.ParseAddr(p.address)))
	if err != nil {
		return err
	}

	server := grpc.NewServer()
	RegisterServiceServer(server, p)

	// Register reflection service on gRPC server.
	reflection.Register(server)

	go func() {
		if err := server.Serve(ln); err != nil {
			p.cancel()
		}
	}()

	// Stop the gRPC server once the module context is cancelled.
	go func() {
		<-p.ctx.Done()
		server.Stop()
	}()

	return nil
}

func (p *apiModule) stop(service *Service) error {
	if p.disable {
		return nil
	}
	p.cancel()
	return nil
}

func (p *apiModule) reload(service *Service) error {
	// A stray `return nil` previously made the logic below unreachable;
	// removed so that reload actually restarts the module.
	if !p.disable {
		p.stop(service)
		time.Sleep(time.Second)
	}
	p.prestart(service)
	return p.start(service)
}
/* Initialization for X buffer operations on GPU. */
void nbnxn_gpu_init_x_to_nbat_x(const Nbnxm::GridSet& gridSet, NbnxmGpu* gpu_nbv)
{
    const DeviceStream& localStream   = *gpu_nbv->deviceStreams[InteractionLocality::Local];
    const bool          bDoTime       = gpu_nbv->bDoTime;
    const int           maxNumColumns = gridSet.numColumnsMax();

    reallocateDeviceBuffer(&gpu_nbv->cxy_na, maxNumColumns * gridSet.grids().size(),
                           &gpu_nbv->ncxy_na, &gpu_nbv->ncxy_na_alloc, *gpu_nbv->deviceContext_);
    reallocateDeviceBuffer(&gpu_nbv->cxy_ind, maxNumColumns * gridSet.grids().size(),
                           &gpu_nbv->ncxy_ind, &gpu_nbv->ncxy_ind_alloc, *gpu_nbv->deviceContext_);

    for (unsigned int g = 0; g < gridSet.grids().size(); g++)
    {
        const Nbnxm::Grid& grid = gridSet.grids()[g];

        const int  numColumns      = grid.numColumns();
        const int* atomIndices     = gridSet.atomIndices().data();
        const int  atomIndicesSize = gridSet.atomIndices().size();
        const int* cxy_na          = grid.cxy_na().data();
        const int* cxy_ind         = grid.cxy_ind().data();

        auto* timerH2D = bDoTime ? &gpu_nbv->timers->xf[AtomLocality::Local].nb_h2d : nullptr;

        reallocateDeviceBuffer(&gpu_nbv->atomIndices, atomIndicesSize, &gpu_nbv->atomIndicesSize,
                               &gpu_nbv->atomIndicesSize_alloc, *gpu_nbv->deviceContext_);

        if (atomIndicesSize > 0)
        {
            if (bDoTime)
            {
                timerH2D->openTimingRegion(localStream);
            }
            copyToDeviceBuffer(&gpu_nbv->atomIndices, atomIndices, 0, atomIndicesSize, localStream,
                               GpuApiCallBehavior::Async, bDoTime ? timerH2D->fetchNextEvent() : nullptr);
            if (bDoTime)
            {
                timerH2D->closeTimingRegion(localStream);
            }
        }

        if (numColumns > 0)
        {
            if (bDoTime)
            {
                timerH2D->openTimingRegion(localStream);
            }
            copyToDeviceBuffer(&gpu_nbv->cxy_na, cxy_na, maxNumColumns * g, numColumns, localStream,
                               GpuApiCallBehavior::Async, bDoTime ? timerH2D->fetchNextEvent() : nullptr);
            if (bDoTime)
            {
                timerH2D->closeTimingRegion(localStream);
            }

            if (bDoTime)
            {
                timerH2D->openTimingRegion(localStream);
            }
            copyToDeviceBuffer(&gpu_nbv->cxy_ind, cxy_ind, maxNumColumns * g, numColumns, localStream,
                               GpuApiCallBehavior::Async, bDoTime ? timerH2D->fetchNextEvent() : nullptr);
            if (bDoTime)
            {
                timerH2D->closeTimingRegion(localStream);
            }
        }
    }

    nbnxnInsertNonlocalGpuDependency(gpu_nbv, Nbnxm::InteractionLocality::Local);
    nbnxnInsertNonlocalGpuDependency(gpu_nbv, Nbnxm::InteractionLocality::NonLocal);
}
def CountBouts(self):
    """Return the total bout count summed across all entries."""
    return sum(e.CountBouts() for e in self.entries)
The Cubs have signed left-hander Tsuyoshi Wada to a one-year, Major League contract, the team announced. The deal is worth $4MM, with another $2MM available in incentives, MLB.com’s Carrie Muskat reports (Twitter link). The new contract overrides a $5MM team option the Cubs held on Wada’s services for the 2015 season. Wada is represented by the Octagon Agency. Wada, 33, finally got his first taste of Major League action last season, posting an impressive 3.25 ERA, 7.4 K/9 and 3.00 K/BB rate over 69 1/3 IP (13 starts) with the Cubs. Following a distinguished nine-year career in Japan, Wada signed a two-year, $8.15MM deal with the Orioles in December 2011, though he never threw so much as a pitch for the O’s thanks to Tommy John surgery. After signing a minor league deal with the Cubs last offseason, Wada successfully rebuilt his value and has now worked himself into Chicago’s rotation plans for 2015. The Cubs have been widely rumored to be interested in signing a top free agent pitcher (possibly Jon Lester) to add to Wada, Jake Arrieta, Kyle Hendricks, Travis Wood and Edwin Jackson, while options like Jacob Turner and Felix Doubront are also in the rotation mix.
/**
 * Class to generate ingredient objects for recipes and other
 */
public class Ingredient {

    /**
     * ID of the ingredient
     */
    private Integer ingredientID;

    /**
     * Name of the ingredient
     */
    private String name;

    /**
     * Boolean if the ingredient is vegan
     */
    private boolean isVegan;

    /**
     * Boolean if the ingredient is vegetarian
     */
    private boolean isVegetarian;

    /**
     * Quantity of the ingredient
     */
    private Integer quantity;

    /**
     * Category of the ingredient
     */
    private IngredientCategory category;

    /**
     * Unit of the ingredient
     */
    private IngredientUnit unit;

    /**
     * Ingredient constructor.
     * @param name Ingredient name
     * @param isVegan Is the ingredient vegan
     * @param isVegetarian Is the ingredient vegetarian
     * @param category Category object for the ingredient
     */
    public Ingredient(String name, boolean isVegan, boolean isVegetarian, IngredientCategory category) {
        this.name = name;
        this.isVegan = isVegan;
        this.isVegetarian = isVegetarian;
        this.category = category;
    }

    /**
     * Ingredient constructor.
     * @param ingredientID ID of the ingredient
     * @param name Ingredient name
     * @param isVegan Is the ingredient vegan
     * @param isVegetarian Is the ingredient vegetarian
     * @param category Category object for the ingredient
     */
    public Ingredient(int ingredientID, String name, boolean isVegan, boolean isVegetarian, IngredientCategory category) {
        this.ingredientID = ingredientID;
        this.name = name;
        this.isVegan = isVegan;
        this.isVegetarian = isVegetarian;
        this.category = category;
    }

    /**
     * Constructs an ingredient from its ID in the database
     * @param ingredientID IngredientID
     * @param quantity Quantity for the ingredient
     * @param unit Unit for the ingredient
     * @throws IngredientNotFoundException If an ingredient is requested that does not exist!
     */
    public Ingredient(int ingredientID, int quantity, IngredientUnit unit) throws IngredientNotFoundException {
        if (!IngredientController.ingredientExists(ingredientID)) {
            Logger.log(LogType.ERROR, "Ingredient does not exist!");
            throw new IngredientNotFoundException("Cannot get ingredient with ID '" + ingredientID + "'. (NOT_FOUND)");
(NOT_FOUND)"); } Ingredient i = IngredientController.getIngredient(ingredientID); assert i != null; this.ingredientID = ingredientID; this.name = i.getName(); this.isVegan = i.isVegan(); this.isVegetarian = i.isVegetarian(); this.category = i.getIngredientCategory(); this.quantity = quantity; this.unit = unit; } /** * Gets the ID of the ingredient * @return ID */ public Integer getIngredientID() { return ingredientID; } /** * @return Name of the ingredient */ @Override public String toString() { return getName(); } /** * Gets the category object * @return IngredientCategory Object */ public IngredientCategory getIngredientCategory() { return category; } /** * Sets the category object * @param category IngredientCategory Object */ public void setIngredientCategory(IngredientCategory category) { this.category = category; } /** * Gets the unit for the ingredient * @return unit object */ public IngredientUnit getUnit() { return unit; } /** * Gets the ingredient name * @return ingredeint name */ public String getName() { return name; } /** * Sets the name of the ingredient * @param name name of the ingredient */ public void setName(String name) { this.name = name; } /** * Gets the boolean if the ingredient is vegan * @return true if its vegan */ public boolean isVegan() { return isVegan; } /** * Defines if the ingredient is vegan * @param vegan true if it is vegan */ public void setVegan(boolean vegan) { isVegan = vegan; } /** * Gets the boolean if the ingredient is vegetarian * @return true if its vegetarian */ public boolean isVegetarian() { return isVegetarian; } /** * Defines if the ingredient is vegetarian * @param vegetarian true if it is vegetarian */ public void setVegetarian(boolean vegetarian) { isVegetarian = vegetarian; } /** * Gets the quantity of the ingredient * @return Quantity (null -> not set) */ public Integer getQuantity() { return quantity; } }
/**
 * Solution to the challenge #002 of Project Euler series on HackerRank:
 * https://www.hackerrank.com/contests/projecteuler/challenges/euler002
 * Created on: 18-08-2018
 * Last Modified: 18-08-2018
 * Author: <NAME> (<EMAIL>)
 * License: MIT
 */
#include <iostream>
#include <cstdint>
#include <vector>

std::vector<uint64_t> fibSequence{1, 1, 2};
std::vector<uint64_t> answers{0, 0, 0}; // answers[k] holds the sum of even Fibonacci entries with index strictly below k.
const uint64_t maxN = 4e16;

int main()
{
    // Precalculates the data until maxN is exceeded:
    uint64_t k = answers.size() - 1; // index of last calculated fibonacci number & answer
    while (fibSequence[k] < maxN) {
        fibSequence.push_back(fibSequence[k - 1] + fibSequence[k]);
        answers.push_back(answers[k] + (fibSequence[k] % 2 == 0 ? fibSequence[k] : 0));
        k += 1;
    }

    int T;
    std::cin >> T;
    for (int t = 0; t < T; ++t) {
        uint64_t N;
        std::cin >> N;
        int n = 1; // This challenge assumes 1-indexing; I threw an extra "1" as zeroth entry.
        // Advance past every Fibonacci number <= N; the condition must be `<=`
        // (not `<`) so that an N that is itself an even Fibonacci number is counted.
        while (fibSequence[n] <= N) {
            n += 1; // Linear search, could be optimized to binary if it times out.
        }
        std::cout << answers[n] << std::endl;
    }
    return 0;
}
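A property worth noting: under the classic F(1) = F(2) = 1 convention, only every third Fibonacci number is even, which is why the running `answers` sum advances so sparsely. A quick Python sanity check:

```python
fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])
# With 1-based indexing and F(1) = F(2) = 1, the even entries are
# F(3), F(6), F(9), ...: every third term.
assert all((f % 2 == 0) == (i % 3 == 0) for i, f in enumerate(fib, start=1))
```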
// jsonquery/numeric_comparison.go
package jsonquery

import (
	"fmt"
	"strconv"
	"time"

	"github.com/araddon/dateparse"
)

type numericComparison struct {
	Arity
	// TODO: SWAP ORDER
	comparisonFloat       func(rValues []float64, lValue float64) bool
	comparisonTime        func(rValues []time.Time, lvalue time.Time) bool
	extraNumericValidator func(rValues []float64) error
	extraTimeValidator    func(rValues []time.Time) error
}

func (n *numericComparison) GetComparator(rValues []string) (Comparator, error) {
	if err := n.CheckArity(rValues); err != nil {
		return nil, err
	}
	// attempt to detect if we're being called in a time context by testing the first lvalue
	looksLikeTime := false
	if _, err := time.Parse(time.RFC3339, rValues[0]); err == nil {
		looksLikeTime = true
	} else {
		if _, err := time.Parse("2006-01-02", rValues[0]); err == nil {
			looksLikeTime = true
		}
	}
	if !looksLikeTime {
		// doesn't look like a time, assume float context
		floatValues, err := getFloatSlice(rValues)
		if err != nil {
			return nil, fmt.Errorf("numeric comparison value: %w", err)
		}
		if n.extraNumericValidator != nil {
			if err := n.extraNumericValidator(floatValues); err != nil {
				return nil, fmt.Errorf("invalid comparison values: %w", err)
			}
		}
		return &floatComparator{
			rValues:  floatValues,
			compFunc: n.comparisonFloat,
		}, nil
	}
	timeValues, err := getTimeSlice(rValues)
	if err != nil {
		return nil, fmt.Errorf("datetime comparison: %w", err)
	}
	if n.extraTimeValidator != nil {
		if err := n.extraTimeValidator(timeValues); err != nil {
			return nil, fmt.Errorf("invalid comparison values: %w", err)
		}
	}
	return &timeComparator{
		rValues:  timeValues,
		compFunc: n.comparisonTime,
	}, nil
}

type timeComparator struct {
	rValues  []time.Time
	compFunc func(l []time.Time, r time.Time) bool
}

func (t *timeComparator) Evaluate(lValues []string) (bool, error) {
	lTimevals, err := getTimeSlice(lValues)
	if err != nil {
		return false, fmt.Errorf("non-datetime value cannot be compared: %w", err)
	}
	for _, tv := range lTimevals {
		if t.compFunc(t.rValues, tv) {
			return true, nil
		}
	}
	return false, nil
}

type floatComparator struct {
	rValues  []float64
	compFunc func(r []float64, l float64) bool
}

func (f *floatComparator) Evaluate(lValues []string) (bool, error) {
	lFloatvals, err := getFloatSlice(lValues)
	if err != nil {
		return false, fmt.Errorf("non-numeric value cannot be compared: %w", err)
	}
	for _, lv := range lFloatvals {
		if f.compFunc(f.rValues, lv) {
			return true, nil
		}
	}
	return false, nil
}

func getFloatSlice(values []string) ([]float64, error) {
	floatValues := make([]float64, len(values))
	for i, fs := range values {
		fv, err := strconv.ParseFloat(fs, 64)
		if err != nil {
			return nil, fmt.Errorf("%q is not numeric", fs)
		}
		floatValues[i] = fv
	}
	return floatValues, nil
}

func getTimeSlice(values []string) ([]time.Time, error) {
	timeValues := make([]time.Time, len(values))
	for i, ts := range values {
		tv, err := dateparse.ParseAny(ts)
		if err != nil {
			return nil, fmt.Errorf("%q is not a recognized time format", ts)
		}
		timeValues[i] = tv
	}
	return timeValues, nil
}
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { Flights } from '../model/flights';

@Injectable({ providedIn: 'root' })
export class FlightsService {
  private flightUrl: string = 'http://localhost:8082/api/flights/';

  constructor(private readonly http: HttpClient) {}

  getFlights(): Observable<Flights[]> {
    return this.http.get<Flights[]>(this.flightUrl);
  }

  deleteFlights(id: number): Observable<Flights> {
    return this.http.delete<Flights>(this.flightUrl + id);
  }

  addFlight(flight: Flights): Observable<Flights> {
    return this.http.post<Flights>(this.flightUrl, flight);
  }

  getFlightById(id: number): Observable<Flights> {
    return this.http.get<Flights>(this.flightUrl + id);
  }

  updateFlight(flight: Flights): Observable<Flights> {
    return this.http.put<Flights>(this.flightUrl, flight);
  }
}
By Tom Mangold Radio 4, Crossing Continents Curtis Flowers, a 39-year-old African-American, is to stand trial for an unprecedented sixth time for the murder of four people in Mississippi in 1996. So far, two of his trials have resulted in mistrials and three in convictions that were later overturned. James Bibbs, also an African-American, was a juror in Mr Flowers's 2008 trial, which ended in a mistrial. He was the only one of the 12 to vote against a conviction. At the end of the trial, Mr Bibbs was hauled in front of the judge, harangued, threatened, arrested in court, led away in handcuffs, charged with perjury and spent the night in prison. Mr Bibbs is in his early 60s. He's a retired school teacher, a Vietnam veteran, a local football referee - a patently decent man who was shocked by what had happened. "The judge got real loud, and he said 'you are lying, you committed perjury'. I was disappointed, all these years you do all these things for the community, then you are called a liar like that out in the public, it was degrading." The judge's outburst (the perjury charge has since been quietly dropped) came in a case that is extraordinary for many reasons. Unprecedented The prosecution of Curtis Flowers casts a sharp light on racial attitudes in America's South one year after the election of the nation's first black president. He has been sentenced to death three times, only for each verdict to be overturned on appeal because of what the Mississippi Supreme Court described as prosecutorial misconduct. In one further trial, the jury failed to agree after dividing broadly on racial lines. In the fifth trial, James Bibbs voted for acquittal, and a unanimous verdict was required. Mr Flowers has spent 13 years on remand in prison. The local district attorney, desperate to score a conviction in such a high-profile case, has played it dirty to win. One of his tricks, exposed by a refreshingly impartial Mississippi Supreme Court, was to fiddle the jury selection to exclude black jurors. Paradoxically, the DA is not generally held to be a racist himself. Just to complicate matters even further, Curtis Flowers does have a strong case to answer. He had a motive. Mr Flowers had been employed by the owner of a furniture store who sacked him. There was a dispute about money owing. Subsequently someone walked into the store, shot the owner and then coldly massacred three other employees. Mr Flowers has never produced an alibi for that terrible morning. In his defence, the forensic evidence against him is wafer thin, and some witness evidence is contentious. Post-racial society The murders took place in the small town of Winona, in the heart of a state with the worst civil rights record in the US. Winona is not far from Philadelphia, where three civil rights workers were infamously murdered in the early 60s - a story captured in the film Mississippi Burning. The lynchings, the cross burnings, the overt violence and discrimination have long since disappeared. But even one year after Barack Obama and the dream of a post-racial society, the Flowers case shows how short the march away from old attitudes has been.
The local state senator, Lydia Chassaniol, has won few African-American hearts by introducing a bill that would widen the jury pool in a way that, critics say, would make it easier to select an all-white jury. She has joined a local chapter of the right-wing Council for Conservative Citizens and addressed their annual conference. "I'll talk to anyone who wants me to talk to them," the senator told me, stressing her role as official tourist booster for the state. But meet members of the council, as I did, in a modest motel outside Winona, and the nature of this rump of red-neck, good ol' white boys and Confederate-flag-wavers is striking. Their hatred of inter-racial marriage, homosexuals, liberals (aka communists) identifies an atavistic streak that still remains 150 years after slavery. As one of them told me: "It's all right for them (non-whites) to practise their culture but they should not take ours away from us. We are probably the most discriminated race in the country." Mr Flowers faces a sixth trial next June. In Britain, natural justice would have made it likely that the prosecution would be dropped after the second mistrial. But this is Winona, Mississippi, and a black man accused of a quadruple murder will not be allowed to walk away. Black president or not, the state and its judicial servants are not ready for that yet. Crossing Continents: Mississippi Smouldering is broadcast on BBC Radio 4 on Thursday, 26 November 2009 at 1100 GMT and repeated on Monday at 2030 GMT. You can also listen to Crossing Continents on the BBC iPlayer or subscribe to the podcast.
/*
 * Write each image (represented as an encoded byte array) to the HDFS,
 * using the base name of the "source" metadata value recorded in the
 * image header to generate the output filename.
 */
@Override
public void map(HipiImageHeader header, ByteImage image, Context context)
    throws IOException, InterruptedException {
  if (header == null || image == null) {
    System.err.println("Failed to decode image, skipping.");
    return;
  }
  String source = header.getMetaData("source");
  if (source == null) {
    System.err.println("Failed to locate source metadata key/value pair, skipping.");
    return;
  }
  String base = FilenameUtils.getBaseName(source);
  if (base == null) {
    System.err.println("Failed to determine base name of source metadata value, skipping.");
    return;
  }
  // Re-encode the image as a JPEG at <path>/<base>.jpg on the HDFS.
  Path outpath = new Path(path + "/" + base + ".jpg");
  FSDataOutputStream os = fileSystem.create(outpath);
  JpegCodec.getInstance().encodeImage(image, os);
  os.flush();
  os.close();
  context.write(new BooleanWritable(true), new Text(base));
}
Sometimes online games just...die, but Demon's Souls will live on. Atlus sends word that they've decided to extend online server support for this game into 2012. "While it comes at significant cost to us and although it has been over two years since the game revolutionized the notion of multiplayer and online functionality in an RPG," commented Aram Jabbari, Manager of PR and Sales at ATLUS, "our commitment to the game and the fans that turned it into an incredible success remains as strong as ever." Nothing lasts forever, though. "While the reality is that one day the servers will ultimately close due to operation and maintenance costs, that day is not today, nor will it be this year. We're excited to continue to support one of the most significant, influential games in recent memory into its third year, and we're planning to hold more tendency events for our loyal, beloved fans. If you've been waiting to try Demon's Souls or held off with concern that it was too late to join in, the truth is that there's never been a better time to see what all the talk and awards are all about." The game's only $19.99, as it's a PS3 Greatest Hits title. Not bad for one of the top titles on the system. As they say on eBay, buy with confidence.
import { ShapeSet } from '../types/ShapeSet';
import { polygonContains } from 'd3-polygon';

// True when every node center in the active set lies inside every ring of `path`.
const isAllContained = async (activeSet: ShapeSet, path: number[][][]) => {
  return activeSet.nodes.reduce((prev, curr) => {
    // Accumulate across nodes: a single node outside any ring fails the whole test.
    // (The original dropped `prev &&`, so earlier nodes' results were discarded.)
    return (
      prev &&
      path.reduce(
        (prev2, curr2) =>
          prev2 &&
          polygonContains(curr2 as [number, number][], [
            curr.center.lng,
            curr.center.lat
          ]),
        true
      )
    );
  }, true);
};

export default isAllContained;
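`polygonContains` from d3-polygon implements the standard even-odd ray-casting test. For readers unfamiliar with it, a hedged Python equivalent (d3's actual source may differ in edge-case handling):

```python
def polygon_contains(polygon, point):
    """Even-odd ray casting: count how many edges a horizontal ray crosses."""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Does edge (j, i) straddle the horizontal line through the point,
        # and does the crossing lie to the right of the point's x?
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside
```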
Two weeks after the much-maligned CNBC Republican debate, where we learned that the 2016 GOP hopefuls opposed raising the minimum wage, would cut taxes to practically nothing and abolish the IRS, would repeal Obamacare, destroy ISIS with their steely gazes, not to mention spending billions more on the military, and, of course, that they really, really hate Hillary Clinton, they met again Tuesday night. This time, facing their friends from the Wall Street Journal and the Fox Business Network, we learned that … the 2016 GOP hopefuls opposed raising the minimum wage, would cut taxes to practically nothing and abolish the IRS, would repeal Obamacare, destroy ISIS with their steely gazes, not to mention spending billions more on the military, and, of course, that they really, really hate Hillary Clinton. There were a few amusing moments: Ben Carson is still insisting West Point hands out scholarships and that he is not a liar; Donald Trump announced that wages were too high, along with seeming to believe that China was a part of the TPP; Carly Fiorina sneered at the idea (with finger quotes!) of protecting consumers from fraud; Jeb Bush kept forgetting that President Obama inherited a financial crisis from another Bush; Marco Rubio said that being a parent is the most important job in the world, and in the next breath said being president is the most important job in the world, and Ted Cruz announced the five government departments that he would abolish, but could only name four of them. Oops. We’ll be looking at more of the highlights from Tuesday night’s debate throughout the day, but that’s about it in a nutshell. Emphasis on nut.
/// TODO: Figure out State engine for Negotiation
fn receive_negotiation(&mut self, action: Action, option: TelnetOption) {
    use self::{State::*, Action::*};
    // https://mudcoders.fandom.com/wiki/Telnet_Option_Negotiation_-_Q_Method
    // options[option].0 tracks the local side's state, .1 the remote side's.
    // @formatter:off
    #[rustfmt::skip]
    match (self.options[option].0, self.options[option].1, action) {
        // (Local State, Remote State, Action) => { ... }
        ( No, _,  Do)   => { self.options[option].0 = Yes; } // peer asks us to enable
        ( No, _,  Dont) => { /* Ignored */ }
        ( _,  No, Will) => { self.options[option].1 = Yes; } // peer enables on its side
        ( _,  No, Wont) => { /* Ignored */ }
        // (Recv, Will, _, No) => { self.options[option].1 = NegotiationState::WantYes }
        // (Recv, Wont, _, No) => { /* Do Nothing */ }
        _ => { /* remaining transitions: TODO (match must be exhaustive) */ }
    }
    // @formatter:on
}
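For reference, RFC 1143's Q method tracks one of No/Yes/WantNo/WantYes per side per option. A hedged Python sketch of the receive-DO transitions only, simplified by assuming we always agree to enable and ignoring the RFC's opposite-queue bit:

```python
# Hypothetical, simplified Q-method transitions for a received DO (RFC 1143).
NO, YES, WANT_NO, WANT_YES = 'No', 'Yes', 'WantNo', 'WantYes'

def on_receive_do(local_state, send):
    if local_state == NO:
        send('WILL')        # agree: acknowledge and enable locally
        return YES
    if local_state == WANT_YES:
        return YES          # our earlier WILL was just acknowledged
    if local_state == WANT_NO:
        return NO           # RFC 1143 treats this as an error; drop to No
    return local_state      # already Yes: ignore the duplicate DO
```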
Hey guys, So, it turns out we've been playing around with the expanded Day 5 universe... And we've got a few bonus tales to tell alongside Jake and company. The first short, Number 27, debuts this Sunday, July 10th, at 4pm and stars a familiar face you might've spotted in Episode 2... Speaking of familiar faces, RTX 2016 attendees got a sneak peek at Episode 4, "Sweet Dreams," which happens to (finally) feature someone who rhymes with "Bowl." You'll get to see more of him (but not as much as you'll see of Aaron later -- wink, wink) when the episode debuts on Sunday, July 17th. Then look out for two more Day 5 shorts in August after the season finale to feed your delirium. And seriously -- thanks for watching. Josh