#include "test.h"

/** Question no 1373 hard Maximum Sum BST in Binary Tree
 * Author : Li-Han, Chen; 陳立瀚
 * Date   : 8th, March, 2020
 * Source : https://leetcode.com/problems/maximum-sum-bst-in-binary-tree/
 *
 * Given a binary tree root, return the maximum sum of all keys of any sub-tree
 * which is also a Binary Search Tree (BST).
 *
 * Assume a BST is defined as follows:
 *
 * The left subtree of a node contains only nodes with keys less than the node's key.
 * The right subtree of a node contains only nodes with keys greater than the node's key.
 * Both the left and right subtrees must also be binary search trees.
 */

/** Solution
 * Runtime 192 ms, Memory 66.67 MB;
 * faster than 66.67%, less than 100.00%
 * O(n) ; O(n)
 */
class Solution {
public:
    int ans = 0;

    int maxSumBST(TreeNode* root) {
        traverse(root);
        return ans;
    }

    int traverse(TreeNode* root) {
        if (!root) return 0;
        if (!root->left && !root->right) {
            ans = std::max(ans, root->val);
            return root->val;
        }
        int left_val  = traverse(root->left);
        int right_val = traverse(root->right);
        if (left_val > INT_MIN && right_val > INT_MIN) {
            if (root->left  != NULL && root->left->val  >= root->val) return INT_MIN;
            if (root->right != NULL && root->right->val <= root->val) return INT_MIN;
            int sum_up = root->val + left_val + right_val;
            ans = std::max(ans, sum_up);
            return sum_up;
        }
        return INT_MIN;
    }
};
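The C++ above uses a post-order recursion. A minimal Python sketch of the same idea (not the author's code; `Node` and all names are hypothetical) which additionally carries each subtree's min/max up the recursion, so BST violations deeper than the immediate children are also caught:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right


def max_sum_bst(root):
    best = 0  # the empty subtree counts as sum 0, as in the LeetCode problem

    def visit(node):
        # returns (is_bst, lo, hi, total) for the subtree rooted at node
        nonlocal best
        if node is None:
            return True, float("inf"), float("-inf"), 0
        lb, llo, lhi, ls = visit(node.left)
        rb, rlo, rhi, rs = visit(node.right)
        # a subtree is a BST iff both children are BSTs and the key strictly
        # separates the left subtree's max from the right subtree's min
        if lb and rb and lhi < node.val < rlo:
            total = ls + node.val + rs
            best = max(best, total)
            return True, min(llo, node.val), max(node.val, rhi), total
        return False, 0, 0, 0

    visit(root)
    return best
```

Tracking (min, max, sum) per subtree keeps the whole check O(n), since each node is visited once.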
import React from 'react';

import BlindMain from '../components/BlindMain';

const NotFound: React.FC = () => (
  <BlindMain type="404" />
);

export default NotFound;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.stream.Collectors;

public class LittleAlchemy {

    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
        Integer[] stonesArray = Arrays.stream(reader.readLine().split("\\s+"))
                .map(Integer::valueOf)
                .toArray(Integer[]::new);
        Deque<Integer> stones = new ArrayDeque<>();
        int gold = 0;
        Collections.addAll(stones, stonesArray);

        String line;
        while (true) {
            if ("Revision".equals(line = reader.readLine())) {
                break;
            }
            String[] tokens = line.split("\\s+");
            String command = tokens[0] + " " + tokens[1];
            int n = Integer.parseInt(tokens[2]);
            switch (command) {
                case "Apply acid":
                    if (stones.isEmpty()) {
                        continue;
                    }
                    while (n-- > 0) {
                        int stone = stones.removeFirst();
                        stone -= 1;
                        if (stone == 0) {
                            gold++;
                        } else {
                            stones.add(stone);
                        }
                        if (stones.isEmpty()) {
                            break;
                        }
                    }
                    break;
                case "Air leak":
                    if (gold == 0) {
                        continue;
                    }
                    stones.add(n);
                    gold--;
                    break;
            }
        }

        System.out.println(String.join(" ",
                stones.stream().map(String::valueOf).collect(Collectors.toList())));
        System.out.println(gold);
    }
}
Parents are giving their children cannabis to treat serious diseases, and they're resorting to growing their own plants or importing them illegally from overseas. At least three New Zealand customers order liquid cannabis products from Mulaways Medicinal Cannabis in Kempsie, about 400km north of Sydney in rural New South Wales. The cannabis tincture has been credited with providing relief for children with terminal illnesses and has sparked renewed debate over clinical trials of medicinal marijuana. Mulaways founder Tony Bower said three New Zealanders were on a mailing list of about 150 customers. Other Kiwis flew to Australia to source the drug and dozens more were added each week to a bulging waiting list, he said. But the supply of liquid cannabis could dry up altogether if Bower is sent to prison. He is due to appear in court in October charged with cultivating cannabis and breaching a good behaviour bond he was placed on after a six-week stint in jail in mid-2013. He said he felt a "duty" to keep growing the plants and supplying the product to children, even if it meant going back to prison. "It's crossed my mind to stop and pack up but these are sick kids. What else can I do?" Bower said the delivery of the drug to New Zealand was particularly risky - not only for him but for the customers who ordered it. The maximum penalty for importation of liquid cannabis - considered a class B drug in New Zealand - is 14 years in jail. Possession or use offences carry a maximum three-month jail term and/or a $500 fine. "To tell you the truth, that's why I don't [send] as much to New Zealand," Bower said. "It's hard because you've got people who risk getting their children taken off them." Hawke's Bay mother Christine* said she tried to source medical cannabis two years ago but met a dead end and decided to grow her own plants. 
She took the unconventional step of producing cannabis oil for her daughter Ellen*, then aged 14, who was told she may not reach adulthood due to Dravet syndrome, a severe form of epilepsy. "At the time I was thinking everything else had failed, so why not try it. I was quite pessimistic to be honest." Christine said the cannabis oil had an immediate and dramatic impact. Ellen's seizures reduced from hundreds each day down to only a handful, allowing her to return to school for the first time in five years. "She's gone from 120 hospital admissions in 2012 to just eight last year. It's quite amazing. She is still on some pharmaceuticals. We've found that combination with the cannabis oil has been hugely beneficial." The plants used to make Ellen's cannabis oil are grown at a private property and taken to an "under the radar" lab to be tested. The oil has to be between 0.5 per cent and 1 per cent of THC - a level that's too low to cause any psychoactive "high". Christine said it was a time-consuming and extremely high-risk process, but she felt it was her only option. "When you have a child whose condition is terminal, you would do just about everything to save them." As far as she knew she was the only person in New Zealand who made cannabis oil for her child "but there would be lots of parents in the same boat who would be considering it". Medicinal marijuana support group Green Cross NZ estimated there were hundreds of children and adults across New Zealand using cannabis medically. Green Cross director Billy McKee said the Government was "out of touch" with the reality of terminally ill patients and said a medical cannabis trial was well overdue. "It's extremely urgent they look at it. We've been saying this for years - but [Associate Health Minister] Peter Dunne won't engage with us, he's not interested in looking at the evidence." Dunne said the Government had "no plan" to approve a clinical trial for medical cannabis. 
He said anecdotal examples of children's health benefits from cannabis oil or liquids were not enough to change his mind on the policy. "I have yet to see any evidence that cannabis in any form has contributed in any way to help children, or indeed anyone, recover from serious diseases," he said. New Zealand Drug Foundation director Ross Bell said there was mounting evidence of the benefits of medical cannabis. He said most of that had been ignored by a Government which was "40 years behind on the issue". The health select committee had particularly failed to heed calls to review the laws, he said. "There's absolutely more that needs to be done, especially around compassionate laws to prevent parents being prosecuted. If parents think they can help their child they will bend over backwards to find that drug, whether it's legal or not. "We need to update the legislation to acknowledge that." The Green Party also criticised the committee's finding, saying its latest report in May was "spurious" and more research was needed. Growing calls to decriminalise medical marijuana have gained backing from the Australian Medical Association and the NSW Nurses and Midwives' Association. A bill currently before the New South Wales state parliament could approve cannabis use for terminally-ill patients. New Zealand Medical Association chair Dr Mark Peterson said he was in support of evidence-based testing of medical cannabis, and wanted to see a trial held. "We think there is a place for more research, absolutely," he said. "We're trying to make the case that medical cannabis shouldn't be treated differently to, say, aspirin or any other controlled drug." Bower said governments in both countries needed to legalise the drug for medical use. It would be beneficial for his upcoming court case and also crucial to the wellbeing of young patients, he said. 
Countries including Canada, Switzerland, Belgium, the Czech Republic, the Netherlands, and Israel, along with 23 states in America, have all permitted medical cannabis while maintaining laws against recreational use. Christine hoped New Zealand would eventually follow that model. She said parents should be able to seek help for their children without feeling like criminals.

* Names have been changed for legal protection.

'IT'S AMAZING . . . SHE'S REALLY DEVELOPING'

Eleven-year-old Paige Gallien uses marijuana three times a day to control her seizures in a treatment programme that's been personally ticked off by the health minister. The Hamilton girl is the only child in New Zealand approved to use Sativex - a medical cannabis mouth spray. Her father Brent Gallien said Paige had shown an "incredible improvement" in her Dravet syndrome, a rare form of epilepsy, and he wanted the Government to subsidise the drug and make it more accessible for other children. Since Paige started using Sativex in February she had gone from an average of eight seizures per night down to fewer than one a night, Gallien said. "It's amazing to see her now. Her speech is great, she's really developing. She drew a picture of a smiley face the other day, which doesn't sound like much but for us was really massive." Sativex has been legal in New Zealand since 2008 but has only partial approval, meaning each application has to be authorised through the health minister's office. Ministry of Health figures show only 53 prescriptions have been issued, including for repeat patients and no-longer-active users. Gallien said there were extensive hoops to jump through to get registered for the programme. He admitted if it wasn't made available for Paige, he would have bought illegal cannabis. "We would have been doing an illegal version. There's no question about that."
Sativex did not receive any subsidy through Pharmac and did not come cheap, at about $1000 for three small bottles, which lasted between one month and three months, depending on the level of use. It also had side-effects. Government authority Medsafe warned dizziness was commonly reported and it did not recommend Sativex for children "due to a lack of safety and efficacy data". New Zealand Medical Association chair Dr Mark Peterson said Sativex was out of reach for most people. "The bar is set deliberately high because it has not been approved for full market use," he said. "It also lacks the evidence of how safe it is to use." The spray contains half the level of THC - the compound that gives the psychoactive "high" - usually found in cannabis. Gallien said Paige often felt drowsy, which he suggested would not be the case if he had access to other illegal and unregulated cannabis products, which had lower THC levels. Another pharmaceutical, Epidiolex, which is a liquid, non-psychoactive cannabinoid, is also on the radar for New Zealand following clinical trials in America. Medsafe group manager Stewart Jessamine said the trials were being closely followed and the Ministry of Health "may in the future consider medicinal products [such as Epidiolex] be brought to the market". Gallien said he would prefer Paige to use Epidiolex, but he expected it would be at least a couple of years away. The more immediate concern was that New Zealand children with serious diseases were missing out on access to Sativex due to cost or bureaucratic hurdles. "The next step is to help all these other kids get funding and permission to use it. There are so many people out there with the same issue."
def delete_annotation(
    self, key: str, column_id: Optional[int] = None, row_id: Optional[int] = None
):
    doc = self.read(column_id=column_id, row_id=row_id)
    if key in doc:
        del doc[key]
    self.write(doc=doc, column_id=column_id, row_id=row_id)
/*
 * Copyright (C) 2020 The zfoo Authors
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at
 * http://www.apache.org/licenses/LICENSE-2.0
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and limitations under the License.
 */

package com.zfoo.tank.cache.controller;

import com.zfoo.event.manager.EventBus;
import com.zfoo.net.NetContext;
import com.zfoo.net.router.attachment.GatewayAttachment;
import com.zfoo.net.router.receiver.PacketReceiver;
import com.zfoo.net.session.model.Session;
import com.zfoo.orm.model.anno.EntityCachesInjection;
import com.zfoo.orm.model.cache.IEntityCaches;
import com.zfoo.storage.model.anno.ResInjection;
import com.zfoo.storage.model.vo.Storage;
import com.zfoo.tank.cache.model.PlayerLevelUpEvent;
import com.zfoo.tank.cache.util.SendUtils;
import com.zfoo.tank.common.entity.PlayerEntity;
import com.zfoo.tank.common.protocol.CurrencyUpdateNotice;
import com.zfoo.tank.common.protocol.PlayerExpNotice;
import com.zfoo.tank.common.protocol.battle.BattleResultRequest;
import com.zfoo.tank.common.protocol.battle.BattleResultResponse;
import com.zfoo.tank.common.protocol.cache.BattleScoreAnswer;
import com.zfoo.tank.common.protocol.cache.BattleScoreAsk;
import com.zfoo.tank.common.resource.PlayerExpResource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

/**
 * @author jaysunxiao
 * @version 3.0
 */
@Component
public class BattleController {

    private static final Logger logger = LoggerFactory.getLogger(BattleController.class);

    @EntityCachesInjection
    private IEntityCaches<Long, PlayerEntity> playerEntityCaches;

    @ResInjection
    private Storage<Integer, PlayerExpResource> playerExpStorage;

    @PacketReceiver
    public void atBattleResultRequest(Session session, BattleResultRequest request, GatewayAttachment gatewayAttachment) {
        var uid = gatewayAttachment.getUid();
        var player = playerEntityCaches.load(uid);
        var score = request.getScore();

        // Note: this thread is the same thread that runs the async request callback
        logger.info("c[{}][{}]玩家战斗结果[score:{}]", gatewayAttachment.getUid(), gatewayAttachment.getSid(), score);

        // asynchronously send the ask to the cache server
        NetContext.getConsumer().asyncAsk(BattleScoreAsk.valueOf(uid, score), BattleScoreAnswer.class, uid)
                .whenComplete(answer -> {
                    logger.info("c[{}][{}]玩家战斗结果异步回调", gatewayAttachment.getUid(), gatewayAttachment.getSid());

                    // If the battle put the player on the leaderboard, grant a reward:
                    // one gold per point and half a gem per point
                    if (answer.isRankReward()) {
                        var currencyPo = player.getCurrencyPo();
                        currencyPo.setGold(currencyPo.getGold() + score);
                        currencyPo.setGem(currencyPo.getGem() + score / 2);

                        addPlayerExp(player, score);
                        playerEntityCaches.update(player);

                        NetContext.getRouter().send(session, BattleResultResponse.valueOf(score), gatewayAttachment);
                        NetContext.getRouter().send(session, CurrencyUpdateNotice.valueOf(currencyPo.toCurrencyVO()), gatewayAttachment);
                    }
                });
    }

    public void addPlayerExp(PlayerEntity playerEntity, int playerExp) {
        playerEntity.addExp(playerExp);
        for (int i = 0; i < playerExpStorage.size(); i++) {
            var level = playerEntity.getLevel();
            var exp = playerEntity.getExp();
            if (!playerExpStorage.contain(level + 1)) {
                break;
            }
            var playerExpConfig = playerExpStorage.get(level);
            if (exp < playerExpConfig.getExp()) {
                break;
            }
            playerEntity.setLevel(playerEntity.getLevel() + 1);
            playerEntity.setExp(exp - playerExpConfig.getExp());
            // fire a level-up event
            EventBus.syncSubmit(PlayerLevelUpEvent.valueOf(playerEntity, level));
        }
        SendUtils.sendToPlayer(playerEntity, PlayerExpNotice.valueOf(playerEntity.getLevel(), playerEntity.getExp()));
    }
}
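The level-up loop in `addPlayerExp` boils down to: while a next level exists in the exp table and the player has enough exp for the current level, subtract the requirement and advance. A Python sketch of just that loop (names and the `exp_table` dict are hypothetical stand-ins for the `PlayerExpResource` storage):

```python
def add_player_exp(level, exp, gained, exp_table):
    """Return (level, exp) after gaining `gained` experience.

    exp_table maps a level to the exp required to advance past it;
    levelling stops when level + 1 has no entry (level cap reached).
    """
    exp += gained
    while (level + 1) in exp_table and exp >= exp_table[level]:
        exp -= exp_table[level]
        level += 1
    return level, exp
```

Overflow exp carries into the next level, matching `setExp(exp - playerExpConfig.getExp())` in the Java version.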
"""prediction for user based on tweets"""
import numpy as np
from sklearn.linear_model import LogisticRegression

from .models import User
from .twitter import vectorize_tweet


def predict_user(user0_name, user1_name, hypo_tweet_text):
    """
    Determine and return which user is more likely to say a given tweet.

    Example run: predict_user('elonmusk', 'jackblack', "tesla cars go fast")
    Returns a 0 (user0: 'elonmusk') or a 1 (user1: 'jackblack')
    """
    # Grab the users from our DB (both users must already be in the DB)
    user0 = User.query.filter(User.name == user0_name).one()
    user1 = User.query.filter(User.name == user1_name).one()

    # Grab the tweet vectors for each user's tweets
    user0_vects = np.array([tweet.vect for tweet in user0.tweets])
    user1_vects = np.array([tweet.vect for tweet in user1.tweets])

    # Vertically stack the tweet vectors into one np array
    vects = np.vstack([user0_vects, user1_vects])
    labels = np.concatenate(
        [np.zeros(len(user0.tweets)), np.ones(len(user1.tweets))])

    # Fit the model with x == vects, y == labels
    log_reg = LogisticRegression().fit(vects, labels)

    # Vectorize the hypothetical tweet to pass into .predict()
    hypo_tweet_vect = vectorize_tweet(hypo_tweet_text)
    return log_reg.predict(hypo_tweet_vect.reshape(1, -1))
class Content:
    # pylint: disable=too-few-public-methods, too-many-instance-attributes
    """Content Data Class."""

    def __init__(self):
        # type: () -> None
        """Init for data."""
        self.message_type = ""  # type: Text
        self.protocol_version = ""  # type: Text
        self.connection_uuid = ""  # type: Text
        self.established = 0  # type: int
        self.timestamp_ini = 0  # type: int
        self.timestamp_ack = 0  # type: int
        self.submessage_type = ""  # type: Text
        self.channel = 0  # type: int
        self.dest_id_urn = ""  # type: Text
        self.device_id_urn = ""  # type: Text
        self.payload = ""  # type: Text
        self.payload_data = bytearray()
package ioc.context;

import ioc.factory.impl.DefaultListableBeanFactory;
import ioc.support.reader.XmlBeanDefinitionReader;

/**
 * Subclass that provides the ability to load BeanDefinitions
 */
public abstract class AbstractXmlApplicationContext extends AbstractRefreshableApplicationContext {

    @Override
    protected void loadBeanDefinitions(DefaultListableBeanFactory beanFactory) {
        XmlBeanDefinitionReader beanDefinitionReader = new XmlBeanDefinitionReader(beanFactory, this);
        String[] configLocations = getConfigLocations();
        if (null != configLocations) {
            beanDefinitionReader.loadBeanDefinitions(configLocations);
        }
    }

    protected abstract String[] getConfigLocations();
}
package org.camunda.bpm.demo.cockpit.plugin.kpi.dto;

public class ProcessInstanceCountDto {

    private String processDefinitionKey;
    private int runningInstanceCount;
    private int endedInstanceCount;
    private int failedInstanceCount;

    public String getProcessDefinitionKey() {
        return processDefinitionKey;
    }

    public void setProcessDefinitionKey(String processDefinitionKey) {
        this.processDefinitionKey = processDefinitionKey;
    }

    public int getRunningInstanceCount() {
        return runningInstanceCount;
    }

    public void setRunningInstanceCount(int runningInstanceCount) {
        this.runningInstanceCount = runningInstanceCount;
    }

    public int getEndedInstanceCount() {
        return endedInstanceCount;
    }

    public void setEndedInstanceCount(int endedInstanceCount) {
        this.endedInstanceCount = endedInstanceCount;
    }

    public int getFailedInstanceCount() {
        return failedInstanceCount;
    }

    public void setFailedInstanceCount(int failedInstanceCount) {
        this.failedInstanceCount = failedInstanceCount;
    }
}
#include <cstdio>
#include <cmath>
#include <cstring>
#include <cstdlib>
#include <algorithm>
using namespace std;

const int MaxN = 2e5;
typedef long long LL;

int n, k;
int dur[MaxN + 5], afr[MaxN + 5], pri[MaxN + 5];

struct PP {
    int pri, pos;
} pp[MaxN + 5];

int cmp(PP x, PP y) { return x.pri < y.pri; }

int main() {
    while (scanf("%d %d", &n, &k) != EOF) {
        LL ans = 0;
        int up = 0;
        for (int i = 1; i <= n; i++) scanf("%d", &dur[i]);
        for (int i = 1; i <= n; i++) scanf("%d", &afr[i]);
        for (int i = 1; i <= n; i++) {
            pp[i].pri = afr[i] - dur[i];
            pp[i].pos = i;
        }
        sort(pp + 1, pp + n + 1, cmp);
        /*for (int i = 1; i <= n; i++) printf("%d ", pp[i].pri);
        printf("\n");*/
        for (int i = 1; i <= n; i++)
            if (pp[i].pri > 0) up++;
        if (up > k) {
            for (int i = n; i >= n - up + 1; i--) ans += dur[pp[i].pos];
            for (int i = 1; i <= n - up; i++) ans += afr[pp[i].pos];
        } else {
            for (int i = n; i >= n - k + 1; i--) ans += dur[pp[i].pos];
            for (int i = 1; i <= n - k; i++) ans += afr[pp[i].pos];
        }
        printf("%I64d\n", ans);  // %I64d: MSVC-style long long format used by the old Codeforces judge
        memset(dur, 0, sizeof(dur));
        memset(afr, 0, sizeof(afr));
        memset(pri, 0, sizeof(pri));
        memset(pp, 0, sizeof(pp));
    }
    return 0;
}
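The greedy choice in the C++ above is: sort items by their saving during the sale (`afr[i] - dur[i]`), then buy the `max(k, number of items with positive saving)` best-saving items at the during-sale price and the rest at the after-sale price. A Python sketch of that selection (the function name is hypothetical):

```python
def min_total_cost(dur, after, k):
    """Minimum total cost when at least k items must be bought during the sale."""
    n = len(dur)
    # indices ordered by saving (after-sale price minus during-sale price), largest first
    order = sorted(range(n), key=lambda i: after[i] - dur[i], reverse=True)
    # buy during the sale: at least k items, plus every item that is strictly cheaper now
    buy_now = max(k, sum(1 for i in range(n) if after[i] > dur[i]))
    return (sum(dur[i] for i in order[:buy_now])
            + sum(after[i] for i in order[buy_now:]))
```

Because the `buy_now` items with the largest savings are taken at the sale price, any other split of the items can only cost more.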
export const cyrillicPattern = /[а-яА-ЯЁё]/;
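For reference, the same character-class check as the TypeScript one-liner above, sketched in Python (`re` module; Ё/ё are listed explicitly because they fall outside the contiguous а-я code-point block):

```python
import re

# Matches any Cyrillic letter in the basic а-я/А-Я ranges plus Ё/ё,
# which sit outside those contiguous code-point blocks.
cyrillic_pattern = re.compile(r"[а-яА-ЯЁё]")


def contains_cyrillic(text: str) -> bool:
    """Return True if the text contains at least one Cyrillic letter."""
    return cyrillic_pattern.search(text) is not None
```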
{-# LANGUAGE FlexibleContexts      #-}
{-# LANGUAGE LambdaCase            #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE RecordWildCards       #-}
{-# LANGUAGE TypeOperators         #-}

-- | Construction of the symbol table and evaluation of constants.
module Rbsc.TypeChecker.ModelInfo
    ( getModelInfo
    ) where

import Control.Lens
import Control.Monad.Reader
import Control.Monad.State.Strict

import Data.Foldable
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

import Rbsc.Config
import Rbsc.Data.ComponentType
import Rbsc.Data.Field
import Rbsc.Data.ModelInfo
import Rbsc.Data.Scope
import Rbsc.Data.Some
import Rbsc.Data.Type
import Rbsc.Eval
import Rbsc.Report.Error
import Rbsc.Report.Region
import Rbsc.Report.Result
import Rbsc.Syntax.Typed (LSomeExpr, SomeExpr (..))
import qualified Rbsc.Syntax.Typed as T
import Rbsc.Syntax.Untyped (Enumeration (..), LExpr, ModuleInstance (..),
                            Parameter (..), TypeSetDef (..), UConstant,
                            UFunction, UModuleInstance, UParameter, UType,
                            UVarDecl, UVarType)
import qualified Rbsc.Syntax.Untyped as U
import Rbsc.TypeChecker.ComponentTypes
import Rbsc.TypeChecker.Dependencies
import Rbsc.TypeChecker.Expr
import Rbsc.TypeChecker.Identifiers
import qualified Rbsc.TypeChecker.Internal as TC

type ModuleInstances = Map TypeName (Map Name [UModuleInstance])

type BuilderState = ModelInfo :&: ModuleInstances :&: RecursionDepth

moduleInstances :: Lens' BuilderState ModuleInstances
moduleInstances = field

-- | Construct the 'ModelInfo' for a given 'Model'.
getModelInfo ::
       (MonadReader r (t Result), Has RecursionDepth r, MonadTrans t)
    => U.Model
    -> t Result (ModelInfo, Map TypeName [UModuleInstance])
getModelInfo m = do
    idents <- lift (identifierDefs m)
    deps <- lift (fromEither' (sortDefinitions m idents))
    depth <- view recursionDepth
    result <- lift (runBuilder (traverse addDependency deps) depth)
    lift (validateComponentTypes (view (_1.componentTypes) result) m)
    return result

addDependency :: Dependency -> Builder ()
addDependency = \case
    DepDefinition def -> case def of
        DefConstant c -> addConstant c
        DefFunction f -> addFunction f
        DefLabel -> return ()
        DefGlobal decl -> addVariable Global decl
        DefLocal tyName moduleName decl -> addLocalVariable tyName moduleName decl
        DefComponentType t -> addComponentType t
        DefTypeSet s -> addTypeSet s
        DefComponent c -> addComponents c
        DefModule _ -> return ()
    DepFunctionSignature f -> addFunctionSignature f
    DepModuleInstantiation mi -> addModuleInstance mi

addConstant :: UConstant -> Builder ()
addConstant (U.Constant (Loc name _) msTy e) = do
    SomeExpr e' ty <- case msTy of
        Just sTy -> do
            -- if an explicit type annotation is given, check if type of
            -- the definition matches
            Some ty <- fromSyntaxType sTy
            e' <- typeCheckExpr ty e
            return (SomeExpr e' ty)
        Nothing -> runTypeChecker (tcExpr e)
    Dict <- return (dictShow ty)
    v <- evalConstDefExpr (e' `withLocOf` e)
    insertSymbol Global name (Some ty)
    insertConstant name (SomeExpr (T.Literal v ty) ty)

addFunctionSignature :: UFunction -> Builder ()
addFunctionSignature (U.Function mTyName (Loc name _) params sTy _) = do
    paramTys <- traverse (fromSyntaxType . U.paramType) params
    tyResult <- fromSyntaxType sTy
    let sc = fromMaybeTypeName (fmap unLoc mTyName)
    insertSymbol sc name (foldr mkTyFunc tyResult paramTys)
  where
    mkTyFunc (Some a) (Some b) = Some (a --> b)

addFunction :: UFunction -> Builder ()
addFunction (U.Function mTyName (Loc name _) params sTy body) = do
    paramSyms <- traverse paramToSym params
    tyResult <- fromSyntaxType sTy
    case mTyName of
        Nothing -> do
            f <- runTypeChecker (tcFunctionDef paramSyms tyResult body)
            insertConstant name f
        Just (Loc tyName _) -> do
            f <- runTypeChecker . TC.localScope tyName $
                tcFunctionDef paramSyms tyResult body
            insertMethod (ScopedName (Local tyName) name) f
  where
    paramToSym (U.Parameter n psTy) = do
        ty <- fromSyntaxType psTy
        return (unLoc n, ty)

addLocalVariable :: TypeName -> Name -> UVarDecl -> Builder ()
addLocalVariable tyName moduleName (U.VarDecl (Loc name _) vTy _) = do
    args <- getArguments
    (ty, mRange) <- withConstants args (fromSyntaxVarType vTy)
    insertSymbol (Local tyName) name ty
    insertRange (Local tyName) name mRange
  where
    getArguments :: Builder [(Name, LSomeExpr)]
    getArguments = do
        mi <- use (moduleInstances.at tyName._Just.at moduleName)
        return $ case mi of
            -- There can only be one module instance for a given local
            -- variable, since local variables names are unique per
            -- component type. If there would be more than one module
            -- instantiation for the same module, their local variables
            -- would clash.
            Just [inst] -> view U.miArgs inst
            Just _ -> error $
                "addLocalVariable: more than one instance for " ++
                show tyName ++ "." ++ show name
            Nothing -> []

-- | Temporarily add a list of constants to the 'ModelInfo'. The given
-- 'Builder' action should not make any changes to the symbol table or
-- the list of constants, as they will be lost (read access is fine
-- however).
withConstants :: [(Name, LSomeExpr)] -> Builder a -> Builder a
withConstants cs m = do
    symTable <- use symbolTable
    consts <- use constants
    for_ cs $ \(constName, Loc e@(SomeExpr _ ty) _) -> do
        symbolTable.at (ScopedName Global constName) ?= Some ty
        constants.at constName ?= e
    res <- m
    symbolTable .= symTable
    constants .= consts
    return res

addVariable :: Scope -> UVarDecl -> Builder ()
addVariable sc (U.VarDecl (Loc name _) vTy _) = do
    (ty, mRange) <- fromSyntaxVarType vTy
    insertSymbol sc name ty
    insertRange sc name mRange

addComponentType :: ComponentTypeDef -> Builder ()
addComponentType = \case
    TypeDefNatural (U.NaturalTypeDef (Loc name _)) ->
        insertComponentType name NaturalType
    TypeDefRole (U.RoleTypeDef (Loc name _) playerTyNames) ->
        insertComponentType name
            (RoleType (Set.fromList (fmap unLoc playerTyNames)))
    TypeDefCompartment (U.CompartmentTypeDef (Loc name _) multiRoleLists) -> do
        roleRefLists <- traverse (traverse toRoleRef) multiRoleLists
        insertComponentType name (CompartmentType roleRefLists)
  where
    toRoleRef (U.MultiRole (Loc tyName _) mBounds) = case mBounds of
        Nothing -> return (RoleRef tyName (1, 1))
        Just (lower, upper) -> do
            lower' <- evalIntExpr lower
            upper' <- evalIntExpr upper
            checkCardinalities lower upper lower' upper'
            return (RoleRef tyName (lower', upper'))

    checkCardinalities lower upper lower' upper'
        | lower' < 0 = throw (getLoc lower) (InvalidLowerBound lower')
        | upper' < lower' =
            throw (getLoc lower <> getLoc upper) (InvalidCardinalities lower' upper')
        | otherwise = return ()

addTypeSet :: TypeSetDef -> Builder ()
addTypeSet (TypeSetDef (Loc name _) tyNames) =
    typeSets.at name ?= Set.fromList (fmap unLoc (toList tyNames))

addComponents :: ComponentDef -> Builder ()
addComponents (ComponentDef (Loc name _) (Loc tyName _) mLen) = case mLen of
    -- add component array
    Just len -> do
        len' <- evalIntExpr len
        let tyArray = TyArray (max 0 len') tyComponent
        insertSymbol Global name (Some tyArray)
    -- add single component
    Nothing -> insertSymbol Global name (Some tyComponent)
  where
    tyComponent = TyComponent (Set.singleton tyName)

addModuleInstance :: ModuleInstantiationDep -> Builder ()
addModuleInstance ModuleInstantiationDep {..} = do
    checkArity
    let args = zip midArgs (U.modParams midModule)
    args' <- traverse evalArgument args
    let moduleName = unLoc (U.modName midModule)
        inst = ModuleInstance moduleName args' (U.modBody midModule)
    modifying moduleInstances $
        Map.insertWith (Map.unionWith (++)) midTypeName
            (Map.singleton moduleName [inst])
  where
    checkArity = do
        let numArgs = length midArgs
            numParams = length (U.modParams midModule)
        if numArgs == numParams
            then return ()
            else throw midRegion (WrongNumberOfArguments numParams numArgs)

    evalArgument :: (LExpr, UParameter) -> Builder (Name, LSomeExpr)
    evalArgument (arg, param) = do
        let name = unLoc (paramName param)
        Some ty <- fromSyntaxType (paramType param)
        arg' <- typeCheckExpr ty arg
        val <- evalExpr (arg' `withLocOf` arg)
        Dict <- return (dictShow ty)
        return (name, SomeExpr (T.Literal val ty) ty `withLocOf` arg)

fromSyntaxType :: UType -> Builder (Some Type)
fromSyntaxType = \case
    U.TyBool -> return (Some TyBool)
    U.TyInt -> return (Some TyInt)
    U.TyDouble -> return (Some TyDouble)
    U.TyAction -> return (Some TyAction)
    U.TyComponent tySet -> do
        compTys <- use componentTypes
        tySetDefs <- use typeSets
        tySet' <- lift (fromEither' (normalizeTypeSet compTys tySetDefs tySet))
        return (Some (TyComponent tySet'))
    U.TyArray size sTy -> do
        sizeVal <- evalIntExpr size
        Some ty <- fromSyntaxType sTy
        return (Some (TyArray sizeVal ty))
    U.TyFunc sTyL sTyR -> do
        Some tyL <- fromSyntaxType sTyL
        Some tyR <- fromSyntaxType sTyR
        return (Some (tyL --> tyR))

fromSyntaxVarType :: UVarType -> Builder (Some Type, Maybe (Int, Int))
fromSyntaxVarType = \case
    U.VarTyBool -> return (Some TyBool, Nothing)
    U.VarTyInt (lower, upper) -> do
        lowerVal <- evalIntExpr lower
        upperVal <- evalIntExpr upper
        return (Some TyInt, Just (lowerVal, upperVal))
    U.VarTyEnum (Enumeration names) ->
        return (Some TyInt, Just (0, length names - 1))
    U.VarTyArray size vTy -> do
        sizeVal <- evalIntExpr size
        (Some ty, mRange) <- fromSyntaxVarType vTy
        return (Some (TyArray sizeVal ty), mRange)

type Builder a = StateT BuilderState Result a

runBuilder ::
       Builder a
    -> RecursionDepth
    -> Result (ModelInfo, Map TypeName [UModuleInstance])
runBuilder m depth = do
    (mi :&: (insts :&: _)) <- execStateT m initial
    let insts' = Map.map (concat . Map.elems) insts
    return (mi, insts')
  where
    initial = emptyModelInfo :&: Map.empty :&: depth

evalIntExpr :: Loc U.Expr -> Builder Int
evalIntExpr e = do
    e' <- typeCheckExpr TyInt e
    evalExpr (e' `withLocOf` e)

evalExpr :: Loc (T.Expr t) -> Builder t
evalExpr e = do
    env <- get
    runReaderT (eval e) env

evalConstDefExpr :: Loc (T.Expr t) -> Builder t
evalConstDefExpr e = do
    env <- get
    runReaderT (evalConstDef e) env

typeCheckExpr :: Type t -> Loc U.Expr -> Builder (T.Expr t)
typeCheckExpr ty e = runTypeChecker (e `hasType` ty)

runTypeChecker :: TC.TypeChecker a -> Builder a
runTypeChecker m = do
    s <- get
    lift (TC.runTypeChecker m s)

insertSymbol :: Scope -> Name -> Some Type -> Builder ()
insertSymbol sc name ty = symbolTable.at (ScopedName sc name) ?= ty

insertRange :: Scope -> Name -> Maybe (Int, Int) -> Builder ()
insertRange sc name mRange = rangeTable.at (ScopedName sc name) .= mRange

insertConstant :: Name -> SomeExpr -> Builder ()
insertConstant name e = constants.at name ?= e

insertMethod :: ScopedName -> SomeExpr -> Builder ()
insertMethod name e = methods.at name ?= e

insertComponentType :: TypeName -> ComponentType -> Builder ()
insertComponentType tyName ty = componentTypes.at tyName ?= ty
Analysis of the Impact of the Presence of Phylum Cyanobacteria in the Microbiome of Patients with Breast Cancer on Their Prognosis

Cyanobacterial blooms adversely affect the health of the people living in their vicinity. We elucidated the effect of Cyanobacteria in patients with breast cancer. The serum microbiome of patients with breast cancer was analyzed using NGS. Serologic tests were performed to analyze the association between the factors affecting the liver function of patients with breast cancer and the abundance of Cyanobacteria. In addition, the recurrence-free survival of patients with breast cancer according to the abundance of Cyanobacteria was analyzed. The abundance of Cyanobacteria tended to be correlated with the serological results related to liver function. A high abundance of Cyanobacteria appeared to be associated with late-stage breast cancer, and high recurrence-free survival was associated with a low abundance of Cyanobacteria. Even though no toxicity study was conducted, this study demonstrates the impact of the phylum Cyanobacteria on the prognosis of patients with breast cancer. Thus, the abundance of Cyanobacteria in the microbiome can help predict the prognosis of patients with breast cancer.

Introduction

Breast cancer is the most common cancer among women worldwide, and its incidence is still increasing. In addition, its recurrence rate is high. For example, triple-negative breast cancer is known to have a high recurrence rate within five years and a poor prognosis. Moreover, luminal-type breast cancer may recur even after 5 or 10 years, when the patient thinks that the disease has already been overcome. Therefore, the prognosis and management of breast cancer are important. The importance of eating habits, which are the primary breast cancer prevention strategy, is well known.
According to that study, diet and lifestyle, along with the environment in which the patient lives, play an important role in preventing breast cancer. We present evidence that supports the impact of the environment on breast cancer development. Genetic problems cannot be solved; however, various risk factors, such as the environment, can be improved. In this study, we investigate the presence of Cyanobacteria in the microbiome of patients with breast cancer, which reflects the environment that surrounds them and can affect their prognosis. The elements derived from symbiotic microbiota found in the human microbiome include the DNA of bacteria, fungi, and viruses, as well as archaea. Cyanobacteria were formerly called blue-green algae and classified as eukaryotes, but they are now classified as prokaryotes. This study focused on the sequences of the phylum Cyanobacteria in the human microbiome rather than on the bacteria themselves. Cyanobacteria can also be found in the human microbiome and are often found in stagnant or polluted water, such as in reservoirs or lakes. The presence of Cyanobacteria in freshwater is associated with suspended cyanobacterial blooms and toxins. In particular, as global warming accelerates, the water temperature is rising. Eutrophication due to the presence of domestic sewage and a high water temperature creates the optimal conditions for the growth of Cyanobacteria. The accelerating growth of Cyanobacteria in freshwater worldwide has been studied using Landsat 5 satellite data from the National Aeronautics and Space Administration (NASA). In this study, cyanobacterial blooms in the Republic of Korea were identified using data from the Sentinel-2 satellite of the European Space Agency. Cyanobacteria can be absorbed into the human body not only by directly drinking contaminated water but also through recreational activities.
In addition, according to previous studies, the consumption of fish, shellfish, and vegetables grown with contaminated water can affect the concentration of Cyanobacteria in the human body. Cyanobacteria produce various toxins that can cause problems in the liver, nervous system, and genes; examples are microcystin, nodularin, anatoxin, saxitoxin, and cylindrospermopsin. The most commonly found cyanobacterial toxins are microcystins and anatoxins. While acute exposure to microcystins causes nausea, vomiting, abdominal pain, and skin rash, chronic exposure to this toxin can cause malignancy, such as colon or liver cancer. In particular, microcystin-LR can act as a tumor promoter in human cancer development. The relationship between breast cancer and Cyanobacteria has not been studied. Since Cyanobacteria are associated with the development of certain malignancies, it is important to study their effects on breast cancer. The presence of cyanobacterial DNA in the human microbiome may not be a result of short-term exposure to Cyanobacteria; it results from cumulative exposure to cyanobacterial toxins since birth through the food chain. The effect of long-term exposure to Cyanobacteria on the prognosis of patients with breast cancer was analyzed by testing the blood microbiome of patients. In particular, we focused here on the presence of bacterial extracellular vesicles in the blood of patients. Bacterial extracellular vesicles contain nucleic acids and metabolites and are present in all body fluids, especially in the blood. Previous studies have shown that there is a relationship between the composition of the microbiome in extracellular vesicles of body fluids and breast cancer. In this study, the effect of a phylum Cyanobacteria-containing microbiome on the prognosis of patients with breast cancer was investigated.
Our results will help explain the effect of a Cyanobacteria-polluted environment, which is reflected in the microbiome of breast cancer patients, on their prognosis.

Recruitment of Patients

Female patients diagnosed with breast cancer and presenting pathologic results were recruited. After the pathologic diagnosis of breast cancer, their sera were collected before any treatment, including surgery, chemotherapy, or radiation therapy. The subjects had no history of taking drugs or supplements such as antibiotics or probiotics within one month before blood sampling. Informed consent was obtained from the study participants. The research was conducted with the approval of the respective institutional review board (approval number: Ewha Womans University Mokdong Hospital, EUMC 2014-10-005-019). At the time of initial recruitment, patients with stage 0-III breast cancer with no metastasis or recurrence were recruited and were followed up for 7 or 8 years.

Distribution of Cyanobacterial Blooms in Korean Freshwater according to Satellite Data

The Copernicus Sentinel-2 satellite, which has a high-resolution multispectral sensor, was used to analyze the status of cyanobacterial blooms in the riverine area of the Korean peninsula. The Ulyssys Water Quality Viewer (UWQV), whose information derives from Sentinel-2, was utilized to identify the cyanobacterial blooms, assuming that these can be identified by the colored UWQV values. The Sentinel Hub EO Browser was used for analysis of the imagery. For data analysis, areas with highly concentrated UWQV values were compared with low-UWQV-value areas from July to August 2022.

Isolation of the Extracellular Vesicles and DNA Extraction

Serum samples were collected from the participants using the same method as that used in previous studies. Blood samples were collected into serum vacuum separator tubes. For each sample, the serum was isolated from the blood, and then the extracellular vesicles (EVs) were collected.
To isolate the EVs, the serum was centrifuged at 1500× g for 15 min at 4 °C and diluted in 1× phosphate-buffered saline (PBS, pH 7.4, ML008-01; Welgene, Gyeongsan, Republic of Korea). Thereafter, centrifugation was performed at 10,000× g at 4 °C for 1 min. Then, the supernatant was filtered using a 0.22 µm filter and ultracentrifuged at 150,000× g for 3 h at 4 °C using a 45 Ti rotor (Beckman Instruments, Brea, CA, USA). The obtained pellet, containing the EVs, was diluted in PBS and stored at −80 °C. Thereafter, an isolation kit (MoBio PowerSoil DNA Isolation Kit; Qiagen, Hilden, Germany) was used to obtain the DNA. DNA was extracted from the EVs according to the manufacturer's instructions. The extracted DNA was quantified using a QIAxpert system (Qiagen).

Next-Generation Sequencing for Microbiome Analysis

Next-generation sequencing was performed based on the EV extraction results of the participants and on the bacteria-specific 16S rDNA, in particular the hypervariable V3-V4 region. For this, the MiSeq system and application (Illumina, San Diego, CA, USA), suitable for the NGS of bacteria, were used. The primers used and the corresponding sequences are the following: 16S_V3_F (5′-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG-3′) and 16S_V4_R (5′-GTCTCGTGGGCTCGGAGATGTATAAGAGAGACAGGACTACHVGGGTATCTAATCC-3′), which are the same as those used in previous studies. Libraries were prepared, and each amplicon was sequenced according to the manufacturer's guidelines. MDx-Pro ver.1 (MD Healthcare, Seoul, Republic of Korea), a profiling program, was used for the taxonomic assignment. The read length was set to 300 bp, and the average quality score was set to >20. The operational taxonomic units were based on the CD-HIT cluster database, and the UCLUST algorithm was used to divide a set of sequences into clusters. Moreover, QIIME, a microbiome bioinformatics platform, and Greengenes were used to analyze the microbial communities.
The results were analyzed from the phylum to the species level. In particular, the ratio of the phylum Cyanobacteria to other phyla was analyzed, and the microbiome with the highest Cyanobacteria ratio at the species level was identified. Among these, the sequences assigned to 'Uncultured Cyanobacterium' were studied the most.

Analysis of the Percentage of Cyanobacteria in the Microbiome of Patients with Breast Cancer

The microbiomes of patients with breast cancer were analyzed in terms of alpha and beta diversity (Figure S1). The percentage of Cyanobacteria was compared with that of other phyla using dot plot graphs. The relationship between the ratio of Cyanobacteria and the factors affecting breast cancer was analyzed. The accumulation of Cyanobacteria over time was analyzed by investigating the relationship between the absence or presence of phylum Cyanobacteria and the age of patients. The proportion of phylum Cyanobacteria in women undergoing menopause, which is a factor that affects hormone levels, was investigated. The relationship between the percentage of phylum Cyanobacteria and the stage, tumor size, and lymph node metastasis was analyzed. The association between the ratio of phylum Cyanobacteria and factors related to liver metabolism was investigated. Serological test results, such as the low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and fasting blood glucose levels, were obtained. Since most of the participants did not have aspartate aminotransferase (AST) and alanine transaminase (ALT) levels higher than three times the normal value (>120 IU/L), which are the liver disease criteria, no patient was diagnosed with liver disease. Therefore, the alkaline phosphatase (ALP) level was used to analyze the relevance of Cyanobacteria to liver function.
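As a rough illustration of the relative-abundance computation described above (the share of each phylum in the total assigned reads per sample), here is a minimal sketch; the taxon names and read counts below are hypothetical, not data from this study.

```python
def relative_abundance(counts):
    """Convert per-taxon read counts for one sample into percentages."""
    total = sum(counts.values())
    if total == 0:
        return {taxon: 0.0 for taxon in counts}
    return {taxon: 100.0 * n / total for taxon, n in counts.items()}

# Hypothetical phylum-level read counts for a single serum-EV sample.
sample = {
    "Proteobacteria": 5200,
    "Firmicutes": 2900,
    "Bacteroidetes": 1500,
    "Cyanobacteria": 45,
}

abundances = relative_abundance(sample)
```

Because the per-phylum percentages always sum to 100, samples with different sequencing depths can be compared directly.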
The interaction between the ratio of phylum Cyanobacteria and the bacteria affecting liver function, such as the genus Lactobacillus and the family Ruminococcaceae, was investigated. Finally, Kaplan-Meier analysis was used to investigate the metastasis and recurrence of patients with breast cancer over time, according to the abundance of phylum Cyanobacteria.

Characteristics of Patients

For the microbiome analysis, 96 patients with breast cancer participated in this study, which was conducted as part of another large project. The microbiome was analyzed through serological testing prior to the treatment of the patients. The average age of the patients was 51 years. Body mass index (BMI) was classified into four groups according to the World Health Organization (WHO) classification: underweight (<18.5 kg/m2), normal weight (18.5-24.9 kg/m2), overweight (25-29.9 kg/m2), and obesity class I (30.0-34.9 kg/m2). The menstruation and eating habit characteristics of the patients were studied through questionnaires. The postoperative biopsy results were used to analyze the breast cancer stages, subtypes, tumor size, and number of metastatic lymph nodes. For the serological testing of the patients, the high-density lipoprotein (HDL) and low-density lipoprotein (LDL) cholesterol, alkaline phosphatase (ALP), and fasting glucose levels were measured. There were no patients with diabetes mellitus among the recruited subjects, and there was no case of fasting blood glucose of 126 mg/dL or higher. None of the patients were smokers or alcoholics (Table 1).

Analysis of the Cyanobacterial Blooms in the Republic of Korea

As shown in Figure 1, from July to August 2022, abnormally high UWQV values were observed in Ganwol Lake, an artificial lake in Cheonsu Bay. The Nakdong and Geum Rivers also showed high UWQV values, unlike the Han River. Ganwol Lake is likely polluted by industrial factors.
Conversely, the Han River shows some cyanobacterial blooms that are not as severe as those of the Nakdong and Geum Rivers.

Characteristics of the Cyanobacteria Phylum

The average percentage of phylum Cyanobacteria in the human microbiome is less than 1%. Figure 2 shows the relative proportion of phylum Cyanobacteria to other phyla in patients with breast cancer. The proportion of the top 10 phyla in the patients with breast cancer was 99.2%, and the most common phylum in these patients was Proteobacteria, followed by Firmicutes and Bacteroidetes. The average percentage of Cyanobacteria sequences in the patients was 0.45%, which was lower than that of Verrucomicrobia sequences but similar to that of Patescibacteria sequences (Figure 2). The patients with breast cancer participating in our study were followed up for 7-8 years, and the rate of recurrence was analyzed. The patients were divided into two groups: patients who were recurrence-free and those with recurrence. The amount of phylum Cyanobacteria in the microbiome of the patients of the two groups was compared. The average amount of Cyanobacteria in the microbiome was higher in recurrent patients than in recurrence-free patients (Figure 3). The urine microbiome was further analyzed to compare it with the serum microbiome data from breast cancer patients. As a result, there was no statistically significant difference in the number of Cyanobacteria sequences in serum and urine (Figure S2). The influence of the presence of phylum Cyanobacteria in the microbiome of patients with breast cancer on this condition was analyzed (Figure 4). Patients were divided into two groups according to the presence of Cyanobacteria in their microbiome: one without (negative group) and one with Cyanobacteria (positive group). The group without Cyanobacteria had lower-stage cancer.
The late-stage (stage III) patients tended to have a higher proportion of Cyanobacteria in their microbiome than the early-stage (stage 0-II) patients. When comparing the ratio of Cyanobacteria between premenopausal and menopausal patients, the ratio of Cyanobacteria in the microbiome of menopausal patients was relatively high. The four subtypes of breast cancer were investigated in relation to the phylum Cyanobacteria level. The hormone-dependent group, luminal A and B patients, accounted for 70.8%, a proportion similar to that among all breast cancer patients. Patients were divided into two groups according to hormone dependence, and the abundance of phylum Cyanobacteria in their microbiome was compared. These results showed no significant difference in the level of phylum Cyanobacteria between hormone-positive (luminal A and B) and hormone-negative (HER2 and TNBC) breast cancers (Figure S3). Regarding eating habits, the amount of Cyanobacteria in the microbiome of vegetarian patients was the highest. The average age of the patients in the Cyanobacteria-positive group was significantly higher. The BMI (body mass index) and Ki-67 index were similar in the two groups. The relationship between the ratio of Cyanobacteria and the tumor size and lymph node metastasis was examined. The presence of lymph node metastasis seemed to be related to a higher amount of Cyanobacteria in the microbiome. Lymph node metastasis might be related to the amount of Cyanobacteria in the microbiome in late-stage patients. The tumor size seemed to be inversely related to the amount of Cyanobacteria in the microbiome; however, this relationship was not statistically significant. The relative proportion of phylum Cyanobacteria in patients with recurrence-free and recurrent states was compared. RF: patients without recurrence during more than 6 years of follow-up; R: patients with recurrence during more than 6 years of follow-up; RF-R: mean difference in phylum Cyanobacteria between the two patient groups.
Analysis of the Relationship between the Presence of Cyanobacteria in the Microbiome and Liver Function

The presence of phylum Cyanobacteria and the liver function-related serological results were integrated. Serological tests related to liver metabolism and function include the analysis of the levels of glucose, LDL and HDL cholesterol, and ALP. These results were integrated with the abundance of Cyanobacteria in the microbiome to assess their relevance. The lower the HDL cholesterol level in patients, the higher the percentage of Cyanobacteria in their microbiome. The LDL cholesterol level did not seem to be related to the abundance of Cyanobacteria. Although there was no statistical significance, the higher the ALP level, the higher the percentage of Cyanobacteria in the microbiome of patients. The ratio of Cyanobacteria was high in vegetarians and relatively low in patients who eat meat-based meals every day (Figure 5). The purpose of this analysis was to integrate the percentage of specific Cyanobacteria species with the serological test results. The proportion of the microbiome assigned to Uncultured Cyanobacterium spp. was determined. Uncultured Cyanobacterium spp. seem to be related to the LDL cholesterol level rather than to the blood sugar level, and as the LDL cholesterol level increases, the ratio of Cyanobacteria tends to increase (Figure 6). Since the results of microbiome analysis in patients with chronic liver disease and cirrhosis were investigated in previous studies, the family Ruminococcaceae and the genus Lactobacillus are known as liver disease-related microbiota. Figure 7 shows the analysis results that confirm the relationship between the abundance of phylum Cyanobacteria and that of other liver disease-related bacteria found in the microbiome. The relationship between these three taxa was expressed as a bubble plot.
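The serological associations described above (e.g., HDL cholesterol versus Cyanobacteria abundance) reduce to simple correlation questions. A minimal Pearson-correlation sketch follows; the per-patient values are hypothetical and chosen only to illustrate the inverse trend reported in the text.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-patient values: Cyanobacteria abundance (%) vs. HDL (mg/dL).
cyano = [0.1, 0.3, 0.5, 0.9, 1.4]
hdl = [68, 61, 55, 49, 41]

r = pearson_r(cyano, hdl)  # negative r indicates an inverse association
```

In practice a statistics package would also report the p-value for r; this sketch only shows how the direction of the association is quantified.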
The x-axis represents the abundance of the genus Lactobacillus, the y-axis represents that of the family Ruminococcaceae, and the bubble size and color represent the proportion of Cyanobacteria. The amount of Cyanobacteria was low when the ratio of the family Ruminococcaceae and the genus Lactobacillus was high, and it was high when their ratio was low. In particular, at a genus Lactobacillus abundance of 3.9% or less and a family Ruminococcaceae abundance of 11.1% or less, a patient group with a phylum Cyanobacteria abundance of 2% or more was concentrated. The average percentage of phylum Cyanobacteria found in the microbiome of patients with breast cancer was 0.45%, and it ranged from 0% to less than 3% (Figure 8). Patients were divided into two groups based on a Cyanobacteria abundance value of 0.3%, which is lower than the average value obtained. The group with less than 0.3% of Cyanobacteria was called group A, and the group with 0.3% or more of Cyanobacteria was called group B. When comparing the number of patients by stage in groups A and B, group A had two fewer stage 0 patients, three fewer stage III patients, and five more stage I-II patients than group B. Groups A and B showed differences in the recurrence-free survival. The probability of metastasis and recurrence was significantly lower in group A than in group B. That is, when less than 0.3% of Cyanobacteria was found in the microbiome of patients, the chance of occurrence of metastasis or recurrence was lower.

Figure 7. Relationship between the abundance of phylum Cyanobacteria and that of the family Ruminococcaceae and the genus Lactobacillus, which are associated with liver disease, in the microbiome of patients with breast cancer. As the amount of the family Ruminococcaceae increases, the amount of Cyanobacteria decreases. As the amount of the genus Lactobacillus increases, the amount of Cyanobacteria also decreases.
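The group A/B comparison above relies on Kaplan-Meier recurrence-free survival curves. A minimal sketch of the product-limit estimator follows; the follow-up times and event indicators are hypothetical, and a complete analysis would also include the log-rank test reported with Figure 8.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier recurrence-free survival estimate.

    times  : follow-up time for each patient (e.g., years)
    events : 1 if recurrence/metastasis was observed, 0 if censored
    Returns a list of (time, survival probability) steps.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for tt, e in data if tt == t]  # all patients at this time
        n_events = sum(at_t)
        if n_events > 0:
            survival *= (n_at_risk - n_events) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(at_t)  # events and censored both leave the risk set
        i += len(at_t)
    return curve

# Hypothetical follow-up data (years) for the two abundance groups.
group_a = kaplan_meier([6, 7, 7, 8, 8], [1, 0, 0, 0, 0])  # < 0.3% Cyanobacteria
group_b = kaplan_meier([2, 3, 4, 5, 7], [1, 1, 1, 1, 0])  # >= 0.3% Cyanobacteria
```

In practice a survival-analysis library (e.g., lifelines in Python) would be used instead, since it also provides confidence intervals and the log-rank test; the sketch only shows the product-limit calculation itself.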
The bubble size and color correspond to the abundance of Cyanobacteria. There were some values lower than the 0.01 value that was set for the smallest bubble; however, these have been omitted from the graph to ensure clarity.

Figure 8. Comparison of the recurrence-free survival between the patients with breast cancer of group A (patients with an abundance of Cyanobacteria < 0.3%) and group B (patients with an abundance of Cyanobacteria ≥ 0.3%). The blue square box on the right shows the stage ratio of the patients of groups A and B when the patients were first recruited. Kaplan-Meier survival analysis was performed, and the log-rank p value was 0.0173.

Discussion

Cyanobacteria have been around since the beginning of life. Since Cyanobacteria produce oxygen through photosynthesis, they were able to live on the early Earth and helped give rise to other life forms. However, nowadays, Cyanobacteria are considered pollutants that produce cyanobacterial toxins through eutrophication. Above all, cyanobacterial toxins have been pointed out as being harmful to human health. Although the toxins were not directly investigated in this study, the result of prolonged exposure to Cyanobacteria, which affects their proportion in the human microbiome, was analyzed. To confirm the severity of the effects of Cyanobacteria, cyanobacterial blooms in the Republic of Korea were identified using a satellite database. Ganwol Lake, the artificial lake of Cheonsu Bay, is a representative place showing the severity of cyanobacterial blooms. The Okgu reservoir in Gunsan city, the Gaehwa retention reservoir in Buan-gun, and Yeongju Lake in Yeongju city showed severe cyanobacterial blooms. The Geum and Nakdong river estuaries also showed a moderate degree of cyanobacterial blooms. The flourishing of Cyanobacteria differs depending on the season and changes with differences in the water temperature and the level of eutrophication.
However, the fact that there are places with a level of cyanobacterial contamination this high provides sufficient evidence that the toxicity of Cyanobacteria is likely to influence South Koreans. The concentration of Cyanobacteria in freshwater differs depending on the environment, changing with the seasons. The degree of exposure to Cyanobacteria was measured as the abundance of phylum Cyanobacteria in the microbiome of patients. We tried to determine the difference between the concentration of Cyanobacteria in the microbiome of patients with breast cancer and the average concentration found in the microbiome of healthy controls, since Cyanobacteria are present in the microbiome of 80% of the population. That is, phylum Cyanobacteria can accumulate in both healthy people and breast cancer patients. Even small changes in the phylum Cyanobacteria concentration in the microbiome can play an important role in human health. In particular, the average abundance of phylum Cyanobacteria in the breast cancer group was 0.45%; however, in the metastatic or recurrent patient group, the average value was 0.76%. Although the difference between these two groups is not statistically significant, this trend suggests the possibility that the presence of Cyanobacteria in the microbiome may influence the occurrence of recurrence or metastasis. Phylum Cyanobacteria were identified in urine as well as serum (Figure S2). It can be expected that the microbiome can affect the whole body, including the breast, through the circulation of body fluids. The abundance of phylum Cyanobacteria tended to increase with age, which may be due to the fact that the amount of Cyanobacteria in the body accumulates over time. Phylum Cyanobacteria seem to be more abundant in the microbiome of menopausal patients, which seems to be related to the age of the patients in the menopausal group.
Regarding the association with the breast cancer stage, the higher the stage, the higher the abundance of phylum Cyanobacteria. It is likely that metastatic lymph nodes contribute to late-stage cancer in connection with the abundance of Cyanobacteria in the microbiome. Thus, lymph node metastasis may be associated with the abundance of Cyanobacteria. A correlation between the concentration of phylum Cyanobacteria in the microbiome and several factors related to liver function has been suggested. The lower the HDL cholesterol, the higher the Cyanobacteria abundance in the microbiome. Although this difference was not statistically significant, it suggests that the production of HDL cholesterol is induced when the abundance of Cyanobacteria is low. There was no relationship between the LDL cholesterol and the abundance of Cyanobacteria in the microbiome. However, the higher the LDL cholesterol, the higher the abundance of Uncultured Cyanobacterium spp. Although the p-value is 0.09, the abundance of Uncultured Cyanobacterium spp. in the microbiome tended to affect liver function, such as lipid metabolism. In liver and bone disease, the ALP level increases. The ALP level tends to increase as the abundance of Cyanobacteria increases, which may be correlated with liver function. The main toxin produced by Cyanobacteria is microcystin, which is hepatotoxic. Since the amount of Cyanobacteria in the microbiome is not fatal and accumulates in the body over a lifetime rather than over a short period, no statistical significance was observed. However, there seems to be an association between the presence of phylum Cyanobacteria in the microbiome and liver function and lymph node metastasis in patients with breast cancer. Chronic liver disease and the human microbiome are closely related according to previous studies. Clinical trials in which probiotics and prebiotics were ingested have shown their beneficial effects on the liver enzyme and lipid profiles.
The abundance of lactic acid bacteria, such as Lactobacillus, decreases in chronic liver disease, especially in alcoholic liver disease. The abundance of Ruminococcaceae tends to decrease in liver cirrhosis. As the abundance of Cyanobacteria in the microbiome increased, that of these beneficial microorganisms, Lactobacillus and Ruminococcaceae, decreased. This may be because the presence of Cyanobacteria reduces the number of beneficial bacteria in the microbiome and creates a microenvironment that adversely affects the health of the host. The problems associated with Cyanobacteria that have been identified in previous studies are toxin-induced hepatotoxicity and neurotoxicity. The effect of Cyanobacteria on breast cancer prognosis has not been elucidated before. This study is meaningful since it reveals the effect of Cyanobacteria on metastasis and recurrence in patients with breast cancer. According to the Kaplan-Meier analysis, the prognosis of patients with breast cancer is better when the abundance of Cyanobacteria is lower than 0.3%. This result is statistically significant (log-rank p value = 0.0173). By reducing the amount of beneficial bacteria in the microbiome, Cyanobacteria are thought to influence the prognosis of patients with breast cancer.

Conclusions

Cyanobacteria affect the environment and human health. In this study, the effect of the presence of Cyanobacteria in the microbiome of patients with breast cancer on their condition was investigated. Cyanobacteria are known for their hepatotoxicity and neurotoxicity. In this study, the relationship between the presence of phylum Cyanobacteria in the microbiome of patients with breast cancer and their prognosis was analyzed. A limitation of this study is that certain factors which directly affect the amount of Cyanobacteria in the microbiome of patients, such as the patients' place of residence, were not studied. The specific genera of Cyanobacteria were also not identified.
However, this study opens a discussion about the influence of the presence of Cyanobacteria in the surrounding environment on patients with breast cancer and prompts us to rethink the impact of Cyanobacteria-polluted environments on the prognosis of patients with breast cancer.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm11247272/s1. Figure S1: Analysis of alpha diversity and beta diversity, dividing the patients into a group without phylum Cyanobacteria (Cyanobacteria(−)) and a group with phylum Cyanobacteria (Cyanobacteria(+)); Figure S2: The level of the phylum Cyanobacteria in patients with breast cancer in serum and urine samples. The relative proportion of phylum Cyanobacteria in serum and urine was compared; Figure S3: The percentage of phylum Cyanobacteria in hormone-positive (luminal A and B) and hormone-negative (HER2 and TNBC) breast cancer.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
// interp/src.go
package interp

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"

	"golang.org/x/tools/go/packages"
)

// importSrc calls global tag analysis on the source code for the package identified by
// importPath. rPath is the relative path to the directory containing the source
// code for the package. It can also be "main" as a special value.
func (interp *Interpreter) importSrc(rPath, importPath string, skipTest bool) (string, error) {
	var dir string
	var err error

	if interp.srcPkg[importPath] != nil {
		name, ok := interp.pkgNames[importPath]
		if !ok {
			return "", fmt.Errorf("inconsistent knowledge about %s", importPath)
		}
		return name, nil
	}

	// Resolve relative and absolute import paths.
	if isPathRelative(importPath) {
		if rPath == mainID {
			rPath = "."
		}
		dir = filepath.Join(filepath.Dir(interp.name), rPath, importPath)
	} else if dir, err = interp.getPackageDir(importPath); err != nil {
		return "", err
	}

	if interp.rdir[importPath] {
		return "", fmt.Errorf("import cycle not allowed\n\timports %s", importPath)
	}
	interp.rdir[importPath] = true

	files, err := fs.ReadDir(interp.opt.filesystem, dir)
	if err != nil {
		return "", err
	}

	var initNodes []*node
	var rootNodes []*node
	revisit := make(map[string][]*node)

	var root *node
	var pkgName string

	// Parse source files.
	for _, file := range files {
		name := file.Name()
		if skipFile(&interp.context, name, skipTest) {
			continue
		}

		name = filepath.Join(dir, name)
		var buf []byte
		if buf, err = fs.ReadFile(interp.opt.filesystem, name); err != nil {
			return "", err
		}

		n, err := interp.parse(string(buf), name, false)
		if err != nil {
			return "", err
		}
		if n == nil {
			continue
		}

		var pname string
		if pname, root, err = interp.ast(n); err != nil {
			return "", err
		}
		if root == nil {
			continue
		}

		if interp.astDot {
			dotCmd := interp.dotCmd
			if dotCmd == "" {
				dotCmd = defaultDotCmd(name, "yaegi-ast-")
			}
			root.astDot(dotWriter(dotCmd), name)
		}

		if pkgName == "" {
			pkgName = pname
		} else if pkgName != pname && skipTest {
			return "", fmt.Errorf("found packages %s and %s in %s", pkgName, pname, dir)
		}
		rootNodes = append(rootNodes, root)

		subRPath := effectivePkg(rPath, importPath)
		var list []*node
		list, err = interp.gta(root, subRPath, importPath, pkgName)
		if err != nil {
			return "", err
		}
		revisit[subRPath] = append(revisit[subRPath], list...)
	}

	// Revisit incomplete nodes where GTA could not complete.
	for _, nodes := range revisit {
		if err = interp.gtaRetry(nodes, importPath, pkgName); err != nil {
			return "", err
		}
	}

	// Generate control flow graphs.
	for _, root := range rootNodes {
		var nodes []*node
		if nodes, err = interp.cfg(root, nil, importPath, pkgName); err != nil {
			return "", err
		}
		initNodes = append(initNodes, nodes...)
	}

	// Register source package in the interpreter. The package contains only
	// the global symbols in the package scope.
	interp.mutex.Lock()
	gs := interp.scopes[importPath]
	if gs == nil {
		// A nil scope means that not even an empty package was created from source.
		interp.mutex.Unlock() // Release the lock before the early return.
		return "", fmt.Errorf("no Go files in %s", dir)
	}
	interp.srcPkg[importPath] = gs.sym
	interp.pkgNames[importPath] = pkgName

	interp.frame.mutex.Lock()
	interp.resizeFrame()
	interp.frame.mutex.Unlock()
	interp.mutex.Unlock()

	// Once all package sources have been parsed, execute entry points then init functions.
	for _, n := range rootNodes {
		if err = genRun(n); err != nil {
			return "", err
		}
		interp.run(n, nil)
	}

	// Wire and execute global vars in global scope gs.
	n, err := genGlobalVars(rootNodes, gs)
	if err != nil {
		return "", err
	}
	interp.run(n, nil)

	// Add main to list of functions to run, after all inits.
	if m := gs.sym[mainID]; pkgName == mainID && m != nil && skipTest {
		initNodes = append(initNodes, m.node)
	}

	for _, n := range initNodes {
		interp.run(n, interp.frame)
	}

	return pkgName, nil
}

// getPackageDir uses the GOPATH to find the absolute path of an import path.
func (interp *Interpreter) getPackageDir(importPath string) (string, error) {
	// Search the standard library and Go modules.
	config := packages.Config{}
	config.Env = append(config.Env,
		"GOPATH="+interp.context.GOPATH,
		"GOCACHE="+interp.opt.env["goCache"],
		"GOTOOLDIR="+interp.opt.env["goToolDir"])
	pkgs, err := packages.Load(&config, importPath)
	if err != nil {
		return "", fmt.Errorf("an error occurred retrieving a package from the GOPATH: %v\n%v\nIf access is denied, run as administrator", importPath, err)
	}

	// Confirm the import path is found.
	for _, pkg := range pkgs {
		for _, goFile := range pkg.GoFiles {
			if strings.Contains(filepath.Dir(goFile), pkg.Name) {
				return filepath.Dir(goFile), nil
			}
		}
	}

	// Check for certain go tools located in GOTOOLDIR.
	if interp.opt.env["goToolDir"] != "" {
		// Search for the go directory before searching for packages;
		// this prevents walking the entire filesystem.
		godir, err := searchUpDirPath(interp.opt.env["goToolDir"], "go", false)
		if err != nil {
			return "", fmt.Errorf("an import source could not be found: %q\nThe current GOPATH=%v, GOCACHE=%v, GOTOOLDIR=%v\n%v", importPath, interp.context.GOPATH, interp.opt.env["goCache"], interp.opt.env["goToolDir"], err)
		}
		absimportpath, err := searchDirs(godir, importPath)
		if err != nil {
			return "", fmt.Errorf("an import source could not be found: %q\nThe current GOPATH=%v, GOCACHE=%v, GOTOOLDIR=%v\n%v", importPath, interp.context.GOPATH, interp.opt.env["goCache"], interp.opt.env["goToolDir"], err)
		}
		return absimportpath, nil
	}

	return "", fmt.Errorf("an import source could not be found: %q. Set the GOPATH and/or GOTOOLDIR environment variable from Interpreter.Options", importPath)
}

// searchUpDirPath walks up a directory path in order to find a target directory.
func searchUpDirPath(initial string, target string, isCaseSensitive bool) (string, error) {
	// strings.Split always returns a slice with at least one element,
	// since filepath.Dir returns "." or the last directory.
	splitdir := strings.Split(filepath.Join(initial), string(filepath.Separator))
	if len(splitdir) == 1 {
		return "", fmt.Errorf("the target directory %q is not within the path %q", target, initial)
	}
	updir := splitdir[len(splitdir)-1]
	if !isCaseSensitive {
		updir = strings.ToLower(updir)
	}
	if updir == target {
		return initial, nil
	}
	return searchUpDirPath(filepath.Dir(initial), target, isCaseSensitive)
}

// searchDirs searches within a directory (and its subdirectories) in an attempt to find a filepath.
func searchDirs(initial string, target string) (string, error) {
	absfilepath, err := filepath.Abs(initial)
	if err != nil {
		return "", err
	}

	// Walk the tree looking for the target directory.
	var foundpath string
	filter := func(path string, d fs.DirEntry, err error) error {
		if d.IsDir() && d.Name() == target {
			foundpath = path
		}
		return nil
	}
	if err = filepath.WalkDir(absfilepath, filter); err != nil {
		return "", fmt.Errorf("an error occurred searching for a directory: %v", err)
	}
	if foundpath != "" {
		return foundpath, nil
	}
	return "", fmt.Errorf("the target filepath %q is not within the path %q", target, initial)
}

func effectivePkg(root, path string) string {
	splitRoot := strings.Split(root, string(filepath.Separator))
	splitPath := strings.Split(path, string(filepath.Separator))

	var result []string
	rootIndex := 0
	prevRootIndex := 0
	for i := 0; i < len(splitPath); i++ {
		part := splitPath[len(splitPath)-1-i]
		index := len(splitRoot) - 1 - rootIndex
		if index > 0 && part == splitRoot[index] && i != 0 {
			prevRootIndex = rootIndex
			rootIndex++
		} else if prevRootIndex == rootIndex {
			result = append(result, part)
		}
	}

	var frag string
	for i := len(result) - 1; i >= 0; i-- {
		frag = filepath.Join(frag, result[i])
	}
	return filepath.Join(root, frag)
}

// isPathRelative returns true if path starts with "./" or "../".
// It is intended for use on import paths, where "/" is always the directory separator.
func isPathRelative(s string) bool {
	return strings.HasPrefix(s, "./") || strings.HasPrefix(s, "../")
}
/**
 * Remove the old log file and create a new one.
 */
public void removeAndCreateNewLog() {
	logMgrLock.lock();
	try {
		VanillaDb.fileMgr().delete(logFile);
		lastLsn = LogSeqNum.DEFAULT_VALUE;
		lastFlushedLsn = LogSeqNum.DEFAULT_VALUE;
		appendNewBlock();
	} finally {
		logMgrLock.unlock();
	}
}
def input_s():
    return raw_input()

def input_i():
    return int(raw_input())

def input_2_i():
    arr = map(int, raw_input().split())
    return arr[0], arr[1]

def input_3_i():
    arr = map(int, raw_input().split())
    return arr[0], arr[1], arr[2]

def input_i_array():
    return map(int, raw_input().split())

def get_presum(n, arr):
    presum = [arr[0]]
    for i in range(1, n):
        presum.append(presum[i - 1] + arr[i])
    return presum

def calc(n, a):
    dp = [[0, 0] for _ in range(n)]
    remain = [[0, 0] for _ in range(n)]
    dp[0] = [0, a[0] / 3]
    remain[0] = [a[0], a[0] % 3]
    for i in range(1, n):
        dp[i][0] = max(dp[i - 1])
        if dp[i - 1][0] > dp[i - 1][1]:
            remain[i][0] = remain[i - 1][0] + a[i]
        elif dp[i - 1][0] < dp[i - 1][1]:
            remain[i][0] = remain[i - 1][1] + a[i]
        else:
            remain[i][0] = max(remain[i - 1]) + a[i]
        cnt_cost = [
            min(a[i] / 2, remain[i - 1][0]),
            min(a[i] / 2, remain[i - 1][1])
        ]
        extras = [
            (a[i] - cnt_cost[0] * 2) / 3,
            (a[i] - cnt_cost[1] * 2) / 3
        ]
        if dp[i - 1][0] + cnt_cost[0] + extras[0] > dp[i - 1][1] + cnt_cost[1] + extras[1]:
            dp[i][1] = dp[i - 1][0] + cnt_cost[0] + extras[0]
            remain[i][1] = remain[i - 1][0] - cnt_cost[0] + (a[i] - cnt_cost[0] * 2 - extras[0] * 3)
        elif dp[i - 1][0] + cnt_cost[0] + extras[0] < dp[i - 1][1] + cnt_cost[1] + extras[1]:
            dp[i][1] = dp[i - 1][1] + cnt_cost[1] + extras[1]
            remain[i][1] = remain[i - 1][1] - cnt_cost[1] + (a[i] - cnt_cost[1] * 2 - extras[1] * 3)
        else:
            dp[i][1] = dp[i - 1][1] + cnt_cost[1] + extras[1]
            if remain[i - 1][0] > remain[i - 1][1]:
                remain[i][1] = remain[i - 1][0] - cnt_cost[0] + (a[i] - cnt_cost[0] * 2 - extras[0] * 3)
            else:
                remain[i][1] = remain[i - 1][1] - cnt_cost[1] + (a[i] - cnt_cost[1] * 2 - extras[1] * 3)
    """
    for i in range(n):
        print '{}({}) '.format(dp[i][0], remain[i][0]),
    print ''
    for i in range(n):
        print '{}({}) '.format(dp[i][1], remain[i][1]),
    print ''
    """
    return max(dp[n - 1])

"""
5
1 3 3 5 3
"""
n = input_i()
arr = input_i_array()
print calc(n, arr)
/**
 * CacheAccessClient: class used to access the segments cache.
 */
public class CacheAccessClient<K, V> {

  /**
   * List of segments
   */
  private List<K> segmentList = new ArrayList<>(CarbonCommonConstants.DEFAULT_COLLECTION_SIZE);

  private Cache<K, V> cache;

  public CacheAccessClient(Cache<K, V> cache) {
    this.cache = cache;
  }

  /**
   * This method will return the value for the given key. It will not check and load
   * the data for the given key.
   *
   * @param key
   * @return
   */
  public V getIfPresent(K key) {
    V value = cache.getIfPresent(key);
    if (value != null) {
      segmentList.add(key);
    }
    return value;
  }

  /**
   * This method will get the value for the given key. If the value does not exist
   * for the given key, it will check and load the value.
   *
   * @param key
   * @return
   * @throws IOException in case memory is not sufficient to load data into memory
   */
  public V get(K key) throws IOException {
    V value = cache.get(key);
    if (value != null) {
      segmentList.add(key);
    }
    return value;
  }

  /**
   * This method is used to clear the access count of the unused segments' cacheable objects.
   */
  public void close() {
    cache.clearAccessCount(segmentList);
    cache = null;
  }

  /**
   * This method will remove the cache for the given keys.
   *
   * @param keys
   */
  public void invalidateAll(List<K> keys) {
    for (K key : keys) {
      cache.invalidate(key);
    }
  }
}
Syndicate will not use the controversial Online Pass, EA has confirmed.

EA Partners executive producer Jeff Gamon told Eurogamer the decision was made in an effort to encourage all players to play the shooter, which includes a co-op component.

"We want as little resistance or barriers to entry as possible," Gamon said. "The co-op is equal billing in this. We wanted everyone who owns a copy of the game to have access to the entire product."

EA's policy is to include Online Pass in all its games, but this does not apply to EA Partners games. Last year EA Partners published Portal 2, made by Valve, and Crysis 2, made by Crytek, and neither included Online Pass.

But there are differences between these two games and Syndicate, Gamon said, which makes the decision not to include Online Pass all the more surprising. While Swedish developer Starbreeze is independent, and EA Partners managed the development of the game, Syndicate is an EA-owned IP and the game is published by EA, Gamon said.

"Under normal circumstances it would have had an online pass, but because it didn't have competitive multiplayer and because we wanted as many people as possible to be playing co-op, we got away with it," Gamon explained. "Maybe another reason for not having the Online Pass is we were confident in the scope of the online game.

"There are nine maps. It's hard to say, but just to play through the maps once on normal is a good six, seven hours. To progress your character and upgrade a few weapons is a heap of content. That and the single-player campaign means hopefully we won't see much in the way of early second hand sales and rentals."

EA's Online Pass - an attempt to discourage pre-owned purchases - hit the headlines last month when it emerged that it unlocked quest content for single-player RPG Kingdoms of Amalur: Reckoning.
The Online Pass, included in new copies of the open world fantasy, unlocks the House of Valor faction quest, which includes seven individual single player missions. In addition, it unlocks a Mass Effect 3-themed in-game item - the N7-inspired Shepard's Battle Armour. If you have a second hand copy of the game, you have to pay for the Online Pass to unlock the content.
National Institutes of Health seek to speed up therapeutic innovations

These are exciting times in medical research, no doubt. Too often, that excitement peters out long before the fruits of the laboratory ever become a treatment or a cure. Now, aiming to break a bottleneck between the test tube and the drug store, doctor's office and patient's bedside, the United
Former Nauru security guard stands by water-boarding claim, but admits he never saw it happen

A former Australian security guard at the Nauru detention facility, who claimed refugees had been water-boarded, has been forced to concede he never saw it taking place.

A Senate committee is investigating claims of abuse against refugees and asylum seekers at the island's Australian-funded facility.

Jon Nichols worked for Wilson Security — a service provider at the centre — between 2013 and June this year.

He told the inquiry he believed Palestinian refugees had been water-boarded on "two or three occasions" last year.

"Members of the ERT [Emergency Response Team] ... conducted [water-boarding] against members of the Palestinian community that was on Nauru, refugees," he said.

"These matters were raised with my direct supervisor.

"Senior management, I'm not sure [if they were told], but definitely management that was on the ground."

But under questioning from Liberal Senator David Johnston, Mr Nichols later conceded he had not seen it taking place.

"No, I have not personally witnessed the actual event but I have witnessed what I firmly believe to be the actions after," he said.

"Water coming out of their mouth, coughing up water."

Mr Nichols conceded his statement, saying the torture had occurred "throughout the facility", referred to only one part of the detention centre.

"By throughout the facility I'm referring to bravo compound, so in that sense throughout the facility may have been a bad choice of words," he said.

Wilson Security's John Rogers said the company rejects the allegations.

"Frankly the evidence that I've heard is preposterous," Mr Rogers said.

"I can categorically confirm there has never been a report or even the slightest rumour of activity of this nature.
"Before this afternoon the only allegations resembling these have been reported by a lawyer representing an ex-employee who has indicated that his client will give evidence in this regard."

Before today's hearing, Immigration Minister Peter Dutton attacked the inquiry and rejected the claim of torture.

"The suggestion that people have been tortured is nonsense," he said.

"I have seen no evidence, no suggestion whatsoever of any of that sort of activity.

"There's a dodgy Senate inquiry that is going on at the moment which is being run as a kangaroo court by the Greens and Labor."
use sabi_serialize::{deserialize, Deserialize, Serialize};

use crate::{
    implement_node, implement_pin, LogicContext, LogicData, Node, NodeExecutionType, NodeState,
    NodeTrait, NodeTree, PinId,
};
use sabi_serialize::typetag;

#[derive(Serialize, Deserialize, Copy, Clone)]
#[serde(crate = "sabi_serialize")]
pub enum LogicExecution {
    Type,
}
impl Default for LogicExecution {
    fn default() -> Self {
        LogicExecution::Type
    }
}
implement_pin!(LogicExecution);

#[derive(Serialize, Deserialize, Clone)]
#[serde(crate = "sabi_serialize")]
pub struct RustExampleNode {
    node: Node,
}
implement_node!(
    RustExampleNode,
    node,
    "Example",
    "Rust example node",
    NodeExecutionType::OnDemand
);
impl Default for RustExampleNode {
    fn default() -> Self {
        let mut node = Node::new(stringify!(RustExampleNode));
        node.add_input("in_int", 0_i32);
        node.add_input("in_float", 0_f32);
        node.add_input("in_string", String::new());
        node.add_input("in_bool", false);
        node.add_input("in_execute", LogicExecution::default());
        node.add_output("out_execute", LogicExecution::default());
        node.add_output("out_int", 0_i32);
        node.add_output("out_float", 0_f32);
        node.add_output("out_string", String::new());
        node.add_output("out_bool", false);
        Self { node }
    }
}
impl RustExampleNode {
    pub fn on_update(&mut self, pin: &PinId, _context: &LogicContext) -> NodeState {
        if *pin == PinId::new("in_execute") {
            println!("Executing {}", self.name());
            println!("in_int {}", self.node().get_input::<i32>("in_int").unwrap());
            println!(
                "in_float {}",
                self.node().get_input::<f32>("in_float").unwrap()
            );
            println!(
                "in_string {}",
                self.node().get_input::<String>("in_string").unwrap()
            );
            println!(
                "in_bool {}",
                self.node().get_input::<bool>("in_bool").unwrap()
            );
            self.node_mut().pass_value::<i32>("in_int", "out_int");
            self.node_mut().pass_value::<f32>("in_float", "out_float");
            self.node_mut()
                .pass_value::<String>("in_string", "out_string");
            self.node_mut().pass_value::<bool>("in_bool", "out_bool");
            NodeState::Executed(Some(vec![PinId::new("out_execute")]))
        } else {
            panic!("Trying to execute through an unexpected pin {}", pin.name());
        }
    }
}

#[derive(Serialize, Deserialize, Clone)]
#[serde(crate = "sabi_serialize")]
pub struct ScriptInitNode {
    node: Node,
}
implement_node!(
    ScriptInitNode,
    node,
    "Init",
    "Script init node",
    NodeExecutionType::OneShot
);
impl Default for ScriptInitNode {
    fn default() -> Self {
        let mut node = Node::new(stringify!(ScriptInitNode));
        node.add_output("Execute", LogicExecution::default());
        Self { node }
    }
}
impl ScriptInitNode {
    pub fn on_update(&mut self, pin: &PinId, _context: &LogicContext) -> NodeState {
        debug_assert!(*pin == PinId::invalid());
        println!("Executing {}", self.name());
        NodeState::Executed(Some(vec![PinId::new("Execute")]))
    }
}

#[allow(dead_code)]
fn test_node() {
    use crate::LogicNodeRegistry;
    use sabi_serialize::serialize;

    let mut registry = LogicNodeRegistry::default();
    registry.register_node::<ScriptInitNode>();
    registry.register_node::<RustExampleNode>();
    registry.register_pin_type::<f32>();
    registry.register_pin_type::<f64>();
    registry.register_pin_type::<u8>();
    registry.register_pin_type::<i8>();
    registry.register_pin_type::<u16>();
    registry.register_pin_type::<i16>();
    registry.register_pin_type::<u32>();
    registry.register_pin_type::<i32>();
    registry.register_pin_type::<bool>();
    registry.register_pin_type::<String>();
    registry.register_pin_type::<LogicExecution>();

    let mut tree = NodeTree::default();
    tree.add_link("ScriptInitNode", "NodeA", "Execute", "in_execute");
    tree.add_link("NodeA", "NodeB", "out_int", "in_int");
    tree.add_link("NodeA", "NodeB", "out_string", "in_string");
    tree.add_link("NodeA", "NodeB", "out_execute", "in_execute");
    assert_eq!(tree.get_links_count(), 4);

    let init = ScriptInitNode::default();
    let serialized_data = init.serialize_node();
    if let Some(n) = registry.deserialize_node(&serialized_data) {
        tree.add_node(n);
    }
    assert_eq!(tree.get_nodes_count(), 1);

    let mut node_a = RustExampleNode::default();
    node_a.set_name("NodeA");
    if let Some(v) = node_a.node_mut().get_input_mut::<i32>("in_int") {
        *v = 19;
    }
    if let Some(v) = node_a.node_mut().get_input_mut::<f32>("in_float") {
        *v = 22.;
    }
    if let Some(v) = node_a.node_mut().get_input_mut::<String>("in_string") {
        *v = String::from("Ciao");
    }
    if let Some(v) = node_a.node_mut().get_input_mut::<bool>("in_bool") {
        *v = true;
    }
    assert_eq!(*node_a.node().get_input::<i32>("in_int").unwrap(), 19);
    assert_eq!(*node_a.node().get_output::<i32>("out_int").unwrap(), 0);
    assert_eq!(*node_a.node().get_input::<f32>("in_float").unwrap(), 22.);
    assert_eq!(*node_a.node().get_output::<f32>("out_float").unwrap(), 0.);
    assert_eq!(
        *node_a.node().get_input::<String>("in_string").unwrap(),
        String::from("Ciao")
    );
    assert_eq!(
        *node_a.node().get_output::<String>("out_string").unwrap(),
        String::new()
    );
    assert!(*node_a.node().get_input::<bool>("in_bool").unwrap());
    assert!(!*node_a.node().get_output::<bool>("out_bool").unwrap());

    let serialized_data = node_a.serialize_node();
    if let Some(n) = registry.deserialize_node(&serialized_data) {
        tree.add_node(n);
    }
    assert_eq!(tree.get_nodes_count(), 2);

    tree.add_default_node::<RustExampleNode>("NodeB");
    assert_eq!(tree.get_nodes_count(), 3);

    let serialized_tree = serialize(&tree);
    if let Ok(new_tree) = deserialize::<NodeTree>(&serialized_tree) {
        let mut logic_data = LogicData::from(new_tree);
        logic_data.init();
        logic_data.execute();
    } else {
        panic!("Deserialization failed");
    }
}

#[test]
fn test_node_fn() {
    test_node()
}
def wait_for_vms(session, vm_refs, power_state, timeout=60):
    """Poll the given VMs until they all reach power_state or the timeout expires."""
    log.debug("wait_for_vms: %s to reach state '%s'" % (vm_refs, power_state))
    vms = list(vm_refs)
    start = time.time()
    while vms and not should_timeout(start, timeout):
        vm = vms.pop()
        if session.xenapi.VM.get_power_state(vm) != power_state:
            vms.append(vm)
    if vms:
        for vm in vms:
            log.debug("VM not in '%s' state. Instead in '%s' state."
                      % (power_state, session.xenapi.VM.get_power_state(vm)))
        raise Exception("VMs (%s) were not moved to the '%s' state in the provided timeout ('%d')"
                        % (vms, power_state, timeout))
1895 - 1896 - 1897 - 1898

May

May 5, 1895
At the Circus in Hogan's Alley
New York World

May 19, 1895
A New Restaurant in Casey's Alley
New York World

July

July 7, 1895
The Day After "The Glorious Fourth" Down Hogan's Alley
New York World

September

September 22, 1895
The Great Cup Race on Reilly's Pond
New York World

November

November 10, 1895
The Great Social Event of The Year in Shantytown
New York World

November 17, 1895
The Horse Show as Reproduced at Shantytown
New York World

November 24, 1895
An Untimely Death
New York World

December

December 15, 1895
Merry Xmas Morning in Hogan's Alley
New York World

December 22, 1895
The Great Football Match Down in Hogan's Alley
New York World
Stability of the double gyroid phase to nanoparticle polydispersity in polymer-tethered nanosphere systems

Recent simulations predict that aggregating nanospheres functionalized with polymer "tethers" can self-assemble to form the double gyroid (DG) phase seen in block copolymer and surfactant systems. Within the struts of the gyroid, the nanoparticles pack in icosahedral motifs, stabilizing the gyroid phase in a small region of the phase diagram. Here, we study the impact of nanoparticle size polydispersity on the stability of the double gyroid phase. We show that for low amounts of polydispersity the energy of the double gyroid phase is lowered. A large amount of polydispersity raises the energy of the system, disrupts the icosahedral packing, and eventually destabilizes the gyroid. Our results show that the DG forms readily up to 10% polydispersity. Considering polydispersity as high as 30%, our results suggest no terminal polydispersity for the DG, but that higher polydispersities may kinetically inhibit the formation of the phase. The inclusion of a small population of either smaller or larger nanospheres encourages low-energy icosahedral clusters and increases the gyroid stability while facilitating its formation. We also introduce a new measure for determining the volume of a component in a microphase-separated system based on the Voronoi tessellation.
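The Voronoi-based volume measure mentioned in the closing sentence can be illustrated with a small sketch. This is not the authors' implementation; it is a hypothetical stand-in that approximates each component's Voronoi volume by Monte Carlo, assigning random sample points in a cubic box to the nearest particle and summing the fractions per component label.

```python
import random

def voronoi_volumes(points, labels, box=1.0, samples=4000, seed=0):
    """Estimate the volume of each labelled component by assigning
    uniform random sample points to their nearest particle, a Monte
    Carlo approximation of the exact Voronoi tessellation."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(samples):
        p = (rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
        # Index of the particle closest to the sample point.
        nearest = min(range(len(points)),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], p)))
        lab = labels[nearest]
        counts[lab] = counts.get(lab, 0) + 1
    total_volume = box ** 3
    return {lab: float(c) / samples * total_volume for lab, c in counts.items()}
```

For two particles placed symmetrically in the box, each component recovers roughly half the box volume, as the exact tessellation would give.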
import { Component, OnInit } from '@angular/core';
import { LocalDataSource } from 'ng2-smart-table';
import { Router } from '@angular/router';
import { DatePipe } from '@angular/common';
import { Http } from '@angular/http';
import { FormBuilder, FormGroup, Validators, FormControl } from '@angular/forms';
import { DealersService } from '../../../@core/data/dealers.service';
import { Dealer } from '../../../models/dealer.model';
import { ServerDataSource } from 'ng2-smart-table';
import { Globals } from '../../../../Globals';
import { NbAuthService } from '@nebular/auth';
import { ActivatedRoute } from '@angular/router';
import { ToastrService } from 'ngx-toastr';

@Component({
  selector: 'ngx-dealer',
  templateUrl: './dealer.component.html',
  styleUrls: ['./dealer.component.scss'],
})
export class DealerComponent implements OnInit {
  cnicFrontPic: string;
  cnicBackPic: string;
  docPic: string;
  id: any;
  private sub: any;
  btnSave: boolean;
  public form: FormGroup;

  constructor(private fb: FormBuilder,
              private dealersService: DealersService,
              private toastr: ToastrService,
              private route: ActivatedRoute) {}

  ngOnInit() {
    this.form = this.fb.group({
      fullName: [null, Validators.compose([Validators.required])],
      cnicNo: [null, Validators.compose([Validators.required, Validators.minLength(13),
        Validators.maxLength(13), Validators.pattern('[0-9]+')])],
      address: [null, Validators.compose([Validators.required])],
      address2: [null, Validators.compose([Validators.required])],
      packagewsp: [null, Validators.compose([Validators.required])],
      dArea: [null, Validators.compose([Validators.required])],
      packagerp: [null, Validators.compose([Validators.required])],
      graceAmount: [null, Validators.compose([Validators.required])],
      gracePeriod: [null, Validators.compose([Validators.required])],
    });

    this.sub = this.route.params.subscribe(params => {
      this.id = params['id'];
      console.log('this.id ' + this.id);
      if (this.id !== undefined) {
        this.dealersService.getOneDealer(this.id)
          .subscribe(data => {
            this.form.patchValue({
              fullName: data.fullName,
              cnicNo: data.cnicNo,
              address: data.address,
              address2: data.address2,
              packagewsp: data.packagewsp,
              dArea: data.dArea,
              packagerp: data.packagerp,
              graceAmount: data.graceAmount,
              gracePeriod: data.gracePeriod,
            });
            this.form.disable();
            this.btnSave = true;
            console.log('form.valid ' + this.form.valid + ' btnSave ' + this.btnSave);
          });
      }
    });
  }

  cnicFPic(event) {
    this.cnicFrontPic = event.target.files[0].name;
  }

  cnicBPic(event) {
    this.cnicBackPic = event.target.files[0].name;
  }

  docsPic(event) {
    this.docPic = event.target.files[0].name;
  }

  onSubmit() {
    const data = {
      fullName: this.form.value.fullName,
      cnicNo: this.form.value.cnicNo,
      address: this.form.value.address,
      address2: this.form.value.address2,
      dArea: this.form.value.dArea == null ? 'Area1' : this.form.value.dArea,
      packagewsp: this.form.value.packagewsp,
      packagerp: this.form.value.packagerp,
      graceAmount: this.form.value.graceAmount,
      gracePeriod: this.form.value.gracePeriod,
    };
    this.dealersService.saveDealer(data)
      .subscribe(
        data1 => {
          console.log('Data inserted');
          this.toastr.success('Data inserted successfully.');
        },
        error => {
          this.toastr.error('Data not inserted, an error occurred.');
        },
      );
    console.log('data : ' + JSON.stringify(data));
  }
}
Differential algebraic fast multipole-accelerated boundary element method for nonlinear beam dynamics in arbitrary enclosures A novel method is developed to take into account realistic boundary conditions in intense nonlinear beam dynamics. The algorithm consists of three main ingredients: the boundary element method that provides a solution for the discretized reformulation of the Poisson equation as boundary integrals; a novel fast multipole method developed for accurate and efficient computation of Coulomb potentials and forces; and differential algebraic methods, which form the numerical structures that enable and hold together the different components. The fast multipole method, without any modifications, also accelerates the solution of intertwining linear systems of equations for further efficiency enhancements. The resulting algorithm scales linearly with the number of particles $N$, as $m\text{ }\mathrm{log}\text{ }m$ with the number of boundary elements $m$, and, therefore, establishes an accurate and efficient method for intense beam dynamics simulations in arbitrary enclosures. Its performance is illustrated with three different cases and structures of practical interest.
package com.leon.biuvideo.values;

/**
 * @Author Leon
 * @Time 2021/4/17
 * @Desc Danmaku (bullet-comment) type; advanced, code, and BAS danmaku are not supported yet
 */
public enum DanmakuType {
    /**
     * Normal danmaku
     */
    NORMAL_DANMAKU(0),

    /**
     * Bottom danmaku
     */
    BOTTOM_DANMAKU(1),

    /**
     * Top danmaku
     */
    TOP_DANMAKU(2),

    /**
     * Reverse danmaku
     */
    REVERSE_DANMAKU(3);

    public int value;

    DanmakuType(int value) {
        this.value = value;
    }
}
import { Injectable } from '@nestjs/common';
import { BuildMongestService } from '../../src/BuildMongestService';
import { Cat } from './cat.entity';

@Injectable()
export class CatsService extends BuildMongestService(Cat) {
  async findByName(name: string): Promise<Cat | null> {
    const doc = await this.findOne({ name });
    return doc;
  }
}
/**
 * Test case for all {@link ActivitiEvent}s related to historic process instances.
 *
 * @author Daisuke Yoshimoto
 */
public class HistoricProcessInstanceEventsTest extends PluggableActivitiTestCase {

	private TestActivitiEntityEventListener listener;

	/**
	 * Test create, update and delete events of historic process instances.
	 */
	@Deployment(resources = {"org/activiti/engine/test/api/runtime/oneTaskProcess.bpmn20.xml"})
	public void testHistoricProcessInstanceEvents() throws Exception {
		if (processEngineConfiguration.getHistoryLevel().isAtLeast(HistoryLevel.ACTIVITY)) {
			ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("oneTaskProcess");

			// Check create-event
			assertEquals(1, listener.getEventsReceived().size());
			assertEquals(ActivitiEventType.HISTORIC_PROCESS_INSTANCE_CREATED, listener.getEventsReceived().get(0).getType());
			listener.clearEventsReceived();

			Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).singleResult();
			taskService.complete(task.getId());

			// Check end-event
			assertEquals(1, listener.getEventsReceived().size());
			assertEquals(ActivitiEventType.HISTORIC_PROCESS_INSTANCE_ENDED, listener.getEventsReceived().get(0).getType());
			listener.clearEventsReceived();

			historyService.deleteHistoricProcessInstance(processInstance.getId());

			// Check delete-event
			assertEquals(1, listener.getEventsReceived().size());
			assertEquals(ActivitiEventType.ENTITY_DELETED, listener.getEventsReceived().get(0).getType());
		}
	}

	@Override
	protected void initializeServices() {
		super.initializeServices();
		listener = new TestActivitiEntityEventListener(HistoricProcessInstance.class);
		processEngineConfiguration.getEventDispatcher().addEventListener(listener);
	}

	@Override
	protected void tearDown() throws Exception {
		super.tearDown();
		if (listener != null) {
			listener.clearEventsReceived();
			processEngineConfiguration.getEventDispatcher().removeEventListener(listener);
		}
	}
}
/**
 * Loads the specified XML file and returns it as a string.
 *
 * <p>The XML returned is wrapped in the following XML:
 * <pre>{@code <psml-file name="[filename]" base="[basedir]" status="[status]"> ... </psml-file>}</pre>
 *
 * @param psml The PSML file to load
 *
 * @return the content of the XML file.
 *
 * @throws IOException If an error occurs while trying to read or write the XML.
 */
public static String load(PSMLFile psml) throws IOException {
	File file = psml.file();
	XMLStringWriter xml = new XMLStringWriter(false, false);
	xml.openElement("psml-file");
	xml.attribute("name", file.getName());
	String base = psml.getBase();
	xml.attribute("base", base);
	if (file.exists()) {
		xml.attribute("status", "ok");
		XMLCopy.copyTo(file, xml);
		LOGGER.debug("loaded {}", file.getAbsolutePath());
	} else {
		xml.attribute("status", "not-found");
		xml.writeText("Unable to find file: " + psml.path());
		LOGGER.debug("{} does not exist", file.getAbsolutePath());
	}
	xml.closeElement();
	xml.flush();
	return xml.toString();
}
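The wrapper shape produced by this method can be sketched outside Java as well. The helper below is hypothetical (not part of the PSML codebase): it mimics the `<psml-file>` envelope with its `name`, `base`, and `status` attributes, but simply inlines the file's raw text and assumes it is already well-formed XML, whereas the Java code copies the parsed XML via `XMLCopy`.

```python
import os
from xml.sax.saxutils import escape, quoteattr

def load_psml(path, base):
    # Hypothetical sketch of the <psml-file> wrapper described above.
    attrs = 'name=%s base=%s' % (quoteattr(os.path.basename(path)), quoteattr(base))
    if os.path.exists(path):
        with open(path, encoding='utf-8') as f:
            body = f.read()  # assumed to already be well-formed XML
        return '<psml-file %s status="ok">%s</psml-file>' % (attrs, body)
    # Mirror the not-found branch: status attribute plus an escaped message.
    return '<psml-file %s status="not-found">%s</psml-file>' % (
        attrs, escape('Unable to find file: ' + path))
```

Calling it on a missing file yields the `status="not-found"` variant, matching the else-branch of the Java method.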
package leetcodetests

// removePalindromeSub returns the minimum number of palindromic
// subsequences that must be removed to empty s: 0 for the empty
// string, 1 if s is a palindrome, otherwise 2.
func removePalindromeSub(s string) int {
	l := len(s)
	if l == 0 {
		return 0
	}
	for i := 0; i < l-1; {
		if s[i] != s[l-1] {
			return 2
		}
		i++
		l--
	}
	return 1
}
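The trick behind this function is that (assuming, as in the LeetCode problem it appears to solve, a two-letter alphabet) the answer can only be 0, 1, or 2: an empty string needs no removals, a palindrome is removed in one step, and anything else is cleared in two steps by removing all of one letter and then all of the other. The same logic as a Python sketch:

```python
def remove_palindrome_sub(s: str) -> int:
    """Minimum number of palindromic subsequences to remove so that s
    becomes empty, for strings over a two-letter alphabet."""
    if not s:
        return 0
    # One removal if s is already a palindrome, otherwise two:
    # remove every 'a' in one pass and every 'b' in another.
    return 1 if s == s[::-1] else 2
```

The Go version's two-index scan is an in-place palindrome check equivalent to the `s == s[::-1]` comparison here.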
/** Tests adding a chain which is called every time new data is added to a data list */
@SuppressWarnings("unchecked")
@Test
public void testStreamingData() throws InterruptedException, ExecutionException, TimeoutException {
    StreamProcessor streamProcessor = new StreamProcessor();
    Chain<Processor> streamProcessing = new Chain<Processor>(streamProcessor);
    ProcessorLibrary.FutureDataSource futureDataSource = new ProcessorLibrary.FutureDataSource();
    Chain<Processor> main = new Chain<>(new ProcessorLibrary.DataCounter(),
                                       new ProcessorLibrary.StreamProcessingInitiator(streamProcessing),
                                       futureDataSource);
    Request request = new Request();
    Response response = Execution.createRoot(main, 0, Execution.Environment.createEmpty()).process(request);
    IncomingData incomingData = futureDataSource.incomingData.get(0);
    assertEquals(1, response.data().asList().size());
    assertEquals("Data count: 0", response.data().get(0).toString());
    assertEquals("Add data listener invoked also for DataCounter", 1, streamProcessor.invocationCount);
    assertEquals("Initial data count", 1, response.data().asList().size());

    incomingData.add(new ProcessorLibrary.StringData(request, "d1"));
    assertEquals("Data add listener not invoked as we are not listening on new data yet", 1, streamProcessor.invocationCount);
    assertEquals("New data is not consumed", 1, response.data().asList().size());

    incomingData.addNewDataListener(new MockNewDataListener(incomingData), MoreExecutors.directExecutor());
    assertEquals("We got a data add event for the data which was already added", 2, streamProcessor.invocationCount);
    assertEquals("New data is consumed", 2, response.data().asList().size());

    incomingData.add(new ProcessorLibrary.StringData(request, "d2"));
    assertEquals("We are now getting data add events each time", 3, streamProcessor.invocationCount);
    assertEquals("New data is consumed", 3, response.data().asList().size());

    incomingData.addLast(new ProcessorLibrary.StringData(request, "d3"));
    assertEquals("We are getting data add events also the last time", 4, streamProcessor.invocationCount);
    assertEquals("New data is consumed", 4, response.data().asList().size());

    response.data().complete().get(1000, TimeUnit.MILLISECONDS);
    assertEquals("d1", response.data().get(1).toString());
    assertEquals("d2", response.data().get(2).toString());
    assertEquals("d3", response.data().get(3).toString());
}
#! /usr/bin/env python # -*- coding: utf-8 -*- """ Copyright (c) 2014-2017 <NAME> Released under the MIT license http://opensource.org/licenses/mit-license.php """ import sys #コマンドラインオプション解析用 import argparse #正規表現を使用 import re def chWord(line): ret = ord(line[0])+ord(line[1])*256 return ret def chDword(line): ret = ord(line[0])+(ord(line[1])<<8)+(ord(line[2])<<16)+(ord(line[3])<<24) return ret def chInt(line): ret = ord(line[0])+(ord(line[1])<<8)+(ord(line[2])<<16)+(ord(line[3])<<24) if ret>0x80000000: ret -= 0x100000000 return int(ret) def sec_pos(SecID,sec_size): return 512+SecID*sec_size def ssec_pos(SecID,sec_size): return SecID*sec_size def readSAT(line,MSAT,sec_size): ret = [] for SecID in MSAT: pos = sec_pos(SecID,sec_size) buf = line[pos:pos+sec_size] for i in range(sec_size/4): var = chInt(buf[i*4:i*4+4]) ret.append(var) return ret def SATtoStream(line,sat,sec_size): ret = [] for ary in sat: txt = "" for var in ary: txt += line[sec_pos(var,sec_size):sec_pos(var,sec_size)+sec_size] ret.append(txt) return ret def SSATtoStream(line,sat,sec_size): ret = [] for ary in sat: txt = "" for var in ary: txt += line[ssec_pos(var,sec_size):ssec_pos(var,sec_size)+sec_size] ret.append(txt) return ret def deUni(uniline): ret = "" for i in range(len(uniline)/2): ret += unichr(chWord(uniline[i*2:i*2+2])) try: return str(ret) except: return "encode error" #コマンドラインオプション parser = argparse.ArgumentParser(description='DocFile Perser') parser.add_argument('FileName', metavar='FileName', type=str, help='DocFile format FileName') parser.add_argument('-d','--debug', action="store_true", default=False,dest='debug', help='DebugMode') parser.add_argument('-O', dest='outputfile', metavar='FileName', type=str, default="a.txt", help='Output File Name (default: "a.txt")') args = parser.parse_args() #DocFile format ファイルを読み込み filename = args.FileName input1 = open(filename,"rb") allLines = input1.read(); input1.close() #output fileへの出力の準備 outLines = "" if args.debug: outLines += "Debug 
Mode\n" #DocFileのヘッダーチェック if allLines[0:8] =="\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1": print "This is DocFile" else: print "This file is not DocFile" sys.exit() sec_size = 1<<chWord(allLines[30:32]) print "Size of a sector:", sec_size ssec_size = 1<<chWord(allLines[32:34]) print "Size of a short-sector:", ssec_size num_sec = chDword(allLines[44:48]) print "Total number of sectors:", num_sec DictSecID = chInt(allLines[48:52]) print "SecID of first sector of the dictionary stream",DictSecID mini_size = chDword(allLines[56:60]) print "Minimum size of standard stream",mini_size print "SecID of first sector of ssat",chInt(allLines[60:64]) #ssat = readSAT(allLines,chInt(allLines[60:64]),sec_size) #print ssat num_ssec = chDword(allLines[64:68]) if chInt(allLines[60:64]) == 0: num_ssec = 0 print "Total number of short-sectors:",num_ssec #ファイルが切れているかの簡易チェック if len(allLines) < 512 + 512 * ( 512 / 4 ) * (num_sec -1 ): print "unfinished MS file" #Master Sector Allocation Tableの分析 msat = [] next_sec = chInt(allLines[68:72]) for i in range(109): var = chInt(allLines[76+i*4:80+i*4]) if var != -1: msat.append(var) while next_sec >=0:#!= -2: for i in range((sec_size-4)/4): var = chInt(allLines[sec_pos(next_sec,sec_size)+i*4:sec_pos(next_sec,sec_size)+i*4+4]) if var != -1: msat.append(var) print next_sec next_sec = chInt(allLines[sec_pos(next_sec,sec_size)+sec_size-4:sec_pos(next_sec,sec_size)+sec_size]) #print "MSAT:",msat #Sector Allocation Tableの読み込み sat = readSAT(allLines,msat,sec_size) #print "SAT:",sat #Short-Sector Allocation Tableの読み込み ssat = [] next_sec = chInt(allLines[60:64]) while next_sec >0:#!= -2: for i in range(sec_size/4): var = chInt(allLines[sec_pos(next_sec,sec_size)+i*4:sec_pos(next_sec,sec_size)+i*4+4]) ssat.append(var) next_sec = sat[next_sec] #print "SSAT:",ssat #Dictionary Streamの解析 DirID = 0 SSCS = "" next_sec = DictSecID DictSize = 0 total_c_size = 0 while next_sec >=0:#!= -2: DictSecPos = sec_pos(next_sec,sec_size) for i in range(4): print DirID, Dict = 
allLines[DictSecPos:DictSecPos+128] #print i, #Directry Name name = deUni(Dict[:chWord(Dict[64:66])]) f_empty = False if Dict[66] == '\x01': f_empty = True name = 'D:' + name elif Dict[66] == '\x00': f_empty = True name = 'Empty' elif Dict[66] == '\x02': name = 'U:' + name print name, #Type of the entry: #print ord(Dict[66]) f_id = chInt(Dict[116:120]) print f_id, #print "SecID of first sector or short-sector:",f_id, f_size = chDword(Dict[120:124]) if f_empty: f_size = 0 print "stream size:", f_size, if f_size < mini_size: c_size = (f_size + ssec_size -1)/ssec_size*ssec_size else: c_size = (f_size + sec_size-1)/sec_size*sec_size if Dict[66] == '\x05': c_size = (f_size + sec_size-1)/sec_size*sec_size print "composed size:", c_size if f_size >= mini_size or Dict[66] == '\x05':#DirID != 0: #Root Entryの場合は足さない total_c_size += c_size DictSecPos += 128 DirID += 1 DictSize += 512 if next_sec == sat[next_sec]: break next_sec = sat[next_sec] #未使用セクタの表示 l = len(allLines) if num_sec*sec_size*sec_size/4+512 < l: l = num_sec * sec_size*sec_size /4 + 512 print "suspicious file size!" num_unused_block = 0 for i in range(l/sec_size-1): if sat[i] == -1: print '%08X-%08X:unused' % ((i * sec_size+512),(i * sec_size+512+sec_size-1)) num_unused_block += 1 if sat[l/sec_size-1-1] == -1: #最終セクタが未使用は怪しい print 'suspicious unused sector!' #null blockの判定 l = len(allLines)/sec_size num_null_block = 0 for i in range(l): f = True for j in range(sec_size): if allLines[i*sec_size+j] != '\x00': f = False break if f: num_null_block += 1 #判定 print "file size:",len(allLines) if (len(allLines)-512)%sec_size != 0: print "file size error!" print "header size:",(num_sec+num_ssec+1)*sec_size print "total composed size:",total_c_size print "Dictionary Stream size:", DictSize print "unused sector",num_unused_block * sec_size print "unknown data:",len(allLines)-(num_sec+num_ssec+1)*sec_size-total_c_size-DictSize print "Null block size:", num_null_block*sec_size sys.exit()
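The Python 2 script above decodes the DocFile (OLE compound file) header by hand with `chWord`/`chDword`. As a cross-check, here is a hedged Python 3 sketch of the same header decoding using `struct`. The field offsets (30, 32, 44, 48, 56, 60) are taken from the script itself; the function name and the returned dict keys are mine, not the script's.

```python
import struct

OLE_MAGIC = b"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1"

def parse_docfile_header(data):
    """Decode the compound-file header fields the script above reads.

    Offsets follow the script: sector shift at byte 30, short-sector
    shift at 32, total SAT sector count at 44, first directory SecID
    at 48, minimum standard-stream size at 56, first SSAT SecID at 60.
    """
    if data[:8] != OLE_MAGIC:
        raise ValueError("not a DocFile")
    # Two little-endian unsigned shorts: sector and short-sector shifts.
    sec_shift, ssec_shift = struct.unpack_from("<HH", data, 30)
    # Unsigned sector count at 44, signed directory SecID at 48.
    num_sec, dict_sec_id = struct.unpack_from("<Ii", data, 44)
    # Unsigned mini-stream cutoff at 56, signed first SSAT SecID at 60.
    mini_size, ssat_sec_id = struct.unpack_from("<Ii", data, 56)
    return {
        "sec_size": 1 << sec_shift,
        "ssec_size": 1 << ssec_shift,
        "num_sec": num_sec,
        "dict_sec_id": dict_sec_id,
        "mini_size": mini_size,
        "ssat_sec_id": ssat_sec_id,
    }
```

For a typical file this yields `sec_size == 512` and `ssec_size == 64`, matching what the script prints.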
from django.shortcuts import render


def page_not_found(request):
    return render(request, '404.html', status=404)


def page_error(request):
    return render(request, '500.html', status=500)


def permission_denied(request):
    return render(request, '403.html', status=403)
// Run gets records by scope ID and name (optional) func (params *GetRecordsParams) Run(ctx sdk.Context, keeper keeper.Keeper) ([]byte, error) { scopeID, err := types.MetadataAddressFromBech32(params.ScopeID) if err != nil { return nil, fmt.Errorf("wasm: invalid scope ID: %w", err) } records, err := keeper.GetRecords(ctx, scopeID, strings.TrimSpace(params.Name)) if err != nil { return nil, fmt.Errorf("wasm: unable to get scope records: %w", err) } return createRecordsResponse(records) }
CLOSE USA Today Sports' Tom Pelissero recaps the biggest takeaways from Sunday's games in Week 6 of the NFL. USA TODAY Sports New York Jets wide receiver Brandon Marshall (15) reacts in the fourth quarter against the Arizona Cardinals at University of Phoenix Stadium. (Photo: Mark J. Rebilas, USA TODAY Sports) HOUSTON — It’s an election year, silly. That wasn’t the entire company line, but the impact of the dramatic presidential election cycle was certainly a prevailing sentiment as NFL owners gathered Tuesday for their quarterly meeting and assessed the league's unusual and precipitous dip in TV ratings. Assuming the results aren’t, well, rigged, NFL games — the undisputed king of U.S. sports viewing — were down 11% for the first six weeks of the season when compared to a similar point last year. Blame it on Hillary vs. Donald? Or a sign of deeper problems for the NFL? “It’s a very muddied water right now because you’ve got obviously the debates going on and you have the Donald Trump show,” Atlanta Falcons owner Arthur Blank told USA TODAY Sports. “That’s a lot of commotion right now. It’s pretty hard to figure out right now what’s real and what’s not.” The first debate, which ran opposite of a Falcons-New Orleans Saints Monday Night Football matchup in late September, drew a record 84 million viewers. The second debate, coinciding with a New York Giants-Green Bay Packers Sunday night prime-time clash, had 69 million viewers. “Obviously, the debates have had a big impact,” Houston Texans owner Robert McNair told USA TODAY Sports. But the debates represent just the biggest of several suspected factors. Tom Brady served four games in Deflategate jail. Peyton Manning retired. The younger generation is increasingly watching games or clips streamed to mobile devices. Too many penalties. Unappealing prime-time matchups. Too many prime-time matchups. Then there are the protests. 
The national anthem protests by players, ignited by San Francisco 49ers quarterback Colin Kaepernick’s mission to raise awareness about police brutality and social justice inequalities that victimize African Americans, have been a polarizing debate of their own on the NFL’s grand stage. Though the protests — from players like Kaepernick taking a knee, to players raising a fist, to players and coaches locking arms in unity — end when the games begin, they generate much discussion before and after the contests. Still, the impact of the protests illustrates the power of the NFL’s reach. “I think it’s the wrong venue,” Indianapolis Colts owner Jim Irsay told USA TODAY Sports. “It hasn’t been a positive thing. What we all have to be aware of as players, owners, PR people, equipment managers, is when the lights go on we are entertainment. We are being paid to put on a show. There are other places to express yourself.” Irsay’s view is undoubtedly shared by other owners who frown on the protests drawing attention away from their product. Given the intense backlash against Kaepernick, it’s plausible that people have turned away to protest the protests. “People come to the game because they want to get away from what’s happening in their everyday lives,” McNair said. “When you bring those types of things into the scene, yeah, it will turn some people off. But the main thing we try to do is to say, ‘We recognize your concern. Let’s do something about it.’ ” It’s striking that the anthem protests, connected to other factors, are viewed as a variable that seemingly runs deeper than other recent crises. The NFL took tremendous PR hits with its domestic violence issues and concerns about the effects of concussions. But those serious issues seemingly didn’t have a major effect on the ratings. Last year, NFL games represented 63 of the top 100 highest-rated TV shows.
And though NFL viewership was up 27% over the previous 25 years, according to league figures, viewership for all prime-time viewing was down 36%, as TV-watching habits have changed. It is way premature to suggest that the NFL is in danger of losing its position as the nation’s most popular sport. Although Monday night’s New York Jets-Arizona Cardinals game drew a 6.2 overnight rating that was down 35% from a New York Giants-Philadelphia Eagles Monday nighter the previous year, it still dwarfed the 3.4 rating of the American League Championship Series game between the Cleveland Indians and Toronto Blue Jays. But you can believe that the league is taking the declining numbers seriously. What if the lost viewers from this season never come back? “That should be the NFL’s biggest fear,” consultant Marc Ganis of Sportscorp, Ltd., told USA TODAY Sports, adding that it took years for Major League Baseball to recover after a labor impasse wiped out the playoffs and the World Series in 1994. “The ratings thing, we can’t ignore,” cautioned Blank, co-founder of The Home Depot. He knows all about the impact of researching the fan base for clues. “What’s going up? Where is the softness? How do you respond to that?” Blank said. “It’s no different from my days running The Home Depot, when we had markets where we didn’t get the response. We had to figure out why aren’t we getting customers in our stores here. It didn’t happen very often, but sometimes it happens.” Now the NFL is similarly challenged to figure out how to best present its product. Follow NFL columnist Jarrett Bell on Twitter @JarrettBell PHOTOS: NFL power rankings entering Week 7
def ex_get_control_access(self, node): res = self.connection.request( '%s/controlAccess' % get_url_path(node.id)) everyone_access_level = None is_shared_elem = res.object.find( fixxpath(res.object, "IsSharedToEveryone")) if is_shared_elem is not None and is_shared_elem.text == 'true': everyone_access_level = res.object.find( fixxpath(res.object, "EveryoneAccessLevel")).text subjects = [] for elem in res.object.findall( fixxpath(res.object, "AccessSettings/AccessSetting")): access_level = elem.find(fixxpath(res.object, "AccessLevel")).text subject_elem = elem.find(fixxpath(res.object, "Subject")) if subject_elem.get('type') == 'application/vnd.vmware.admin.group+xml': subj_type = 'group' else: subj_type = 'user' res = self.connection.request(get_url_path(subject_elem.get('href'))) name = res.object.get('name') subject = Subject(type=subj_type, name=name, access_level=access_level, id=subject_elem.get('href')) subjects.append(subject) return ControlAccess(node, everyone_access_level, subjects)
Stuxnet and Aurora exploited design features of the system or its controllers to attack physical equipment. They are not traditional network vulnerabilities and cannot be found or mitigated with traditional IT security techniques. On May 19th I attended a lecture by Rebecca Slayton at Stanford’s Center for International Security and Cooperation (CISAC) on “Information for Power: Risk Management, Cybersecurity, and the Electrical Power Grid”. Rebecca identified the Smart Grid NISTIR 7628 “Top-Down Analysis of Cyber Threats by Classes” as the vehicle for identifying classes of cyber threats to electric systems. The NISTIR approach did not identify controller design features that can be exploited, as in Stuxnet, or system design features that can be exploited, as in Aurora. The recent NERC Lessons Learned report provided another set of design features that can be cyber-exploited to damage electric substations without being identified by IT as a cyber threat or attack. It should also be noted that NERC continues to refuse to identify cyber incidents as “cyber”. There is a disconnect between what the electric industry is trying to protect and what a sophisticated attacker who wants to damage the grid will target. This was cross-posted from the Unfettered blog.
// serveBoth binds to the HTTP and HTTPS ports, redirecting all HTTP requests to HTTPS
func (s *Server) serveBoth(httpsAddr, httpAddr string) error {
	baseURI := s.GetConf("BASEURI")
	if baseURI == "" {
		return errors.Errorf("%s_BASEURI must be set to redirect from HTTP to HTTPS", s.envPrefix)
	}
	var httpsErr, httpErr error
	var wg sync.WaitGroup // must be a value, not a nil pointer
	wg.Add(2)             // one per goroutine below
	go func() {
		defer wg.Done()
		httpsErr = s.serveHTTPS(httpsAddr)
	}()
	go func() {
		defer wg.Done()
		redirFunc := func(w http.ResponseWriter, r *http.Request) {
			http.Redirect(w, r, baseURI, http.StatusFound)
		}
		httpErr = s.serveHTTPToHandler(httpAddr, http.HandlerFunc(redirFunc))
	}()
	wg.Wait()
	s.httpServer.Stop(s.Timeout)
	s.httpsServer.Stop(s.Timeout)
	<-s.httpServer.StopChan()
	<-s.httpsServer.StopChan()
	var err *multierror.Error
	if httpsErr != nil {
		err = multierror.Append(err, httpsErr)
	}
	if httpErr != nil {
		err = multierror.Append(err, httpErr)
	}
	return err.ErrorOrNil()
}
Congenital hereditary lymphedema caused by a mutation that inactivates VEGFR3 tyrosine kinase. Hereditary lymphedema is a chronic swelling of limbs due to dysfunction of lymphatic vessels. An autosomal dominant, congenital form of the disease, also known as "Milroy disease," has been mapped to the telomeric part of chromosome 5q, in the region 5q34-q35. This region contains a good candidate gene for the disease, VEGFR3 (FLT4), that encodes a receptor tyrosine kinase specific for lymphatic vessels. To clarify the role of VEGFR3 in the etiology of the disease, we have analyzed a family with hereditary lymphedema. We show linkage of the disease with markers in 5q34-q35, including a VEGFR3 intragenic polymorphism, and we describe an A-->G transition that cosegregates with the disease, corresponding to a histidine-to-arginine substitution in the catalytic loop of the protein. In addition, we show, by in vitro expression, that this mutation inhibits the autophosphorylation of the receptor. Thus, defective VEGFR3 signaling seems to be the cause of congenital hereditary lymphedema linked to 5q34-q35.
package state import ( "encoding/json" "io/ioutil" "os" "testing" "github.com/stretchr/testify/assert" "github.com/wavesplatform/gowaves/pkg/proto" "github.com/wavesplatform/gowaves/pkg/settings" "github.com/wavesplatform/gowaves/pkg/util/common" ) func testIterImpl(t *testing.T, params StateParams) { dataDir, err := ioutil.TempDir(os.TempDir(), "dataDir") assert.NoError(t, err) st, err := NewState(dataDir, params, settings.MainNetSettings) assert.NoError(t, err) defer func() { err = st.Close() assert.NoError(t, err) err = os.RemoveAll(dataDir) assert.NoError(t, err) }() blockHeight := proto.Height(9900) blocks, err := ReadMainnetBlocksToHeight(blockHeight) assert.NoError(t, err) // Add extra blocks and rollback to check that rollback scenario is handled correctly. err = st.AddOldDeserializedBlocks(blocks) assert.NoError(t, err) err = st.RollbackToHeight(8000) assert.NoError(t, err) err = st.StartProvidingExtendedApi() assert.NoError(t, err) addr, err := proto.NewAddressFromString("3P2CVwf4MxPBkYZKTgaNMfcTt5SwbNXQWz6") assert.NoError(t, err) var txJs0 = ` { "senderPublicKey": "<KEY>", "amount": 569672223116, "sender": "3PAWwWa6GbwcJaFzwqXQN5KQm7H96Y7SHTQ", "feeAssetId": null, "signature": "<KEY>", "proofs": [ "<KEY>" ], "fee": 1, "recipient": "<KEY>", "id": "<KEY>", "type": 2, "timestamp": 1465747778493, "height": 28 } ` var txJs1 = ` { "senderPublicKey": "<KEY>", "amount": 100000000, "sender": "<KEY>", "feeAssetId": null, "signature": "<KEY>", "proofs": [ "<KEY>" ], "fee": 1, "recipient": "3P2CVwf4MxPBkYZKTgaNMfcTt5SwbNXQWz6", "id": "42qzKopS4Wc5BYR5bXD8fEJ65cQUo51cSFSWQKhjS97Srvxzwb5FcHwTASGoeQGToHsLGST4bBceP6pWkh1MhyCf", "type": 2, "timestamp": 1465753398476, "height": 107 } ` tx0 := &proto.Payment{Version: 1} tx1 := &proto.Payment{Version: 1} err = json.Unmarshal([]byte(txJs0), tx0) assert.NoError(t, err) err = json.Unmarshal([]byte(txJs1), tx1) assert.NoError(t, err) validTxs := []proto.Transaction{tx1, tx0} iter, err := st.NewAddrTransactionsIterator(addr) 
assert.NoError(t, err) i := 0 for iter.Next() { tx, err := iter.Transaction() assert.NoError(t, err) assert.Equal(t, validTxs[i], tx) i++ } assert.Equal(t, 2, i) iter.Release() assert.NoError(t, iter.Error()) } func TestTransactionsByAddrIterator(t *testing.T) { params := DefaultTestingStateParams() params.StoreExtendedApiData = true params.ProvideExtendedApi = true testIterImpl(t, params) } func TestTransactionsByAddrIteratorOptimized(t *testing.T) { params := DefaultTestingStateParams() params.StoreExtendedApiData = true params.ProvideExtendedApi = false testIterImpl(t, params) } func TestAddrTransactionsIdempotent(t *testing.T) { stor, path, err := createStorageObjects() assert.NoError(t, err) atxDir, err := ioutil.TempDir(os.TempDir(), "atx") assert.NoError(t, err) path = append(path, atxDir) defer func() { stor.close(t) err = common.CleanTemporaryDirs(path) assert.NoError(t, err, "failed to clean test data dirs") }() params := &addressTransactionsParams{ dir: atxDir, batchedStorMemLimit: AddressTransactionsMemLimit, maxFileSize: MaxAddressTransactionsFileSize, providesData: false, } atx, err := newAddressTransactions(stor.db, stor.stateDB, stor.rw, params) assert.NoError(t, err) addr, err := proto.NewAddressFromString(testAddr) assert.NoError(t, err) tx := createPayment(t) txID, err := tx.GetID(proto.MainNetScheme) assert.NoError(t, err) // Save the same transaction ID twice. // Then make sure it was added to batchedStor only once. 
err = stor.rw.writeTransaction(tx) assert.NoError(t, err) stor.addBlock(t, blockID0) err = atx.saveTxIdByAddress(addr, txID, blockID0, true) assert.NoError(t, err) err = atx.saveTxIdByAddress(addr, txID, blockID0, true) assert.NoError(t, err) stor.flush(t) err = atx.flush() assert.NoError(t, err) err = atx.reset(true) assert.NoError(t, err) err = atx.startProvidingData() assert.NoError(t, err) iter, err := atx.newTransactionsByAddrIterator(addr) assert.NoError(t, err) i := 0 for iter.Next() { transaction, err := iter.Transaction() assert.NoError(t, err) assert.Equal(t, tx, transaction) i++ } assert.Equal(t, 1, i) iter.Release() assert.NoError(t, iter.Error()) }
<filename>src/main/java/com/cy/intercept/InterceptTest.java package com.cy.intercept; public class InterceptTest { }
Radio-Factor Authentication: Identity Management over the 900MHz Band Highly sensitive information systems need a simple and secure method of authentication. To supplement standard login procedures, we can implement multiple variations of two-factor authentication through radio waves, based on specific use cases. This allows us to enforce a proximity-based authentication system allowing for a physical separation of users at work and home. We can also ensure fine-grained access control for systems requiring a small number of privileged users. We will demonstrate two possible implementations, both using forms of public-key cryptography. The first scheme broadcasts a time-synchronized token to all available computers in a geographic area, and the second scheme uses public and private key pairs for digitally signed access requests over the air.
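The abstract gives no concrete algorithm for its first scheme (a broadcast time-synchronized token), and it states the real implementations use public-key cryptography. As a simplified illustration only, here is a shared-secret, TOTP-style sketch of how a time-synchronized token could be derived and verified; the function names, the 30-second step, and the HMAC construction are all assumptions of mine, not details from the paper.

```python
import hashlib
import hmac
import struct

def radio_token(secret: bytes, unix_time: int, step: int = 30) -> str:
    """Derive a short time-synchronized token (hypothetical, RFC 6238 flavour).

    The counter is the current 30-second window, so the broadcaster and
    the receiving workstation can recompute the token independently.
    """
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, as in HOTP
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%06d" % (code % 1_000_000)

def verify(secret: bytes, token: str, unix_time: int, step: int = 30) -> bool:
    """Accept the current window plus one window either side for clock drift."""
    return any(
        hmac.compare_digest(radio_token(secret, unix_time + d * step), token)
        for d in (-1, 0, 1)
    )
```

A real deployment along the paper's lines would replace the shared secret with signatures from per-user key pairs, which is the second scheme the abstract describes.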
class Search_Space:
    """
    A Search_Space consists of a name, an initial state (the starting
    number) and a goal. If the Search_Space reaches the value 10, no
    further successor states are produced; this cap was necessary
    because no duplicate detection is possible in this simple class.
    In this example only one goal state (representing a number) is used.
    """

    def __init__(self, name, initial_state, goals):
        self.name = name
        self.initial_state = initial_state
        self.goals = goals

    def init(self):
        """ Returns the initial state. """
        return self.initial_state

    def goal_reached(self, state):
        """ Returns True if the goal is reached (current number and goal number are equal). """
        return (self.goals == state)

    def get_successor_states(self, state):
        """
        Three operators are defined: subtract one, add one, add two.
        For the 5to20-Search_Space, which is meant to be unsolvable,
        successor production stops completely when the value 10 is
        reached, simulating the case that no new states (other than
        duplicates) can be found. The number range goes from 0 to 10,
        so no successor state is smaller than 0 or larger than 10.
        """
        successors = []
        if 0 < state:
            successors.append(("sub1", state - 1))
        if state < 9:
            successors.append(("add2", state + 2))
        if state < 10:
            successors.append(("add1", state + 1))
        return successors
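The class above only defines the space; a search algorithm has to drive it. A minimal sketch of breadth-first search with duplicate detection over the same three operators (the standalone successor function below mirrors `get_successor_states`, so the sketch is self-contained):

```python
from collections import deque

def get_successor_states(state):
    """Standalone mirror of Search_Space.get_successor_states above."""
    successors = []
    if 0 < state:
        successors.append(("sub1", state - 1))
    if state < 9:
        successors.append(("add2", state + 2))
    if state < 10:
        successors.append(("add1", state + 1))
    return successors

def bfs(initial_state, goal):
    """Breadth-first search with duplicate detection; returns the operator path."""
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for op, succ in get_successor_states(state):
            if succ not in visited:
                visited.add(succ)
                frontier.append((succ, path + [op]))
    return None  # goal not reachable, e.g. the unsolvable "5 to 20" space
```

For instance, `bfs(5, 9)` finds the two-step plan `["add2", "add2"]`, while a goal outside the 0..10 range exhausts the frontier and returns `None` — exactly the behaviour the capped successor generation is meant to produce.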
/** * * @author Suraj Singh */ public class UnionFind { int objects[]; public UnionFind(int n) { objects = new int[n]; for(int i =0;i<n;i++) objects[i] = i; System.out.println("List of elements "); for(int i =0;i<n;i++) System.out.print(" "+objects[i]); } public boolean Connected(int i , int j) { if(objects[i] == objects[j]) { System.out.println("Connected"); return true; } else return false; } public void Union(int i , int j) { int value = objects[i]; for(int k = 0 ;k < objects.length;k++) if(objects[k]==value) objects[k]= objects[j]; System.out.println("List of elements "); for(int l =0;l<objects.length;l++) System.out.print(" "+objects[l]); } public void ListConnectedComponents() { int count =1; System.out.println("List of connected components"); for(int a =0;a<objects.length;a++) { while(a < objects.length && objects[a]==99) a++; System.out.print(count+ " connected component \n {" + a); for(int b = a+1;b<objects.length;b++) { if(objects[b]==objects[a]) { System.out.print(", "+b); objects[b]=99; } } System.out.println("}"); count++; } } }
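The class above is the "quick-find" variant: `Union` rewrites every matching array entry, so each union costs O(n), and `ListConnectedComponents` reuses 99 as a sentinel, which breaks as soon as 99 is a valid element id. A standard improvement — not part of the class above — is union by size with path compression; a sketch in Python rather than Java, for brevity:

```python
class UnionFind:
    """Union by size with path compression: near-constant amortized operations."""

    def __init__(self, n: int):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, i: int) -> int:
        root = i
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[i] != root:  # path compression: point nodes at the root
            self.parent[i], i = root, self.parent[i]
        return root

    def union(self, i: int, j: int) -> None:
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return
        if self.size[ri] < self.size[rj]:
            ri, rj = rj, ri
        self.parent[rj] = ri  # attach the smaller tree under the larger
        self.size[ri] += self.size[rj]

    def connected(self, i: int, j: int) -> bool:
        return self.find(i) == self.find(j)
```

Listing components then needs no sentinel value: group element ids by their root, e.g. with a dict from `find(i)` to a list of members.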
/** * Test Class for the "Parse through diagrams project" * * TODO list: * - STAY SCANNERLESS * - PARSE FOREST * - SEPARATE * - FORMATS (MAKE EBNF) * - RECOGNISERS/PARSERS (USE LALR IF GRAMMAR ALLOWS IT, A GENERAL ONE IN OTHER CASES) * - ERROR REPORTERS (IF NOT RECOGNISED) * - GENERATORS * - Tell me more about ambiguity (not ambiguous if can build unambiguous parser) then you can say maybe ambiguous and watch execution. * - add builtin alternatives for character sets + slicing -> example is ascii['a'-'b'] or utf8['0'-'9'] * - implement "parseGiveOne" "parseGiveAll" and "parseMaximum" for giving one sol, all sols, or the longest successful parse. * - think about bad parse messages to send back. * - put a batch of words and see which part is matched by your working grammar. * - Define 2 classes of diagrams, one that builds a tree and the other that just aggregates the string browsed between the in and the out * !!! Better if done with a simple post processing cf reduce * - Intern all strings to make comparison super FASSSTTTT optimization * - Enable something like nontermA-nontermB matches nontermA but only words that do not match nontermB * - As every non terminal describes a set, all potential set operators Union (|) Difference (\) Intersection (&) Cart Product (,) * - Embed a browsing mode for parse tree that goes through the tree and raises events when entering and leaving non terminals "a la SAX" * * @author joris * */ public class SVGthroughGraphViz { public static final String dot = "/usr/local/bin/dot"; /** * Handles display of a grammar. 
* @param g * @param path * @throws IOException */ public static void saveAsDotAndConvertToSVG(DGraph g, String path, String todot) throws IOException{ File resultDir = new File(path); resultDir.mkdirs(); // Write Graphs FileString.toFile(new File(path+"graph.dot"), DGraphs.toDot(g,"pipo")); convertDot(resultDir.getAbsolutePath(),"graph",todot); // Write Diagrams } public static void saveAsDotAndConvertToSVG(DGraph g,String path) throws IOException{ saveAsDotAndConvertToSVG(g,path,dot); } public static void saveAsDotAndConvertToSVG(Graph g, String path) throws IOException{ saveAsDotAndConvertToSVG(g,path,dot); } public static void saveAsDotAndConvertToSVG(Graph g,String path,String todot) throws IOException{ File resultDir = new File(path); resultDir.mkdirs(); // Write Graphs FileString.toFile(new File(path+"graph.dot"),g.toDot("tostitos")); convertDot(resultDir.getAbsolutePath(),"graph",todot); // Write Diagrams } public static void saveDotAndConvertToSVG(String dot,String path,String todot, String fileName) throws IOException{ File resultDir = new File(path); resultDir.mkdirs(); // Write Graphs FileString.toFile(new File(path+fileName+".dot"),dot); convertDot(resultDir.getAbsolutePath(),fileName,todot); // Write Diagrams } private static void convertDot(String path, String fileName,String todot){ try{ String dotPath = path+File.separator+fileName+".dot"; String destPath = path+File.separator+fileName+".svg"; String command = todot+" -Tsvg -v "+dotPath+" -o"+destPath ; Runtime.getRuntime().exec(command).waitFor(); } catch(Throwable e){ e.printStackTrace(); } } }
<reponame>lucaseverini/fuse-alto<gh_stars>0 #include "fileinfo.h" afs_fileinfo::afs_fileinfo() : m_parent(0), m_name(), m_st(), m_leader_page_vda(0), m_deleted(true), m_children() { } afs_fileinfo::afs_fileinfo(afs_fileinfo* parent, std::string name, struct stat st, int vda, bool deleted) : m_parent(parent), m_name(name), m_st(st), m_leader_page_vda(vda), m_deleted(deleted), m_children() { } afs_fileinfo::~afs_fileinfo() { while (!m_children.empty()) { afs_fileinfo* child = m_children.back(); m_children.pop_back(); delete child; } } afs_fileinfo* afs_fileinfo::parent() const { return m_parent; } std::string afs_fileinfo::name() const { return m_name; } struct stat* afs_fileinfo::st() { return &m_st; } const struct stat* afs_fileinfo::st() const { return &m_st; } page_t afs_fileinfo::leader_page_vda() const { return m_leader_page_vda; } bool afs_fileinfo::deleted() const { return m_deleted; } void afs_fileinfo::setDeleted(bool on) { m_deleted = on; } int afs_fileinfo::size() const { return (int)m_children.size(); } std::vector<afs_fileinfo*> afs_fileinfo::children() const { return m_children; } afs_fileinfo* afs_fileinfo::child(int idx) { return m_children.at(idx); } const afs_fileinfo* afs_fileinfo::child(int idx) const { return m_children.at(idx); } afs_fileinfo* afs_fileinfo::find(std::string name) { std::vector<afs_fileinfo*>::iterator it; for (it = m_children.begin(); it != m_children.end(); it++) { afs_fileinfo* node = *it; if (!node) continue; if (name == node->name()) return node; } return NULL; } ino_t afs_fileinfo::statIno() const { return m_st.st_ino; } time_t afs_fileinfo::statCtime() const { return m_st.st_ctime; } time_t afs_fileinfo::statMtime() const { return m_st.st_mtime; } time_t afs_fileinfo::statAtime() const { return m_st.st_atime; } uid_t afs_fileinfo::statUid() const { return m_st.st_uid; } gid_t afs_fileinfo::statGid() const { return m_st.st_gid; } mode_t 
afs_fileinfo::statMode() const { return m_st.st_mode; } size_t afs_fileinfo::statSize() const { return m_st.st_size; } size_t afs_fileinfo::statBlockSize() const { return m_st.st_blksize; } size_t afs_fileinfo::statBlocks() const { return m_st.st_blocks; } size_t afs_fileinfo::statNLink() const { return m_st.st_nlink; } void afs_fileinfo::setIno(ino_t ino) { m_st.st_ino = ino; } void afs_fileinfo::setStatCtime(time_t t) { m_st.st_ctime = t; } void afs_fileinfo::setStatMtime(time_t t) { m_st.st_mtime = t; } void afs_fileinfo::setStatAtime(time_t t) { m_st.st_atime = t; } void afs_fileinfo::setStatUid(uid_t uid) { m_st.st_uid = uid; } void afs_fileinfo::setStatGid(gid_t gid) { m_st.st_gid = gid; } void afs_fileinfo::setStatMode(mode_t mode) { m_st.st_mode = mode; } void afs_fileinfo::setStatSize(size_t size) { m_st.st_size = size; } void afs_fileinfo::setStatBlockSize(size_t blocksize) { m_st.st_blksize = (int)blocksize; } void afs_fileinfo::setStatBlocks(size_t blocks) { m_st.st_blocks = blocks; } void afs_fileinfo::setStatNLink(size_t count) { m_st.st_nlink = count; } void afs_fileinfo::erase(int pos, int count) { if (pos < 0 || pos >= (int)m_children.size()) return; int last = pos + count; if (last > (int)m_children.size()) last = (int)m_children.size(); m_children.erase(m_children.begin() + pos, m_children.begin() + last); m_st.st_nlink -= (last - pos); } void afs_fileinfo::erase(std::vector<afs_fileinfo*>::iterator pos) { m_children.erase(pos); } void afs_fileinfo::rename(std::string newname) { m_name = newname; } void afs_fileinfo::append(afs_fileinfo* info) { m_children.push_back(info); m_st.st_nlink += 1; } bool afs_fileinfo::remove(afs_fileinfo* child) { std::vector<afs_fileinfo*>::iterator it; int idx = 0; for (it = m_children.begin(); it != m_children.end(); it++) { afs_fileinfo* node = m_children.at(idx); if (child->name() == node->name()) { m_children.erase(it); return true; } idx++; } return false; }
from sys import stdin from collections import defaultdict input = stdin.readline # ~ T = int(input()) T = 1 for t in range(1,T + 1): n,k = map(int,input().split()) _input = [] for i in range(n): _input.append(int(input())) _input.sort() min_ans = 10000000000 for i in range(k - 1,len(_input)): min_ans = min(min_ans,_input[i] - _input[i - k + 1]) print(min_ans)
def install(self) -> None: self.waitfordevicelocal() netns = str(self.node.pid) self.net_client.device_ns(self.localname, netns) self.node.node_net_client.device_name(self.localname, self.name) self.node.node_net_client.device_up(self.name)
Trump explains why he didn’t cancel NAFTA, and still could if he wants to U.S. President Donald Trump insists he wasn’t bluffing about threatening to pull out of NAFTA this week. He says he was two or three days away from doing it — really. But he also says he had a change of heart during phone calls with the leaders of Canada and Mexico. “I like both of these gentlemen very much,” Trump said Thursday, recapping this week’s roller-coaster of drama involving the North American Free Trade Agreement. “I respect their countries very much. The relationship is very special. And I said, I will hold on the termination; let’s see if we can make it a fair deal.” He also hinted at a more substantive reason for not announcing a pullout of NAFTA: economic disruption. The mere rumour of it happening this week, floated by the White House, shaved almost two per cent off the Mexican peso and a third of a cent off the loonie, while businessmen and lawmakers were up in arms. Just the agriculture industry by itself produced enough scared quotes to fill a newscast. Pork producers called the idea of cancelling NAFTA financially devastating. Corn producers called it disastrous. The head of the U.S. grains lobby said he was shocked and distressed. Trump conceded that renegotiating NAFTA is simpler: “And so I decided (to do that) rather than terminating NAFTA, which would be a pretty big shock to the system.” He emphasized, however, that he retains the right to cancel NAFTA if he can’t get a deal. And that, according to numerous trade-watchers, is what this week was really about: leverage. It’s a view shared by some within the Canadian government — that Trump wants to flex some muscle entering the negotiations, and the threat to pull out is his strongest lever. That lever was brandished this week when stories started appearing
// Get returns a specific OID from the set. The memory will be released when the // set itself is released. func (s *OIDSet) Get(index int) (*OID, error) { if s == nil || index < 0 || index >= int(s.C_gss_OID_set.count) { return nil, fmt.Errorf("index %d out of bounds", index) } oid := s.NewOID() oid.C_gss_OID = C.get_oid_set_member(s.C_gss_OID_set, C.int(index)) return oid, nil }
<reponame>Orion992-cpu/api_database<filename>src/api2/api2.module.ts import { Module } from '@nestjs/common'; import { DatabaseModule } from 'src/database/database.module'; import { api2Providers } from './api2.providers'; import { Api2Service } from './api2.service'; import { Api2Controller } from './api2.controller'; @Module({ imports: [DatabaseModule], controllers: [Api2Controller], providers: [ ...api2Providers, Api2Service, ] }) export class Api2Module {}
/* Copyright (c) 1999 - 2010, Vodafone Group Services Ltd All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the Vodafone Group Services Ltd nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/

#ifndef HTTPFUNCTIONHANDLER_H
#define HTTPFUNCTIONHANDLER_H

// Includes
#include "config.h"
#include "StringUtility.h"
#include "HttpParserThreadConfig.h"
#include "ImageDraw.h"
#include "ServerProximityPackets.h"
#include "StringTableUtility.h"
#include "NotCopyable.h"

// Forward declarations
class ThreadRequestContainer;
class UserSearchRequestPacket;
class RouteRequest;
class SearchRequest;
class UserItem;
class UserStorage;
class SearchMatchLink;
class PacketContainer;
class ExpandItemID;
class ExpandStringItem;
class HttpHeader;
class HttpBody;
class HttpFunctionNote;      // See below
class HttpVariableContainer; // See below
class HttpParserThread;      // See HttpParserThread.h

//////////////////////////////////////////////////////////////
// HttpFunctionHandler
//////////////////////////////////////////////////////////////

/**
 * Contains the functions used to make dynamic pages.
 */
class HttpFunctionHandler: private NotCopyable {
   public:
      /**
       * Definition of an Html function.
       * \begin{itemize}
       *   \item The stringVector* is the vector containing the parameters.
       *   \item The int is the actual number of parameters in
       *         the stringVector*.
       *   \item The ParametersMap* contains all the parameters to the
       *         page.
       *   \item The first HttpHeader* is the Header of the request.
       *   \item The second HttpHeader* is the Header of the response.
       *   \item The first HttpBody* is the body of the request.
       *   \item The second HttpBody* is the body of the response.
       *   \item The HttpParserThread* is the caller of the function.
       *         Used to send requests.
       *   \item The HttpVariableContainer contains the variables passed
       *         between the functions, while parsing the page.
       * \end{itemize}
       */
      typedef bool (htmlFunction)( stringVector* params,
                                   int paramc,
                                   stringMap* paramsMap,
                                   HttpHeader* inHead,
                                   HttpHeader* outHead,
                                   HttpBody* inBody,
                                   HttpBody* outBody,
                                   HttpParserThread* myThread,
                                   HttpVariableContainer* myVar );

      /**
       * Constructs a new HttpFunctionHandler.
       */
      HttpFunctionHandler();

      /**
       * Removes allocated memory.
*/
      ~HttpFunctionHandler();

      /**
       * Returns a notice if the string func corresponds to a function.
       * @param func the string with the name of the function
       * @return a notice pointer. NULL if no such function.
       */
      HttpFunctionNote* getFunctionNoteOf(const MC2String& func);

      /**
       * Returns the variables for the html functions.
       * @return pointer to the HttpVariableContainer.
       */
      HttpVariableContainer* getVariableContainer();

   private:
      typedef map<MC2String, HttpFunctionNote*> FunctionsMap;

      /// Maps between function name and function pointer.
      FunctionsMap m_funcMap;

      /// The variables used while parsing a page.
      HttpVariableContainer* variableContainer;

      /**
       * Add a method.
       */
      void addMethod( const MC2String& name, HttpFunctionNote* n );

      /**
       * A test function.
       */
      static bool htmlTest( stringVector* params,
                            int paramc,
                            stringMap* paramsMap,
                            HttpHeader* inHead,
                            HttpHeader* outHead,
                            HttpBody* inBody,
                            HttpBody* outBody,
                            HttpParserThread* myThread,
                            HttpVariableContainer* myVar );
};

//////////////////////////////////////////////////////////////
// HttpVariableContainer
//////////////////////////////////////////////////////////////

/**
 * Contains variables for the HttpFunctions.
 */
class HttpVariableContainer {
   public:
      /**
       * Constructs a new container with undefined values.
       */
      HttpVariableContainer();

      /**
       * Destructs the variable container. Does nothing.
       */
      ~HttpVariableContainer();

      // Selected language
      StringTable::languageCode currentLanguage;

      // Sockets and Crypt
      bool https;

   private:
};

//////////////////////////////////////////////////////////////
// HttpFunctionNote
//////////////////////////////////////////////////////////////

/**
 * A notice to hold information about a httpFunction.
 */
class HttpFunctionNote {
   public:
      /**
       * A notice to a html-function.
       * @param theFunction is the html generating function.
       * @param minArguments is the minimum number of arguments needed
       *        in order to run the function.
*/
      HttpFunctionNote( HttpFunctionHandler::htmlFunction* theFunction,
                        uint32 minArguments );

      /**
       * The minimum allowed number of parameters to the function.
       * @returns the minargc.
       */
      uint32 getMinArguments();

      /**
       * Return the function of this notice.
       * @return the function.
       */
      HttpFunctionHandler::htmlFunction* getFunction();

   private:
      /// The function of this notice
      HttpFunctionHandler::htmlFunction* m_Function;

      /// The minimum number of arguments
      uint32 m_minArgc;
};

#endif // HTTPFUNCTIONHANDLER_H
def _parse_course_data(raw_page: str) -> [{str: str}]:
    # l_headers = d('th').contents()
    # d_headers = {item.lower(): l_headers.index(item) for item in l_headers}
    # course_data = []
    # for q_list in q_listings.items():
    #     data = {k: q_list('td').eq(v).text() for k, v in d_headers.items()}
    #     if 'time' in data:
    #         data['time'] = ' '.join(data['time'].split())
    #     course_data.append(data)
    # return course_data
    try:
        d = pq(raw_page)('.course-list')
    except lxml.etree.ParserError as e:
        raise WebSOCError(e)
    # adapted from Tristan Jogminas
    tr_listings = d("tr[valign*='top'], tr[align*=left]")
    l_headers = []
    dept = ''
    num = ''
    title = ''
    course_data = []
    for tr in tr_listings.items():
        tr_type = len(tr('td'))
        if tr_type == 1:
            # course title
            # clean up multi-line nbsp formatting from WebSOC
            dept = tr('td').contents()[0].strip().split('\xa0')[0][:-1]
            num = tr('td').contents()[0].strip().split('\xa0')[1][1:]
            title = tr.text().split('\n')[1]
        elif tr_type == 0:
            # table headers
            l_headers = [header.lower() for header in tr('th').contents()]
            d_headers = dict(enumerate(l_headers, 0))
        else:
            # course listing
            tr_data = {'dept': dept.upper(), 'num': num, 'title': title}
            tr_data.update((th, td.text()) for th, td in zip(l_headers, tr('td').items()))
            tr_data.update({v: tr('td').eq(k).text() for k, v in d_headers.items()})
            if 'time' in tr_data:
                tr_data['time'] = ' '.join(tr_data['time'].split())
            course_data.append(tr_data)
    return course_data
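The WebSOC parser above leans on one small technique: build a `{header name: column index}` map once from the table's header row, then pull each listing's cells by index. A minimal, PyQuery-free sketch of that idea follows; the header names and sample row are illustrative, not WebSOC's actual schema.

```python
def map_headers(headers):
    """Map lowercase header names to their column positions."""
    return {h.lower(): i for i, h in enumerate(headers)}

def parse_row(cells, d_headers):
    """Extract one row into a dict, collapsing whitespace in 'time'."""
    data = {name: cells[idx] for name, idx in d_headers.items()}
    if 'time' in data:
        # Same normalization as the parser: collapse runs of
        # whitespace (including newlines) into single spaces.
        data['time'] = ' '.join(data['time'].split())
    return data

d_headers = map_headers(['Code', 'Type', 'Time'])
row = parse_row(['34050', 'Lec', 'MWF   9:00-\n 9:50'], d_headers)
```

Building the index map once keeps the per-row work a plain dict comprehension, which is why the parser rebuilds `d_headers` only when it meets a new header row.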
// countRefList returns the number of references in this object
// before the first nil entry in ReferenceList.
func (o *Object) countRefList() int {
	for i, ref := range o.ReferenceList {
		if isNilReference(ref) {
			return i
		}
	}
	return len(o.ReferenceList)
}
#include<bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    while(scanf("%d",&n)==1)
    {
        char x[n+1];
        scanf("%s",x);
        // Count positions where a character equals its predecessor.
        int k=0;
        char kk=x[0];
        for(int i=1; i<n; i++)
        {
            if(x[i]==kk)
                k++;
            kk=x[i];
        }
        printf("%d\n",k);
    }
    return 0;
}
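What the C program computes is the number of adjacent equal pairs in the string. The same count in Python, as a standalone sketch:

```python
def adjacent_equal_pairs(s: str) -> int:
    """Count positions i where s[i] equals the previous character."""
    count = 0
    for prev, cur in zip(s, s[1:]):
        if cur == prev:
            count += 1
    return count
```

`zip(s, s[1:])` pairs each character with its successor, which sidesteps the off-by-one index bookkeeping of the C loop.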
/** * Run Mercurial command. * * @param reposRoot directory of the repository root * @param args {@code hg} command arguments */ static public void runHgCommand(File reposRoot, String ... args) { List<String> cmdargs = new ArrayList<>(); MercurialRepository repo = new MercurialRepository(); cmdargs.add(repo.getRepoCommand()); cmdargs.addAll(Arrays.asList(args)); Executor exec = new Executor(cmdargs, reposRoot); int exitCode = exec.exec(); if (exitCode != 0) { fail("hg command '" + cmdargs.toString() + "' failed." + "\nexit code: " + exitCode + "\nstdout:\n" + exec.getOutputString() + "\nstderr:\n" + exec.getErrorString()); } }
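The helper above follows a common pattern: assemble the argument list, execute in the repository root, and fail loudly with the captured stdout/stderr when the exit code is nonzero. A rough Python equivalent using the standard `subprocess` module; this is an illustrative sketch, not part of the OpenGrok test harness.

```python
import subprocess
import sys

def run_command(workdir, *args):
    """Run a command in workdir; raise with captured output on failure."""
    cmd = list(args)
    result = subprocess.run(cmd, cwd=workdir, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(
            "command %s failed\nexit code: %d\nstdout:\n%s\nstderr:\n%s"
            % (cmd, result.returncode, result.stdout, result.stderr)
        )
    return result.stdout
```

Including both output streams in the error message, as the Java version does, makes a failed VCS command diagnosable from the test log alone.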
#include<iostream>
#include<cstring>
#include<cstdio>
#include<algorithm>
using namespace std;

int main(){
    char s[100],t[100];
    scanf("%s%s",s,t);
    int len=strlen(s);
    sort(s,s+len);
    // Move the first nonzero digit to the front so the smallest
    // permutation has no leading zero (handles multiple zeros).
    if(len>1 && s[0]=='0'){
        for(int i=1;i<len;i++){
            if(s[i]!='0'){ swap(s[0],s[i]); break; }
        }
    }
    if(!strcmp(s,t))
        cout<<"OK"<<endl;
    else
        cout<<"WRONG_ANSWER"<<endl;
    return 0;
}
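The check being performed: the smallest permutation of `s`'s digits, with the first nonzero digit swapped to the front to avoid a leading zero, must equal `t`. A Python sketch of that rule; note it swaps in the first nonzero digit, which also covers inputs containing several zeros.

```python
def smallest_permutation(s: str) -> str:
    """Smallest digit permutation of s with no leading zero."""
    digits = sorted(s)
    if len(digits) > 1 and digits[0] == '0':
        # Swap in the first nonzero digit to avoid a leading zero.
        i = next(i for i, d in enumerate(digits) if d != '0')
        digits[0], digits[i] = digits[i], digits[0]
    return ''.join(digits)

def judge(s: str, t: str) -> str:
    return "OK" if smallest_permutation(s) == t else "WRONG_ANSWER"
```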
Without a doubt, the Mona Lisa painting by Leonardo Da Vinci is the most well known painting of all time. While I do like the painting and I am a huge fan of Leonardo, I think this Klingon version of the Mona Lisa is more my taste. I think she looks much better with the cranial ridges than she does without them. Now, the only thing this picture needs is to have her holding a Bat’leth! This brings me to the question, what painting do you think would look better if the person or people in the picture were turned into Klingons? Personally, I would like to see a Klingon version of The Oath of Horatii as it just seems fitting. Let us know in the comments! Pass this along to any Star Trek or Mona Lisa fans you know! Like us on Facebook too! [via @zenon814]
def _id(self) -> int: return int(self.comment['data-comment-id'])
/** * Copyright (c) 2011, 2012 * Claudio Kopper <[email protected]> * and the IceCube Collaboration <http://www.icecube.wisc.edu> * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. * * * $Id$ * * @file I3CLSimLightSourceParameterization.h * @version $Revision$ * @date $Date$ * @author Claudio Kopper */ #ifndef I3CLSIMLIGHTSOURCEPARAMETERIZATION_H_INCLUDED #define I3CLSIMLIGHTSOURCEPARAMETERIZATION_H_INCLUDED #include "icetray/I3TrayHeaders.h" #include "dataclasses/physics/I3Particle.h" #include "clsim/I3CLSimLightSource.h" #include <vector> // forward declarations struct I3CLSimLightSourceToStepConverter; I3_POINTER_TYPEDEFS(I3CLSimLightSourceToStepConverter); /** * @brief Defines a particle parameterization for fast simulation, * bypassing Geant4 */ struct I3CLSimLightSourceParameterization { public: I3CLSimLightSourceParameterization(); ~I3CLSimLightSourceParameterization(); // for I3Particle I3CLSimLightSourceParameterization(I3CLSimLightSourceToStepConverterPtr converter_, I3Particle::ParticleType forParticleType_, double fromEnergy_, double toEnergy_, bool needsLength_=false); I3CLSimLightSourceParameterization(I3CLSimLightSourceToStepConverterPtr converter_, const I3Particle &forParticleType_, double fromEnergy_, double toEnergy_, bool needsLength_=false); struct AllParticles_t {}; static const 
AllParticles_t AllParticles;

    I3CLSimLightSourceParameterization(I3CLSimLightSourceToStepConverterPtr converter_,
                                       const AllParticles_t &,
                                       double fromEnergy_, double toEnergy_,
                                       bool needsLength_=false);

    // for I3CLSimFlasherPulse
    I3CLSimLightSourceParameterization(I3CLSimLightSourceToStepConverterPtr converter_,
                                       I3CLSimFlasherPulse::FlasherPulseType forFlasherPulseType_);

    I3CLSimLightSourceToStepConverterPtr converter;
#ifndef I3PARTICLE_SUPPORTS_PDG_ENCODINGS
    I3Particle::ParticleType forParticleType;
#else
    int32_t forPdgEncoding;
#endif
    double fromEnergy, toEnergy;
    bool needsLength;
    bool catchAll;

    bool flasherMode;
    I3CLSimFlasherPulse::FlasherPulseType forFlasherPulseType;

    bool IsValidForParticle(const I3Particle &particle) const;
    bool IsValid(I3Particle::ParticleType type, double energy, double length=NAN) const;
#ifdef I3PARTICLE_SUPPORTS_PDG_ENCODINGS
    bool IsValidForPdgEncoding(int32_t encoding, double energy, double length=NAN) const;
#endif
    bool IsValidForLightSource(const I3CLSimLightSource &lightSource) const;

private:
};

inline bool operator==(const I3CLSimLightSourceParameterization &a, const I3CLSimLightSourceParameterization &b)
{
    if (a.converter != b.converter) return false;
    if (a.flasherMode != b.flasherMode) return false;

    if (!a.flasherMode) {
        // particle mode: compare the particle/energy configuration
        if (a.fromEnergy != b.fromEnergy) return false;
        if (a.toEnergy != b.toEnergy) return false;
#ifndef I3PARTICLE_SUPPORTS_PDG_ENCODINGS
        if (a.forParticleType != b.forParticleType) return false;
#else
        if (a.forPdgEncoding != b.forPdgEncoding) return false;
#endif
        if (a.needsLength != b.needsLength) return false;
        if (a.catchAll != b.catchAll) return false;
    } else {
        // flasher mode: only the pulse type matters
        if (a.forFlasherPulseType != b.forFlasherPulseType) return false;
    }

    return true;
}

typedef std::vector<I3CLSimLightSourceParameterization> I3CLSimLightSourceParameterizationSeries;

I3_POINTER_TYPEDEFS(I3CLSimLightSourceParameterization);
I3_POINTER_TYPEDEFS(I3CLSimLightSourceParameterizationSeries);

#endif //I3CLSIMLIGHTSOURCEPARAMETERIZATION_H_INCLUDED
//Brute force
//Runtime: 4 ms, faster than 50.00% of C++ online submissions for Detect Pattern of Length M Repeated K or More Times.
//Memory Usage: 8.8 MB, less than 25.00% of C++ online submissions for Detect Pattern of Length M Repeated K or More Times.
//time: O(N^3)
class Solution {
public:
    bool containsPattern(vector<int>& arr, int m, int k) {
        int n = arr.size();

        for(int i = 0; i+m*k <= n; ++i){
            vector<int> pattern(arr.begin()+i, arr.begin()+i+m);
            bool found = true;

            for(int t = 1; t < k; ++t){
                // cout << "[" << i+m*t << ", " << i+m*(t+1) << ")" << endl;
                if(pattern != vector<int>(arr.begin()+i+m*t, arr.begin()+i+m*(t+1))){
                    found = false;
                    break;
                }
            }

            // if(found){
            //     for(int e : pattern){
            //         cout << e << " ";
            //     }
            //     cout << endl;
            // }

            if(found) return true;
        }

        return false;
    }
};

//One Pass
//https://leetcode.com/problems/detect-pattern-of-length-m-repeated-k-or-more-times/discuss/819361/Simple-c%2B%2B-solution(0ms)-100-fast
//time: O(N)
class Solution {
public:
    bool containsPattern(vector<int>& arr, int m, int k) {
        int cnt = 0;
        int n = arr.size();

        for(int i = 0; i+m < n; ++i){
            if(arr[i] != arr[i+m]){
                cnt = 0;
            }else{
                ++cnt;
            }

            if(cnt == (k-1)*m) return true;
        }

        return false;
    }
};
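The trick in the one-pass solution is to compare `arr[i]` with `arr[i+m]` and count consecutive matches: once `(k-1)*m` matches are seen in a row, a length-`m` pattern has repeated `k` times. It translates directly to Python:

```python
def contains_pattern(arr, m, k):
    """One-pass detection of a length-m pattern repeated k times."""
    count = 0
    for i in range(len(arr) - m):
        if arr[i] != arr[i + m]:
            count = 0       # streak broken, restart
        else:
            count += 1
            if count == (k - 1) * m:
                return True
    return False
```

Comparing elements `m` apart avoids materializing candidate patterns at all, which is what drops the brute force from O(N^3) to O(N).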
def main(): config = Config.azrael() logging.basicConfig( filename=config.get_log_filename(), level=logging.INFO, format="%(relativeCreated)6d %(process)d %(message)s", ) extractor = Process(target=extract_and_load, args=[config]) transformer = Process(target=transform_and_load, args=[config]) extractor.start() logging.info("Extractor started.") transformer.start() logging.info("Transformer started.") extractor.join() transformer.join()
import { AutocompleteState } from './autocomplete.state'; import { FilterState } from './filter.state'; import { PlacesState } from './places.state'; export const states = [PlacesState, FilterState, AutocompleteState];
/*
 * Licensed to The Apereo Foundation under one or more contributor license
 * agreements. See the NOTICE file distributed with this work for
 * additional information regarding copyright ownership.
 *
 * The Apereo Foundation licenses this file to you under the Apache License,
 * Version 2.0 (the "License"); you may not use this file except in
 * compliance with the License. You may obtain a copy of the License at:
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
*/
package org.unitime.timetable.server.admin;

import java.util.ArrayList;
import java.util.List;

import org.cpsolver.ifs.util.ToolBox;
import org.hibernate.Session;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;
import org.unitime.localization.impl.Localization;
import org.unitime.timetable.defaults.UserProperty;
import org.unitime.timetable.gwt.resources.GwtMessages;
import org.unitime.timetable.gwt.shared.SimpleEditInterface;
import org.unitime.timetable.gwt.shared.SimpleEditInterface.Field;
import org.unitime.timetable.gwt.shared.SimpleEditInterface.FieldType;
import org.unitime.timetable.gwt.shared.SimpleEditInterface.ListItem;
import org.unitime.timetable.gwt.shared.SimpleEditInterface.PageName;
import org.unitime.timetable.gwt.shared.SimpleEditInterface.Record;
import org.unitime.timetable.model.ChangeLog;
import org.unitime.timetable.model.Department;
import org.unitime.timetable.model.DepartmentalInstructor;
import org.unitime.timetable.model.Roles;
import org.unitime.timetable.model.ChangeLog.Operation;
import org.unitime.timetable.model.ChangeLog.Source;
import org.unitime.timetable.model.dao.DepartmentDAO;
import org.unitime.timetable.model.dao.DepartmentalInstructorDAO; import org.unitime.timetable.model.dao.RolesDAO; import org.unitime.timetable.security.SessionContext; import org.unitime.timetable.security.rights.Right; import org.unitime.timetable.util.NameFormat; /** * @author <NAME> */ @Service("gwtAdminTable[type=instructorRole]") public class InstructorRoles implements AdminTable { protected static final GwtMessages MESSAGES = Localization.create(GwtMessages.class); @Override public PageName name() { return new PageName(MESSAGES.pageInstructorRole(), MESSAGES.pageInstructorRoles()); } @Override @PreAuthorize("checkPermission('InstructorRoles')") public SimpleEditInterface load(SessionContext context, Session hibSession) { List<ListItem> departments = new ArrayList<ListItem>(); List<ListItem> instructorRoles = new ArrayList<ListItem>(); instructorRoles.add(new ListItem("", "")); for (Roles role: Roles.findAllInstructorRoles()) { instructorRoles.add(new ListItem(role.getUniqueId().toString(), role.getAbbv())); } SimpleEditInterface data = new SimpleEditInterface( new Field(MESSAGES.fieldDepartment(), FieldType.list, 160, departments), new Field(MESSAGES.fieldInstructor(), FieldType.person, 300), new Field(MESSAGES.fieldRole(), FieldType.list, 300, instructorRoles) ); data.setSortBy(0, 1); boolean deptIndep = context.getUser().getCurrentAuthority().hasRight(Right.DepartmentIndependent); NameFormat nameFormat = NameFormat.fromReference(context.getUser().getProperty(UserProperty.NameFormat)); for (Department department: Department.getUserDepartments(context.getUser())) { if (!department.isAllowEvents()) continue; departments.add(new ListItem(department.getUniqueId().toString(), department.getLabel())); for (DepartmentalInstructor instructor: (List<DepartmentalInstructor>)hibSession.createQuery( "from DepartmentalInstructor i where i.department.uniqueId = :departmentId and i.externalUniqueId is not null order by i.lastName, i.firstName") .setLong("departmentId", 
department.getUniqueId()).list()) { if (deptIndep && instructor.getRole() == null) continue; Record r = data.addRecord(instructor.getUniqueId(), false); r.setField(0, instructor.getDepartment().getUniqueId().toString(), false); r.setField(1, null, false); r.addToField(1, instructor.getLastName() == null ? "" : instructor.getLastName()); r.addToField(1, instructor.getFirstName() == null ? "" : instructor.getFirstName()); r.addToField(1, instructor.getMiddleName() == null ? "" : instructor.getMiddleName()); r.addToField(1, instructor.getExternalUniqueId()); r.addToField(1, instructor.getEmail() == null ? "" : instructor.getEmail()); r.addToField(1, instructor.getAcademicTitle() == null ? "" : instructor.getAcademicTitle()); r.addToField(1, nameFormat.format(instructor)); r.setField(2, instructor.getRole() == null ? "" : instructor.getRole().getUniqueId().toString()); r.setDeletable(deptIndep); } } data.setEditable(context.hasPermission(Right.InstructorRoleEdit)); return data; } @Override @PreAuthorize("checkPermission('InstructorRoleEdit')") public void save(SimpleEditInterface data, SessionContext context, Session hibSession) { for (Department department: Department.getUserDepartments(context.getUser())) { if (!department.isAllowEvents()) continue; List<DepartmentalInstructor> instructors = (List<DepartmentalInstructor>)hibSession.createQuery( "from DepartmentalInstructor i where i.department.uniqueId = :departmentId and i.externalUniqueId is not null order by i.lastName, i.firstName") .setLong("departmentId", department.getUniqueId()).list(); for (DepartmentalInstructor instructor: instructors) { Record r = data.getRecord(instructor.getUniqueId()); if (r == null) delete(instructor, context, hibSession); else update(instructor, r, context, hibSession); } for (Record r: data.getNewRecords()) if (department.getUniqueId().toString().equals(r.getField(0))) save(department, instructors, r, context, hibSession); } } protected void save(Department department, 
List<DepartmentalInstructor> instructors, Record record, SessionContext context, Session hibSession) { if (department == null) return; if (record.getField(1) == null || record.getField(1).isEmpty()) return; String[] name = record.getValues(1); DepartmentalInstructor instructor = null; boolean add = true; if (instructors == null) { instructor = DepartmentalInstructor.findByPuidDepartmentId(name[3], department.getUniqueId()); } else { for (DepartmentalInstructor i: instructors) if (name[3].equals(i.getExternalUniqueId())) { instructor = i; add = false; break; } } if (instructor == null) { instructor = new DepartmentalInstructor(); instructor.setExternalUniqueId(name[3]); instructor.setLastName(name[0]); instructor.setFirstName(name[1]); instructor.setMiddleName(name[2].isEmpty() ? null : name[2]); instructor.setEmail(name.length <=4 || name[4].isEmpty() ? null : name[4]); instructor.setAcademicTitle(name.length <= 5 || name[5].isEmpty() ? null : name[5]); instructor.setIgnoreToFar(false); instructor.setDepartment(department); instructor.setRole(record.getField(2) == null || record.getField(2).isEmpty() ? null : RolesDAO.getInstance().get(Long.valueOf(record.getField(2)))); record.setUniqueId((Long)hibSession.save(instructor)); } else { record.setUniqueId(instructor.getUniqueId()); instructor.setRole(record.getField(2) == null || record.getField(2).isEmpty() ? null : RolesDAO.getInstance().get(Long.valueOf(record.getField(2)))); hibSession.update(instructor); } record.setDeletable(false); record.setField(0, record.getField(0), false); record.setField(1, record.getField(1), false); ChangeLog.addChange(hibSession, context, instructor, instructor.getName(DepartmentalInstructor.sNameFormatLastInitial) + ": " + (instructor.getRole() == null ? MESSAGES.noRole() : instructor.getRole().getAbbv()), Source.SIMPLE_EDIT, (add ? 
Operation.CREATE : Operation.UPDATE), null, instructor.getDepartment()); } @Override @PreAuthorize("checkPermission('InstructorRoleEdit')") public void save(Record record, SessionContext context, Session hibSession) { save(DepartmentDAO.getInstance().get(Long.valueOf(record.getField(0))), null, record, context, hibSession); } protected void update(DepartmentalInstructor instructor, Record record, SessionContext context, Session hibSession) { if (instructor == null) return; if (ToolBox.equals(instructor.getRole() == null ? "" : instructor.getRole().getUniqueId().toString(), record.getField(2))) return; instructor.setRole(record.getField(2) == null || record.getField(2).isEmpty() ? null : RolesDAO.getInstance().get(Long.valueOf(record.getField(2)))); hibSession.update(instructor); ChangeLog.addChange(hibSession, context, instructor, instructor.getName(DepartmentalInstructor.sNameFormatLastInitial) + ": " + (instructor.getRole() == null ? MESSAGES.noRole() : instructor.getRole().getAbbv()), Source.SIMPLE_EDIT, Operation.UPDATE, null, instructor.getDepartment()); } @Override @PreAuthorize("checkPermission('InstructorRoleEdit')") public void update(Record record, SessionContext context, Session hibSession) { update(DepartmentalInstructorDAO.getInstance().get(record.getUniqueId(), hibSession), record, context, hibSession); } protected void delete(DepartmentalInstructor instructor, SessionContext context, Session hibSession) { if (instructor == null) return; if (instructor.getRole() == null) return; instructor.setRole(null); hibSession.update(instructor); ChangeLog.addChange(hibSession, context, instructor, instructor.getName(DepartmentalInstructor.sNameFormatLastInitial) + ": " + MESSAGES.noRole(), Source.SIMPLE_EDIT, Operation.DELETE, null, instructor.getDepartment()); } @Override @PreAuthorize("checkPermission('InstructorRoleEdit')") public void delete(Record record, SessionContext context, Session hibSession) { 
delete(DepartmentalInstructorDAO.getInstance().get(record.getUniqueId(), hibSession), context, hibSession); } }
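The `save(SimpleEditInterface, ...)` method above is a reconciliation loop: existing database rows with no matching record are deleted, rows present in both are updated, and leftover new records are created. The shape of that diff, reduced to plain dictionaries as an illustrative sketch:

```python
def reconcile(existing, incoming):
    """Diff existing {id: value} against incoming {id: value}.

    Mirrors how save() walks instructors: rows missing from the
    incoming data are deleted, rows present in both are updated,
    and incoming ids with no existing row are created.
    """
    updates = {k: incoming[k] for k in existing if k in incoming}
    deletes = [k for k in existing if k not in incoming]
    creates = {k: v for k, v in incoming.items() if k not in existing}
    return updates, deletes, creates
```

Driving the loop from the existing rows (as the Java code does) guarantees every stale row is visited exactly once before new records are inserted.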
package flipparser

import (
	"context"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"math/big"
	"strings"
	"time"

	"github.com/mohae/deepcopy"

	"../../contracts"
	"../../db"
	"../../eth"
	"../../global"
	"../../util"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/inconshreveable/log15"
)

const (
	topicKick = "c84ce3a1"
	topicTend = "4b43ed12"
	topicDent = "5ff3a382"
	topicDeal = "c959c42b"
	topicFile = "29ae8114"

	fileTTL = "74746c0000000000000000000000000000000000000000000000000000000000"
	fileTAU = "7461750000000000000000000000000000000000000000000000000000000000"
)

// KickEvent defines a kick event
type KickEvent struct {
	ID       uint64 `json:"id"`
	Lot      string `json:"lot"`
	Bid      string `json:"bid"`
	Tab      string `json:"tab"`
	Usr      string `json:"usr"`
	Gal      string `json:"gal"`
	BlockNum uint64 `json:"blockNum"`
	TxIndex  uint64 `json:"txIndex"`
}

// BidEvent defines a bid event
type BidEvent struct {
	ID       uint64 `json:"id"`
	Lot      string `json:"lot"`
	Bid      string `json:"bid"`
	Usr      string `json:"usr"`
	BlockNum uint64 `json:"blockNum"`
	TxIndex  uint64 `json:"txIndex"`
}

// TTLEvent defines a ttl change
type TTLEvent struct {
	TTL      string `json:"ttl"`
	BlockNum uint64 `json:"blockNum"`
	TxIndex  uint64 `json:"txIndex"`
}

// TAUEvent defines a tau change
type TAUEvent struct {
	TAU      string `json:"tau"`
	BlockNum uint64 `json:"blockNum"`
	TxIndex  uint64 `json:"txIndex"`
}

// Auction defines an auction
type Auction struct {
	ID    uint64 `json:"id"`
	Phase string `json:"phase"`
	Lot   string `json:"lot"`
	Bid   string `json:"bid"`
	Tab   string `json:"tab"`
	Guy   string `json:"guy"`
	Tic   uint64 `json:"tic"`
	End   uint64 `json:"end"`
	Usr   string `json:"usr"`
	Gal   string `json:"gal"`
}

// History defines an auction history
type History struct {
	ID  uint64 `json:"id"`
	Lot string `json:"lot"`
	Bid string `json:"bid"`
	Tab string `json:"tab"`
	Guy string `json:"guy"`
	End uint64 `json:"end"`
}

// State defines the state of
the flip parser type State struct { LastBlock uint64 `json:"lastBlock"` Auctions map[uint64]Auction `json:"auctions"` Histories map[uint64]History `json:"histories"` KickEvents map[uint64]KickEvent `json:"kickEvents"` LastBidEvents map[uint64]BidEvent `json:"bidEvents"` TTLs []TTLEvent `json:"ttls"` TAUs []TAUEvent `json:"taus"` } // FlipParser defines a flip parser type FlipParser struct { log log15.Logger db db.DB token string isInitialized bool startBlockNum *big.Int contract contracts.FlipContract savedStates []State state State stateChan chan State kickChan chan KickEvent } // New creates a new flip parser instance func New( token string, startBlockNum *big.Int, contract contracts.FlipContract, stateChan chan State, kickChan chan KickEvent, ) *FlipParser { return &FlipParser{ log: log15.New("module", fmt.Sprintf("flip/%s", token)), db: db.New(fmt.Sprintf("flip_%s", token)), token: token, isInitialized: false, contract: contract, startBlockNum: startBlockNum, savedStates: []State{}, stateChan: stateChan, kickChan: kickChan, } } // Run starts the main execution routine func (p *FlipParser) Run() { // Init state from disk dbContent := p.db.Read() err := json.Unmarshal([]byte(dbContent), &p.state) if err != nil { p.log.Info("could not parse db. 
creating default content") p.state = State{ Auctions: make(map[uint64]Auction), Histories: make(map[uint64]History), KickEvents: make(map[uint64]KickEvent), LastBidEvents: make(map[uint64]BidEvent), TTLs: []TTLEvent{ TTLEvent{ TTL: "10800", BlockNum: 0, TxIndex: 0, }, }, TAUs: []TAUEvent{ TAUEvent{ TAU: "172800", BlockNum: 0, TxIndex: 0, }, }, } if global.IsWebtest() { p.state.LastBlock = 0 } else { p.state.LastBlock = 8900000 - 1 } } // Subscribe to new blocks newBlockSubChan := make(chan *types.Header) _, err = eth.GetWSClient().SubscribeNewHead(context.Background(), newBlockSubChan) if err != nil { p.log.Crit("could not subscribe to block headers", "err", err.Error()) panic("") } p.onNewBlock(p.startBlockNum) // Listen to new blocks for { select { case head := <-newBlockSubChan: p.onNewBlock(head.Number) } } } // onNewBlock runs when a new block is received func (p *FlipParser) onNewBlock(blockNumBig *big.Int) { stateCpy := deepcopy.Copy(p.state).(State) blockNum := blockNumBig.Uint64() var startParseBlock uint64 var endParseBlock uint64 if blockNum > p.state.LastBlock { if p.haveSavedState(p.state.LastBlock - 5) { startParseBlock = p.state.LastBlock - 4 endParseBlock = blockNum p.revertState(startParseBlock) } else { startParseBlock = p.state.LastBlock + 1 endParseBlock = blockNum } } else { if blockNum != p.startBlockNum.Uint64() { // Chain reorg. 
Revert state to 1 block before reorg p.log.Info("chain reorg detected", "old", p.state.LastBlock, "new", blockNum) p.revertState(blockNum) } startParseBlock = blockNum endParseBlock = blockNum } err := p.parseBlockRange(startParseBlock, endParseBlock) if err != nil { p.log.Error("failed parsing blocks", "from", startParseBlock, "to", endParseBlock, "err", err.Error()) p.state = stateCpy time.Sleep(time.Second * 2) p.onNewBlock(blockNumBig) return } p.updateAuctionPhases() p.state.LastBlock = endParseBlock p.isInitialized = true p.saveState() p.stateChan <- deepcopy.Copy(p.state).(State) } func (p *FlipParser) parseBlockRange(startBlock uint64, endBlock uint64) error { // During initial parsing there will be a large range // Split it down to maximum of 100k blocks per request blockRange := endBlock - startBlock if blockRange > 100000 { err := p.parseBlockRange(startBlock, startBlock+100000-1) if err != nil { return err } err = p.parseBlockRange(startBlock+100000, endBlock) if err != nil { return err } return nil } p.log.Info("parsing blocks", "from", startBlock, "to", endBlock) err := p.parseEventsInBlocks(startBlock, endBlock) if err != nil { return err } return nil } func (p *FlipParser) parseEventsInBlocks(startBlock uint64, endBlock uint64) error { events, err := p.getEventsInBlocks(startBlock, endBlock) if err != nil { blockRange := endBlock - startBlock if blockRange == 0 { // we cannot reduce blockrange any further return fmt.Errorf("failed to parse events: %s", err.Error()) } p.log.Warn("too many events in blocks") halfRange := blockRange / 2 err := p.parseEventsInBlocks(startBlock, startBlock+halfRange) if err != nil { return err } err = p.parseEventsInBlocks(startBlock+halfRange+1, endBlock) if err != nil { return err } } updates := []uint64{} deals := []uint64{} for _, ev := range events { topic := hex.EncodeToString(ev.Topics[0].Bytes())[0:8] if strings.Compare(topic, topicKick) == 0 { updates = append(updates, p.parseKickEvent(ev)) } else if 
strings.Compare(topic, topicTend) == 0 { updates = append(updates, p.parseTendOrDentEvent(ev)) } else if strings.Compare(topic, topicDent) == 0 { updates = append(updates, p.parseTendOrDentEvent(ev)) } else if strings.Compare(topic, topicDeal) == 0 { deals = append(deals, p.parseDealEvent(ev)) } else if strings.Compare(topic, topicFile) == 0 { p.parseFileEvent(ev) } } updates = util.FilterUint64(updates, func(index int, elem uint64) bool { return util.IndexOfUint64(updates, elem) == index && util.IndexOfUint64(deals, elem) == -1 }) deals = util.FilterUint64(deals, func(index int, elem uint64) bool { return util.IndexOfUint64(deals, elem) == index }) err = p.updateAuctions(updates) if err != nil { return err } err = p.makeAuctionsHistory(deals) if err != nil { return err } return nil } func (p *FlipParser) updateAuctions(ids []uint64) error { maxUpdates := 100 for i := 0; i < len(ids); i += maxUpdates { ch := make(chan error) for j := 0; j < maxUpdates && i+j < len(ids); j++ { id := ids[i+j] go func() { ch <- p.updateAuction(uint64(id)) }() } var err error = nil for j := 0; j < maxUpdates && i+j < len(ids); j++ { e := <-ch if e != nil { err = e } } if err != nil { return err } } return nil } func (p *FlipParser) updateAuction(id uint64) error { p.log.Info("updating auction", "id", id) auc, err := p.contract.ContractWS.Bids(nil, new(big.Int).SetUint64(id)) if err != nil { return fmt.Errorf("failed to update auction %d: %s", id, err.Error()) } a := Auction{ ID: id, Lot: auc.Lot.String(), Bid: auc.Bid.String(), Tab: auc.Tab.String(), Guy: auc.Guy.String(), Tic: auc.Tic.Uint64(), End: auc.End.Uint64(), Usr: auc.Usr.String(), Gal: auc.Gal.String(), } a.Phase = p.makeAuctionPhase(a) p.state.Auctions[id] = a return nil } func (p *FlipParser) makeAuctionsHistory(ids []uint64) error { maxUpdates := 100 for i := 0; i < len(ids); i += maxUpdates { ch := make(chan error) for j := 0; j < maxUpdates && i+j < len(ids); j++ { id := ids[i+j] go func() { ch <- 
p.makeAuctionHistory(uint64(id)) }() } var err error = nil for j := 0; j < maxUpdates && i+j < len(ids); j++ { e := <-ch if e != nil { err = e } } if err != nil { return err } } return nil } func (p *FlipParser) makeAuctionHistory(id uint64) error { p.log.Info("making history", "id", id) kickEvent, ok := p.state.KickEvents[id] if !ok { panic(fmt.Sprintf("have no kick event for auction %d", id)) } kickBlock := kickEvent.BlockNum kickTxIndex := kickEvent.TxIndex lastBidEvent, ok := p.state.LastBidEvents[id] if !ok { panic(fmt.Sprintf("have no last bid event for auction %d", id)) } lastBidEventBlock := lastBidEvent.BlockNum lastBidEventTxIndex := lastBidEvent.TxIndex tau := p.getTAUAtTx(kickBlock, kickTxIndex) ttl := p.getTTLAtTx(lastBidEventBlock, lastBidEventTxIndex) tauBlock, err := eth.GetWSClient().BlockByNumber(context.Background(), new(big.Int).SetUint64(kickBlock)) if err != nil { return fmt.Errorf("could not request tau block: %s", err.Error()) } tauTS := new(big.Int).SetUint64(tauBlock.Header().Time) ttlBlock, err := eth.GetWSClient().BlockByNumber(context.Background(), new(big.Int).SetUint64(lastBidEventBlock)) if err != nil { return fmt.Errorf("could not request ttl block timestamp: %s", err.Error()) } ttlTS := new(big.Int).SetUint64(ttlBlock.Header().Time) var end *big.Int if new(big.Int).Add(tauTS, tau).Cmp(new(big.Int).Add(ttlTS, ttl)) < 0 { end = new(big.Int).Add(tauTS, tau) } else { end = new(big.Int).Add(ttlTS, ttl) } p.state.Histories[id] = History{ ID: id, Lot: lastBidEvent.Lot, Bid: lastBidEvent.Bid, Tab: kickEvent.Tab, Guy: lastBidEvent.Usr, End: end.Uint64(), } delete(p.state.Auctions, id) delete(p.state.LastBidEvents, id) delete(p.state.KickEvents, id) return nil } func (p *FlipParser) getEventsInBlocks(startBlock uint64, endBlock uint64) ([]types.Log, error) { query := ethereum.FilterQuery{ FromBlock: big.NewInt(int64(startBlock)), ToBlock: big.NewInt(int64(endBlock)), Addresses: []common.Address{ p.contract.Address, }, } events, err := 
eth.GetHTTPClient().FilterLogs(context.Background(), query) if err != nil { return nil, fmt.Errorf("failed to get events: %s", err.Error()) } return events, nil } func (p *FlipParser) parseKickEvent(event types.Log) uint64 { usr := common.HexToAddress("0x" + hex.EncodeToString(event.Topics[1].Bytes())[24:]).String() gal := common.HexToAddress("0x" + hex.EncodeToString(event.Topics[2].Bytes())[24:]).String() encodedData := hex.EncodeToString(event.Data) idBig, ok := new(big.Int).SetString(encodedData[0:64], 16) if !ok { panic("failed to parse kick id to big") } lotBig, ok := new(big.Int).SetString(encodedData[64:128], 16) if !ok { panic("failed to parse kick lot to big") } bidBig, ok := new(big.Int).SetString(encodedData[128:192], 16) if !ok { panic("failed to parse kick bid to big") } tabBig, ok := new(big.Int).SetString(encodedData[192:256], 16) if !ok { panic("failed to parse kick tab to big") } parsed := KickEvent{ ID: idBig.Uint64(), Lot: lotBig.String(), Bid: bidBig.String(), Tab: tabBig.String(), Usr: usr, Gal: gal, BlockNum: event.BlockNumber, TxIndex: uint64(event.TxIndex), } p.state.KickEvents[parsed.ID] = parsed fmt.Printf("new kick event: %+v\n", parsed) if p.isInitialized { p.kickChan <- parsed } return parsed.ID } func (p *FlipParser) parseTendOrDentEvent(event types.Log) uint64 { usr := common.HexToAddress("0x" + hex.EncodeToString(event.Topics[1].Bytes())[24:]).String() idHex := hex.EncodeToString(event.Topics[2].Bytes()) lotHex := hex.EncodeToString(event.Topics[3].Bytes()) encodedData := hex.EncodeToString(event.Data) idBig, ok := new(big.Int).SetString(idHex, 16) if !ok { panic("failed to parse tend/dent id to big") } lotBig, ok := new(big.Int).SetString(lotHex, 16) if !ok { panic("failed to parse tend/dent lot to big") } bidBig, ok := new(big.Int).SetString(encodedData[8+256:8+256+64], 16) if !ok { panic("failed to parse tend/dent bid to big") } parsed := BidEvent{ ID: idBig.Uint64(), Lot: lotBig.String(), Bid: bidBig.String(), Usr: usr, 
BlockNum: event.BlockNumber, TxIndex: uint64(event.TxIndex), } fmt.Printf("new tend or dent event: %+v\n", parsed) lastEvent, has := p.state.LastBidEvents[parsed.ID] if !has || lastEvent.BlockNum < parsed.BlockNum || (lastEvent.BlockNum == parsed.BlockNum && lastEvent.TxIndex < parsed.TxIndex) { p.state.LastBidEvents[parsed.ID] = parsed } return parsed.ID } func (p *FlipParser) parseDealEvent(event types.Log) uint64 { idHex := hex.EncodeToString(event.Topics[2].Bytes()) idBig, ok := new(big.Int).SetString(idHex, 16) if !ok { panic("failed to parse deal id to big") } fmt.Printf("new deal event: %s\n", idHex) return idBig.Uint64() } func (p *FlipParser) parseFileEvent(event types.Log) { what := hex.EncodeToString(event.Topics[2].Bytes()) valueHex := hex.EncodeToString(event.Topics[3].Bytes()) valueBig, ok := new(big.Int).SetString(valueHex, 16) if !ok { panic("failed to parse file value to big") } if strings.Compare(what, fileTTL) == 0 { p.state.TTLs = append(p.state.TTLs, TTLEvent{ TTL: valueBig.String(), BlockNum: event.BlockNumber, TxIndex: uint64(event.TxIndex), }) } else if strings.Compare(what, fileTAU) == 0 { p.state.TAUs = append(p.state.TAUs, TAUEvent{ TAU: valueBig.String(), BlockNum: event.BlockNumber, TxIndex: uint64(event.TxIndex), }) } } func (p *FlipParser) updateAuctionPhases() { for k := range p.state.Auctions { auc := p.state.Auctions[k] auc.Phase = p.makeAuctionPhase(auc) p.state.Auctions[k] = auc } } func (p *FlipParser) makeAuctionPhase(auc Auction) string { phase := "DAI" if strings.Compare(auc.Bid, auc.Tab) == 0 { phase = "GEM" } currentTS := uint64(time.Now().Unix()) if auc.End == 0 { phase = "DEL" } else if auc.Tic != 0 && (auc.End < currentTS || auc.Tic < currentTS) { phase = "FIN" } else if auc.Tic == 0 && auc.End < currentTS { phase = "RES" } return phase } func (p *FlipParser) haveSavedState(blockNum uint64) bool { for _, s := range p.savedStates { if s.LastBlock == blockNum { return true } } return false } func (p *FlipParser) 
saveState() { lastBlock := p.state.LastBlock filtered := []State{} for i, s := range p.savedStates { if s.LastBlock < lastBlock && s.LastBlock+10 >= lastBlock { filtered = append(filtered, p.savedStates[i]) } } filtered = append(filtered, deepcopy.Copy(p.state).(State)) p.savedStates = filtered p.db.WriteJSON(p.state) } func (p *FlipParser) revertState(blockNum uint64) { for i, s := range p.savedStates { if s.LastBlock == blockNum-1 { p.state = deepcopy.Copy(p.savedStates[i]).(State) return } } p.log.Crit("could not revert state", "block", p.state.LastBlock, "revert", blockNum) panic("") } func (p *FlipParser) getTTLAtTx(blockNum uint64, txIndex uint64) *big.Int { bestTTL := new(big.Int).SetInt64(0) bestBlockNum := -1 bestTxIndex := -1 for _, v := range p.state.TTLs { if v.BlockNum < blockNum || (v.BlockNum == blockNum && v.TxIndex < txIndex) { if int(v.BlockNum) > bestBlockNum || (int(v.BlockNum) == bestBlockNum && int(v.TxIndex) > bestTxIndex) { var ok bool bestTTL, ok = new(big.Int).SetString(v.TTL, 10) if !ok { panic("could not create big int from ttl") } bestBlockNum = int(v.BlockNum) bestTxIndex = int(v.TxIndex) } } } return bestTTL } func (p *FlipParser) getTAUAtTx(blockNum uint64, txIndex uint64) *big.Int { bestTAU := new(big.Int).SetInt64(0) bestBlockNum := -1 bestTxIndex := -1 for _, v := range p.state.TAUs { if v.BlockNum < blockNum || (v.BlockNum == blockNum && v.TxIndex < txIndex) { if int(v.BlockNum) > bestBlockNum || (int(v.BlockNum) == bestBlockNum && int(v.TxIndex) > bestTxIndex) { var ok bool bestTAU, ok = new(big.Int).SetString(v.TAU, 10) if !ok { panic("could not create big int from tau") } bestBlockNum = int(v.BlockNum) bestTxIndex = int(v.TxIndex) } } } return bestTAU }
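The Go parser above handles oversized log queries by recursively halving the block range whenever the node rejects a request (`parseEventsInBlocks`). A minimal Python sketch of that bisection strategy; the `fetch_logs` stub and its 2-block limit are assumptions standing in for an RPC node's "too many results" error:

```python
def fetch_logs(start, end):
    # Stub provider: refuses ranges spanning more than 2 blocks,
    # mimicking an RPC node rejecting an oversized query.
    if end - start > 1:
        raise RuntimeError("too many results")
    return [f"log@{b}" for b in range(start, end + 1)]

def fetch_logs_bisecting(start, end):
    """Fetch logs for [start, end], halving the range on failure."""
    try:
        return fetch_logs(start, end)
    except RuntimeError:
        if end == start:
            raise  # a single block cannot be split any further
        mid = start + (end - start) // 2
        return fetch_logs_bisecting(start, mid) + fetch_logs_bisecting(mid + 1, end)

print(fetch_logs_bisecting(0, 7))  # logs for blocks 0..7, in order
```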
<gh_stars>1-10 import { expect } from 'chai' import { repeat } from '../../../Implementation/StringHelpers' import * as Up from '../../../Main' import { insideDocumentAndParagraph } from '../Helpers' // For context, please see: http://stackstatus.net/post/147710624694/outage-postmortem-july-20-2016 const lotsOfSpaces = repeat(' ', 10000) context('A long string of whitespace should never cause cause the parser to hang:', () => { specify('Between words', () => { expect(Up.parse('Hear' + lotsOfSpaces + 'me?')).to.deep.equal( insideDocumentAndParagraph([ new Up.Text('Hear' + lotsOfSpaces + 'me?') ])) }) context('In inline code:', () => { specify('As the sole content', () => { expect(Up.parse('`' + lotsOfSpaces + '`')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineCode(lotsOfSpaces) ])) }) specify('In the middle of other code', () => { expect(Up.parse('`odd' + lotsOfSpaces + 'code`')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineCode('odd' + lotsOfSpaces + 'code') ])) }) specify('At the start, directly followed by backticks', () => { expect(Up.parse('`' + lotsOfSpaces + '``code`` `')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineCode(lotsOfSpaces.slice(1) + '``code``') ])) }) specify('At the end, directly following backticks', () => { expect(Up.parse('` ``code``' + lotsOfSpaces + '`')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineCode('``code``' + lotsOfSpaces.slice(1)) ])) }) }) specify('Before a footnote', () => { const markup = "I don't eat cereal." 
+ lotsOfSpaces + '(^Well, I do, but I pretend not to.)' const footnote = new Up.Footnote([ new Up.Text('Well, I do, but I pretend not to.') ], { referenceNumber: 1 }) expect(Up.parse(markup)).to.deep.equal( new Up.Document([ new Up.Paragraph([ new Up.Text("I don't eat cereal."), footnote ]), new Up.FootnoteBlock([footnote]) ])) }) specify('Before an unmatched footnote start delimiter', () => { expect(Up.parse('Still typing' + lotsOfSpaces + '[^')).to.deep.equal( insideDocumentAndParagraph([ new Up.Text('Still typing' + lotsOfSpaces + '[^') ])) }) specify("Between an otherwise-valid link's bracketed content and the unmatched open bracket for its URL", () => { expect(Up.parse('(Unreasonable)' + lotsOfSpaces + '(https://')).to.deep.equal( insideDocumentAndParagraph([ new Up.NormalParenthetical([ new Up.Text('(Unreasonable)') ]), new Up.Text(lotsOfSpaces + '(https://') ])) }) specify('Before an unmatched start delimiter from a rich bracketed convention', () => { expect(Up.parse('Still typing' + lotsOfSpaces + '[SPOILER:')).to.deep.equal( insideDocumentAndParagraph([ new Up.Text('Still typing' + lotsOfSpaces + '[SPOILER:') ])) }) specify("Between a link's bracketed content and its bracketed URL", () => { expect(Up.parse('[Hear me?]' + lotsOfSpaces + '(example.com)')).to.deep.equal( insideDocumentAndParagraph([ new Up.Link([ new Up.Text('Hear me?') ], 'https://example.com') ])) }) specify("At the end of a link's content", () => { expect(Up.parse('[Hear me?' + lotsOfSpaces + '](example.com)')).to.deep.equal( insideDocumentAndParagraph([ new Up.Link([ new Up.Text('Hear me?' 
+ lotsOfSpaces) ], 'https://example.com') ])) }) context("In a link's URL:", () => { specify('At the start', () => { expect(Up.parse('[Hear me?](' + lotsOfSpaces + 'example.com)')).to.deep.equal( insideDocumentAndParagraph([ new Up.Link([ new Up.Text('Hear me?') ], 'https://example.com') ])) }) specify('At the end', () => { expect(Up.parse('[Hear me?](example.com' + lotsOfSpaces + ')')).to.deep.equal( insideDocumentAndParagraph([ new Up.Link([ new Up.Text('Hear me?') ], 'https://example.com') ])) }) specify('Before an open bracket', () => { expect(Up.parse('[Hear me?](example.com?some=ridiculous-' + lotsOfSpaces + '[arg])')).to.deep.equal( insideDocumentAndParagraph([ new Up.Link([ new Up.Text('Hear me?') ], 'https://example.com?some=ridiculous-' + lotsOfSpaces + '[arg]') ])) }) }) context("In a media convention's description:", () => { specify('At the start', () => { expect(Up.parse('[image:' + lotsOfSpaces + 'ear](example.com/ear.svg)')).to.deep.equal( new Up.Document([ new Up.Image('ear', 'https://example.com/ear.svg') ])) }) specify('At the end', () => { expect(Up.parse('[image: ear' + lotsOfSpaces + '](example.com/ear.svg)')).to.deep.equal( new Up.Document([ new Up.Image('ear', 'https://example.com/ear.svg') ])) }) specify('Before an open bracket', () => { expect(Up.parse('[image: haunted' + lotsOfSpaces + '[house]](http://example.com/?state=NE)')).to.deep.equal( new Up.Document([ new Up.Image('haunted' + lotsOfSpaces + '[house]', 'http://example.com/?state=NE') ])) }) }) specify("Between a media convention's bracketed description and its bracketed URL", () => { expect(Up.parse('[image: ear]' + lotsOfSpaces + '(example.com/ear.svg)')).to.deep.equal( new Up.Document([ new Up.Image('ear', 'https://example.com/ear.svg') ])) }) context("In a media convention's URL:", () => { specify('At the start', () => { expect(Up.parse('[image: ear](' + lotsOfSpaces + 'example.com/ear.svg)')).to.deep.equal( new Up.Document([ new Up.Image('ear', 'https://example.com/ear.svg') 
])) }) specify('At the end', () => { expect(Up.parse('[image: ear](example.com/ear.svg' + lotsOfSpaces + ')')).to.deep.equal( new Up.Document([ new Up.Image('ear', 'https://example.com/ear.svg') ])) }) specify('Before an open bracket', () => { expect(Up.parse('[image: ear](example.com/ear.svg?some=ridiculous-' + lotsOfSpaces + '[arg])')).to.deep.equal( new Up.Document([ new Up.Image('ear', 'https://example.com/ear.svg?some=ridiculous-' + lotsOfSpaces + '[arg]') ])) }) }) specify("Between a linkified media convention's bracketed URL and its linkifying URL", () => { expect(Up.parse('[image: ear] (example.com/ear.svg)' + lotsOfSpaces + '(example.com)')).to.deep.equal( new Up.Document([ new Up.Link([ new Up.Image('ear', 'https://example.com/ear.svg') ], 'https://example.com') ])) }) context("In a linkified media convention's linkifying URL:", () => { specify('At the start', () => { expect(Up.parse('[image: ear] (example.com/ear.svg)(' + lotsOfSpaces + 'example.com)')).to.deep.equal( new Up.Document([ new Up.Link([ new Up.Image('ear', 'https://example.com/ear.svg') ], 'https://example.com') ])) }) specify('At the end', () => { expect(Up.parse('[image: ear] (example.com/ear.svg)(example.com' + lotsOfSpaces + ')')).to.deep.equal( new Up.Document([ new Up.Link([ new Up.Image('ear', 'https://example.com/ear.svg') ], 'https://example.com') ])) }) specify('Before an open bracketURL', () => { expect(Up.parse('[image: ear] (example.com/ear.svg)(example.com?some=ridiculous-' + lotsOfSpaces + '[arg])')).to.deep.equal( new Up.Document([ new Up.Link([ new Up.Image('ear', 'https://example.com/ear.svg') ], 'https://example.com?some=ridiculous-' + lotsOfSpaces + '[arg]') ])) }) }) specify("Between a non-media convention's bracketed URL and its linkifying URL", () => { expect(Up.parse('[SPOILER: His ear grew back!]' + lotsOfSpaces + '(example.com)')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineRevealable([ new Up.Link([ new Up.Text('His ear grew back!') ], 
'https://example.com') ]) ])) }) context("In a non-media convention's linkifying URL:", () => { specify('At the start', () => { expect(Up.parse('[SPOILER: His ear grew back!](' + lotsOfSpaces + 'example.com)')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineRevealable([ new Up.Link([ new Up.Text('His ear grew back!') ], 'https://example.com') ]) ])) }) specify('At the end', () => { expect(Up.parse('[SPOILER: His ear grew back!](example.com' + lotsOfSpaces + ')')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineRevealable([ new Up.Link([ new Up.Text('His ear grew back!') ], 'https://example.com') ]) ])) }) specify('Before a an open bracket', () => { expect(Up.parse('[SPOILER: His ear grew back!](example.com?some=ridiculous-' + lotsOfSpaces + '[arg])')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineRevealable([ new Up.Link([ new Up.Text('His ear grew back!') ], 'https://example.com?some=ridiculous-' + lotsOfSpaces + '[arg]') ]) ])) }) }) specify('Between the delimiters of a a rich convention', () => { expect(Up.parse('(SPOILER:' + lotsOfSpaces + ')')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineRevealable([]) ])) }) context('In a rich convention:', () => { specify('At the start', () => { expect(Up.parse('[SPOILER:' + lotsOfSpaces + 'He did not die.]')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineRevealable([ new Up.Text('He did not die.') ]) ])) }) specify('At the end', () => { expect(Up.parse('[SPOILER: He did not die.' + lotsOfSpaces + ']')).to.deep.equal( insideDocumentAndParagraph([ new Up.InlineRevealable([ new Up.Text('He did not die.' + lotsOfSpaces) ]) ])) }) }) context('In a section link', () => { specify('At the start', () => { expect(Up.parse('[topic:' + lotsOfSpaces + 'He did not die.]')).to.deep.equal( insideDocumentAndParagraph([ new Up.SectionLink('He did not die.') ])) }) specify('At the end', () => { expect(Up.parse('[topic: He did not die.' 
+ lotsOfSpaces + ']')).to.deep.equal( insideDocumentAndParagraph([ new Up.SectionLink('He did not die.') ])) }) specify('Before an open bracket', () => { expect(Up.parse('[topic: He did not die.' + lotsOfSpaces + '(Really.)]')).to.deep.equal( insideDocumentAndParagraph([ new Up.SectionLink('He did not die.' + lotsOfSpaces + '(Really.)') ])) }) }) specify('On a blank line at the start of a document', () => { const markup = lotsOfSpaces + ` This is not reasonable.` expect(Up.parse(markup)).to.deep.equal( insideDocumentAndParagraph([ new Up.Text('This is not reasonable.') ])) }) specify('On a blank line at the end of a document', () => { const markup = ` This is not reasonable. ` + lotsOfSpaces expect(Up.parse(markup)).to.deep.equal( insideDocumentAndParagraph([ new Up.Text('This is not reasonable.') ])) }) specify('At the start of a paragraph at the beginning of a document', () => { const markup = lotsOfSpaces + 'This is not reasonable.' expect(Up.parse(markup)).to.deep.equal( insideDocumentAndParagraph([ new Up.Text('This is not reasonable.') ])) }) specify('At the end of a paragraph at the end of a document', () => { const markup = 'This is not reasonable.' + lotsOfSpaces expect(Up.parse(markup)).to.deep.equal( insideDocumentAndParagraph([ new Up.Text('This is not reasonable.') ])) }) specify('At the start of a paragraph that is not the first convention within a document', () => { const markup = lotsOfSpaces + ` This is not reasonable. 
${lotsOfSpaces}However, we have to go with it.` expect(Up.parse(markup)).to.deep.equal( new Up.Document([ new Up.Paragraph([ new Up.Text('This is not reasonable.') ]), new Up.Paragraph([ new Up.Text('However, we have to go with it.') ]) ])) }) specify('At the end of a paragraph that is not the last convention within a document', () => { const markup = lotsOfSpaces + ` This is not reasonable.${lotsOfSpaces} However, we have to go with it.` expect(Up.parse(markup)).to.deep.equal( new Up.Document([ new Up.Paragraph([ new Up.Text('This is not reasonable.') ]), new Up.Paragraph([ new Up.Text('However, we have to go with it.') ]) ])) }) specify('At the start of a thematic break streak that is not the first convention within a document', () => { const markup = lotsOfSpaces + ` This is not reasonable. ${lotsOfSpaces}-~-~-~-~-~-` expect(Up.parse(markup)).to.deep.equal( new Up.Document([ new Up.Paragraph([ new Up.Text('This is not reasonable.') ]), new Up.ThematicBreak() ])) }) specify('At the end of a thematic break streak that is not the last convention within a document', () => { const markup = lotsOfSpaces + ` -~-~-~-~-~-~-~-${lotsOfSpaces} However, we have to go with it.` expect(Up.parse(markup)).to.deep.equal( new Up.Document([ new Up.ThematicBreak(), new Up.Paragraph([ new Up.Text('However, we have to go with it.') ]) ])) }) })
/**
 * An internal method for closing the open window and notifying the
 * listener, but not sending any events.
 */
private void closeAndNotify() {
    amountListener = null;
    interfaces.remove(InterfaceType.WINDOW);
    if (listener != null) {
        listener.interfaceClosed();
        listener = null;
    }
}
When a player hitting .172 with a .232 on base percentage is traded for an A-ball pitching prospect, it usually doesn’t generate big headlines. So, you can be forgiven if you haven’t paid a ton of attention to the most recent trade between the Nationals and Cubs, which sent outfielder Scott Hairston to Washington and Ivan Pineyro to the Cubs, plus a pair of PTBNLs, with one going in each direction. According to Jed Hoyer, the two players to be named later “will not affect the balance of the deal”, so it’s basically Hairston for Pineyro, with the Cubs picking up a small part of Hairston’s small contract for 2014. However, just because this is a minor deal doesn’t mean it’s an unimportant deal. Last summer, Marco Scutaro was traded in a similar kind of swap, and turned out to be the best player acquired at the deadline. Role players have value, and Scott Hairston could be a pretty nice role player for the Nationals.

First off, let’s put Hairston’s 2013 batting line (which is terrible) in context. We’re dealing with just 112 plate appearances, because the play of Ryan Sweeney and Nate Schierholtz forced Hairston into something of a bench role, and the Cubs didn’t face enough LHPs for Hairston to get regular playing time. Any performance over 112 plate appearances has limited predictive value, and in Hairston’s case, there’s no reason to think there’s anything actually wrong. Here are Hairston’s numbers from the last two seasons, side by side:

Season  PA   BB%  K%   ISO    BABIP  AVG    OBP    SLG    wOBA   wRC+
2012    398  5%   21%  0.241  0.287  0.263  0.299  0.504  0.342  118
2013    112  6%   22%  0.263  0.129  0.172  0.232  0.434  0.282  73

His BB/K/ISO numbers are basically identical, so the drop in his numbers is entirely attributable to a .129 BABIP, which is so hilariously unsustainable that it doesn’t even need any further analysis. Because he hits so many fly balls, Hairston will always post lower than average BABIPs, but his career mark is .272, and he’s never had a season below .236 before.
Hairston’s BABIP is nothing to worry about, and thus, his performance for the Cubs is nothing to worry about. So, Hairston remains roughly a league average hitter who can be an effective platoon outfielder. He’s shown some decent sized splits over his career, but in just over 2,500 PAs overall, you can’t take those at face value. Hairston’s not any kind of impact player, and he shouldn’t be an everyday guy, but he could add value as a right-handed bat against left-handed pitching. He’s exactly the kind of fourth outfielder that the Nationals needed. Will Scott Hairston be the difference between the Nationals catching the Braves or missing the playoffs? Probably not. We’re talking about a guy that is probably a +1 win player if utilized correctly over an entire season, and the Nationals are picking him up for just the second half. But, quality role players can make a difference, especially in specific match-ups where the outcome of each game is of extreme importance, like a one-game wild card play-in contest, for instance. If the Nationals end up getting one of the two wild card spots and drawing a left-handed pitcher with their season on the line, they’ll be very happy they have Scott Hairston around. This trade won’t attract a lot of notice, but the Nationals did a nice job picking up a decent player who fills a need. It won’t get headlines, but it was a good low cost improvement for a team that needed improving.
/**
 * @brief Game::ai_move: Get a move from the ai, execute it, evaluate board, proceed with ai or human
 */
void Game::ai_move(){
    std::mutex huMu;
    std::unique_lock<std::mutex> aiLock(huMu);
    m_aiPlayer.wait(aiLock, [this]{
        return (m_current_player == 1 && m_p1_is_ai) || (m_current_player == 2 && m_p2_is_ai);
    });

    std::pair<int, int> aipair;
    int t_delta;
    if(m_current_player == 1){
        auto t_start = std::chrono::high_resolution_clock::now();
        aipair = m_ai_1->get_move(m_board);
        auto t_end = std::chrono::high_resolution_clock::now();
        t_delta = std::chrono::duration_cast<std::chrono::milliseconds>(t_end - t_start).count();
        m_p1_time += t_delta;
    }
    else{
        auto t_start = std::chrono::high_resolution_clock::now();
        aipair = m_ai_2->get_move(m_board);
        auto t_end = std::chrono::high_resolution_clock::now();
        t_delta = std::chrono::duration_cast<std::chrono::milliseconds>(t_end - t_start).count();
        m_p2_time += t_delta;
    }

    m_board.drop(aipair.first, m_current_player);
    m_iForm->updatePositions(m_board.get_positions());
    m_iForm->writeToLog("player " + QString::number(m_current_player) + ": " + QString::number(aipair.first));
    m_iForm->writeToLog("score: " + QString::number(aipair.second));
    if(t_delta >= 1000){
        t_delta /= 1000;
        m_iForm->writeToLog("time: " + QString::number(t_delta) + " s");
    }
    else{
        m_iForm->writeToLog("time: " + QString::number(t_delta) + " ms");
    }
    m_iForm->writeToLog("------------------");

    if(m_board.is_winner(m_current_player)){
        m_iForm->writeToLog("player " + QString::number(m_current_player) + " wins");
        m_iForm->writeToLog("------------------");
        final_time();
        game_over = true;
        m_iForm->gameOver(m_current_player);
        m_iForm->setWinningLine(m_board.get_winning_line(m_current_player));
    }
    else if (m_board.is_full()) {
        final_time();
        game_over = true;
        m_iForm->gameOver(0);
    }
    else{
        m_current_player = 3 - m_current_player;
        if((m_current_player == 1 && m_p1_is_ai) || (m_current_player == 2 && m_p2_is_ai)){
            std::thread t(&Game::ai_move, this);
            t.detach();
        }
        else{
            m_iForm->updatePossibleDrops(m_board.possible_drops());
        }
    }
    m_aiPlayer.notify_one();
}
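One small idiom worth noting in `ai_move` above: `m_current_player = 3 - m_current_player` toggles between players 1 and 2 without a conditional. A quick sketch of the trick:

```python
def next_player(current):
    # With players numbered 1 and 2, subtracting from 3 flips between them:
    # 3 - 1 == 2 and 3 - 2 == 1.
    return 3 - current

print(next_player(1), next_player(2))  # → 2 1
```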
An ultra-Orthodox Jewish man stabbed participants at a Jerusalem Gay Pride parade, killing a 16-year-old girl and wounding others. (Representational Image)

An Israeli court today sentenced an ultra-Orthodox Jewish man to life in prison for killing a 16-year-old girl and wounding others during a stabbing spree at a Jerusalem Gay Pride parade.

The Jerusalem District Court convicted Yishai Schlissel in April of murder and six counts of attempted murder over the July 2015 stabbings. He was sentenced to life plus 31 years, a court statement said, after prosecutors had requested life plus 60 years. Schlissel was led into the courtroom with both his hands and feet shackled.

The incident triggered harsh criticism of the police when it emerged that Schlissel had been released from prison only three weeks earlier after serving a 10-year sentence for a similar attack. He had also posted a letter on the Internet speaking of the "abomination" of a Gay Pride parade being held in the Holy City and the need to stop it, even at the cost of one's life. Many questioned how Schlissel, 40 when he was convicted, was allowed anywhere near the parade, which saw thousands marching through central Jerusalem.

Witnesses described terrifying scenes of Schlissel, with a long beard and dressed in the dark suit worn by ultra-Orthodox Jews, storming the parade with a knife.

"This guy showed no remorse," Noam Eyal, 31, who said he was one of the victims, told AFP outside the court. "In the last hearing before this he said that this is a religious war."

Sarah Kala, executive director of Jerusalem Open House LGBT centre, said after the sentencing that "it's another step to try and deter the terrible homophobia raging on our streets."

"They don't usually give the maximum possible sentence, but in our view to know that Yishai Schlissel will stay in prison for the rest of his days is certainly something that comforts us a little," she told public radio.

During the trial, the court said police knew of the potential threat but failed to prevent it.

"The evidence clearly shows that Israeli police were aware of the dangers the defendant, released (from prison) a short while before the march, posed," the April judgement stated. "The unbearable ease in which the defendant managed to infiltrate the marchers and carry out his nefarious deed before being apprehended is incomprehensible."

It said that "the gloomy picture arising is that lessons that should have been learned from the 2005 march were not implemented, and intelligence and other materials in possession of the police were not used prudently."

Six senior Israeli policemen were removed from their posts as a consequence. The court also noted the "absurdity" in Schlissel being released without any supervision or having undergone rehabilitation.
import inspect
from collections.abc import Sequence

# These utility imports are restored for self-containment; t.merge/t.pipe and
# tc.valfilter match the toolz / toolz.curried API used below.
import toolz as t
import toolz.curried as tc


def args_extractor(f, merge_defaults=False):
    """Return a function mapping (args, kargs) for f into (varargs, kargs),
    zipping positional arguments onto their parameter names and, optionally,
    merging dict-valued defaults into the supplied values."""
    spec = inspect.getfullargspec(f)
    if spec.defaults:
        param_defaults = dict(zip(spec.args[-len(spec.defaults):], spec.defaults))
    else:
        param_defaults = {}
    named_param_defaults = spec.kwonlydefaults or {}
    default_dicts = {}
    num_named_args = len(spec.args)

    if merge_defaults is True and hasattr(f, '__merge_defaults__'):
        merge_defaults = f.__merge_defaults__

    if merge_defaults:
        default_dicts = t.pipe(
            t.merge(named_param_defaults, param_defaults),
            tc.valfilter(lambda v: isinstance(v, dict)),
        )
        if isinstance(merge_defaults, Sequence):
            default_dicts = {k: default_dicts[k] for k in merge_defaults}

        def _args_dict(args, kargs):
            unnamed_args = dict(zip(spec.args, args[0:num_named_args]))
            varargs = args[num_named_args:]
            kargs = t.merge(kargs, unnamed_args)
            for k, d in default_dicts.items():
                kargs[k] = t.merge(d, kargs.get(k) or {})
            return varargs, kargs
    else:
        def _args_dict(args, kargs):
            unnamed_args = dict(zip(spec.args, args[0:num_named_args]))
            varargs = args[num_named_args:]
            kargs = t.merge(kargs, unnamed_args)
            return varargs, kargs

    return _args_dict
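For context, a simplified, self-contained sketch of the core transformation `_args_dict` performs above: pairing leading positional arguments with their declared names via `inspect`. The toolz merging and default-dict handling are omitted, and `example` is an illustrative function, not part of the original code:

```python
import inspect

def simple_args_dict(f, args, kargs):
    # Pair leading positional args with the declared parameter names;
    # anything beyond the named parameters is treated as varargs.
    names = inspect.getfullargspec(f).args
    varargs = args[len(names):]
    merged = {**kargs, **dict(zip(names, args[:len(names)]))}
    return varargs, merged

def example(a, b, *rest, flag=False):
    pass

print(simple_args_dict(example, (1, 2, 3, 4), {'flag': True}))
# → ((3, 4), {'flag': True, 'a': 1, 'b': 2})
```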
import {ChildProcess} from 'child_process';
import {SpawnOptions2} from "../custom-typings";

export interface ChildProcessSpawn {
    spawn(command: string, args?: string[], options?: SpawnOptions2): ChildProcess;
}
#include <bits/stdc++.h>
typedef long long ll;
using namespace std;

int main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    ll n, m;
    cin >> n >> m;
    // Minimum isolated vertices: each of the m edges can cover two vertices.
    if(n - (2*m) <= 0){
        cout << 0 << " ";
    }else{
        cout << n - (2*m) << " ";
    }
    // Maximum isolated vertices: pack all m edges into the smallest set of
    // i vertices with i*(i-1)/2 >= m; the remaining n-i vertices stay isolated.
    ll mx = (n*(n-1)) >> 1;
    if(m == 0) cout << n << "\n";
    else if(m >= mx) cout << "0\n";
    else{
        for (ll i = 0; i <= n; ++i) {
            ll tmp = (i*(i-1)) >> 1;
            if(m <= tmp){
                cout << n - i << "\n";
                break;
            }
        }
    }
}
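The snippet above computes, for a simple graph with n vertices and m edges, the minimum and maximum possible number of isolated vertices. The same logic in a short Python sketch (the function name is mine, for illustration):

```python
def isolated_bounds(n, m):
    # Minimum: each edge can remove at most two vertices from the isolated set.
    low = max(n - 2 * m, 0)
    # Maximum: concentrate all m edges on the smallest k vertices capable of
    # holding them (k*(k-1)/2 >= m); the remaining n-k vertices stay isolated.
    k = 0
    while k * (k - 1) // 2 < m:
        k += 1
    return low, n - k

print(isolated_bounds(4, 2))  # → (0, 1)
```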
def solution(a):
    c = 0
    for i in range(len(a)):
        if a[i] == "X++" or a[i] == "++X":
            c += 1
        elif a[i] == "--X" or a[i] == "X--":
            c -= 1
    print(c)

l = []
for i in range(int(input())):
    l.append(input())
solution(l)
use super::super::room::Room;
use crate::api::effect::Effect;
use crate::api::object_id::ObjectId;
use crate::api::{array::ScreepsArray, room_position::RoomPosition};
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern "C" {
    #[wasm_bindgen]
    pub type Controller;

    #[wasm_bindgen(method, getter = pos)]
    pub fn position(this: &Controller) -> RoomPosition;

    #[wasm_bindgen(method, getter = effects)]
    pub fn effects(this: &Controller) -> ScreepsArray<Effect>;

    #[wasm_bindgen(method, getter = room)]
    pub fn room(this: &Controller) -> Room;

    #[wasm_bindgen(method, getter = hits)]
    pub fn hitpoints(this: &Controller) -> u32;

    #[wasm_bindgen(method, getter = hitsMax)]
    pub fn hitpoints_maximum(this: &Controller) -> u32;

    #[wasm_bindgen(method, getter = id)]
    pub fn object_id(this: &Controller) -> ObjectId<Controller>;

    #[wasm_bindgen(method, getter = my)]
    pub fn is_my(this: &Controller) -> bool;

    // TODO:
    // * structureType
    // * destroy
    // * isActive
    // * notifyWhenAttacked
    // * owner
    // * isPowerEnabled
    // * level
    // * progress
    // * progressTotal
    // * reservation
    // * safeMode
    // * safeModeAvailable
    // * safeModeCooldown
    // * sign
    // * ticksToDowngrade
    // * upgradeBlocked
    // * activateSafeMode
    // * unclaim
}
Surveying the History of Science Writing survey books is often a thankless task. Most of us don't want to do it, and we find it easy to criticize those who do undertake such work for what they have not included as much as for the sin of approaching the subject in any way different from our own, were we to take on this Herculean labor. It has been striking how few general studies of the history of science exist. Every year the list of specialized monographs grows-a sign that there is a great deal of new information to discuss. Finally, a few historians have begun to think about the kind of new survey one might write from this embarrassment of riches. In Servants of Nature, Lewis Pyenson and the late Susan Sheets-Pyenson bring their respective expertise in the history of the exact sciences and the history of natural history, and their mutual commitment to studying science in its non-European and European contexts, to bear in writing a social and institutional history of science for a general audience. Pyenson and Sheets-Pyenson divide their subject into three broad areas-institutions, enterprises, and sensibilities-that are treated thematically more than chronologically. Institutions include schools and universities, societies, observatories, museums, botanical gardens, and zoos but, curiously, not the laboratory, which has been the subject of a great deal of interest in recent years. The authors are at their best in discussing the proliferation of observatories and museums in the era of European overseas expansion and colonialism, demonstrating well how such institutions created a network of local communities engaged in collecting and exchanging observations and specimens and using and perfecting instruments. 
They are at their weakest in discussing the history of scientific education, where not enough space is devoted to the process by which schools and universities became important centers for disciplinary and experimental innovation in the early modern and modern periods. This observation raises a general point: what is a scientific institution? A general discussion of how the modern idea of an institution developed in relation to the history of science is very much needed. Since this is a subject that has engaged historians of science in the past few decades, a critical overview of the idea of the scientific institution, as much as of its specific manifestations, would have been especially welcome.
use futures::{Async, Future};
use std::cell::{Ref, RefCell};
use std::fmt;
use std::fmt::Debug;
use std::mem;

use engine::asset::loader;
use engine::asset::{AssetError, AssetResult};

impl<T> Debug for ResourceKind<T>
where
    T: Debug + loader::Loadable,
{
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            &ResourceKind::Consumed => write!(f, "ResourceKind::Consumed"),
            &ResourceKind::Data(ref t) => write!(f, "ResourceKind::Data({:?})", *t),
            &ResourceKind::Future(_) => write!(f, "ResourceKind::Future"),
        }
    }
}

/// The three states of a lazily loaded asset: still loading (`Future`),
/// loaded (`Data`), or already taken by value (`Consumed`).
enum ResourceKind<T: Debug> {
    Consumed,
    Data(T),
    Future(Box<Future<Item = T, Error = AssetError>>),
}

impl<T: Debug> ResourceKind<T> {
    /// Swaps `other` into place and returns the previous state.
    fn replace(&mut self, other: ResourceKind<T>) -> ResourceKind<T> {
        mem::replace(self, other)
    }

    fn try_into_data(self) -> Option<T> {
        match self {
            ResourceKind::Data(d) => Some(d),
            _ => None,
        }
    }

    fn try_as_data(&self) -> Option<&T> {
        match self {
            &ResourceKind::Data(ref d) => Some(d),
            _ => None,
        }
    }
}

#[derive(Debug)]
pub struct Resource<T: Debug + loader::Loadable>(RefCell<ResourceKind<T>>);

impl<T: Debug + loader::Loadable> Resource<T> {
    pub fn new_future<FT>(f: FT) -> Self
    where
        FT: Future<Item = T, Error = AssetError> + 'static,
    {
        Resource(RefCell::new(ResourceKind::Future(Box::new(f))))
    }

    pub fn new(f: T) -> Self {
        Resource(RefCell::new(ResourceKind::Data(f)))
    }

    /// Takes the value out of the resource, polling the pending future if
    /// necessary. Returns `AssetError::NotReady` while loading is in flight.
    /// Note: calling this again after the data has been taken hits the
    /// `Consumed` arm and panics via `unreachable!`.
    pub fn try_into(&self) -> AssetResult<T> {
        match &mut *self.0.borrow_mut() {
            &mut ResourceKind::Future(ref mut f) => match f.poll() {
                Err(e) => Err(e),
                Ok(Async::NotReady) => Err(AssetError::NotReady),
                Ok(Async::Ready(i)) => Ok(i),
            },
            img @ &mut ResourceKind::Data(_) => {
                // Leave `Consumed` behind and hand the data out by value.
                let r = img.replace(ResourceKind::Consumed);
                Ok(r.try_into_data().unwrap())
            }
            _ => unreachable!(),
        }
    }

    /// Borrows the value, first resolving a completed future into `Data`
    /// so that the returned `Ref` can be mapped onto it.
    pub fn try_borrow(&self) -> AssetResult<Ref<T>> {
        let mut data = None;
        if let &mut ResourceKind::Future(ref mut f) = &mut *self.0.borrow_mut() {
            match f.poll() {
                Err(e) => return Err(e),
                Ok(Async::NotReady) => return Err(AssetError::NotReady),
                Ok(Async::Ready(i)) => {
                    data = Some(i);
                }
            }
        }

        // Store freshly resolved data outside the mutable borrow above.
        if let Some(i) = data {
            let kind: &mut ResourceKind<T> = &mut self.0.borrow_mut();
            kind.replace(ResourceKind::Data(i));
        }

        let b0 = self.0.borrow();
        Ok(Ref::map(b0, |t| t.try_as_data().unwrap()))
    }

    pub fn replace(&self, t: T) {
        self.0.borrow_mut().replace(ResourceKind::Data(t));
    }
}

impl<T: Debug + loader::Loadable> From<T> for Resource<T> {
    fn from(r: T) -> Resource<T> {
        Resource::new(r)
    }
}
# python/desligaLuzQuarto.py
# Turns off the bedroom light by driving BCM pin 3 high
# (assumes an active-low relay, so a high output de-energises it).
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(3, GPIO.OUT)
GPIO.output(3, 1)
print('ok')
Enhancing Disease Diagnosis: Biomedical Applications of Surface-Enhanced Raman Scattering

Surface-enhanced Raman scattering (SERS) has recently gained increasing attention for the detection of trace quantities of biomolecules due to its excellent molecular specificity, ultrasensitivity, and quantitative multiplexing ability. Specific single or multiple biomarkers in complex biological environments generate strong and distinct SERS spectral signals when they are in the vicinity of optically active nanoparticles (NPs). When multivariate chemometrics are applied to decipher underlying biomarker patterns, SERS provides qualitative and quantitative information on the inherent biochemical composition and properties that may be indicative of healthy or diseased states. Moreover, SERS allows for differentiation among many closely related causative agents of diseases exhibiting similar symptoms, to guide early prescription of appropriate, targeted and individualised therapeutics. This review provides an overview of recent progress made by the application of SERS in the diagnosis of cancers and of microbial and respiratory infections. It is envisaged that recent technology development will help realise the full benefits of SERS to gain deeper insights into the pathological pathways of various diseases at the molecular level.

Introduction

In clinical practice, disease diagnosis is a critical step towards disease management and acts as an indispensable guide towards appropriate treatment and personalised therapy. The initial stage of disease diagnosis, or differential diagnosis, screens possible disease candidates unambiguously correlated to empirical clinical symptoms. This crucial process involves a systematic and objective analysis and understanding of infection-driven changes in complex metabolic processes highlighted by biological markers (biomarkers).
In addition to stratifying normal biological from pathogenic processes, biomarkers may provide valuable information about the severity and stage of disease, drug targets and pharmacological response to therapy. Thus, there is a growing demand to identify, evaluate and validate disease biomarkers for existing and emerging infections. However, diseases usually have complex pathogenesis and pathophysiology profiles that involve multiple intertwined networks of cellular and molecular changes. To capture holistically this biological and biochemical complexity (e.g., in biofluids, tissues, etc.) indicative of healthy or disease states, intensive high-throughput omics-based analytical procedures are frequently used. These aim to identify and characterise multiple biomarkers comprehensively to enhance patient outcomes, disease prevention and drug discovery.

Surface Enhanced Raman Scattering: A Brief Tutorial

First theorised by Smekal in 1923, the Raman effect was discovered and experimentally demonstrated by Sir C. V. Raman in 1928 using simple optical materials and a simple instrumental setup. Raman spectroscopy is a specific optical readout platform which involves the scattering of irradiated light following interaction with polarisable molecules under interrogation with a monochromatic laser source. By far the largest proportion of scattered photons have the same energy as the incident light (known as Rayleigh scattering), and so do not carry significant chemical information. However, a very small fraction of the emergent radiation is inelastically scattered (Raman scattering), with photons of lower frequency than the incident radiation (Stokes scattering) being measured in conventional Raman spectroscopy. Raman spectroscopy instrumentation has evolved greatly, mechanically and technically, over time. Nowadays, powerful and stable lasers, sensitive detectors, optical fibres and software have accelerated the application of Raman spectroscopy in bioanalytical chemistry.
Despite this, the scattering cross-section of the Raman effect is still extremely small, with a photon conversion rate of less than 1 per 10^6-10^8 incident photons. So, to obtain Raman spectra with a good signal-to-noise ratio, high power density, long collection times and clinically unrealistic concentrations are usually required. Such measurement parameters often initiate fluorescence and photodegradation, which frequently mask Raman spectral lines. This limits the utility of Raman spectroscopy for the diagnosis of diseases in biofluids or cells, where biomarkers may be present in trace quantities and preserving sample integrity is desirable. It is indeed exciting that significant attention has now shifted to SERS to overcome the quantum inefficiency of conventional Raman spectroscopy.

Appl. Sci. 2019, 9, 1163

The SERS effect was discovered accidentally by Fleischmann and coworkers in 1974 at the University of Southampton (UK), when they observed a dramatic increase in the Raman signals of a monolayer of pyridine adsorbed onto electrochemically roughened Ag electrodes. At that time, the anomalous observation was thought to be a combined effect of the large surface area of the roughened Ag electrodes and the increased local concentration of adsorbed pyridine molecules. This discovery marked the beginning of an exciting new era of physical and surface chemistry, attracting much attention from analytical scientists and engineers. In 1977, Van Duyne and colleagues and Creighton and coworkers independently concluded that the observed anomaly in the Raman signals of pyridine was rather due to an increased cross-section as a result of enhanced electric fields induced by the roughened Ag electrodes. It was at this time that Van Duyne coined the term 'surface-enhanced Raman scattering (SERS)' as we know it today.
In principle, SERS involves interactions between electromagnetic radiation and molecules adsorbed onto, or in close proximity to, nanoscale rough metallic particles of smaller diameter than the wavelength of the excitation radiation. Although there is no comprehensive explanation for the SERS phenomenon to date, and the topic is under active debate, the independent research by Van Duyne and Creighton, combined with the selection rules of molecules adsorbed on metal surfaces, proposed two simultaneously operative theories. The electromagnetic (EM) theory, thought to be the dominant mechanism, is a physical effect which involves optical excitation of electric fields and charge motions created by collective oscillations of electrons in the conduction band (surface plasmons) of NPs. This interaction creates localised surface plasmon resonance (LSPR), the so-called 'hotspots' around NP surfaces. Analyte molecules that interact with the LSPR therefore produce intensified spectral signals, with enhancement factors (EF) of 10^6-10^8 compared to conventional Raman lines. According to the EM theory, the SERS intensity I is directly proportional to the fourth power of the local electromagnetic field strength, I ∝ E^4. The field E in turn falls off steeply with the distance d between analyte and NP surface (approximately as 1/d^3), so that the overall enhancement scales roughly as (1/d)^12. Based on these expressions, the EM mechanism is strongly distance-dependent: small changes in d and E result in large changes in I. Ideally, optimum SERS is achieved when an optimal number of analytes sit within regions of strong LSPR, within the interstices of aggregated NPs. The second mechanism, chemical enhancement (CM) or charge transfer, relies on a resonance-Raman-scattering-like effect. It is thought to involve electronic excitation of covalently bound, coupled electron clouds within chemical bonds formed between analytes and the NP surface.
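To make the distance dependence above concrete, here is a small back-of-envelope Python sketch (illustrative only, not from the review), assuming the (1/d)^12 scaling quoted for the overall electromagnetic enhancement:

```python
# Illustrative only: relative SERS signal vs. analyte-to-surface distance,
# assuming the overall (1/d)^12 distance dependence of the EM enhancement.
def relative_enhancement(d, d0=1.0):
    """Signal at distance d relative to a reference distance d0 (same units)."""
    return (d0 / d) ** 12

# Doubling the distance costs a factor of 2^12 = 4096 in signal, which is
# why only molecules at or very near the plasmonic 'hotspots' contribute.
print(relative_enhancement(2.0))  # 1/4096 of the reference signal
```

This steep falloff is the quantitative reason analytes must adsorb onto, or sit within a few nanometres of, the NP surface to be detected.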
CM increases the polarisability of adsorbed molecules and contributes up to three orders of magnitude (10^3) to the overall SERS EF. Unlike EM, CM is only significant when molecules are chemically bonded to NPs, typically at monolayer coverage. It is also noteworthy that the Raman effect can be tuned further, to some 10^14-fold enhancement, to reach a fluorescence-like cross-section. This is achieved when the incident frequency excites electronic energy states of specific chromophores near or bonded to the NP surface, giving rise to surface-enhanced resonance Raman scattering (SERRS). With this large EF, ultralow limits of detection (LOD), down to single-molecule detection, were reported by Kneipp and coworkers. Overall, to achieve optimally large SERS amplification, several experimental parameters need to be manipulated, including the laser wavelength, the morphology of the NPs and the dispersion medium. For more detailed information on experimental considerations for optimal SERS, readers are directed to the excellent reviews by Stiles et al. and Fisk et al. Quite importantly, optimally aggregated NPs, an excitation frequency overlapped with the plasmonic band of the NPs, and enhancing media with a small refractive index collectively yield large and reproducible EFs. On the technical side, flow injection and microfluidics technologies combined with SERS show good SERS reproducibility and reliability. Microfluidic devices consist of microchannels where a minute volume of sample is mixed uniformly with NPs at a constant, automated flow velocity. Tandem microfluidics-SERS overcomes the effects of localised heating, photodissociation and variations in the scattering geometry of NPs. A considerable focus of recent research has been on developing microfluidic SERS to improve in situ bioanalysis, which will play a decisive role in the clinical utility of SERS in healthcare systems.
The Affinity of SERS-Active Substrates for Biochemical Molecules

Currently, there is a wide range of solid and dispersed substrates used as SERS enhancement media, ranging from roughened metals to thin metal films to colloidal NPs. Colloids based on the coinage metals Ag and Au are predominantly used in bioanalytical science, partly due to their ease of preparation and modification, low cost, high stability and large EF. Also, Ag and Au nanomaterials exhibit a naturally high affinity for molecules that possess highly electronegative or charged atoms (e.g., oxygen, nitrogen, sulfur, etc.). Interestingly, numerous biomarkers such as metabolites, nucleic acids and proteins (the reactants, intermediates and end products of genetically encoded processes) contain one or more electronegative atoms and polarisable delocalised π-conjugated systems, and so they have strong SERS activity. Moreover, since molecular symmetry properties change when molecules are bonded to the NP surface, centrosymmetric biomolecules are detectable by SERS. Thus, SERS has unlocked new prospects to exploit the unique diagnostic information obtained from symmetrical biomarkers, which would otherwise not be amenable to Raman or Fourier-transform infrared (FT-IR) spectroscopies, according to the mutual exclusion principle. In general, there are two approaches to accomplishing SERS measurements, namely label-free and label-based techniques, as shown in Figure 1. Label-free or intrinsic SERS measures direct interactions between analytes and NPs. The resultant spectral bands provide detailed intrinsic structural information and dynamics in biomolecules directly attached to NPs. By contrast, label-based or extrinsic SERS combines the optical activity of plasmonic materials (Ag, Au, Cu, etc.) functionalised with SERS-active messenger molecules (so-called Raman 'reporters'), which are resonant with a wide range of available excitation lasers.
The recognition element, e.g., antibody, enzyme, aptamer, etc., attached to the NP surface binds to epitope(s) of specific target analytes (e.g., a metabolite, nucleic acid or bacterium), and its plasmonically enhanced characteristic SERS signal is measured indirectly through the Raman reporter. When several biocompatible recognition elements are employed, extrinsic SERS offers quantitative multiplexed analysis of biomarkers in complex fluid matrices.
Multivariate Chemometrics

The SERS spectra obtained from biological samples are multivariate in nature. That is to say, they consist of thousands of complex and combination vibrational modes which are difficult to interpret by simple 'stare and compare'. The fundamental tenet of multivariate analysis (MVA) is to simplify multivariate data systematically to a small number of variables whilst preserving the maximum variance within a data matrix, to guide scientific reasoning. This is accomplished through unsupervised and/or supervised MVA (machine learning) modelling. Unsupervised MVA explores the natural variance within a data matrix using the spectral variables in vertical columns (the X-data) as the only input data measured from the objects in horizontal rows (the Y-data) of the data matrix. Principal components analysis (PCA) is a widely used traditional, unsupervised approach to reduce data dimensionality, classify spectra into specific groups and identify outliers. In principle, PCA decomposes multivariate data into scores (clusters) and associated spectral loadings.
The scores plots consist of uncorrelated orthogonal hyperplanes called principal components (PCs), which display the differences, similarities and total explained variance in the dataset, e.g., cancerous vs. noncancerous conditions. PC1 is extracted from the input X-data to account for the largest variance, whilst PC2, PC3, ..., PCn (where n is an integer) explain the remaining natural variance in decreasing order. The PC loadings spectra are plotted to highlight the most important input spectral variables responsible for the clustering pattern observed on a PCA scores plot. In the case of disease diagnosis, the variables on a loadings plot often denote intensity-ratio differences and/or spectral band shifts between diseased and healthy groups. Since PCA does not rely on prior knowledge of the investigated samples, it is exploratory by nature, which implies that the algorithm can discover novel hidden biological patterns within the samples under review, and it is very useful for outlier detection. Other unsupervised models are dendrograms, self-organising maps, autoassociative neural networks, etc. By contrast, supervised models are calibrated with known response variable(s) (the Y-data), which act as building blocks for supervised algorithms. Discriminant analysis and partial least-squares regression (PLSR) are commonly used supervised models for classification and quantitative prediction of SERS spectral data. Discriminant function analysis (DFA) combines PCs extracted from the X-data with a priori knowledge, e.g., the sample objects (classes) in the Y-data, to minimise within-class and maximise between-class variance for classification purposes. On the other hand, PLSR, which employs correlated variables in both the X- and Y-data to build a linear relationship, has proved powerful and useful in the quantitative analysis of biomarkers by SERS. For PLSR modelling, the sample spectral dataset is first divided into two parts: the training and test sets.
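The PCA decomposition just described can be sketched in a few lines of NumPy. This is an illustrative example, not from the review: the matrix of 'spectra' and its two-class structure are synthetic stand-ins for real SERS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for SERS data: 20 'spectra' x 50 spectral variables,
# with two classes of 10 samples separated along one latent direction.
direction = rng.normal(size=50)
labels = np.repeat([0, 1], 10)
X = rng.normal(scale=0.3, size=(20, 50)) + np.outer(labels, direction)

# PCA via SVD of the mean-centred matrix: U*S are the scores (sample
# coordinates on the PCs), rows of Vt are the loadings (which spectral
# variables drive each PC), and the singular values give the variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S
loadings = Vt
explained = S**2 / np.sum(S**2)  # fraction of total variance per PC

# By construction PC1 captures the largest variance; here that is the
# class separation, so the two groups split along the first score axis.
print(f"PC1 explains {explained[0]:.0%} of the variance")
```

A scores plot of scores[:, 0] against scores[:, 1], coloured by class, reproduces the kind of clustering described above, and loadings[0] identifies the spectral variables responsible for it.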
The first subset of sample spectra is used to calibrate or train the PLSR model, followed by quantitative prediction of the remaining unseen samples in the test set. A PLSR model of good quality will generate a correlation coefficient (R^2 or Q^2 for the training and test sets, respectively) close to one, and a low root mean squared error (RMSE). Partial least-squares discriminant analysis (PLS-DA), a variant of PLSR, is also a very powerful multivariate model, particularly applied when the Y-data are categorical by nature. In addition to these linear discriminant analysis methods, there are several nonlinear equivalents that effect a nonlinear mapping from the input X-data (spectra) to the output Y-data (the classes or sample objects). These are often referred to as machine learning techniques, and perhaps the most popular are support vector machines (SVMs), random forests (RFs) and kernel PLS (kPLS): these are reviewed in Gromski et al., Mazivila et al., Shinzawa et al., and Ellis et al. The recent resurgence of interest in artificial intelligence and artificial neural networks (ANNs) has given rise to deep learning. These convolutional neural networks use many different layers and are fundamentally different from the single-layer neural networks developed by Rumelhart and colleagues. They have predominantly been used for image analysis and speech recognition, they are very data-hungry (that is to say, they require lots of input data), and they may have a role in the analysis of chemical images generated from SERS and Raman microspectroscopy, as illustrated by Shi et al. and Krauss et al. As the old saying goes, "you will reap what you sow": if a supervised model is incorrectly calibrated, there is a possibility of overfitting and subsequently of false classification or regression.
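A full PLSR implementation is beyond a short sketch, so the example below (illustrative, not from the review) uses principal component regression (ordinary least squares on PCA scores) as a lightweight stand-in to show the same calibrate-then-predict workflow, with Q^2 and RMSE computed on a held-out test set; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration problem: 'spectra' whose intensities respond
# linearly to a biomarker concentration, plus noise. PCR (regression on
# PCA scores) stands in here for PLSR, which would instead weight the
# components by their covariance with y.
n_train, n_test, n_vars = 30, 10, 40
signature = rng.normal(size=n_vars)  # spectral response of the analyte
conc = rng.uniform(0.1, 10.0, size=n_train + n_test)
X = np.outer(conc, signature) + rng.normal(scale=0.5, size=(n_train + n_test, n_vars))
X_train, X_test = X[:n_train], X[n_train:]
y_train, y_test = conc[:n_train], conc[n_train:]

# Calibrate: centre on the training set, project onto the top k loadings,
# then fit ordinary least squares on the resulting scores.
k = 3
x_mean, y_mean = X_train.mean(axis=0), y_train.mean()
_, _, Vt = np.linalg.svd(X_train - x_mean, full_matrices=False)
P = Vt[:k].T  # (n_vars, k) loadings retained for regression
b, *_ = np.linalg.lstsq((X_train - x_mean) @ P, y_train - y_mean, rcond=None)

# Predict the held-out test set and report the usual figures of merit.
y_pred = (X_test - x_mean) @ P @ b + y_mean
ss_res = np.sum((y_test - y_pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
q2 = 1 - ss_res / ss_tot
rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
print(f"Q^2 = {q2:.3f}, RMSEP = {rmsep:.3f}")
```

The train/test split, the choice of k (the number of latent variables), and the Q^2/RMSEP figures of merit mirror the PLSR validation practice described in the text.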
For accurate prediction, quality control and assurance in disease diagnostics, it is always important that the number of latent variables is selected carefully and that supervised models are validated, for example through bootstrapping or n-fold cross-validation.

Cancer

Cancer is a major public health concern and an economic burden globally. Recent research indicates that 17.5 million cancer cases were recorded worldwide in 2015, culminating in 8.7 million deaths. Unfortunately, the number of cancer cases had increased sharply two years later in 2018, making cancer the second most common cause of death worldwide. Cancer is a collective term used to describe malignancies characterised by rapid abnormal growth of cells which invade other parts of the body. At the onset of cancer, specific biochemical molecules (e.g., proteins) are elevated or decreased in cancer patients, and such dynamics serve as biomarkers for various types of cancer. The early diagnosis of cancer, especially before metastasis, correlates well with the effectiveness of therapy, improved patient survival rates and prognosis. To this end, significant effort has been devoted in recent years towards the development of molecular biosensing platforms to meet the ever-demanding diagnostics of cancers in human cells, tissues and biofluids. Among these, immunoassays exhibit considerably high sensitivity, and so they are commonly used as noninvasive strategies to measure biomarkers, based on antibody-antigen interactions, at different stages of cancer development. The early work highlighting the diagnostic potential of immunoassays used radioisotopes to label antigens and antibodies to trace dynamics in target markers and metabolism indicative of cancers. Unsurprisingly, the use of radioisotope immunoassay (RIA) in clinics declined due to environmental health and safety hazards, in addition to the costly specialised laboratory facilities and waste disposal routes required for radioactive materials.
Enzymes are now commonly used as reporters in the enzyme-linked immunosorbent assay (ELISA). ELISA is safer, simpler and rapid, and is the gold-standard assay used for routine analysis of protein cancer biomarkers. Several authors have documented the utility of ELISA in cancer studies. For instance, Ambrosi et al. detected low levels of the CA15-3 glycoprotein antigen, mainly observed in patients with breast cancer, using an anti-CA15-3-horseradish peroxidase conjugate chemically bonded to a solid Au substrate; whereas Fitzgerald et al. investigated colorectal cancer by accurately measuring autoimmune responses of IgM and IgG antibodies in human serum. Nonetheless, long analysis times and the high cost of commercial ELISA test kits limit the application of ELISA. Alternatively, fluorescence-based portable biosensors are well developed and extensively applied as readout assays. In terms of quantum yield, fluorescence has a large absorption cross-section; thus fluorescent immunoassay (FIA) has excellent sensitivity, which enables detection of cancer biomarkers at clinically desirable LODs. However, FIA has several drawbacks: difficulty with labelling recognition or target species; limited multiplexing due to frequent spectral overlap caused by broad emission bands; as well as photobleaching and nonspecific binding, especially at low analyte levels. SERS has vital advantages over the traditional immunoassays used in clinical biochemistry: it exhibits multiplexing ability with a single excitation laser and ultrasensitivity leading to absolute quantitative detection, and, when optimised correctly, it rivals gold-standard assays. Recently, reproducible substrates based on antibody-conjugated hollow Au nanospheres and magnetic beads were applied for rapid sensing of carcinoembryonic antigen (CEA), a biomarker of lung cancer. In this study, the SERS assay accurately detected a low amount (1 pg/mL) of CEA, which was 1000-fold more sensitive than ELISA.
Since the amount of CEA is clinically determined to be about 10 ng/mL in malignant cases, SERS detected a subinfectious regime suitable for monitoring and predicting the inception of lung cancer, to avoid the increased risk of severe metastasis. In subsequent studies, attention shifted towards ex vivo analysis of human biofluids to test novel biosensors in real biological environments. Within this framework, Wang et al. analysed diagnostic and prognostic markers of pancreatic cancer, the mucin (MUC4) protein and the serum carbohydrate antigen CA19-9, and compared the SERS results to ELISA and RIA. In addition to a quick readout time, SERS demonstrated much better sensitivity (LOD 33 ng/mL) than ELISA (LOD 30 µg/mL) for MUC4, and an LOD of 0.8 U/mL compared to RIA's 1.0 U/mL for CA19-9. Interestingly, SERS quantified trace levels of MUC4 in the serum of pancreatic cancer patients, whereas ELISA and RIA failed to register signals under the same conditions. Many other articles have appeared showing that SERS is also versatile, as it can detect various cancers both in vitro and in vivo. The last decade has seen substantial progress directed towards the application of SERS for detecting multiple disease markers simultaneously, aimed at reducing the risk of false positives associated with singular biomarker detection and at strengthening differential diagnostics. One of the earliest multiplexed assays was reported by Faulds, who identified five DNA sequences (5-plex) quantitatively at a 1 pM detection limit within a single assay. Quite recently, a similar sensitivity regime was reported for prostate cancer (free prostate-specific antigen, f-PSA, and complexed prostate-specific antigen, c-PSA) and pancreatic cancer (CA19-9, matrix metalloproteinase-7 (MMP7) and MUC4), in 2-plex and 3-plex assays in human serum, respectively. The multiplex SERS method shown in Figure 2 of the former study proposed very sensitive and selective sandwich immunocomplex (SIC) nanotags, similar to those reported in recent years.
The robust and specific SIC assay showed negligible nonspecific binding and cross-reactivity between f-PSA and c-PSA. This represents a remarkable leap towards reducing the false positives and assay non-reactivity driven by random binding to interferents, which hamper the vast majority of diagnostic immunoassays. More importantly, the computed values of the free-to-total PSA ratio for clinical samples clearly indicated that an optimal dual-binding SERS assay can match the sensitivity of the chemiluminescence standard used in clinics.
Huge efforts have also been made to deploy automated digital microfluidics-SERS for on-site biosensing of multiple cancer protein markers in minute sample volumes. Recently, a simple prototype microfluidics-SERS platform, with minimal sample processing, detected a subinfective dose of prostate cancer markers. Nguyen et al. and Perozziello et al. quantified breast cancer biomarkers at 6.5 fM in serum and 0.1 ppm in plasma, respectively. The latter study demonstrated a novel sensor for rapid sorting and quantitative detection of peptides, which may play a vital role where multiplexed analysis tends to be complicated by intermolecular interactions between analyte biomarkers. Other researchers have gone so far as to suggest multiplex PCR-SERS to study melanoma DNA mutations associated with various cancers.
By combining the biochemical superiority of PCR with the sensitivity and specificity of SERS, mutant alleles as low as 0.1% could be identified in serum. This illustrates the potential of PCR-SERS to guide important clinical decisions regarding tumour biology with respect to heredity, diagnostics and treatment.
Microbial Infections: Pathogen Detection
Microbes are found everywhere, in large quantities and complex consortia, where they perform specialised functions that play a vital role in the ecosystems on which humans depend, quite often within the superorganism host. However, it is well known that a small proportion of microbes such as bacteria, fungi and protozoa are responsible for foodborne and waterborne infections and tuberculosis, which contribute to high mortality and morbidity rates globally. Although local and international guidelines for combating diseases are available, microbial infections are still unacceptably high worldwide, with a significant burden borne by the developing world. Just recently, perhaps the largest outbreak of listeriosis, a foodborne infection contracted through consumption of food contaminated with the bacterium Listeria monocytogenes, occurred in South Africa. The number of laboratory-confirmed cases was estimated at 674, with 183 deaths recorded. On the other hand, waterborne infections are reported to be associated with gastroenteritis cases that claim 2 million deaths per year globally, and the number of deaths is likely to increase with every hour that therapeutic treatment is delayed. At the time of writing this review, it was reported that at least 54% of chicken meat sold in German supermarkets and 79% of that in slaughterhouses was contaminated with Campylobacter spp. pathogens.
Due to the dramatic increase in incidence rates driven by the quick transmission, spread and antimicrobial resistance (AMR) of acute infections, there is an urgent demand for rapid and ultrasensitive tools to characterise pathogens, in order to protect public health and to prevent potential bioterrorism. Furthermore, unequivocal identification and differentiation of pathogens, especially at the PoC, will certainly offer an opportunity to trace the origin of fatal sporadic infections in order to design effective immediate and long-term corrective action. Until very recently, routine microbial diagnostics were dominated by traditional platforms based on culturing, biochemical tests and colony counting. However, these methods are inherently time-consuming, laborious and centralised (i.e., tests are done in dedicated laboratories rather than on site), as well as sometimes being inapplicable depending on the pathogenic species under investigation. For example, bacteria such as Mycobacterium tuberculosis may take several days to weeks to form visible colonies, while some Campylobacter spp. are biochemically unreactive. Alternatively, immunoassays and molecular tools based on ELISA and PCR have shown much improved rapidity and applicability. These techniques have better detection limits, offer accurate phylogenetic identity and classification, and allow for simultaneous detection of multiple pathogens provided several antibodies or primers are employed. Although ELISA and PCR have found extensive use in clinical diagnostics, they are costly and labour-intensive, require trained personnel and have long turnaround times necessitated by pretreatment and enrichment steps. PCR in particular is prone to false positives due to cross-contamination from the environment and during sample collection, omits the phenotypic characteristics of pathogens and fails to differentiate viable from nonviable infectious pathogens, since genetic material is present in both live and dead microbial cells.
Moreover, the analytical merit of PCR based on exponential amplification of genetic materials can be a devastating disadvantage in case of sample contamination . To bridge the gap, intrinsically robust SERS protocols with little or no sample preparation could be applied widely for reliable and noninvasive biosensing of microbial infections. The objective of using SERS is to obtain unique "whole-organism" metabolic fingerprints to discern intrinsic biochemical content and dynamics of microbial cells. The differential characteristic SERS frequencies of chemical bonds are used to identify, discriminate and define phenotypes of infectious microbes at species and strain levels in just a few minutes, as demonstrated previously . Of the successful applications of SERS for microbial diagnostics, the report by Jarvis and Goodacre was the first to study urinary tract infection (UTI) . In this study, 21 clinical isolates responsible for UTI, including Escherichia coli, Enterococcus spp., Klebsiella pneumoniae and Proteus mirabilis, were identified and classified accurately at species and strain level without recourse to DNA methods. To solve reproducibility problems linked to the simple mixing method applied by Jarvis and other authors , Shanmukh et al. developed robust clinical grade fabricated solid Ag nanorods to classify molecular signatures of respiratory syncytial virus (RSV) associated with bronchiolitis , whilst Chen et al. proposed aggregated Au/AgNP covered SiO 2 solid substrates to detect phenotypes of resistant etiological agents of sexually transmitted infections (STIs) in just 10 s . In a further analysis, SERS was extended to diagnosis of aggressive fungal diseases, in order to discriminate among dermatophyte fungi strains responsible for mycotic infections using reproducible commercial Ag-coated NPs . 
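The idea of classifying "whole-organism" SERS fingerprints at species level can be sketched with a toy nearest-centroid classifier. This is a minimal illustration with synthetic three-channel "spectra", not the chemometric pipelines (e.g., PCA followed by discriminant analysis) used in the cited studies:

```python
import numpy as np

def normalise(spectra):
    """Scale each spectrum (row) to unit length, so that classification
    reflects the band pattern rather than absolute signal intensity."""
    spectra = np.asarray(spectra, dtype=float)
    return spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

def classify(train, labels, query):
    """Assign a query spectrum to the class whose mean training
    fingerprint it most resembles (cosine similarity)."""
    train = normalise(train)
    query = normalise([query])[0]
    labels = np.asarray(labels)
    best, best_score = None, -np.inf
    for species in np.unique(labels):
        centroid = train[labels == species].mean(axis=0)
        score = float(np.dot(query, centroid / np.linalg.norm(centroid)))
        if score > best_score:
            best, best_score = species, score
    return best
```

Real spectra have hundreds of wavenumber channels and need baseline correction before any such comparison; the structure of the decision, however, is the same.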
Recently, novel SERS employing in situ synthesis of NPs, wherein microbial cells act as templates for NP nucleation, has shown much improved reproducibility of the detected microbial SERS signal. In this method, when active microbial cells are soaked in an oxidant (e.g., AgNO3), metal cations are attracted to anions within cellular biomolecules, which act as coordination centres. A reductant solution (e.g., NaBH4) is then introduced to reduce the coordinated metal cations and form cell wall-bound monodispersed NPs. Intracellular deposition of NPs can also be achieved when the oxidant and reductant are added in reverse order. This approach enhances the clinical applicability of SERS, as demonstrated for the accurate identification and classification of clinical isolates of E. coli, Bacillus spp., opportunistic Staphylococcus epidermidis, Aspergillus fumigatus and Rhizomucor pusillus, and for probing microbial cell functionality. Intriguingly, an in situ SERS method proved to be very sensitive and effective for the susceptibility assessment of clinical pathogens against common first-line antibiotic treatments, complementing previous efforts. AMR is considered one of the biggest public health threats, whereby microbes elude the antibiotics designed to kill them. AMR makes it difficult or impossible to treat some invasive microbial infections. To improve the management of endemic AMR, Zhou et al. developed a sensitive SERS biosensor to rapidly detect biochemical signatures and the status of bacterial infections and AMR. This SERS biosensor not only enumerated cells to assess infection severity but also distinguished live (resistant) from dead (sensitive) E. coli cells challenged with antibiotics, based on differential spectral signatures as illustrated in Figure 3; a result which is practically impossible to achieve with DNA tools like PCR.
Whilst lengthy conventional culturing methods can also differentiate live and dead cells for cultivable organisms, they give false negative results for viable but non-culturable pathogens. The practical advantage of SERS here is that, unlike dead cells, metabolically active microbes incorporate NPs into biomolecules in the cell envelope or cytoplasm more effectively, providing the larger and distinctive EF shown in Figure 3. This provides access to vital information about the microbial viability state, which is invaluable for tracking the response of causative microbes to prescribed antimicrobial therapy. The short time in which AMR was detected in this study may help to prevent or cure opportunistic infections during surgeries and organ transplants, and to avoid the use of broad-spectrum antibiotics, which contribute to the increase in AMR. In the coming years, in situ SERS needs validation through multicentre tests for AMR in a large cohort of clinical isolates and antibiotics using standardised protocols. Also, the Raman vibrational modes for heavy water (D2O) are well established, and several authors have probed the general metabolic activity of microbial community members capable of degrading environmental contaminants. Similarly, D2O can be incorporated into AMR studies to probe the metabolic activity of sensitive and resistant microbes for in-depth assessment and elucidation of the bactericidal and bacteriostatic effects of common and novel antibiotics. Since SERS spectral data provide information on the structural properties of biomolecules, probing cells with D2O in a time-course fashion may identify novel (multi)resistant prokaryotes and offer kinetic and mechanistic insights into AMR and treatment prognosis. In addition, it may reveal valuable biochemical changes in pathogens due to the regulation of drug metabolism, to guide next-generation therapeutics.
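As a sketch of how such D2O time-course data might be reduced: one can integrate the C-D stretch band (roughly 2040-2300 cm⁻¹ in the Raman deuterium-labelling literature) relative to the C-H band as a metabolic-activity proxy. The exact window limits and the synthetic spectra below are assumptions for illustration, to be tuned per instrument and study:

```python
import numpy as np

# Approximate band windows (cm^-1); assumed values, tune per instrument.
CD_WINDOW = (2040.0, 2300.0)  # C-D stretch, grows as D is incorporated
CH_WINDOW = (2800.0, 3100.0)  # C-H stretch

def band_intensity(wavenumbers, intensities, window):
    """Summed intensity inside a wavenumber window."""
    lo, hi = window
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    return float(intensities[mask].sum())

def cd_ratio(wavenumbers, intensities):
    """CD/(CD+CH): metabolically active cells build deuterium into newly
    synthesised biomolecules, so this ratio rises over a D2O labelling
    time course; inactive (antibiotic-killed) cells stay low."""
    cd = band_intensity(wavenumbers, intensities, CD_WINDOW)
    ch = band_intensity(wavenumbers, intensities, CH_WINDOW)
    return cd / (cd + ch)
```

Tracking this ratio per cell and per time point is one simple way to separate resistant from sensitive populations in a labelling experiment.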
The label-free SERS approach can also be complemented by label-based SERS to allow highly multiplexed analysis of pathogens in patient samples. Renishaw Diagnostics recently developed a cutting-edge, robust SERS-based multiplex tool called the Fungiplex assay. This is perhaps the greatest milestone in recent years to have pushed SERS toward routine clinical use.
In its trial phase, the Fungiplex assay reproducibly detected and identified blood-derived DNA from 12 Candida and Aspergillus pathogens ex vivo in a single run. Following this, several researchers have launched ultrasensitive assays of clinical quality for the isolation and quantitative detection of antibiotic-resistant bacteria at 10 cfu/mL, DNA of meningitis-associated pathogens at ~21 pM, viruses at 10 pg/mL in serum and the deadly Ebola virus in blood, in 3-plexed analysis, confirming the potential of SERS to meet workflow demands for real-time PoC diagnostics. When coupled to rapidly emerging novel devices (lab-on-a-chip, microbial and nano-barcoding), SERS has better stability, ruggedness and analytical performance. This makes SERS undoubtedly exploitable for in-field investigations of multiple conflicting coinfections in vivo, especially in remote areas which are highly susceptible to outbreaks of communicable diseases.
SERS in Breath Analysis
Breathomics is defined as the metabolomics of exhaled air. Human exhaled breath predominantly consists of water, oxygen, nitrogen, nitric oxide and carbon dioxide. In addition, it contains thousands of small gaseous volatile organic compounds (VOCs), which are produced by numerous and highly regulated metabolic reactions in various metabolic pathways. Since the structural identity and abundance of endogenous VOCs vary depending on the health status of an individual, the field of breathomics has much potential as a powerful noninvasive platform for biomarker discovery. Moreover, since each patient produces unique characteristic VOC signatures for specific illnesses, breathomics will potentially play a crucial role in personalised medicine. Despite attracting increasing attention, the progress of breathomics research for clinical diagnostics is relatively slow, partly due to limitations associated with the capture of breath and the selective and sensitive detection of trace quantities of VOCs.
Electronic nose (eNose) technology is an emerging portable tool for pattern recognition in the composite responses of mixed VOCs (breathprints). Although eNose provides breathprints that can be used to detect asthma and chronic obstructive pulmonary disease (COPD), it does not identify individual VOC markers within a complex mixture. Ion mobility spectrometry (IMS) is a relatively sensitive method, though it is destructive, may not be ideal for complex VOC mixtures and its long-term reproducibility is yet to be demonstrated. Currently, GC-MS is the gold standard for global profiling of exhaled VOCs related to abnormal metabolism and extrapolated to cancers, diabetes, etc. However, if the goal is to develop a rapid analytical tool for online diagnostics and/or offline use in real time at the PoC, especially in low-income countries, then GC-MS is perhaps not convenient. In addition, as partly stated earlier, GC-MS requires professional operators, lengthy procedures, and high instrument maintenance and consumables costs. This is where SERS prevails in clinical practice, offering low-cost but quicker detection of disease-specific singular or multiplex VOC targets directly in exhaled breath at the point of need. Since both exogenous and endogenous VOCs are present at trace concentrations in exhaled breath, SERS is an exciting prospect, as it may allow for the detection of any VOCs adsorbed onto nanomaterials, down to the single-molecule level. Although SERS as a clinical diagnostic tool for breath analysis is currently at the budding stage, the results obtained so far show unprecedented potential. Initial proof-of-concept work aimed to detect low amounts of pure acetone and ethanol vapour at LODs of 3.7 pg and 1.7 pg, respectively, as singular or duplex markers of glucose level in plasma.
Nevertheless, due to the complex pathogenesis associated with breath-related infections, where multiple biochemical and cellular processes are linked to innate and adaptive mechanisms, human exhaled breath is chemically complex. Hence, VOC components should be 'sorted out' to improve the pathophysiological assessment of various diseases. An interesting study by Chen et al. initially applied high-throughput GC-MS, to capture this biological complexity, and SERS in parallel, to discriminate early and advanced gastric cancer (EGC and AGC, respectively) from healthy controls. Using GC-MS and solid-phase microextraction, 14 characteristic volatile metabolites were screened and identified in human exhaled breath. SERS based on in situ synthesised AuNPs coated on graphene oxide then detected individual profiles for all 14 VOCs rapidly and accurately in both simulated and real exhaled breath sampled from gastric cancer patients. The PCA scores plot of SERS successfully discriminated healthy, EGC and AGC subjects in 200 breath samples at a diagnostic sensitivity of >92% and specificity of >83%, very comparable to that of GC-MS, though quantitative characterisation was not demonstrated. It is worth noting that SERS profiles were unaffected by the age and gender of patients, illustrating the ability of SERS to overcome the effects of between-patient confounding factors in routine clinical tests. A key prerequisite for successful SERS application in breath analysis is the design of optically sensitive probes that 'arrest' volatile metabolites and provide a large surface area for their adsorption, since VOCs diffuse quickly according to Graham's law. Several SERS substrates with improved capture properties have been proposed, including bimetallic nanogaps, dendritic nanocrystals and 3D multilayered nanowires. However, a SERS biosensor designed by Qiao et al.
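The diagnostic sensitivity and specificity figures quoted above are simple functions of the confusion counts between predicted and true classes. A short sketch (the labels below are illustrative, not data from the cited study):

```python
def sensitivity_specificity(y_true, y_pred, positive="cancer"):
    """Diagnostic sensitivity (true-positive rate) and specificity
    (true-negative rate) from paired true/predicted class labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```

In a multiclass setting such as healthy/EGC/AGC, these metrics are typically reported one-versus-rest for each class.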
(Figure 4), in which aggregated spherical Au superparticles (GSPs) coated with ZIF-8 (so-called GSP@ZIF-8) form a 3D core-shell structure acting as both a SERS-active substrate and a metal-organic framework (MOF), is quite attractive. The notable merit of the GSP@ZIF-8 3D structure sensor is its ability to slow down the flow rate to promote adsorption, retention and equilibration of VOCs, resulting in increased enrichment and reproducibility. Clearly, the GSP@ZIF-8 biosensor is appealing and feasible for clinical analysis of specific lung cancer VOC biomarkers in real time at the point of need.
Recently, SERS has also become popular for the identification of validated headspace VOCs produced by specific invasive pathogens. Bacterial VOCs within exhaled breath or culture headspace are distinct, and their selective recognition serves as a biomarker for chronic illnesses. The chemical structures and identities of a large number of microbial VOC (mVOC) biomarkers produced by human pathogens are available in the mVOC database. Hydrogen cyanide emitted by Pseudomonas aeruginosa associated with cystic fibrosis, isovaleric acid for Staphylococcus aureus, and ethyl acetate and indole linked to sepsis have all been detected and quantified accurately by SERS, supplementing recently published results using MS. Finally, SERS has been developed to probe the headspace of bacterial cultures to accurately differentiate viable cells from dead bacteria after treatment with the antibiotic gentamicin. Henceforth, such VOC biomarkers will be identifiable in new patients with the same infections. It is very clear here that SERS offers dual benefits for healthcare: the rapid and sensitive detection of microbial pathogens and their phenotypic characteristics down to a single cell, together with the corresponding VOC molecular signatures as simultaneous confirmatory results.
Conclusions and Future Outlook
In this review, we have discussed recent applications of SERS and shown that this method is a versatile physicochemical tool that can be used to extract diagnostic and quantitative information on cancers and microbial infections, as well as respiratory disease. Both direct and indirect SERS play vital roles in the targeted detection of specific biomarkers in cells or human biofluids, and their use has been extended to several diseases in need of urgent attention. However, to harness its full analytical capabilities and to accelerate the translation of SERS to the clinic, there are pending technical and methodological issues that need urgent improvement. We know very well that SERS peak intensity or area is linearly proportional to the concentration of sample analytes, which facilitates accurate quantitative detection of diseases. However, reproducibility and linearity are lost due to saturation of NP surfaces in label-free SERS (whereby some molecules reside outside the LSPR), instability of the radiation source or inconsistent cumulative aggregation. To overcome this problem in quantitative analysis, a suitable internal standard (IS) can be integrated so that characteristic biomarker bands are normalised to a distinct IS peak. This reduces spectral signal fluctuations, since the analyte and internal standard spectral bands are affected in exactly the same way, especially when isotopologues are used, as illustrated recently. Alternatively, the standard addition method can be used, as this also accounts for any sample background, as shown for the quantification of uric acid in the urine of pregnant individuals. In this way, quantitative detection and prediction of disease/prognostic markers in a clinical environment would be more accurate and reliable.
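The internal-standard idea can be sketched numerically: dividing the analyte band intensity by a co-measured IS band cancels multiplicative fluctuations (laser drift, aggregation state) that affect both bands equally, before a calibration line is fitted. The numbers below are synthetic, and this is a least-squares sketch rather than a published protocol:

```python
import numpy as np

def is_calibration(conc, analyte_band, is_band):
    """Fit ratio = slope*conc + intercept, where ratio is the analyte band
    intensity divided by the internal-standard band from the same spectrum.
    The division removes fluctuations shared by both bands."""
    ratio = np.asarray(analyte_band, float) / np.asarray(is_band, float)
    slope, intercept = np.polyfit(np.asarray(conc, float), ratio, 1)
    return slope, intercept

def predict(slope, intercept, analyte_band, is_band):
    """Invert the calibration for an unknown sample."""
    return (analyte_band / is_band - intercept) / slope
```

Without the IS division, the run-to-run enhancement fluctuation would corrupt the fit; with it, the calibration is recovered exactly from the synthetic data.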
One other area which perhaps needs more attention, to simplify SERS interpretation, is the use of functionalised NPs (label-based SERS) for the selective detection of specific markers within complex backgrounds. It is worth noting that reporters and capture species prevent undesirable aggregation of NPs and resolve multiple known biomarkers simultaneously without any prior separation or enrichment procedures. The use of SERS in parallel with multiparameter tools is also showing promising trends; it is clear that SERS has shortened the time between diagnosis and treatment and improved the specificity and quantitative analysis of biomarkers identified by chromatography and spectrometry. Moreover, several problematic issues due to nonspecific binding or cross-reactivity among untargeted species, NPs and capture elements, especially polyclonal antibodies, which hampered detection of desirable targets, have been addressed in recent years. Generally, a series of washing steps and biocompatible coating materials such as polymers and silica, applied as protective layers or core/encapsulation shells, have been used successfully. The main objective here is to prevent leaching of Raman reporters and capture probes, to enhance specificity and to shield interfering contaminants from accessing NP surfaces and the paratopes of capture elements. Another fundamental area of focus for the reliable interpretation of biomedical SERS in future is spectral band assignment. For bacterial SERS, metabolomics and isotopic labelling experiments have played an indispensable role; we now understand the significant contributions of purines and pyrimidines to characteristic SERS spectral bands. Nonetheless, there is still a pressing need to formulate standard operating protocols and a unified library of reference biomaterials for SERS to allow for repeatability and an unambiguous, detailed understanding of molecular dynamics and metabolic pathways.
In light of this, it should be emphasised that only when these pending problems are effectively overcome will SERS be convincingly relevant and acceptable to clinicians. For now, the SERS community should aim to provide experimental details including sample preparation, NP synthesis/properties (e.g., plasmonics and morphology), how NPs and analytes are brought into contact, the excitation wavelength used, signal acquisition parameters and the data processing employed, as part of the minimum reporting standards. Lastly, theranostics is an interesting emerging area of clinical diagnostics where SERS will play a crucial role, as revealed by recent reports. Theranostics combines the diagnosis of disease and therapeutic treatment simultaneously. Ag- and Au-NPs have high optical activity and a large surface area for functionalisation, so they are attractive for the diagnosis of specific diseases, photothermal therapy and TDM. This means that NPs can provide large optical enhancement and rapid quantitative assessment of disease at trace amounts whilst delivering therapeutic drugs to intended targets. Due to their good biocompatibility, AuNPs can migrate across cellular membranes into the cytoplasm to give a snapshot of intracellular chemical processes and metabolic flux distribution without causing significant damage. When coupled to automated microfluidics or LoC devices and deep machine learning, SERS would permit in-, off- or online platforms to monitor therapeutic drug activity and patients' responses to new drugs in real time. However, disease diagnosis by SERS should be validated carefully and comprehensively, which may include confirmatory inputs from the techniques presented in Table 1. This is aimed at avoiding costly and risky unpredictable analytics like the blood testing system which led to the 'Theranos scandal', perhaps the biggest science saga of the 21st century to date.
It is clearly evident that the theranostics-SERS interface, along with MVA, will potentially revolutionise patient care, AMR biochemistry and drug discovery by expanding the diagnostics and therapeutics toolbox for clinicians in the foreseeable future. We are hopeful this review will contribute to the ongoing efforts to translate vibrational spectroscopy to the clinic, spearheaded by the international society for clinical spectroscopy (CLIRSPEC).

Table 1. A summary of different types of enhanced Raman scattering techniques commonly used in disease diagnostics.

Resonance Raman (RR) scattering: RR occurs when the incident frequency excites electronic energy (EE) states of specific molecules, e.g., aromatic chromophores, and this aids the detection of pathogens. If the EE level is excited by a laser frequency in the ultraviolet (UV) region, the technique is UVRR, which can be susceptible to photodissociation.

Spatially offset Raman spectroscopy (SORS): SORS collects distinctive chemical information and images from deep subsurfaces of a sample (including analytes in opaque containers). SORS spectral signals are recorded when backscattered Raman photons are collected at points spatially offset (∆x) from the point of illumination (x). Negligible photodissociation, allowing for noninvasive deep medical diagnostics.

Surface-enhanced spatially offset Raman spectroscopy (SESORS): SORS combined with NPs that enable SERS; that is to say, subsurface information is measured from molecules in the vicinity of, or chemically bonded to, SERS substrates. Negligible fluorescence and excellent background contrast, specificity and sensitivity, with improved detection limits for various disease markers.

Surface-enhanced spatially offset resonance Raman spectroscopy (SESORRS): A variant of SESORS where the incident frequency matches the EE of molecules near SERS-active substrates. SESORRS increases spectral signals further by orders of magnitude to provide extra biochemical selectivity and sensitivity, theoretically better than SESORS, as demonstrated by Fay et al. for breast cancer detection.

Tip-enhanced Raman spectroscopy (TERS): Similar to the SERS phenomenon, but here a single SERS-active AFM probe, whose sharp pointed apex (tip) is covered in NPs, scans across biomolecules on a sample surface, resulting in highly confined plasmonic enhancement (electrostatic lightning-rod and SPR effects). Improves lateral spatial resolution to as low as 10 nm, about the diameter of the tip probe. Achieves single-molecule detection and discrimination of bacterial pathogens.

Stimulated Raman scattering (SRS): Two lasers provide a pump (ωp) beam (similar to conventional Raman) and a Stokes (ωs) beam that intersect at the sample surface. The energy difference (∆ω = ωp − ωs) between the beams matches the frequency (Ωvib) of molecular bond vibrations, leading to a larger scattering cross-section as a consequence of stimulated Raman excitation. No nonresonance background, making SRS ideal for in vivo medical imaging to improve disease diagnostics.

Coherent anti-Stokes Raman scattering (CARS): Like SRS, CARS employs two lasers of frequencies ωp and ωs. For molecular bonds whose Ωvib coincides with ∆ω (as in SRS), anti-Stokes (as) lines are produced at frequency ωas = 2ωp − ωs. Thus, analytes are excited twice, from the ground to the first and second excited states, before relaxing back to the ground state. Though prone to nonresonance background effects, which may limit the quantification of target analytes, CARS is effectively applied for disease detection, including differential diagnostics of cancers.

Conflicts of Interest: The authors declare no conflict of interest.
Inhibition of ULK1 and Beclin1 by an α-herpesvirus Akt-like Ser/Thr kinase limits autophagy to stimulate virus replication Significance As a catabolic program that maintains homeostasis during adversity, autophagy is an immune defense restricting pathogenesis of viruses, including HSV-1. While HSV-1 ICP34.5 interferes with host Beclin1 to suppress autophagy and support virus replication in neurons, whether autophagy is a cell type-specific antiviral response or broadly limits HSV-1 replication in nonneuronal cells is unknown. We show that the HSV-1 Us3 Ser/Thr kinase antagonizes autophagy in nonneuronal cells. Phosphorylation of autophagy regulators ULK1 and Beclin1 was Us3 dependent and Beclin1 was identified as a direct Us3 substrate. Finally, replication of Us3-deficient HSV-1 was enhanced by depleting ULK1 and Beclin1. This establishes that autophagy limits HSV-1 replication in nonneuronal cells and reveals a new function for the Us3 kinase encoded by α-herpesviruses. Autophagy is a powerful host defense that restricts herpes simplex virus-1 (HSV-1) pathogenesis in neurons. As a countermeasure, the viral ICP34.5 polypeptide, which is exclusively encoded by HSV, antagonizes autophagy in part through binding Beclin1. However, whether autophagy is a cell-type–specific antiviral defense or broadly restricts HSV-1 reproduction in nonneuronal cells is unknown. Here, we establish that autophagy limits HSV-1 productive growth in nonneuronal cells and is repressed by the Us3 gene product. Phosphorylation of the autophagy regulators ULK1 and Beclin1 in virus-infected cells was dependent upon the HSV-1 Us3 Ser/Thr kinase. Furthermore, Beclin1 was unexpectedly identified as a direct Us3 kinase substrate. Although disabling autophagy did not impact replication of an ICP34.5-deficient virus in primary human fibroblasts, depleting Beclin1 and ULK1 partially rescued Us3-deficient HSV-1 replication. 
This shows that autophagy restricts HSV-1 reproduction in a cell-intrinsic manner in nonneuronal cells and is suppressed by multiple, independent viral functions targeting Beclin1 and ULK1. Moreover, it defines a surprising autophagy-regulating role for the Us3 kinase, which, unlike ICP34.5, is widely encoded by members of the α-herpesvirus subfamily.
/*! * Copyright (c) Microsoft Corporation. All rights reserved. * Licensed under the MIT License. */ import { expect } from 'chai'; import { v4 as uuidv4 } from 'uuid'; import { DetachedSequenceId, NodeId, TraitLabel } from '../Identifiers'; import { Change, StableRange, Side, ChangeType, ConstraintEffect, EditResult, Insert, StablePlace, } from '../PersistedTypes'; import { initialTree } from '../InitialTree'; import { Transaction } from '../Transaction'; import { makeEmptyNode, testTrait, simpleTreeSnapshot, left, leftTraitLocation, initialSnapshot, right, rightTraitLocation, leftTraitLabel, rightTraitLabel, } from './utilities/TestUtilities'; describe('Transaction', () => { describe('Constraints', () => { it('can be met', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: StableRange.all(testTrait), effect: ConstraintEffect.InvalidAndDiscard, type: ChangeType.Constraint, }); expect(transaction.result).equals(EditResult.Applied); }); const nonExistentNode = '57dd2fc4-72fa-471c-9f37-70010d31b59c' as NodeId; const invalidStableRange: StableRange = { start: { side: Side.After, referenceSibling: nonExistentNode }, end: { side: Side.Before }, }; it('can be unmet', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: invalidStableRange, effect: ConstraintEffect.InvalidAndDiscard, type: ChangeType.Constraint, }); expect(transaction.result).equals(EditResult.Invalid); }); it('effect can apply anyway', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: invalidStableRange, effect: ConstraintEffect.ValidRetry, type: ChangeType.Constraint, }); expect(transaction.result).equals(EditResult.Applied); }); it('length can be met', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: StableRange.all(testTrait), effect: ConstraintEffect.InvalidAndDiscard, type: ChangeType.Constraint, 
length: 0, }); expect(transaction.result).equals(EditResult.Applied); }); it('length can be unmet', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: StableRange.all(testTrait), effect: ConstraintEffect.InvalidAndDiscard, type: ChangeType.Constraint, length: 1, }); expect(transaction.result).equals(EditResult.Invalid); }); it('parent can be met', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: StableRange.all(testTrait), effect: ConstraintEffect.InvalidAndDiscard, type: ChangeType.Constraint, parentNode: initialTree.identifier, }); expect(transaction.result).equals(EditResult.Applied); }); it('parent can be unmet', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: StableRange.all(testTrait), effect: ConstraintEffect.InvalidAndDiscard, type: ChangeType.Constraint, parentNode: nonExistentNode, }); expect(transaction.result).equals(EditResult.Invalid); }); it('label can be met', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: StableRange.all(testTrait), effect: ConstraintEffect.InvalidAndDiscard, type: ChangeType.Constraint, label: testTrait.label, }); expect(transaction.result).equals(EditResult.Applied); }); it('label can be unmet', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ toConstrain: StableRange.all(testTrait), effect: ConstraintEffect.InvalidAndDiscard, type: ChangeType.Constraint, label: '7969ee2e-5418-43db-929a-4e9a23c5499d' as TraitLabel, // Arbitrary label not equal to testTrait.label }); expect(transaction.result).equals(EditResult.Invalid); }); }); describe('SetValue', () => { it('can be invalid', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ nodeToModify: '7969ee2e-5418-43db-929a-4e9a23c5499d' as NodeId, // Arbitrary id not equal to initialTree.identifier payload: 
{ base64: 'eg==' }, // Arbitrary valid base64 string. type: ChangeType.SetValue, }); expect(transaction.result).equals(EditResult.Invalid); }); it('can change payload', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ nodeToModify: initialTree.identifier, payload: { base64: 'eg==' }, // Arbitrary valid base64 string. type: ChangeType.SetValue, }); expect(transaction.result).equals(EditResult.Applied); expect(transaction.view.getSnapshotNode(initialTree.identifier).payload?.base64).equals('eg=='); }); it('can set empty payload', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ nodeToModify: initialTree.identifier, payload: { base64: '' }, type: ChangeType.SetValue, }); expect(transaction.result).equals(EditResult.Applied); expect(transaction.view.getSnapshotNode(initialTree.identifier).payload?.base64).equals(''); }); it('can clear an unset payload', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange(Change.clearPayload(initialTree.identifier)); expect(transaction.result).equals(EditResult.Applied); expect({}.hasOwnProperty.call(transaction.view.getSnapshotNode(initialTree.identifier), 'payload')).false; expect({}.hasOwnProperty.call(transaction.view.getChangeNode(initialTree.identifier), 'payload')).false; }); it('can clear a set payload', () => { const transaction = new Transaction(initialSnapshot); transaction.applyChange({ nodeToModify: initialTree.identifier, payload: { base64: '' }, type: ChangeType.SetValue, }); expect(transaction.result).equals(EditResult.Applied); expect(transaction.view.getSnapshotNode(initialTree.identifier).payload).not.undefined; transaction.applyChange(Change.clearPayload(initialTree.identifier)); expect(transaction.result).equals(EditResult.Applied); expect({}.hasOwnProperty.call(transaction.view.getSnapshotNode(initialTree.identifier), 'payload')).false; 
expect({}.hasOwnProperty.call(transaction.view.getChangeNode(initialTree.identifier), 'payload')).false; }); }); describe('Insert', () => { const buildId = 0 as DetachedSequenceId; const builtNodeId = uuidv4() as NodeId; const newNode = makeEmptyNode(builtNodeId); it('can be malformed', () => { const transaction = new Transaction(simpleTreeSnapshot); transaction.applyChange(Change.build([newNode], buildId)); transaction.applyChange( Change.insert( // Non-existent detached id 1 as DetachedSequenceId, { referenceSibling: initialTree.identifier, side: Side.After } ) ); expect(transaction.result).equals(EditResult.Malformed); }); it('can be invalid', () => { const transaction = new Transaction(simpleTreeSnapshot); transaction.applyChange(Change.build([newNode], buildId)); transaction.applyChange( Change.insert( buildId, // Arbitrary id not present in the tree { referenceSibling: '7969ee2e-5418-43db-929a-4e9a23c5499d' as NodeId, side: Side.After } ) ); expect(transaction.result).equals(EditResult.Invalid); }); [Side.Before, Side.After].forEach((side) => { it(`can insert a node at the ${side === Side.After ? 'beginning' : 'end'} of a trait`, () => { const transaction = new Transaction(simpleTreeSnapshot); transaction.applyChanges( Insert.create( [newNode], side === Side.After ? StablePlace.atStartOf(leftTraitLocation) : StablePlace.atEndOf(leftTraitLocation) ) ); expect(transaction.view.getTrait(leftTraitLocation)).deep.equals( side === Side.After ? [builtNodeId, left.identifier] : [left.identifier, builtNodeId] ); }); it(`can insert a node ${side === Side.Before ? 'before' : 'after'} another node`, () => { const transaction = new Transaction(simpleTreeSnapshot); transaction.applyChanges(Insert.create([newNode], { referenceSibling: left.identifier, side })); expect(transaction.view.getTrait(leftTraitLocation)).deep.equals( side === Side.Before ? 
[builtNodeId, left.identifier] : [left.identifier, builtNodeId] ); }); }); }); describe('Build', () => { it('can be malformed', () => { const transaction = new Transaction(initialSnapshot); // Build two nodes with the same detached id transaction.applyChange(Change.build([makeEmptyNode()], 0 as DetachedSequenceId)); expect(transaction.result).equals(EditResult.Applied); transaction.applyChange(Change.build([makeEmptyNode()], 0 as DetachedSequenceId)); expect(transaction.result).equals(EditResult.Malformed); }); it('can be invalid', () => { const transaction = new Transaction(initialSnapshot); // Build two nodes with the same identifier const identifier = uuidv4() as NodeId; transaction.applyChange(Change.build([makeEmptyNode(identifier)], 0 as DetachedSequenceId)); expect(transaction.result).equals(EditResult.Applied); transaction.applyChange(Change.build([makeEmptyNode(identifier)], 1 as DetachedSequenceId)); expect(transaction.result).equals(EditResult.Invalid); }); it('can build a detached node', () => { const transaction = new Transaction(initialSnapshot); const identifier = uuidv4() as NodeId; const newNode = makeEmptyNode(identifier); transaction.applyChange(Change.build([newNode], 0 as DetachedSequenceId)); expect(transaction.result).equals(EditResult.Applied); expect(transaction.view.hasNode(identifier)).is.true; expect(transaction.view.getParentSnapshotNode(identifier)).is.undefined; expect(transaction.view.getChangeNode(identifier)).deep.equals(newNode); }); it("is malformed if detached node id doesn't exist", () => { const transaction = new Transaction(initialSnapshot); const editNode = 0 as DetachedSequenceId; transaction.applyChange({ destination: 1 as DetachedSequenceId, source: [editNode], type: ChangeType.Build, }); expect(transaction.result).equals(EditResult.Malformed); }); }); describe('Detach', () => { it('can be malformed', () => { const transaction = new Transaction(simpleTreeSnapshot); // Supplied StableRange is malformed 
transaction.applyChange( Change.detach({ start: { referenceTrait: leftTraitLocation, referenceSibling: left.identifier, side: Side.Before }, end: StablePlace.after(right), }) ); expect(transaction.result).equals(EditResult.Malformed); }); it('can be invalid', () => { const transaction = new Transaction(simpleTreeSnapshot); // Start place is before end place transaction.applyChange( Change.detach({ start: StablePlace.atEndOf(leftTraitLocation), end: StablePlace.atStartOf(leftTraitLocation), }) ); expect(transaction.result).equals(EditResult.Invalid); }); it('can delete a node', () => { const transaction = new Transaction(simpleTreeSnapshot); transaction.applyChange(Change.detach(StableRange.only(left))); expect(transaction.view.hasNode(left.identifier)).is.false; }); }); describe('Composite changes', () => { it('can form a node move', () => { const transaction = new Transaction(simpleTreeSnapshot); const detachedId = 0 as DetachedSequenceId; transaction.applyChange(Change.detach(StableRange.only(left), detachedId)); transaction.applyChange(Change.insert(detachedId, StablePlace.after(right))); expect(transaction.view.getTrait(leftTraitLocation)).deep.equals([]); expect(transaction.view.getTrait(rightTraitLocation)).deep.equals([right.identifier, left.identifier]); }); it('can form a wrap insert', () => { // A wrap insert is an edit that inserts a new node between a subtree and its parent atomically. 
// Ex: given A -> B -> C, a wrap insert of D around B would produce A -> D -> B -> C const transaction = new Transaction(simpleTreeSnapshot); const leftNodeDetachedId = 0 as DetachedSequenceId; const parentDetachedId = 1 as DetachedSequenceId; transaction.applyChange(Change.detach(StableRange.only(left), leftNodeDetachedId)); // This is node D, from the example const wrappingParentId = uuidv4() as NodeId; const wrappingTraitLabel = 'wrapTrait' as TraitLabel; transaction.applyChange( Change.build( [ { ...makeEmptyNode(wrappingParentId), traits: { [wrappingTraitLabel]: [leftNodeDetachedId] }, // Re-parent left under new node }, ], parentDetachedId ) ); transaction.applyChange(Change.insert(parentDetachedId, StablePlace.atStartOf(leftTraitLocation))); const leftTrait = transaction.view.getTrait(leftTraitLocation); expect(leftTrait).deep.equals([wrappingParentId]); const wrappingTrait = transaction.view.getTrait({ parent: wrappingParentId, label: wrappingTraitLabel }); expect(wrappingTrait).deep.equals([left.identifier]); }); it('can build and insert a tree that contains detached subtrees', () => { const transaction = new Transaction(simpleTreeSnapshot); const leftNodeDetachedId = 0 as DetachedSequenceId; const rightNodeDetachedId = 1 as DetachedSequenceId; const detachedIdSubtree = 2 as DetachedSequenceId; transaction.applyChange(Change.detach(StableRange.only(left), leftNodeDetachedId)); transaction.applyChange(Change.detach(StableRange.only(right), rightNodeDetachedId)); const detachedSubtree = { ...makeEmptyNode(), traits: { [leftTraitLabel]: [leftNodeDetachedId], [rightTraitLabel]: [rightNodeDetachedId], }, }; transaction.applyChange(Change.build([detachedSubtree], detachedIdSubtree)); transaction.applyChange(Change.insert(detachedIdSubtree, StablePlace.atStartOf(leftTraitLocation))); expect(transaction.view.getTrait(rightTraitLocation)).deep.equals([]); expect(transaction.view.getTrait(leftTraitLocation)).deep.equals([detachedSubtree.identifier]); const 
insertedSubtree = transaction.view.getChangeNode(detachedSubtree.identifier); expect(insertedSubtree.traits).deep.equals({ [leftTraitLabel]: [left], [rightTraitLabel]: [right], }); }); it('can build and insert a tree with the same identity as that of a detached subtree', () => { const transaction = new Transaction(simpleTreeSnapshot); transaction.applyChange(Change.detach(StableRange.only(left))); const idOfDetachedNodeToInsert = 1 as DetachedSequenceId; expect(transaction.view.getTrait(leftTraitLocation)).deep.equals([]); transaction.applyChange(Change.build([makeEmptyNode(left.identifier)], idOfDetachedNodeToInsert)); transaction.applyChange(Change.insert(idOfDetachedNodeToInsert, StablePlace.atStartOf(leftTraitLocation))); expect(transaction.view.getTrait(leftTraitLocation)).deep.equals([left.identifier]); }); }); });
L‐6: Late‐News Paper: Development of the Novel Transflective LCD Module Using Super‐Fine‐TFT Technology By adopting a new planarization-layer structure on the rough reflector and designing a new optical compensation with a large cell-gap margin, we developed a wide-viewing-angle transflective LCD module based on our proprietary IPS technology (Super‐Fine‐TFT technology). Measurements of the viewing-angle characteristics show that the developed transflective IPS‐LCD surpasses the VA mode in transmissive performance.
/** * This call provides the server with the opportunity to close this session and * redirect the PSU to the TPP or close the application window. * <p> * In any case, the session of the user will be closed. * * @param encryptedPaymentId ID of Payment * @param authorisationId ID of related Payment Authorisation * @return redirect location header with TPP url */ @GetMapping(path = "/{encryptedPaymentId}/authorisation/{authorisationId}/done") @ApiOperation(value = "Close consent session", authorizations = @Authorization(value = "apiKey"), notes = "This call provides the server with the opportunity to close this session and " + "redirect the PSU to the TPP or close the application window.") ResponseEntity<PaymentAuthorizeResponse> pisDone( @PathVariable("encryptedPaymentId") String encryptedPaymentId, @PathVariable("authorisationId") String authorisationId, @RequestParam(name = "oauth2", required = false, defaultValue = "false") boolean isOauth2Integrated, @RequestParam(name = "authConfirmationCode", required = false) String authConfirmationCode);
#include <bits/stdc++.h> using namespace std; int main() { int n, a, b, c, d, m, mm; long long ans = 0; cin >> n >> a >> b >> c >> d; m = n + min(min(a+b, a+c), min(b+d, c+d)); mm = max(max(a+b, a+c), max(b+d, c+d)); ans += 1LL * max(0, m-mm) * n; cout << ans << endl; return 0; }
export { default as Glyph } from './glyphs/Glyph'; export { default as GlyphDot } from './glyphs/GlyphDot'; export { default as GlyphCross } from './glyphs/GlyphCross'; export { default as GlyphDiamond } from './glyphs/GlyphDiamond'; export { default as GlyphStar } from './glyphs/GlyphStar'; export { default as GlyphTriangle } from './glyphs/GlyphTriangle'; export { default as GlyphWye } from './glyphs/GlyphWye'; export { default as GlyphSquare } from './glyphs/GlyphSquare'; export { default as GlyphCircle } from './glyphs/GlyphCircle'; //# sourceMappingURL=index.d.ts.map
/** * Class to keep information about the definition of a property: Its name, its type and the type * of its parent in case the property is a nested object. */ @Data @NoArgsConstructor @AllArgsConstructor public class PropertyDefinition { private String name; private Class<?> type; private Class<?> parentType; }
/** * * * @return list; RFS plus all the transitive subrfs of all rfs in RFS. */ public static final SubLObject rfs_closure(SubLObject rfs) { { SubLObject more_rfs = NIL; SubLObject cdolist_list_var = rfs; SubLObject rf = NIL; for (rf = cdolist_list_var.first(); NIL != cdolist_list_var; cdolist_list_var = cdolist_list_var.rest(), rf = cdolist_list_var.first()) { more_rfs = cons(rf, more_rfs); { SubLObject subrfs = subrfs(rf, T); if (NIL != subrfs) { more_rfs = nconc(subrfs, more_rfs); } } } return more_rfs; } }
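The doc comment above describes a transitive closure: the result is RFS itself plus every sub-rf reachable through `subrfs`. A minimal Python sketch of the same idea — the names `closure` and `children` are mine, and the real SubL code delegates the transitive step to `subrfs` rather than walking one level at a time:

```python
def closure(items, children):
    """Return items plus everything transitively reachable via children."""
    seen = set()
    out = []
    stack = list(items)
    while stack:
        x = stack.pop()
        if x in seen:          # already visited: also guards against cycles
            continue
        seen.add(x)
        out.append(x)
        stack.extend(children.get(x, ()))
    return out

print(closure(["a"], {"a": ["b"], "b": ["c", "a"]}))  # → ['a', 'b', 'c']
```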
package edu.kit.scc.dem.wapsrv.model.rdf; import org.apache.commons.rdf.api.BlankNodeOrIRI; import org.apache.commons.rdf.api.Dataset; import org.apache.commons.rdf.api.IRI; import org.apache.commons.rdf.api.Literal; import edu.kit.scc.dem.wapsrv.exceptions.InvalidContainerException; import edu.kit.scc.dem.wapsrv.model.Container; import edu.kit.scc.dem.wapsrv.model.FormattableObject; import edu.kit.scc.dem.wapsrv.model.rdf.vocabulary.AsVocab; import edu.kit.scc.dem.wapsrv.model.rdf.vocabulary.LdpVocab; import edu.kit.scc.dem.wapsrv.model.rdf.vocabulary.RdfSchemaVocab; import edu.kit.scc.dem.wapsrv.model.rdf.vocabulary.RdfVocab; /** * Implements a Container with RDF commons as data backend * * @author <NAME> * @author <NAME> * @author <NAME> * @author <NAME> * @author <NAME> * @version 1.1 */ public class RdfContainer extends RdfWapObject implements Container { /** prefer minimal container was requested */ protected boolean preferMinimalContainer; /** prefer iris only was requested */ protected boolean preferIrisOnly; /** * Creates a new RdfContainer object using the given parameters results in preferMinimalContainer = true * * @param dataset * The data set used as data backend * @param rdfBackend * The RDF backend * @param newContainerIri * The new target IRI of the Container */ public RdfContainer(Dataset dataset, RdfBackend rdfBackend, IRI newContainerIri) { this(dataset, true, true, rdfBackend, newContainerIri); } /** * Creates a new RdfContainer object using the given parameters * * @param dataset * The data set used as data backend * @param preferMinimalContainer * true, if preferMinimalContainer was requested * @param preferIrisOnly * true, if only IRIs was requested * @param rdfBackend * The RDF backend * @param newContainerIri * The new target IRI of the Container, null if no renaming should be done */ public RdfContainer(Dataset dataset, boolean preferMinimalContainer, boolean preferIrisOnly, RdfBackend rdfBackend, IRI newContainerIri) { super(dataset, 
rdfBackend); // init object this.preferMinimalContainer = preferMinimalContainer; this.preferIrisOnly = preferIrisOnly; iri = getIriForType(LdpVocab.basicContainer); BlankNodeOrIRI iriAnnotationCollection = getIriForType(AsVocab.orderedCollection); if (iri == null | iriAnnotationCollection == null) { throw new InvalidContainerException( "The given data does not represent a valid container, type is missing or does not match: " + "A Container has to be an ldp:BasicContainer AND an AnnotationCollection. " + "Another reason for this could be that the ID was not a valid URI/IRI."); } // rename the container here, because the generation of the RDF:seq names does not work with BlankNodes if (newContainerIri != null) { // call the super to not try to rename not existing RDF:seq names super.setIri(newContainerIri, true); } // If the RDF.sequence entry for the container does not exist, it will be added here. if (!dataset.getGraph().contains(Container.toContainerSeqIri(iri), RdfVocab.type, RdfVocab.seq)) { dataset.getGraph().add(Container.toContainerSeqIri(iri), RdfVocab.type, RdfVocab.seq); } // If the RDF.sequence entry for the container does not exist, it will be added here. if (!dataset.getGraph().contains(Container.toAnnotationSeqIri(iri), RdfVocab.type, RdfVocab.seq)) { dataset.getGraph().add(Container.toAnnotationSeqIri(iri), RdfVocab.type, RdfVocab.seq); } // DON'T put things to just change the container for output here. use the RdfOutputContainer class for that. 
} /** * Creates a new RdfContainer object using the given parameters * * @param dataset * The data set used as data backend * @param preferMinimalContainer * true, if preferMinimalContainer was requested * @param preferIrisOnly * true, if only IRIs was requested * @param rdfBackend * The RDF backend */ public RdfContainer(Dataset dataset, boolean preferMinimalContainer, boolean preferIrisOnly, RdfBackend rdfBackend) { this(dataset, preferMinimalContainer, preferIrisOnly, rdfBackend, null); } /* * @see edu.kit.scc.dem.wapsrv.model.Container#getLabel() */ @Override public String getLabel() { return getValue(RdfSchemaVocab.label); } /* * @see edu.kit.scc.dem.wapsrv.model.Container#createDefaultLabel() */ @Override public void createDefaultLabel() { String label = getValue(RdfSchemaVocab.label); Literal labelLiteral; if (label == null) { labelLiteral = rdfBackend.getRdf().createLiteral(getIriString()); } else { labelLiteral = rdfBackend.getRdf().createLiteral(label); } dataset.getGraph().add(iri, RdfSchemaVocab.label, labelLiteral); } /* * @see edu.kit.scc.dem.wapsrv.model.Container#isMinimalContainer() */ @Override public boolean isMinimalContainer() { return preferMinimalContainer; } /* * (non-Javadoc) * @see edu.kit.scc.dem.wapsrv.model.rdf.RdfWapObject#setIri(org.apache.commons.rdf.api.IRI) */ @Override public void setIri(BlankNodeOrIRI newIri, boolean copyVia) { RdfUtilities.renameNodeIri(dataset, Container.toContainerSeqIri(iri), Container.toContainerSeqIri(newIri)); RdfUtilities.renameNodeIri(dataset, Container.toAnnotationSeqIri(iri), Container.toAnnotationSeqIri(newIri)); super.setIri(newIri, copyVia); } @Override public Type getType() { return FormattableObject.Type.CONTAINER; } }
def make_list_unique(in_list): """Return in_list with duplicates removed, keeping the first occurrence of each value in order.""" vals_dict = {} out_list = [] for val in in_list: if val in vals_dict: continue out_list.append(val) vals_dict[val] = 1 return out_list
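A quick standalone check of the order-preserving behaviour (the function is copied into the snippet so it runs on its own); on Python 3.7+, where plain dicts preserve insertion order, `dict.fromkeys` gives the same result:

```python
def make_list_unique(in_list):
    # First-seen values win; later duplicates are skipped.
    vals_dict = {}
    out_list = []
    for val in in_list:
        if val in vals_dict:
            continue
        out_list.append(val)
        vals_dict[val] = 1
    return out_list

print(make_list_unique([3, 1, 3, 2, 1]))     # → [3, 1, 2]
print(list(dict.fromkeys([3, 1, 3, 2, 1])))  # → [3, 1, 2]  (equivalent idiom)
```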
import calendar import json import os import sys from datetime import datetime, timedelta times = [[10, 23], [10, 23], [10, 23], [10, 23], [10, 23], [9, 22], [9, 22], [9, 22]] def dummydays(Stadt): if not os.path.exists(f'days/{Stadt}'): os.mkdir(f'days/{Stadt}') for my_date in range(0, 7): day = calendar.day_name[my_date] print(day) n = {} time = datetime.strptime(f"{times[my_date][0]}:00", "%H:%M") while time < datetime.strptime(f"{times[my_date][1]}:00", "%H:%M"): new_time = time.replace(minute=((time.minute // 15) * 15)).strftime("%H:%M") if new_time not in n.keys() or len(n[new_time]) == 0: n[new_time] = [] time += timedelta(minutes=15) with open(f'days/{Stadt}/{day}.txt', 'w') as outfile: json.dump(n, outfile) if __name__ == '__main__': Stadt = sys.argv[1] dummydays(Stadt)
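The inner loop of `dummydays` rounds each timestamp down to its quarter hour (`(time.minute // 15) * 15`) and steps in 15-minute increments between opening and closing. Extracted as a standalone helper (the name `quarter_hour_slots` is mine), the slot generation looks like:

```python
from datetime import datetime, timedelta

def quarter_hour_slots(start_hour, end_hour):
    """Return "HH:MM" labels for every 15-minute slot in [start_hour, end_hour)."""
    t = datetime.strptime(f"{start_hour}:00", "%H:%M")
    end = datetime.strptime(f"{end_hour}:00", "%H:%M")
    slots = []
    while t < end:
        # Round down to the nearest quarter hour, as in dummydays().
        slots.append(t.replace(minute=(t.minute // 15) * 15).strftime("%H:%M"))
        t += timedelta(minutes=15)
    return slots

print(quarter_hour_slots(9, 10))  # → ['09:00', '09:15', '09:30', '09:45']
```

So a weekday open from 10 to 23 gets 13 × 4 = 52 keys in its JSON file.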
ISSUES ASSOCIATED WITH BIG DATA IN CLOUD COMPUTING In this paper, we discuss security issues for cloud computing, big data, MapReduce, and the Hadoop environment. The main focus is on security issues in cloud computing that are associated with big data. Big data applications are a great benefit to organizations, businesses, companies, and many large-scale and small-scale industries. We also discuss possible solutions for the issues in cloud computing security and Hadoop. Cloud computing security is developing at a rapid pace and encompasses computer security, network security, information security, and data privacy. Cloud computing plays a vital role in protecting data, applications, and the related infrastructure with the help of policies, technologies, controls, and big data tools. Moreover, cloud computing, big data, and their applications and advantages are likely to represent the most promising new frontiers in science.
import logging from django.test.utils import override_settings from django.utils import timezone from rest_framework import status from rest_framework.test import APITestCase from human_lambdas.user_handler.models import ForgottenPassword, User logger = logging.getLogger(__name__) class TestInvite(APITestCase): def setUp(self): self.valid_token = "<PASSWORD>" user = User(name="test", email="<EMAIL>") user.save() forgotten_password = ForgottenPassword( email="<EMAIL>", token=self.valid_token, expires_at=timezone.now() + timezone.timedelta(15), ) forgotten_password.save() @override_settings(DEBUG=True) def test_forgotten_password(self): data = {"email": "<EMAIL>"} response = self.client.post("/v1/users/forgotten-password", data) self.assertEqual(response.status_code, status.HTTP_200_OK) @override_settings(DEBUG=True) def test_forgotten_password_bad_email(self): data = {"email": "aaa.com"} response = self.client.post("/v1/users/forgotten-password", data) self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST) def test_forgotten_password_bad_token(self): response = self.client.get( "/v1/users/forgotten-password-token/feo80w3fn83t4f2n0fnwf3wb793282fsu" ) self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND) def test_forgotten_password_good_token(self): response = self.client.get( "/v1/users/forgotten-password-token/{0}".format(self.valid_token) ) self.assertEqual(response.status_code, status.HTTP_200_OK) # post endpoint after this line def test_forgotten_password_post_wrong_token(self): response = self.client.post( "/v1/users/forgotten-password-token/{0}".format("thisisnotavalidtoken"), {"password": "<PASSWORD>"}, ) self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST) def test_forgotten_password_post_no_password(self): response = self.client.post( "/v1/users/forgotten-password-token/{0}".format(self.valid_token), ) self.assertEqual(response.status_code,
status.HTTP_400_BAD_REQUEST) def test_forgotten_password_post_short_password(self): response = self.client.post( "/v1/users/forgotten-password-token/{0}".format(self.valid_token), {"password": "<PASSWORD>"}, ) self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST) def test_forgotten_password_post_expired_token(self): token = "<PASSWORD>" forgotten_password = ForgottenPassword( email="<EMAIL>", token=token, expires_at=timezone.now() - timezone.timedelta(15), ) forgotten_password.save() response = self.client.post( "/v1/users/forgotten-password-token/{0}".format(token), {"password": "<PASSWORD>"}, ) self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST) def test_forgotten_password_post(self): token, email, password, new_password = ( "<PASSWORD>", "<EMAIL>", "<PASSWORD>", "<PASSWORD>", ) user = User(name="sample", email=email, password=password) user.save() forgotten_password = ForgottenPassword( email=email, token=token, expires_at=timezone.now() + timezone.timedelta(15), ) forgotten_password.save() response = self.client.post( "/v1/users/forgotten-password-token/{0}".format(token), {"password": <PASSWORD>}, ) self.assertEqual(response.status_code, status.HTTP_200_OK) response = self.client.post( "/v1/users/token", {"email": email, "password": <PASSWORD>} ) self.assertEqual(response.status_code, status.HTTP_200_OK)
import pytest

from discovery import api


def list_services_response():
    return {
        "redis": {
            "ID": "redis",
            "Service": "redis",
            "Tags": [],
            "TaggedAddresses": {
                "lan": {"address": "127.0.0.1", "port": 8000},
                "wan": {"address": "198.18.0.53", "port": 80},
            },
            "Meta": {"redis_version": "4.0"},
            "Port": 8000,
            "Address": "",
            "EnableTagOverride": False,
            "Weights": {"Passing": 10, "Warning": 1},
        }
    }


def register_payload():
    return {
        "ID": "redis1",
        "Name": "redis",
        "Tags": ["primary", "v1"],
        "Address": "127.0.0.1",
        "Port": 8000,
        "Meta": {"redis_version": "4.0"},
        "EnableTagOverride": False,
        "Check": {
            "DeregisterCriticalServiceAfter": "90m",
            "Args": ["/usr/local/bin/check_redis.py"],
            "Interval": "10s",
            "Timeout": "5s",
        },
        "Weights": {"Passing": 10, "Warning": 1},
    }


def service_health_id_response():
    return {
        "passing": {
            "ID": "web1",
            "Service": "web",
            "Tags": ["rails"],
            "Address": "",
            "TaggedAddresses": {
                "lan": {"address": "127.0.0.1", "port": 8000},
                "wan": {"address": "198.18.0.53", "port": 80},
            },
            "Meta": None,
            "Port": 80,
            "EnableTagOverride": False,
            "Connect": {"Native": False, "Proxy": None},
            "CreateIndex": 0,
            "ModifyIndex": 0,
        }
    }


def service_health_name_response():
    return {
        "critical": [
            {
                "ID": "web2",
                "Service": "web",
                "Tags": ["rails"],
                "Address": "",
                "TaggedAddresses": {
                    "lan": {"address": "127.0.0.1", "port": 8000},
                    "wan": {"address": "198.18.0.53", "port": 80},
                },
                "Meta": None,
                "Port": 80,
                "EnableTagOverride": False,
                "Connect": {"Native": False, "Proxy": None},
                "CreateIndex": 0,
                "ModifyIndex": 0,
            }
        ],
        "passing": [
            {
                "ID": "web1",
                "Service": "web",
                "Tags": ["rails"],
                "Address": "",
                "TaggedAddresses": {
                    "lan": {"address": "127.0.0.1", "port": 8000},
                    "wan": {"address": "198.18.0.53", "port": 80},
                },
                "Meta": None,
                "Port": 80,
                "EnableTagOverride": False,
                "Connect": {"Native": False, "Proxy": None},
                "CreateIndex": 0,
                "ModifyIndex": 0,
            }
        ],
    }


def service_payload_response():
    return {
        "Kind": "connect-proxy",
        "ID": "web-sidecar-proxy",
        "Service": "web-sidecar-proxy",
        "Tags": None,
        "Meta": None,
        "Port": 18080,
        "Address": "",
        "TaggedAddresses": {
            "lan": {"address": "127.0.0.1", "port": 8000},
            "wan": {"address": "198.18.0.53", "port": 80},
        },
        "Weights": {"Passing": 1, "Warning": 1},
        "EnableTagOverride": False,
        "ContentHash": "4ecd29c7bc647ca8",
        "Proxy": {
            "DestinationServiceName": "web",
            "DestinationServiceID": "web",
            "LocalServiceAddress": "127.0.0.1",
            "LocalServicePort": 8080,
            "Config": {"foo": "bar"},
            "Upstreams": [
                {
                    "DestinationType": "service",
                    "DestinationName": "db",
                    "LocalBindPort": 9191,
                }
            ],
        },
    }


def status_response():
    return {
        "passing": {
            "ID": "web1",
            "Service": "web",
            "Tags": ["rails"],
            "Address": "",
            "TaggedAddresses": {
                "lan": {"address": "127.0.0.1", "port": 8000},
                "wan": {"address": "198.18.0.53", "port": 80},
            },
            "Meta": None,
            "Port": 80,
            "EnableTagOverride": False,
            "Connect": {"Native": False, "Proxy": None},
            "CreateIndex": 0,
            "ModifyIndex": 0,
        }
    }


@pytest.fixture
@pytest.mark.asyncio
async def service(consul_api):
    return api.Service(client=consul_api)


@pytest.mark.asyncio
@pytest.mark.parametrize("expected", [list_services_response()])
async def test_services(service, expected):
    service.client.expected = expected
    response = await service.services()
    response = await response.json()
    assert response == list_services_response()


@pytest.mark.asyncio
@pytest.mark.parametrize("expected", [service_payload_response()])
async def test_service(service, expected):
    service.client.expected = expected
    response = await service.service("web-sidecar-proxy")
    response = await response.json()
    assert response == service_payload_response()


@pytest.mark.asyncio
@pytest.mark.parametrize("expected", [200])
async def test_register(service, expected):
    service.client.expected = expected
    response = await service.register(register_payload())
    assert response.status == 200


@pytest.mark.asyncio
@pytest.mark.parametrize("expected", [200])
async def test_deregister(service, expected):
    service.client.expected = expected
    response = await service.deregister("my-service-id")
    assert response.status == 200


@pytest.mark.asyncio
@pytest.mark.parametrize("expected", [200])
async def test_maintenance(service, expected):
    service.client.expected = expected
    response = await service.maintenance("my-service-id", True, "For the tests")
    assert response.status == 200


@pytest.mark.asyncio
@pytest.mark.parametrize("expected", [service_payload_response()])
async def test_configuration(service, expected):
    service.client.expected = expected
    response = await service.configuration("web-sidecar-proxy")
    response = await response.json()
    assert response == service_payload_response()


@pytest.mark.asyncio
@pytest.mark.parametrize("expected", [service_health_name_response()])
async def test_service_health_by_name(service, expected):
    service.client.expected = expected
    response = await service.service_health_by_name("web")
    response = await response.json()
    assert response == service_health_name_response()


@pytest.mark.asyncio
@pytest.mark.parametrize("expected", [service_health_id_response()])
async def test_service_health_by_id(service, expected):
    service.client.expected = expected
    response = await service.service_health_by_id("web1")
    response = await response.json()
    assert response == service_health_id_response()
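None of the tests above touch a real Consul agent: the `consul_api` fixture (defined elsewhere, presumably in a `conftest.py`) appears to supply a stub client whose `expected` attribute is echoed back for every call, so each test exercises only the `api.Service` routing, not Consul itself. A minimal sketch of such a stub follows; the class names `FakeResponse` and `FakeConsulClient` are hypothetical and not part of the `discovery` package.

```python
import asyncio


class FakeResponse:
    """Minimal stand-in for an aiohttp-style response object."""

    def __init__(self, payload, status=200):
        self._payload = payload
        self.status = status

    async def json(self):
        return self._payload


class FakeConsulClient:
    """Stub transport: whatever is stored in `expected` is returned
    unchanged for any HTTP verb, mirroring `service.client.expected`."""

    def __init__(self):
        self.expected = None

    async def get(self, path, **kwargs):
        return FakeResponse(self.expected)

    # every verb behaves identically in this stub
    put = post = delete = get


async def demo():
    client = FakeConsulClient()
    client.expected = {"redis": {"ID": "redis", "Port": 8000}}
    response = await client.get("/v1/agent/services")
    assert response.status == 200
    return await response.json()


payload = asyncio.run(demo())
print(payload["redis"]["ID"])
```

A real fixture would wrap something like this stub, which is what makes `@pytest.mark.parametrize("expected", [...])` sufficient to drive every test case.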
package testsuites

import (
	"errors"
	"mnimidamonbackend/domain/model"
	"mnimidamonbackend/domain/repository"
	"testing"
)

type ComputerRepositoryTester struct {
	Repo     repository.ComputerRepository
	URepo    repository.UserRepository
	Owner    *model.User
	Computer *model.Computer
}

func (crt *ComputerRepositoryTester) Setup(t *testing.T) {
	err := crt.URepo.Create(crt.Owner)
	if err != nil {
		t.Errorf("Expected owner creation but got error %v", err)
	}
}

func (crt *ComputerRepositoryTester) FindBeforeSaveTests(t *testing.T) {
	cr := crt.Repo

	t.Run("FindAllEmpty", func(t *testing.T) {
		computers, err := cr.FindAll(crt.Owner.ID)
		if err != nil {
			t.Errorf(expectedGot("empty slice", err))
		}
		if len(computers) != 0 {
			t.Errorf(expectedGot("empty slice", computers))
		}
	})

	t.Run("FindByIdNotFound", func(t *testing.T) {
		_, err := cr.FindById(crt.Computer.ID)
		if !errors.Is(err, repository.ErrNotFound) {
			t.Errorf("Expected %v, received %v", repository.ErrNotFound, err)
		}
	})

	t.Run("FindByNameNotFound", func(t *testing.T) {
		_, err := cr.FindByName(crt.Computer.Name, crt.Owner.ID)
		if !errors.Is(err, repository.ErrNotFound) {
			t.Errorf("Expected %v, received %v", repository.ErrNotFound, err)
		}
	})
}

func (crt *ComputerRepositoryTester) SaveSuccessfulTests(t *testing.T) {
	cr := crt.Repo

	t.Run("SaveSuccess", func(t *testing.T) {
		err := cr.Create(crt.Computer, crt.Owner.ID)
		if err != nil {
			t.Errorf(unexpectedErr(err))
		}
		if crt.Computer.ID == 0 {
			t.Errorf("Expected ID greater than 0, got %v", crt.Computer)
		}
	})
}

func (crt *ComputerRepositoryTester) FindAfterSaveTests(t *testing.T) {
	cr := crt.Repo

	t.Run("FindAll", func(t *testing.T) {
		computers, err := cr.FindAll(crt.Owner.ID)
		if err != nil {
			t.Errorf(expectedGot("computers", err))
		}
		if len(computers) == 0 {
			t.Errorf(expectedGot("non empty", computers))
		}
	})

	t.Run("FindById", func(t *testing.T) {
		c, err := cr.FindById(crt.Computer.ID)
		if err != nil {
			t.Errorf(unexpectedErr(err))
		}
		if c.ID != crt.Computer.ID || c.OwnerID != crt.Computer.OwnerID || c.Name != crt.Computer.Name {
			t.Errorf(expectedGot(crt.Computer, c))
		}
	})

	t.Run("FindByName", func(t *testing.T) {
		c, err := cr.FindByName(crt.Computer.Name, crt.Owner.ID)
		if err != nil {
			t.Errorf(unexpectedErr(err))
		}
		if c.ID != crt.Computer.ID || c.OwnerID != crt.Computer.OwnerID || c.Name != crt.Computer.Name {
			t.Errorf(expectedGot(crt.Computer, c))
		}
	})
}

func (crt *ComputerRepositoryTester) ConstraintsTest(t *testing.T) {
	cr := crt.Repo

	t.Run("SaveSameOwnerSameNameFails", func(t *testing.T) {
		err := cr.Create(crt.Computer, crt.Owner.ID)
		if !errors.Is(err, repository.ErrUniqueConstraintViolation) {
			t.Errorf(expectedGot(repository.ErrUniqueConstraintViolation, err))
		}

		computers, err := cr.FindAll(crt.Owner.ID)
		if err != nil {
			t.Errorf(expectedGot("computers", err))
		}
		if len(computers) != 1 {
			t.Errorf(expectedGot("1 computer", computers))
		}
	})
}

func (crt *ComputerRepositoryTester) UpdateTests(t *testing.T) {
	cr := crt.Repo
	c := crt.Computer

	t.Run("UpdateNameSuccess", func(t *testing.T) {
		c.Name = "mac"
		c.OwnerID = 100

		err := cr.Update(c)
		if err != nil {
			t.Errorf(unexpectedErr(err))
		}
		if c.OwnerID != crt.Owner.ID {
			t.Errorf("Should not update OwnerID")
		}
		if c.Name != "mac" {
			t.Errorf("Name was not updated")
		}
	})
}

func (crt *ComputerRepositoryTester) SpecificTests(t *testing.T) {
	t.Skip("No specific tests needed")
}

func (crt *ComputerRepositoryTester) DeleteTests(t *testing.T) {
	cr := crt.Repo
	c := crt.Computer

	t.Run("DeleteSuccessful", func(t *testing.T) {
		if err := cr.Delete(c.ID); err != nil {
			t.Error(unexpectedErr(err))
		}
	})

	t.Run("FindByIdFail", func(t *testing.T) {
		if m, err := cr.FindById(c.ID); !errors.Is(err, repository.ErrNotFound) {
			t.Errorf("Expected %v, got err:%v computer:%v", repository.ErrNotFound, err, m)
		}
	})
}

func (crt *ComputerRepositoryTester) BeginTx() TransactionSuiteTestTxInterface {
	crtx := crt.Repo.BeginTx()
	return &ComputerRepositoryTesterTx{
		Repo:     crtx,
		Computer: crt.Computer,
		Owner:    crt.Owner,
	}
}

func (crt *ComputerRepositoryTester) Find() error {
	_, err := crt.Repo.FindById(crt.Computer.ID)
	return err
}

type ComputerRepositoryTesterTx struct {
	Repo     repository.ComputerRepositoryTx
	Computer *model.Computer
	Owner    *model.User
}

func (crtx *ComputerRepositoryTesterTx) Create() error {
	return crtx.Repo.Create(crtx.Computer, crtx.Owner.ID)
}

func (crtx *ComputerRepositoryTesterTx) Find() error {
	_, err := crtx.Repo.FindById(crtx.Computer.ID)
	return err
}

func (crtx *ComputerRepositoryTesterTx) CorrectCheck(t *testing.T) {
	if crtx.Computer.ID == 0 {
		t.Errorf("Expected computer.ID > 0, got %v", crtx.Computer)
	}
}

func (crtx *ComputerRepositoryTesterTx) Rollback() error {
	return crtx.Repo.Rollback()
}

func (crtx *ComputerRepositoryTesterTx) Commit() error {
	return crtx.Repo.Commit()
}

func ComputerRepositoryTestSuite(t *testing.T, cr repository.ComputerRepository, ur repository.UserRepository) {
	marmiha := model.User{
		Entity:       model.Entity{},
		Username:     "marmiha",
		PasswordHash: "<PASSWORD>",
	}

	thinkpad := model.Computer{
		Entity:  model.Entity{},
		OwnerID: 0,
		Name:    "thinkpad",
		Owner:   nil,
	}

	brt := &ComputerRepositoryTester{
		Repo:     cr,
		URepo:    ur,
		Owner:    &marmiha,
		Computer: &thinkpad,
	}

	runCommonRepositoryTests(brt, t)
}
Classifying “Micro” Routine Activities of Street-level Drug Transactions

Objectives: Routine activities theory attempts to link the intersection of individuals’ everyday routine activities to crime events at particular places. This study examines crime events not just as the product of intersecting macro-level routine activities but also of the microroutines (similar to crime “scripts”) that occur immediately before, during, and after a crime event. Method: Closed-circuit television (CCTV) footage was accessed through the Baltimore City Police Department from 2010 to 2011. Ethnographic techniques and systematic social observation of CCTV footage were used to categorize the microroutines of 74 street-level illicit drug transactions. Results: The findings illuminate eight microroutines of drug crime events that classify behaviors associated with illicit drug activity. Conclusions: This study advances our understanding of the link between routine activities and drug crime by examining how illicit transactions unfold from microroutines, using a rarely employed but fruitful source of data (CCTV footage).
/**
 * Regression description:
 * <p>
 * MasterPage from library can't be restored to modify
 * <p>
 * Test description:
 * <p>
 * Extends a lib.masterpage without modification, can't restore. If any property
 * changes, restore is enabled
 * </p>
 */
public class Regression_142432 extends BaseTestCase
{

	private String filename = "Regression_142432.xml"; //$NON-NLS-1$
	private String libraryname = "Regression_142432_lib.xml"; //$NON-NLS-1$

	protected void setUp( ) throws Exception
	{
		super.setUp( );
		removeResource( );

		// retrieve two input files from tests-model.jar file

		copyInputToFile( INPUT_FOLDER + "/" + filename );
		copyInputToFile( INPUT_FOLDER + "/" + libraryname );
	}

	/**
	 * @throws DesignFileException
	 * @throws SemanticException
	 */

	public void test_regression_142432( ) throws DesignFileException,
			SemanticException
	{
		openLibrary( libraryname, true );
		MasterPageHandle masterpage = libraryHandle
				.findMasterPage( "NewSimpleMasterPage" ); //$NON-NLS-1$

		openDesign( filename );
		designHandle.includeLibrary( libraryname, "Lib" ); //$NON-NLS-1$

		MasterPageHandle mp = (MasterPageHandle) designHandle
				.getElementFactory( ).newElementFrom( masterpage, "mp" ); //$NON-NLS-1$
		designHandle.getMasterPages( ).add( mp );

		assertFalse( mp.hasLocalProperties( ) );

		mp.setOrientation( DesignChoiceConstants.PAGE_ORIENTATION_LANDSCAPE );
		assertTrue( mp.hasLocalProperties( ) );

		// check group element handle.

		List pages = new ArrayList( );
		pages.add( mp );

		GroupElementHandle group = new SimpleGroupElementHandle( designHandle,
				pages );
		assertTrue( group.hasLocalPropertiesForExtendedElements( ) );
	}
}
Don Payne, whose screenwriting credits include 2011’s Thor, Fantastic Four: Rise Of The Silver Surfer and My Super Ex-Girlfriend, and who was an award-winning writer/producer on The Simpsons, has died. He had been battling cancer.

Payne started out in TV, hooking up with writing partner John Frink before graduating with a screenwriting master’s from UCLA. They penned episodes for such series as Hope & Gloria, The Brian Benben Show and Veronica’s Closet. Payne and Frink eventually joined The Simpsons in 1998, sharing in four Emmys for Outstanding Animated Program. In 2005, Payne received the WGA’s Paul Selvin Award for penning the Simpsons episode “Fraudcast News”, which skewered the TV news business. Another Simpsons episode co-written with Frink, as was the usual case, “The Bart Wants What It Wants,” was nominated for a WGA Award for animation in 2003.

Among his projects in the works, Payne, who described himself in an LA Times interview as a “superhero geek”, wrote the first draft of Thor sequel Thor: The Dark World. He also was in development on Maximum Ride, based on the James Patterson books. In addition, as consulting producer he penned two Simpsons episodes that will air this fall: this year’s Christmas show “White Christmas Blues” and “Labor Pains”, which is set to air November 3. He is survived by his wife and three children.