package de.heidelberg.pvs.container_bench.benchmarks.singleoperations.maps;

import java.io.IOException;

import org.openjdk.jmh.annotations.Param;

import de.heidelberg.pvs.container_bench.benchmarks.singleoperations.AbstractSingleOperationsBench;
import de.heidelberg.pvs.container_bench.generators.ElementGenerator;
import de.heidelberg.pvs.container_bench.generators.GeneratorFactory;
import de.heidelberg.pvs.container_bench.generators.PayloadType;

public abstract class AbstractMapBench<K, V> extends AbstractSingleOperationsBench {

    @Param({ "100" })
    public int percentageRangeKeys;

    protected ElementGenerator<K> keyGenerator;
    protected ElementGenerator<V> valueGenerator;

    /**
     * Implementation of our Randomness
     *
     * @throws IOException
     */
    @SuppressWarnings("unchecked")
    @Override
    public void generatorSetup() throws IOException {
        keyGenerator = (ElementGenerator<K>) GeneratorFactory.buildRandomGenerator(payloadType);
        // default value generator
        valueGenerator = (ElementGenerator<V>) GeneratorFactory.buildRandomGenerator(PayloadType.INTEGER_UNIFORM);

        keyGenerator.init(size, seed);
        valueGenerator.init(size, seed);
    }
}
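A minimal sketch of how a concrete subclass might use the seeded generators in a JMH single-operation benchmark. The class name, the @Benchmark method, and the generateNext() call on ElementGenerator are assumptions for illustration; the inherited payloadType, size, and seed fields come from the surrounding project, not from this sketch.

package de.heidelberg.pvs.container_bench.benchmarks.singleoperations.maps;

import java.util.HashMap;
import java.util.Map;

import org.openjdk.jmh.annotations.Benchmark;

// Hypothetical concrete benchmark; ElementGenerator#generateNext() is an assumed method name.
public class HashMapPutBench extends AbstractMapBench<Object, Object> {

    private final Map<Object, Object> map = new HashMap<>();

    @Benchmark
    public Object put() {
        // Draw a fresh key/value pair from the seeded generators and measure a single put.
        return map.put(keyGenerator.generateNext(), valueGenerator.generateNext());
    }
}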
Libraries and public archives have played a central role in the life and art of internationally celebrated novelist and poet Jane Urquhart. Her mother founded the first library in Little Longlac, Ont., the mining town where she was born. “It was the centre of my mother’s universe—and mine,” Urquhart tells Maclean’s. She skipped class in high school to go to the library to read the poets Leonard Cohen and Irving Layton: “It’s how I learned about sex,” she says with a laugh, adding that visiting Toronto’s library system was “like Christmas” and that a pamphlet found at Stratford’s library shaped her 2001 novel The Stone Carvers. Urquhart repaid the gift by donating her journals, unpublished work and first drafts in the 1990s to the National Archives of Canada.

Six years ago, she tried, with no luck, to access her early work from Library and Archives Canada (LAC), the institution mandated in 2004 to acquire and preserve Canada’s documentary heritage and make it available to all Canadians. Her request came just as the federal government began the savage cuts that closed more than two dozen federal libraries and led to the haphazard consolidation of others. Some 445 jobs were lost, most at LAC and the National Research Council’s science library. There was tumult at the top: LAC head Daniel Caron resigned last year after news he’d spent nearly $4,500 of taxpayers’ money on personal Spanish lessons; this year, Guy Berthiaume replaced him. The situation has improved somewhat, says Urquhart. Still, she’s unsure she’ll donate again to such an “unpredictable organization”—a loss for present and future generations.

Urquhart’s story is one of many disquieting testimonies in “The future now: Canada’s libraries, archives, and public memory,” a new report by the Royal Society of Canada that examines the vital role libraries play in a digital age—as well as the apparent disconnect between vibrant projects such as Halifax’s new public library and the fault lines evident in the decline of public school libraries and the growing inequity of access in rural communities. Nowhere is this erosion more glaring than in the disgraceful state of LAC, the repository of papers, photographs, paintings, film and artifacts that preserve the nation’s past. As University of Alberta professor Patricia Demers, head of the Royal Society study, put it in a podcast, “a decade-long decline in all the services” has seen the cancellation of the National Portrait Gallery, the end of public visits to LAC, the curtailment of loans and of the collecting of important publications, and the virtual closure of the downtown Ottawa location, with storage moved to Gatineau, Que.

The result has hurt not only academe and science but artists and writers, ironically at a moment of renewed public interest in historical fiction and literary history. Charlotte Gray, the acclaimed biographer and historical writer who sat on the Royal Society’s expert panel, says her work depends upon primary sources—diaries and personal letters from the “humbler bit players”—that require trained archivists to access. Searching online isn’t an option, says Gray; less than five per cent of LAC’s holdings are digitized, primarily government documents and papers belonging to prime ministers: “Those are not the people who give texture to our past,” says Gray.
Morale has picked up since Berthiaume’s appointment, she says, though LAC remains “a morgue.” Urquhart uses deathly imagery to discuss how LAC’s code of conduct, which prohibits employees from speaking publicly, silences archivists: “It’s bone-chilling,” she says. She speaks with regret and anger: “Archives are the heart and soul of the country,” she says. “They are one of the great and good things about a democratic society. It never occurred to me that anyone would be denied access to those collections—or that our own government would be suspicious of what researchers were doing.” The Royal Society report, with its 70 recommendations, makes clear the situation is dire. “We have to be vigilant,” says Urquhart.
/* =========================================================================
   apps/sepp_tm/include/sepp_tm.h

   sepp_tm - An app to inject basic SEPP telemetry into the OBSW datapool

   The MIT License (MIT)
   ========================================================================= */
#ifndef SEPP_TM_H_H_INCLUDED
#define SEPP_TM_H_H_INCLUDED

// Include type definitions
#include "sepp_tm_types.h"

// Include the project library file
#include "sepp_tm_library.h"

// Add your own public definitions here, if you need them

#endif
/* eslint-disable no-var */
/* eslint-disable @typescript-eslint/explicit-module-boundary-types */
/* eslint-disable @typescript-eslint/no-unused-vars */
/* eslint-disable @typescript-eslint/no-explicit-any */
import React, { Component, MouseEventHandler } from 'react';

import GithubIcon from '../../resources/svgs/githubIcon.svg';
import LinkedinIcon from '../../resources/svgs/linkedin.svg';
import SteamIcon from '../../resources/svgs/steamIcon.svg';

import About from './pages/About';
import Home from './pages/Home';
import Projects from './pages/Projects';
import Resume from './pages/Resume';

enum Page {
    Home = 1,
    About = 2,
    Projects = 3,
    Resume = 4
}

type Props = any;

type State = {
    selectedPage: Page
}

class Main extends Component<Props, State> {
    public analyzer: any;
    public interval: any;

    constructor(props: any) {
        super(props);
        this.state = {
            selectedPage: Page.Home
        };

        this.gselectHome = this.gselectHome.bind(this);
        this.gselectAbout = this.gselectAbout.bind(this);
        this.gselectProjects = this.gselectProjects.bind(this);
        this.gselectResume = this.gselectResume.bind(this);

        this.selectHome = this.selectHome.bind(this);
        this.selectAbout = this.selectAbout.bind(this);
        this.selectProjects = this.selectProjects.bind(this);
        this.selectResume = this.selectResume.bind(this);
    }

    componentDidMount() {
        // const point = givePointOnBezier([{x: 0, y: 0},{x:10, y:10}], .55);
    }

    selectHome() {
        this.setState({ selectedPage: Page.Home });
    }

    selectAbout() {
        this.setState({ selectedPage: Page.About });
    }

    selectProjects() {
        this.setState({ selectedPage: Page.Projects });
    }

    selectResume() {
        this.setState({ selectedPage: Page.Resume });
    }

    gselectHome() {
        if (this.state.selectedPage == Page.Home) {
            return "_selected";
        }
        return "";
    }

    gselectAbout() {
        if (this.state.selectedPage == Page.About) {
            return "_selected";
        }
        return "";
    }

    gselectProjects() {
        if (this.state.selectedPage == Page.Projects) {
            return "_selected";
        }
        return "";
    }

    gselectResume() {
        if (this.state.selectedPage == Page.Resume) {
            return "_selected";
        }
        return "";
    }

    loadPage() {
        if (this.state.selectedPage == Page.Home) {
            return (<Home/>);
        } else if (this.state.selectedPage == Page.About) {
            return (<About/>);
        } else if (this.state.selectedPage == Page.Projects) {
            return (<Projects/>);
        } else if (this.state.selectedPage == Page.Resume) {
            return (<Resume/>);
        }
    }

    render(): JSX.Element {
        return (
            <div className={`main`}>
                <div className="header">
                    <div className="name">
                        <NAME>
                    </div>
                    <div className="selections">
                        <div className={`selections_selection${this.gselectHome()}`} onClick={this.selectHome}>
                            <div className="selections_selection_container">
                                Home
                            </div>
                        </div>
                        <div className={`selections_selection${this.gselectAbout()}`} onClick={this.selectAbout}>
                            <div className="selections_selection_container">
                                About
                            </div>
                        </div>
                        <div className={`selections_selection${this.gselectProjects()}`} onClick={this.selectProjects}>
                            <div className="selections_selection_container">
                                Projects
                            </div>
                        </div>
                        <div className={`selections_selection${this.gselectResume()}`} onClick={this.selectResume}>
                            <div className="selections_selection_container">
                                Resume
                            </div>
                        </div>
                    </div>
                    <div className="social">
                        <a className="social_item" href="https://github.com/yatiyr">
                            <GithubIcon/>
                        </a>
                        <a className="social_item" href="https://www.linkedin.com/in/eren-dere/">
                            <LinkedinIcon/>
                        </a>
                        <a className="social_item" href="https://steamcommunity.com/id/yatiyr">
                            <SteamIcon/>
                        </a>
                    </div>
                </div>
                {this.loadPage()}
            </div>
        );
    }
}

export default Main;
/**
 * Checks whether a barcode payment result is successful.
 *
 * @param resultMap the result map returned by the payment gateway
 */
@Override
public boolean successOfBarCode(Map<String, Object> resultMap) {
    Object returnCode = resultMap.get("return_code");
    Object resultCode = resultMap.get("result_code");
    return "SUCCESS".equals(resultCode) && "SUCCESS".equals(returnCode);
}
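An illustrative call with the two gateway status keys the method checks. The map contents and the payService receiver are hypothetical; the method only requires that both codes equal "SUCCESS".

import java.util.HashMap;
import java.util.Map;

// Hypothetical caller; payService is an instance of the class that defines successOfBarCode.
Map<String, Object> resultMap = new HashMap<>();
resultMap.put("return_code", "SUCCESS"); // communication-level status
resultMap.put("result_code", "SUCCESS"); // business-level status

boolean paid = payService.successOfBarCode(resultMap); // true only when both codes are "SUCCESS"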
def process_argument(self, args, options: dict):
    options["log"] = args.log
    return True
package eapli.base.servicomanagement.domain.servico;

import eapli.base.atividademanagement.domain.Atividade;
import eapli.base.atividademanagement.domain.AtividadeAutomatica;
import eapli.base.atividademanagement.domain.AtividadeManual;
import eapli.base.teammanagement.domain.EquipaType;

import javax.persistence.*;
import java.io.Serializable;
import java.util.Set;

@Entity
public class FluxoResolucao implements Serializable {

    @Version
    private Long version;

    @Id
    @GeneratedValue
    private Long identificador;

    @OneToOne
    private Atividade atividade;

    private String tipoAtividade;

    @OneToMany
    private Set<EquipaType> tiposEquipa;

    private boolean resolucao;

    protected FluxoResolucao(final Atividade atividade) {
        this.atividade = atividade;
        if (atividade instanceof AtividadeManual) {
            this.tipoAtividade = AtividadeManual.class.getSimpleName();
        } else {
            this.tipoAtividade = AtividadeAutomatica.class.getSimpleName();
        }
    }

    protected FluxoResolucao() {
        // for ORM only
    }

    public boolean executar() {
        this.resolucao = true;
        return this.resolucao;
    }

    public Atividade getAtividade() {
        return this.atividade;
    }
}
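A brief usage sketch of the resolution flow above. It assumes same-package access, since the constructors are protected, and an existing AtividadeManual instance whose constructor arguments are elided here; both are assumptions for illustration.

// Hypothetical usage from within the same package; the AtividadeManual arguments are elided.
Atividade atividade = new AtividadeManual(/* ... */);
FluxoResolucao fluxo = new FluxoResolucao(atividade);

// tipoAtividade was set to "AtividadeManual" by the constructor's instanceof check.
boolean resolvido = fluxo.executar(); // marks the flow as resolved and returns true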
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/**
 * The Prolog class represents a tuProlog engine.
 */
public class Prolog implements java.io.Serializable {

    // 2P version
    private static final String VERSION = "2.1";

    /* manager of current theory */
    private TheoryManager theoryManager;
    /* component managing primitives */
    private PrimitiveManager primitiveManager;
    /* component managing operators */
    private OperatorManager opManager;
    /* component managing flags */
    private FlagManager flagManager;
    /* component managing libraries */
    private LibraryManager libraryManager;
    /* component managing engine */
    private EngineManager engineManager;

    /* spying activated ? */
    private boolean spy;
    /* warning activated ? */
    private boolean warning;

    /* listeners registered for virtual machine output events */
    private ArrayList outputListeners;
    /* listeners registered for virtual machine internal events */
    private ArrayList spyListeners;
    /* listeners registered for virtual machine state change events */
    private ArrayList warningListeners;
    /* listeners to theory events */
    private ArrayList theoryListeners;
    /* listeners to library events */
    private ArrayList libraryListeners;
    /* listeners to query events */
    private ArrayList queryListeners;

    /**
     * Builds a prolog engine with default libraries loaded.
     *
     * The default libraries are BasicLibrary, ISOLibrary,
     * IOLibrary, and JavaLibrary.
     */
    public Prolog() {
        this(false, true);
        try {
            loadLibrary("alice.tuprolog.lib.BasicLibrary");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        try {
            loadLibrary("alice.tuprolog.lib.ISOLibrary");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        try {
            loadLibrary("alice.tuprolog.lib.IOLibrary");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        try {
            loadLibrary("alice.tuprolog.lib.JavaLibrary");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    /**
     * Builds a tuProlog engine with the specified libraries loaded.
     *
     * @param libs the (class) names of the libraries to be loaded
     */
    public Prolog(String[] libs) throws InvalidLibraryException {
        this(false, true);
        if (libs != null) {
            for (int i = 0; i < libs.length; i++) {
                loadLibrary(libs[i]);
            }
        }
    }

    /**
     * Initializes basic engine structures.
     *
     * @param spy     spying activated
     * @param warning warning activated
     */
    private Prolog(boolean spy, boolean warning) {
        outputListeners = new ArrayList();
        spyListeners = new ArrayList();
        warningListeners = new ArrayList();
        this.spy = spy;
        this.warning = warning;
        theoryListeners = new ArrayList();
        queryListeners = new ArrayList();
        libraryListeners = new ArrayList();
        initializeManagers();
    }

    private void initializeManagers() {
        flagManager = new FlagManager();
        libraryManager = new LibraryManager();
        opManager = new OperatorManager();
        theoryManager = new TheoryManager();
        primitiveManager = new PrimitiveManager();
        engineManager = new EngineManager();
        // config managers
        theoryManager.initialize(this);
        libraryManager.initialize(this);
        flagManager.initialize(this);
        primitiveManager.initialize(this);
        engineManager.initialize(this);
    }

    /** Gets the component managing flags */
    public FlagManager getFlagManager() {
        return flagManager;
    }

    /** Gets the component managing theory */
    public TheoryManager getTheoryManager() {
        return theoryManager;
    }

    /** Gets the component managing primitives */
    public PrimitiveManager getPrimitiveManager() {
        return primitiveManager;
    }

    /** Gets the component managing libraries */
    public LibraryManager getLibraryManager() {
        return libraryManager;
    }

    /** Gets the component managing operators */
    public OperatorManager getOperatorManager() {
        return opManager;
    }

    /** Gets the component managing engine */
    public EngineManager getEngineManager() {
        return engineManager;
    }

    /**
     * Gets the current version of the tuProlog system
     */
    public static String getVersion() {
        return VERSION;
    }

    // theory management interface

    /**
     * Sets a new theory
     * @param th is the new theory
     * @throws InvalidTheoryException if the new theory is not valid
     * @see Theory
     */
    public synchronized void setTheory(Theory th) throws InvalidTheoryException {
        theoryManager.clear();
        addTheory(th);
    }

    /**
     * Adds (appends) a theory
     * @param th is the theory to be added
     * @throws InvalidTheoryException if the new theory is not valid
     * @see Theory
     */
    public synchronized void addTheory(Theory th) throws InvalidTheoryException {
        Theory oldTh = theoryManager.getLastConsultedTheory();
        theoryManager.consult(th, true, null);
        theoryManager.solveTheoryGoal();
        Theory newTh = theoryManager.getLastConsultedTheory();
        TheoryEvent ev = new TheoryEvent(this, oldTh, newTh);
        this.notifyChangedTheory(ev);
    }

    /**
     * Gets current theory
     * @return current (dynamic) theory
     */
    public synchronized Theory getTheory() {
        try {
            return new Theory(theoryManager.getTheory(true));
        } catch (Exception ex) {
            return null;
        }
    }

    /**
     * Gets last consulted theory, with the original textual format
     * @return theory
     */
    public synchronized Theory getLastConsultedTheory() {
        try {
            return theoryManager.getLastConsultedTheory();
        } catch (Exception ex) {
            return null;
        }
    }

    /**
     * Clears current theory
     */
    public synchronized void clearTheory() {
        try {
            setTheory(new Theory());
        } catch (InvalidTheoryException e) {
            // this should never happen
        }
    }

    // libraries management interface

    /**
     * Loads a library.
     *
     * If a library with the same name is already present,
     * a warning event is notified and the request is ignored.
     *
     * @param className name of the Java class containing the library to be loaded
     * @return the reference to the Library just loaded
     * @throws InvalidLibraryException if name is not a valid library
     */
    public synchronized Library loadLibrary(String className) throws InvalidLibraryException {
        return libraryManager.loadLibrary(className);
    }

    /**
     * Loads a specific instance of a library.
     *
     * If a library with the same name is already present,
     * a warning event is notified.
     *
     * @param lib the (Java class) name of the library to be loaded
     * @throws InvalidLibraryException if name is not a valid library
     */
    public synchronized void loadLibrary(Library lib) throws InvalidLibraryException {
        libraryManager.loadLibrary(lib);
    }

    /**
     * Gets the list of current libraries loaded
     * @return the list of the library names
     */
    public synchronized String[] getCurrentLibraries() {
        return libraryManager.getCurrentLibraries();
    }

    /**
     * Unloads a previously loaded library
     * @param name of the library to be unloaded
     * @throws InvalidLibraryException if name is not a valid loaded library
     */
    public synchronized void unloadLibrary(String name) throws InvalidLibraryException {
        libraryManager.unloadLibrary(name);
    }

    /**
     * Gets the reference to a loaded library
     * @param name the name of the library already loaded
     * @return the reference to the library loaded, null if the library is not found
     */
    public synchronized Library getLibrary(String name) {
        return libraryManager.getLibrary(name);
    }

    protected Library getLibraryPredicate(String name, int nArgs) {
        return primitiveManager.getLibraryPredicate(name, nArgs);
    }

    protected Library getLibraryFunctor(String name, int nArgs) {
        return primitiveManager.getLibraryFunctor(name, nArgs);
    }

    // operators management

    /**
     * Gets the list of the operators currently defined
     * @return the list of the operators
     */
    public synchronized java.util.List getCurrentOperatorList() {
        return opManager.getOperators();
    }

    // solve interface

    /**
     * Solves a query
     * @param g the term representing the goal to be demonstrated
     * @return the result of the demonstration
     * @see SolveInfo
     */
    public synchronized SolveInfo solve(Term g) {
        //System.out.println("ENGINE SOLVE #0: "+g);
        if (g == null) {
            return null;
        }
        SolveInfo sinfo = engineManager.solve(g);
        QueryEvent ev = new QueryEvent(this, sinfo);
        notifyNewQueryResultAvailable(ev);
        return sinfo;
    }

    /**
     * Solves a query
     * @param st the string representing the goal to be demonstrated
     * @return the result of the demonstration
     * @see SolveInfo
     */
    public synchronized SolveInfo solve(String st) throws MalformedGoalException {
        try {
            Parser p = new Parser(opManager, st);
            Term t = p.nextTerm(true);
            return solve(t);
        } catch (InvalidTermException ex) {
            throw new MalformedGoalException();
        }
    }

    /**
     * Gets next solution
     * @return the result of the demonstration
     * @throws NoMoreSolutionException if no more solutions are present
     * @see SolveInfo
     */
    public synchronized SolveInfo solveNext() throws NoMoreSolutionException {
        if (hasOpenAlternatives()) {
            SolveInfo sinfo = engineManager.solveNext();
            QueryEvent ev = new QueryEvent(this, sinfo);
            notifyNewQueryResultAvailable(ev);
            return sinfo;
        } else {
            throw new NoMoreSolutionException();
        }
    }

    /**
     * Halts current solve computation
     */
    public void solveHalt() {
        engineManager.solveHalt();
    }

    /**
     * Accepts current solution
     */
    public synchronized void solveEnd() {
        engineManager.solveEnd();
    }

    /**
     * Asks for the presence of open alternatives to be explored
     * in current demonstration process.
     * @return true if open alternatives are present
     */
    public synchronized boolean hasOpenAlternatives() {
        return engineManager.hasOpenAlternatives();
    }

    /**
     * Checks if the demonstration process was stopped by a halt command.
     * @return true if the demonstration was stopped
     */
    public synchronized boolean isHalted() {
        return engineManager.isHalted();
    }

    /**
     * Unifies two terms using current demonstration context.
     * @param t0 first term to be unified
     * @param t1 second term to be unified
     * @return true if the unification was successful
     */
    public synchronized boolean match(Term t0, Term t1) {
        return t0.match(this, t1);
    }

    /**
     * Unifies two terms using current demonstration context.
     * @param t0 first term to be unified
     * @param t1 second term to be unified
     * @return true if the unification was successful
     */
    public synchronized boolean unify(Term t0, Term t1) {
        return t0.unify(this, t1);
    }

    /**
     * Identifies functors
     * @param term term to identify
     */
    public synchronized void identifyFunctor(Term term) {
        primitiveManager.identifyFunctor(term);
    }

    /**
     * Gets a term from a string, using the operators currently
     * defined by the engine
     * @param st the string representing a term
     * @return the term parsed from the string
     * @throws InvalidTermException if the string does not represent a valid term
     */
    public synchronized Term toTerm(String st) throws InvalidTermException {
        return Parser.parseSingleTerm(st, opManager);
    }

    /**
     * Gets the string representation of a term, using operators
     * currently defined by engine
     * @param term the term to be represented as a string
     * @return the string representing the term
     */
    public synchronized String toString(Term term) {
        return (term.toStringAsArgY(opManager, OperatorManager.OP_HIGH));
    }

    /**
     * Defines a new flag
     */
    boolean defineFlag(String name, Struct valueList, Term defValue, boolean modifiable, String libName) {
        return flagManager.defineFlag(name, valueList, defValue, modifiable, libName);
    }

    // spy interface ----------------------------------------------------------

    /**
     * Switches on/off the notification of spy information events
     * @param state - true for enabling the notification of spy event
     */
    public synchronized void setSpy(boolean state) {
        spy = state;
    }

    /**
     * Checks the spy state of the engine
     * @return true if the engine emits spy information
     */
    public synchronized boolean isSpy() {
        return spy;
    }

    /**
     * Notifies a spy information event
     */
    protected void spy(String s) {
        if (spy) {
            notifySpy(new SpyEvent(this, s));
        }
    }

    /**
     * Notifies a spy information event
     * @param s TODO
     */
    protected void spy(String s, Engine e) {
        //System.out.println("spy: "+i+" "+s+" "+g);
        if (spy) {
            ExecutionContext ctx = e.currentContext;
            int i = 0;
            String g = "-";
            if (ctx.fatherCtx != null) {
                i = ctx.depth - 1;
                g = ctx.fatherCtx.currentGoal.toString();
            }
            notifySpy(new SpyEvent(this, e, "spy: " + i + " " + s + " " + g));
        }
    }

    /**
     * Switches on/off the notification of warning information events
     * @param state - true for enabling warning information notification
     */
    public synchronized void setWarning(boolean state) {
        warning = state;
    }

    /**
     * Checks if warning information are notified
     * @return true if the engine emits warning information
     */
    public synchronized boolean isWarning() {
        return warning;
    }

    /**
     * Notifies a warn information event
     * @param m the warning message
     */
    public void warn(String m) {
        if (warning) {
            notifyWarning(new WarningEvent(this, m));
            //log.warn(m);
        }
    }

    /**
     * Produces an output information event
     * @param m the output string
     */
    public synchronized void stdOutput(String m) {
        notifyOutput(new OutputEvent(this, m));
    }

    // event listeners management

    /**
     * Adds a listener to output events
     * @param l the listener
     */
    public synchronized void addOutputListener(OutputListener l) {
        outputListeners.add(l);
    }

    /**
     * Adds a listener to theory events
     * @param l the listener
     */
    public synchronized void addTheoryListener(TheoryListener l) {
        theoryListeners.add(l);
    }

    /**
     * Adds a listener to library events
     * @param l the listener
     */
    public synchronized void addLibraryListener(LibraryListener l) {
        libraryListeners.add(l);
    }

    /**
     * Adds a listener to query events
     * @param l the listener
     */
    public synchronized void addQueryListener(QueryListener l) {
        queryListeners.add(l);
    }

    /**
     * Adds a listener to spy events
     * @param l the listener
     */
    public synchronized void addSpyListener(SpyListener l) {
        spyListeners.add(l);
    }

    /**
     * Adds a listener to warning events
     * @param l the listener
     */
    public synchronized void addWarningListener(WarningListener l) {
        warningListeners.add(l);
    }

    /**
     * Removes a listener to output events
     * @param l the listener
     */
    public synchronized void removeOutputListener(OutputListener l) {
        outputListeners.remove(l);
    }

    /**
     * Removes all output event listeners
     */
    public synchronized void removeAllOutputListeners() {
        outputListeners.clear();
    }

    /**
     * Removes a listener to theory events
     * @param l the listener
     */
    public synchronized void removeTheoryListener(TheoryListener l) {
        theoryListeners.remove(l);
    }

    /**
     * Removes a listener to library events
     * @param l the listener
     */
    public synchronized void removeLibraryListener(LibraryListener l) {
        libraryListeners.remove(l);
    }

    /**
     * Removes a listener to query events
     * @param l the listener
     */
    public synchronized void removeQueryListener(QueryListener l) {
        queryListeners.remove(l);
    }

    /**
     * Removes a listener to spy events
     * @param l the listener
     */
    public synchronized void removeSpyListener(SpyListener l) {
        spyListeners.remove(l);
    }

    /**
     * Removes all spy event listeners
     */
    public synchronized void removeAllSpyListeners() {
        spyListeners.clear();
    }

    /**
     * Removes a listener to warning events
     * @param l the listener
     */
    public synchronized void removeWarningListener(WarningListener l) {
        warningListeners.remove(l);
    }

    /**
     * Removes all warning event listeners
     */
    public synchronized void removeAllWarningListeners() {
        warningListeners.clear();
    }

    /**
     * Gets a copy of current listener list to output events
     */
    public synchronized List getOutputListenerList() {
        return (List) outputListeners.clone();
    }

    /**
     * Gets a copy of current listener list to warning events
     */
    public synchronized List getWarningListenerList() {
        return (List) warningListeners.clone();
    }

    /**
     * Gets a copy of current listener list to spy events
     */
    public synchronized List getSpyListenerList() {
        return (List) spyListeners.clone();
    }

    /**
     * Gets a copy of current listener list to theory events
     */
    public synchronized List getTheoryListenerList() {
        return (List) theoryListeners.clone();
    }

    /**
     * Gets a copy of current listener list to library events
     */
    public synchronized List getLibraryListenerList() {
        return (List) libraryListeners.clone();
    }

    /**
     * Gets a copy of current listener list to query events
     */
    public synchronized List getQueryListenerList() {
        return (List) queryListeners.clone();
    }

    // notification

    /**
     * Notifies an output information event
     * @param e the event
     */
    protected void notifyOutput(OutputEvent e) {
        Iterator it = outputListeners.listIterator();
        while (it.hasNext()) {
            ((OutputListener) it.next()).onOutput(e);
        }
    }

    /**
     * Notifies a spy information event
     * @param e the event
     */
    protected void notifySpy(SpyEvent e) {
        Iterator it = spyListeners.listIterator();
        while (it.hasNext()) {
            ((SpyListener) it.next()).onSpy(e);
        }
    }

    /**
     * Notifies a warning information event
     * @param e the event
     */
    protected void notifyWarning(WarningEvent e) {
        Iterator it = warningListeners.listIterator();
        while (it.hasNext()) {
            ((WarningListener) it.next()).onWarning(e);
        }
    }

    /**
     * Notifies a new theory set or updated event
     * @param e the event
     */
    protected void notifyChangedTheory(TheoryEvent e) {
        Iterator it = theoryListeners.listIterator();
        while (it.hasNext()) {
            ((TheoryListener) it.next()).theoryChanged(e);
        }
    }

    /**
     * Notifies a library loaded event
     * @param e the event
     */
    protected void notifyLoadedLibrary(LibraryEvent e) {
        Iterator it = libraryListeners.listIterator();
        while (it.hasNext()) {
            ((LibraryListener) it.next()).libraryLoaded(e);
        }
    }

    /**
     * Notifies a library unloaded event
     * @param e the event
     */
    protected void notifyUnloadedLibrary(LibraryEvent e) {
        Iterator it = libraryListeners.listIterator();
        while (it.hasNext()) {
            ((LibraryListener) it.next()).libraryUnloaded(e);
        }
    }

    /**
     * Notifies a new query result available event
     * @param e the event
     */
    protected void notifyNewQueryResultAvailable(QueryEvent e) {
        Iterator it = queryListeners.listIterator();
        while (it.hasNext()) {
            ((QueryListener) it.next()).newQueryResultAvailable(e);
        }
    }
}
Do Valuation (P/E, ROE and P/BV) Ratios Drive Stock Values? A Case of GCC Countries

Do valuation ratios predict future stock prices? Over the decades, researchers have explored data across various global financial markets and timelines in search of an answer. Although the results were not universal, they generated greater interest in the subject. Using valuation ratios as stock price predictors gained further momentum after Campbell and Shiller’s seminal work involving a century of data. In spite of its practical relevance, little effort has been made to establish the relationship between valuation ratios and the stock prices of GCC-listed companies. This paper attempts to bridge that gap by studying 140 publicly listed companies in the six GCC countries, namely Qatar, Kuwait, Bahrain, Saudi Arabia, Oman and the United Arab Emirates (UAE), using a multiple regression model. The period of study is 2013–2017. The relationship is estimated for each country individually, followed by an integrated approach. The independent variables are the price-earnings ratio (P/E), return on equity (ROE) and the price-to-book ratio (P/BV), with stock returns as the dependent variable.
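A plausible form of the regression implied by the abstract, written out for clarity; the coefficient symbols and the firm-year indexing are illustrative, not taken from the paper:

R_{i,t} = \beta_0 + \beta_1 \, (P/E)_{i,t} + \beta_2 \, ROE_{i,t} + \beta_3 \, (P/BV)_{i,t} + \varepsilon_{i,t}

where R_{i,t} is the stock return of firm i in year t and \varepsilon_{i,t} is the error term.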
package cli

import (
	"context"

	"github.com/pkg/errors"
	"github.com/skratchdot/open-golang/open"

	"github.com/kopia/kopia/fs"
	"github.com/kopia/kopia/fs/cachefs"
	"github.com/kopia/kopia/fs/loggingfs"
	"github.com/kopia/kopia/internal/mount"
	"github.com/kopia/kopia/repo"
	"github.com/kopia/kopia/snapshot/snapshotfs"
)

type commandMount struct {
	mountObjectID               string
	mountPoint                  string
	mountPointBrowse            bool
	mountTraceFS                bool
	mountFuseAllowOther         bool
	mountFuseAllowNonEmptyMount bool
	mountPreferWebDAV           bool

	maxCachedEntries     int
	maxCachedDirectories int
}

func (c *commandMount) setup(svc appServices, parent commandParent) {
	cmd := parent.Command("mount", "Mount repository object as a local filesystem.")

	cmd.Arg("path", "Identifier of the directory to mount.").Default("all").StringVar(&c.mountObjectID)
	cmd.Arg("mountPoint", "Mount point").Default("*").StringVar(&c.mountPoint)
	cmd.Flag("browse", "Open file browser").BoolVar(&c.mountPointBrowse)
	cmd.Flag("trace-fs", "Trace filesystem operations").BoolVar(&c.mountTraceFS)
	cmd.Flag("fuse-allow-other", "Allows other users to access the file system.").BoolVar(&c.mountFuseAllowOther)
	cmd.Flag("fuse-allow-non-empty-mount", "Allows the mounting over a non-empty directory. The files in it will be shadowed by the freshly created mount.").BoolVar(&c.mountFuseAllowNonEmptyMount)
	cmd.Flag("webdav", "Use WebDAV to mount the repository object regardless of fuse availability.").BoolVar(&c.mountPreferWebDAV)
	cmd.Flag("max-cached-entries", "Limit the number of cached directory entries").Default("100000").IntVar(&c.maxCachedEntries)
	cmd.Flag("max-cached-dirs", "Limit the number of cached directories").Default("100").IntVar(&c.maxCachedDirectories)

	cmd.Action(svc.repositoryReaderAction(c.run))
}

func (c *commandMount) newFSCache() cachefs.DirectoryCacher {
	return cachefs.NewCache(&cachefs.Options{
		MaxCachedDirectories: c.maxCachedDirectories,
		MaxCachedEntries:     c.maxCachedEntries,
	})
}

func (c *commandMount) run(ctx context.Context, rep repo.Repository) error {
	var entry fs.Directory

	if c.mountObjectID == "all" {
		entry = snapshotfs.AllSourcesEntry(rep)
	} else {
		var err error

		entry, err = snapshotfs.FilesystemDirectoryFromIDWithPath(ctx, rep, c.mountObjectID, false)
		if err != nil {
			return errors.Wrapf(err, "unable to get directory entry for %v", c.mountObjectID)
		}
	}

	if c.mountTraceFS {
		// nolint:forcetypeassert
		entry = loggingfs.Wrap(entry, log(ctx).Debugf).(fs.Directory)
	}

	// nolint:forcetypeassert
	entry = cachefs.Wrap(entry, c.newFSCache()).(fs.Directory)

	ctrl, mountErr := mount.Directory(ctx, entry, c.mountPoint, mount.Options{
		FuseAllowOther:         c.mountFuseAllowOther,
		FuseAllowNonEmptyMount: c.mountFuseAllowNonEmptyMount,
		PreferWebDAV:           c.mountPreferWebDAV,
	})
	if mountErr != nil {
		return errors.Wrap(mountErr, "mount error")
	}

	log(ctx).Infof("Mounted '%v' on %v", c.mountObjectID, ctrl.MountPath())

	if c.mountPoint == "*" && !c.mountPointBrowse {
		log(ctx).Infof("HINT: Pass --browse to automatically open file browser.")
	}

	log(ctx).Infof("Press Ctrl-C to unmount.")

	if c.mountPointBrowse {
		if err := open.Start(ctrl.MountPath()); err != nil {
			log(ctx).Errorf("unable to browse %v", err)
		}
	}

	// Wait until ctrl-c pressed or until the directory is unmounted.
	ctrlCPressed := make(chan bool)

	onCtrlC(func() { close(ctrlCPressed) })

	select {
	case <-ctrlCPressed:
		log(ctx).Infof("Unmounting...")
		// TODO: Consider lazy unmounting (-z) and polling till the filesystem is unmounted instead of failing with:
		// "unmount error: exit status 1: fusermount: failed to unmount /tmp/kopia-mount719819963: Device or resource busy, try --help"
		err := ctrl.Unmount(ctx)
		if err != nil {
			return errors.Wrap(err, "unmount error")
		}

	case <-ctrl.Done():
		log(ctx).Infof("Unmounted.")
		return nil
	}

	// Reporting clean unmount in case of interrupt signal.
	<-ctrl.Done()

	log(ctx).Infof("Unmounted.")

	return nil
}
/**
 *** Copyright 2020 ProximaX Limited. All rights reserved.
 *** Use of this source code is governed by the Apache 2.0
 *** license that can be found in the LICENSE file.
 **/
import {Cell, DetailedCell, NewVariableCell} from "../Cell";
import {CellFactory} from "../CellFactory";
import * as defines from "../Identifiers";
import * as cmd from "../Command";
import {PutUint16} from "../../utils/Binary";
import * as caster from "../../utils/typeCaster"

export class AuthChallengeCell extends DetailedCell implements CellFactory {
    private challenge : Buffer;
    private methods : Uint16Array;

    constructor() {
        super(defines.Command.AuthChallenge);
        this.methods = new Uint16Array(32);
    }

    setChallenge(challenge) {
        this.challenge = challenge;
    }

    getMethods() {
        return this.methods;
    }

    createCell() : Cell {
        var m = this.methods.length;
        var n = 32 + 2 + 2*m;
        var c = NewVariableCell(0, defines.Command.AuthChallenge, caster.int16(n));
        var offset = cmd.PayloadOffset(defines.Command.AuthChallenge);
        var data = c.getData();

        // Buffer.from replaces the deprecated `new Buffer(...)` constructor.
        var methodBuffer = Buffer.from(this.methods);
        methodBuffer.copy(data, offset);

        PutUint16(data, caster.int16(m), 32);

        let ptr = 34;
        for (let i = 0; i < this.methods.length; i++) {
            PutUint16(data, caster.int16(this.methods[i]), ptr);
            ptr += 2;
        }
        return c;
    }

    supportsMethod(m) : boolean {
        for (let i = 0; i < this.methods.length; i++) {
            if (this.methods[i] == m) return true;
        }
        return false;
    }
}

//////////////////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////////////////////////////////////////////

// AuthBypassCell represents AUTH_BYPASS cell
export class AuthBypassCell extends DetailedCell implements CellFactory {
    constructor() {
        super(defines.Command.AuthBypass);
    }

    createCell() : Cell {
        var c = NewVariableCell(0, defines.Command.AuthBypass, 0);
        return c;
    }
}
"""TODO: docstring """ import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt from torchvision import transforms from torch.utils.data import DataLoader import collections from collections import Counter import util from torchvision.transforms import transforms from torch.utils.data import Dataset, DataLoader, WeightedRandomSampler import numpy as np def data_loading (img_size, num_tr_smpl,num_test_smpl, tsk_list, re_weighting = False ): our_transform = transforms.Compose([ transforms.Resize(img_size), transforms.ToTensor()]) source_dataset, source_dataset_test, source_dataset_validation = [], [] , [] train_loader = [] for tsk in tsk_list: print ('LLLLLLLLLLLLLLL loading the current task '+tsk) if tsk =='mnist': source_dataset.append(util.Local_Dataset_digit(data_name='mnist', set='train', data_path='data/mnist', transform=our_transform, num_samples=num_tr_smpl)) source_dataset_test.append( util.Local_Dataset_digit(data_name='mnist', set='test', data_path='data/mnist',transform=our_transform, num_samples=num_test_smpl)) source_dataset_validation.append( util.Local_Dataset_digit(data_name='mnist', set='validation', data_path='data/mnist', transform=our_transform, num_samples=1000)) if tsk == 'm_mnist': source_dataset.append(util.Local_Dataset_digit(data_name='m_mnist', set='train', data_path='data/mnist_m', transform=our_transform, num_samples=num_tr_smpl)) source_dataset_test.append( util.Local_Dataset_digit(data_name='m_mnist', set='test', data_path='data/mnist_m',transform=our_transform, num_samples=num_test_smpl)) source_dataset_validation.append( util.Local_Dataset_digit(data_name='m_mnist', set='validation', data_path='data/mnist_m', transform=our_transform, num_samples=1000)) if tsk =='svhn': source_dataset.append(util.Local_SVHN(root='data/SVHN', split='train', transform=our_transform, download=True, num_smpl=num_tr_smpl)) source_dataset_test.append( util.Local_SVHN(root='data/SVHN', split='test',transform=our_transform, download=True, num_smpl=num_test_smpl)) source_dataset_validation.append( util.Local_SVHN(root='data/SVHN', split='extra', transform=our_transform, download=True, num_smpl=1000)) need_balance = re_weighting if re_weighting: for i in range(len(source_dataset)): if type(source_dataset[i].targets).__module__ == np.__name__: source_classes = source_dataset[i].targets else: source_classes = source_dataset[i].targets.numpy() source_freq = Counter(source_classes) #print('------------- source freq is',source_freq) source_class_weight = {x : 1.0 / source_freq[x] if need_balance else 1.0 for x in source_freq} #print('-----------source_class_weight is',source_class_weight) source_weights = [source_class_weight[x] for x in source_classes] source_sampler = WeightedRandomSampler(source_weights, len(source_classes)) #train_loader.append(DataLoader(source_dataset[i],batch_size=batch_size, sampler = source_sampler, drop_last=True, num_workers=8)) train_loader.append(DataLoader(source_dataset[i], batch_size=16, sampler = source_sampler, num_workers=0)) #[DataLoader(source_dataset[t], batch_size=16, shuffle=True, num_workers=0) # for t in range(len(source_dataset_test))] #test_loader.append(DataLoader(source_dataset_test[i],batch_size=test_batch_size, shuffle = True, num_workers=8)) else: train_loader = [DataLoader(source_dataset[t], batch_size=16, shuffle=True, num_workers=0) for t in range(len(source_dataset_test))] test_loader = [ DataLoader(source_dataset_test[t], batch_size=128, shuffle=False, num_workers=0) for t in range(len(source_dataset_test))] 
validation_loader = [ DataLoader(source_dataset_validation[t], batch_size=128, shuffle=False, num_workers=0) for t in range(len(source_dataset_test))] return train_loader,test_loader,validation_loader
/*
 * omap3isp_preview_init - Previewer initialization.
 * @isp: Pointer to ISP device
 * return -ENOMEM or zero on success
 */
int omap3isp_preview_init(struct isp_device *isp)
{
	struct isp_prev_device *prev = &isp->isp_prev;

	init_waitqueue_head(&prev->wait);

	preview_init_params(prev);

	return preview_init_entities(prev);
}
import React, { useState, useRef } from "react";

// Styles
import style from "./style.module.css";

// Props types
interface Props {
    onSubmit: Function;
    value: string;
    placeholder?: string;
}

export default function InputEdit(props: Props) {
    const input = useRef<HTMLInputElement>(null);
    const [value, setValue] = useState<string>(props.value);

    /**
     * When the user removes focus from
     * the input element
     */
    const onBlur = () => {
        props.onSubmit(value);
    };

    // Change event handler.
    // Save value to state
    const onChange = (e: React.ChangeEvent<HTMLInputElement>) => {
        setValue(e.target.value);
    };

    // Check if the user presses Enter
    // and remove focus from the input
    const onKeyUp = (e: React.KeyboardEvent<HTMLInputElement>) => {
        if (e.key === "Enter") {
            input.current?.blur();
        }
    };

    return (
        <input
            autoFocus
            className={style.input}
            ref={input}
            value={value}
            onBlur={onBlur}
            onChange={onChange}
            onKeyUp={onKeyUp}
            placeholder={props.placeholder}
        ></input>
    );
}
/**
 * DataSource that routes to one of various target DataSources based on the
 * current transaction isolation level. The target DataSources need to be
 * configured with the isolation level name as key, as defined on the
 * {@link org.springframework.transaction.TransactionDefinition TransactionDefinition interface}.
 *
 * <p>This is particularly useful in combination with JTA transaction management
 * (typically through Spring's {@link org.springframework.transaction.jta.JtaTransactionManager}).
 * Standard JTA does not support transaction-specific isolation levels. Some JTA
 * providers support isolation levels as a vendor-specific extension (e.g. WebLogic),
 * which is the preferred way of addressing this. As alternative (e.g. on WebSphere), the
 * target database can be represented through multiple JNDI DataSources, each configured
 * with a different isolation level (for the entire DataSource). The present DataSource
 * router allows to transparently switch to the appropriate DataSource based on the
 * current transaction's isolation level.
 *
 * <p>The configuration can for example look like this, assuming that the target
 * DataSources are defined as individual Spring beans with names
 * "myRepeatableReadDataSource", "mySerializableDataSource" and "myDefaultDataSource":
 *
 * <pre>
 * &lt;bean id="dataSourceRouter" class="org.springframework.jdbc.datasource.lookup.IsolationLevelDataSourceRouter"&gt;
 *   &lt;property name="targetDataSources"&gt;
 *     &lt;map&gt;
 *       &lt;entry key="ISOLATION_REPEATABLE_READ" value-ref="myRepeatableReadDataSource"/&gt;
 *       &lt;entry key="ISOLATION_SERIALIZABLE" value-ref="mySerializableDataSource"/&gt;
 *     &lt;/map&gt;
 *   &lt;/property&gt;
 *   &lt;property name="defaultTargetDataSource" ref="myDefaultDataSource"/&gt;
 * &lt;/bean&gt;</pre>
 *
 * Alternatively, the keyed values can also be data source names, to be resolved
 * through a {@link #setDataSourceLookup DataSourceLookup}: by default, JNDI
 * names for a standard JNDI lookup. This allows for a single concise definition
 * without the need for separate DataSource bean definitions.
 *
 * <pre>
 * &lt;bean id="dataSourceRouter" class="org.springframework.jdbc.datasource.lookup.IsolationLevelDataSourceRouter"&gt;
 *   &lt;property name="targetDataSources"&gt;
 *     &lt;map&gt;
 *       &lt;entry key="ISOLATION_REPEATABLE_READ" value="java:comp/env/jdbc/myrrds"/&gt;
 *       &lt;entry key="ISOLATION_SERIALIZABLE" value="java:comp/env/jdbc/myserds"/&gt;
 *     &lt;/map&gt;
 *   &lt;/property&gt;
 *   &lt;property name="defaultTargetDataSource" value="java:comp/env/jdbc/mydefds"/&gt;
 * &lt;/bean&gt;</pre>
 *
 * Note: If you are using this router in combination with Spring's
 * {@link org.springframework.transaction.jta.JtaTransactionManager},
 * don't forget to switch the "allowCustomIsolationLevels" flag to "true".
 * (By default, JtaTransactionManager will only accept a default isolation level
 * because of the lack of isolation level support in standard JTA itself.)
 *
 * <pre>
 * &lt;bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager"&gt;
 *   &lt;property name="allowCustomIsolationLevels" value="true"/&gt;
 * &lt;/bean&gt;</pre>
 *
 * @author Juergen Hoeller
 * @since 2.0.1
 * @see #setTargetDataSources
 * @see #setDefaultTargetDataSource
 * @see org.springframework.transaction.TransactionDefinition#ISOLATION_READ_UNCOMMITTED
 * @see org.springframework.transaction.TransactionDefinition#ISOLATION_READ_COMMITTED
 * @see org.springframework.transaction.TransactionDefinition#ISOLATION_REPEATABLE_READ
 * @see org.springframework.transaction.TransactionDefinition#ISOLATION_SERIALIZABLE
 * @see org.springframework.transaction.jta.JtaTransactionManager
 */
public class IsolationLevelDataSourceRouter extends AbstractRoutingDataSource {

    /** Constants instance for TransactionDefinition */
    private static final Constants constants = new Constants(TransactionDefinition.class);

    /**
     * Supports Integer values for the isolation level constants
     * as well as isolation level names as defined on the
     * {@link org.springframework.transaction.TransactionDefinition TransactionDefinition interface}.
     */
    protected Object resolveSpecifiedLookupKey(Object lookupKey) {
        if (lookupKey instanceof Integer) {
            return (Integer) lookupKey;
        }
        else if (lookupKey instanceof String) {
            String constantName = (String) lookupKey;
            if (constantName == null || !constantName.startsWith(DefaultTransactionDefinition.PREFIX_ISOLATION)) {
                throw new IllegalArgumentException("Only isolation constants allowed");
            }
            return constants.asNumber(constantName);
        }
        else {
            throw new IllegalArgumentException(
                    "Invalid lookup key - needs to be isolation level Integer or isolation level name String: " + lookupKey);
        }
    }

    protected Object determineCurrentLookupKey() {
        return TransactionSynchronizationManager.getCurrentTransactionIsolationLevel();
    }
}
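The XML above can also be expressed programmatically; a minimal sketch, assuming the three referenced DataSource beans already exist (the variable names below are the same placeholders the Javadoc uses). setTargetDataSources and setDefaultTargetDataSource are inherited from AbstractRoutingDataSource.

import java.util.HashMap;
import java.util.Map;

// Programmatic equivalent of the first XML example; myRepeatableReadDataSource,
// mySerializableDataSource and myDefaultDataSource are assumed to be existing DataSources.
IsolationLevelDataSourceRouter router = new IsolationLevelDataSourceRouter();

Map<Object, Object> targets = new HashMap<>();
targets.put("ISOLATION_REPEATABLE_READ", myRepeatableReadDataSource);
targets.put("ISOLATION_SERIALIZABLE", mySerializableDataSource);

router.setTargetDataSources(targets);                   // keyed by isolation level name
router.setDefaultTargetDataSource(myDefaultDataSource); // fallback for other levels
router.afterPropertiesSet();                            // resolves the target map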
How has the Chinese economy capitalised on the demographic dividend during the reform period? China’s unprecedented economic growth during the period of reform that began in the late 1970s has been accompanied by a dramatic demographic transition, namely a rapid decline in the fertility rate. In the period 1978–2015, China realised a real growth rate of gross national income (GNI) of 9.6 per cent, the fastest anywhere in the world in that period. On the other hand, according to the United Nations (UN 2015), China’s total fertility rate (TFR) dropped from 2.5–3 in the late 1970s and early 1980s to the replacement level of about 2 in the first half of the 1990s, and has hovered at about 1.5 since the second half of the 1990s.
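As a back-of-envelope check of what a 9.6 per cent real growth rate compounds to over 1978–2015 (roughly 37 years); the arithmetic below is illustrative, not a figure from the source:

(1.096)^{37} = e^{37 \ln 1.096} \approx e^{3.39} \approx 30

that is, roughly a thirtyfold expansion of real GNI over the period.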
'''
The pack layout manager was applied with the fill option, which makes a
widget stretch to the available window space, either vertically or
horizontally.
'''
from tkinter import *

root = Tk()

lb1 = Label(root, text="escrito 1", bg="purple")
lb1.pack(side=TOP, fill=X)

lb2 = Label(root, text="escrito 2", bg="pink")
lb2.pack(side=LEFT, fill=Y)

lb3 = Label(root, text="escrito 3", bg="yellowgreen")
lb3.pack(side=RIGHT, fill=Y)

lb4 = Label(root, text="escrito 4", bg="green")
lb4.pack(side=BOTTOM, fill=X)

root.geometry("300x300+100+100")
root.mainloop()
Steve Hollis insists he is ‘not Randy Lerner’s poodle’ after telling the Aston Villa owner that major changes are needed to end the current demise.

The Villa chairman is leading a major review at the club that has already seen CEO Tom Fox and sporting director Hendrik Almstadt leave this week. Director of recruitment Paddy Riley could stick around to work under the new regime that includes Mervyn King and David Bernstein as board members as well as Brian Little who now holds an advisory role.

Hollis, meanwhile, will oversee the day-to-day running at Villa Park and has told Lerner: “If you want some poodle who’s just going to be your mouthpiece you’ve got the wrong bloke.”

Hollis met Lerner in the United States prior to his arrival in January and said: “You’re a board member, Randy, and I’ll respect you as a shareholder but that’s the relationship we have.

“It’s brave of him. It’s part of his family assets and he’s handed over the stewardship of that asset to a new board team.

“Randy is passionate about Aston Villa. It’s down to a change in his personal circumstances that he just can’t physically be around the football club.

“But telephones work very well. We talk, meet up, what have you, and he just wants to see the best for Aston Villa.

“If you put yourself into Randy’s position, he’s put a huge amount of his emotional energy into the football club.

“He’s put a huge amount of cash into the club.

“It’s a real brave decision on his part to hand over the running of the club to a board where he’s a board member, but he’s one of five.

“I was very clear with him when I took the chair (about the way forward).”

After the recent changes, Hollis reckons the board is now in a good shape. “We’ve got a good mix between the commercials through people like me, the heavyweight business through Mervyn and David and also highly respected football people like Mervyn, David and Brian Little.

“It is what we need. The tone at the top in any organisation is what drives what happens when you go through it.”
/**
 * The MatchActionOperations class holds a list of MatchActionOperationEntry
 * objects to be executed together as one set.
 */
public class MatchActionOperations extends BatchOperation<MatchActionOperationEntry> {

    private final MatchActionOperationsId id;
    private MatchActionOperationsState state;
    private final Set<MatchActionOperationsId> dependencies;

    /**
     * The MatchAction operators.
     */
    public enum Operator {
        /** Add a new match action. */
        ADD,

        /** Remove an existing match action. */
        REMOVE,

        /**
         * Modify an existing match action entry strictly matching wildcards
         * and priority (works as MODIFY STRICT).
         */
        MODIFY,
    }

    /**
     * Constructs a MatchActionOperations object from an id. Internal
     * constructor called by a public factory method.
     *
     * @param newId match action operations identifier for this instance
     */
    public MatchActionOperations(final MatchActionOperationsId newId) {
        id = checkNotNull(newId);
        state = MatchActionOperationsState.INIT;
        dependencies = new HashSet<>();
    }

    /**
     * no-arg constructor for Kryo.
     */
    protected MatchActionOperations() {
        id = null;
        dependencies = null;
    }

    /**
     * Gets the identifier for the Match Action Operations object.
     *
     * @return identifier for the Operations object
     */
    public MatchActionOperationsId getOperationsId() {
        return id;
    }

    /**
     * Gets the state of the Match Action Operations.
     *
     * @return state of the operations
     */
    public MatchActionOperationsState getState() {
        return state;
    }

    /**
     * Sets the state of the Match Action Operations.
     *
     * @param newState new state of the operations
     */
    public void setState(final MatchActionOperationsState newState) {
        state = newState;
    }

    /**
     * Gets the set of IDs of operations that are dependent on this
     * operation.
     *
     * @return set of operations IDs of dependent operations
     */
    public Set<MatchActionOperationsId> getDependencies() {
        return dependencies;
    }

    /**
     * Adds a dependency to this set of Operations.
     *
     * @param dependentOperationId Identifier of the Operations that must
     *        complete before this one can be installed
     */
    public void addDependency(MatchActionOperationsId dependentOperationId) {
        dependencies.add(dependentOperationId);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }

    @Override
    public boolean equals(Object obj) {
        if (obj instanceof MatchActionOperations) {
            final MatchActionOperations other = (MatchActionOperations) obj;
            return (id.equals(other.id));
        }
        return false;
    }
}
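A short sketch of how a batch might be assembled and sequenced. How MatchActionOperationsId values are minted is project-specific, so the idGenerator below is a placeholder, and the PENDING state constant is assumed rather than taken from the source.

// Hypothetical assembly of two dependent batches; idGenerator is a placeholder
// for whatever id generator the project actually uses.
MatchActionOperations first = new MatchActionOperations(idGenerator.getNewId());
MatchActionOperations second = new MatchActionOperations(idGenerator.getNewId());

// Require first to complete before second may be installed.
second.addDependency(first.getOperationsId());

// States advance from INIT as the batch moves through the installation pipeline;
// the PENDING constant here is assumed for illustration.
second.setState(MatchActionOperationsState.PENDING);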
// #[cfg(test)]
pub mod test;

pub mod boolean;
pub mod multieq;
pub mod uint32;
pub mod blake2s;
pub mod num;
pub mod lookup;
pub mod baby_ecc;
pub mod ecc;
pub mod pedersen_hash;
pub mod baby_pedersen_hash;
pub mod multipack;
pub mod sha256;
pub mod baby_eddsa;
pub mod float_point;
pub mod polynomial_lookup;
pub mod as_waksman;
// pub mod linear_combination;
pub mod expression;
// pub mod shark_mimc;
pub mod rescue;

pub mod sapling;
pub mod sprout;

use bellman::{
    SynthesisError
};

// TODO: This should probably be removed and we
// should use existing helper methods on `Option`
// for mapping with an error.

/// This basically is just an extension to `Option`
/// which allows for a convenient mapping to an
/// error on `None`.
pub trait Assignment<T> {
    fn get(&self) -> Result<&T, SynthesisError>;
    fn grab(self) -> Result<T, SynthesisError>;
}

impl<T: Clone> Assignment<T> for Option<T> {
    fn get(&self) -> Result<&T, SynthesisError> {
        match *self {
            Some(ref v) => Ok(v),
            None => Err(SynthesisError::AssignmentMissing)
        }
    }

    fn grab(self) -> Result<T, SynthesisError> {
        match self {
            Some(v) => Ok(v),
            None => Err(SynthesisError::AssignmentMissing)
        }
    }
}

use crate::bellman::pairing::ff::{Field, PrimeField};

pub trait SomeField<F: Field> {
    fn add(&self, other: &Self) -> Self;
    fn sub(&self, other: &Self) -> Self;
    fn mul(&self, other: &Self) -> Self;
    fn fma(&self, to_mul: &Self, to_add: &Self) -> Self;
    fn negate(&self) -> Self;
}

impl<F: Field> SomeField<F> for Option<F> {
    fn add(&self, other: &Self) -> Self {
        match (self, other) {
            (Some(s), Some(o)) => {
                let mut tmp = *s;
                tmp.add_assign(&o);

                Some(tmp)
            },
            _ => None
        }
    }

    fn sub(&self, other: &Self) -> Self {
        match (self, other) {
            (Some(s), Some(o)) => {
                let mut tmp = *s;
                tmp.sub_assign(&o);

                Some(tmp)
            },
            _ => None
        }
    }

    fn mul(&self, other: &Self) -> Self {
        match (self, other) {
            (Some(s), Some(o)) => {
                let mut tmp = *s;
                tmp.mul_assign(&o);

                Some(tmp)
            },
            _ => None
        }
    }

    fn fma(&self, to_mul: &Self, to_add: &Self) -> Self {
        match (self, to_mul, to_add) {
            (Some(s), Some(m), Some(a)) => {
                let mut tmp = *s;
                tmp.mul_assign(&m);
                tmp.add_assign(&a);

                Some(tmp)
            },
            _ => None
        }
    }

    fn negate(&self) -> Self {
        match self {
            Some(s) => {
                let mut tmp = *s;
                tmp.negate();

                Some(tmp)
            },
            _ => None
        }
    }
}
/* Helper function to compare if oldpath is the prefix path of newpath */
int32_t _check_path_prefix(const char *oldpath, const char *newpath)
{
	char *temppath;

	if (strlen(oldpath) < strlen(newpath)) {
		temppath = malloc(strlen(oldpath) + 10);
		if (temppath == NULL)
			return -ENOMEM;
		snprintf(temppath, strlen(oldpath) + 5, "%s/", oldpath);
		if (strncmp(newpath, oldpath, strlen(temppath)) == 0) {
			free(temppath);
			return -EINVAL;
		}
		free(temppath);
	}
	return 0;
}
/**
 * Responsible for moving the web heads around and for locking/unlocking the web head to
 * remove view.
 *
 * @param event the touch event
 */
private void handleMove(@NonNull MotionEvent event) {
    movementTracker.addMovement(event);

    float offsetX = event.getRawX() - posX;
    float offsetY = event.getRawY() - posY;

    if (Math.hypot(offsetX, offsetY) > touchSlop) {
        dragging = true;
    }

    if (dragging) {
        getTrashy().reveal();
        userManuallyMoved = true;

        int x = (int) (initialDownX + offsetX);
        int y = (int) (initialDownY + offsetY);

        if (isNearRemoveCircle(x, y)) {
            getTrashy().grow();
            touchUp();

            xSpring.setSpringConfig(SpringConfigs.SNAP);
            ySpring.setSpringConfig(SpringConfigs.SNAP);

            xSpring.setEndValue(trashLockCoOrd()[0]);
            ySpring.setEndValue(trashLockCoOrd()[1]);
        } else {
            getTrashy().shrink();

            xSpring.setSpringConfig(SpringConfigs.DRAG);
            ySpring.setSpringConfig(SpringConfigs.DRAG);

            xSpring.setCurrentValue(x);
            ySpring.setCurrentValue(y);

            touchDown();
        }
    }
}
def _delete_nxos_db(self, vlan_id, device_id, host_id, vni, is_provider_vlan):
    try:
        rows = nxos_db.get_nexusvm_bindings(vlan_id, device_id)
        for row in rows:
            nxos_db.remove_nexusport_binding(
                row.port_id, row.vlan_id, row.vni,
                row.switch_ip, row.instance_id,
                row.is_provider_vlan)
    except excep.NexusPortBindingNotFound:
        return
Adorable Boy Donates His Allowance to Help Adorable Kittens

An adorable boy from Philadelphia took his love of kittens to another level. Evan, a 10-year-old boy and unabashed kitten lover, wrote this heartwarming letter to an animal shelter when he was 7 years old. City Kitties is the animal shelter where his family adopted their first cat, Macha. Here’s his letter:

Dear City Kitties, My name is Evan. I am 7 years old. And guess what? … I LOVE cats! They are my favorite animals and I got my cat at City Kitties as well! THANK YOU for letting me get my new cat! Thank you, City Kitties. Sincerely, Evan P.S. I get an allowance every week and I chose to make a donation to you. I love that you help cats find homes. I saved this money to help you help cats.

And since 2009, Evan has been donating his allowance to City Kitties. In his first year of donating, Evan saved up $46.75 for the animal shelter. The following year, he upped his donation to $97, and in 2012, he donated $110. Each year, he’s also sent a handwritten letter to the shelter. Way to go, Evan!
/**
 * Calculates the purchase price of a product
 */
public static Map<String, Object> calculatePurchasePrice(DispatchContext dctx, Map<String, ? extends Object> context) {
    Delegator delegator = dctx.getDelegator();
    LocalDispatcher dispatcher = dctx.getDispatcher();
    Map<String, Object> result = FastMap.newInstance();

    List<GenericValue> orderItemPriceInfos = FastList.newInstance();
    boolean validPriceFound = false;
    BigDecimal price = BigDecimal.ZERO;

    GenericValue product = (GenericValue) context.get("product");
    String productId = product.getString("productId");
    String currencyUomId = (String) context.get("currencyUomId");
    String partyId = (String) context.get("partyId");
    BigDecimal quantity = (BigDecimal) context.get("quantity");
    Locale locale = (Locale) context.get("locale");

    // first try to find a supplier-specific price via getSuppliersForProduct
    if (!validPriceFound) {
        Map<String, Object> priceContext = UtilMisc.toMap("currencyUomId", currencyUomId, "partyId", partyId, "productId", productId, "quantity", quantity);
        List<GenericValue> productSuppliers = null;
        try {
            Map<String, Object> priceResult = dispatcher.runSync("getSuppliersForProduct", priceContext);
            if (ServiceUtil.isError(priceResult)) {
                String errMsg = ServiceUtil.getErrorMessage(priceResult);
                Debug.logError(errMsg, module);
                return ServiceUtil.returnError(errMsg);
            }
            productSuppliers = UtilGenerics.checkList(priceResult.get("supplierProducts"));
        } catch (GenericServiceException gse) {
            Debug.logError(gse, module);
            return ServiceUtil.returnError(gse.getMessage());
        }
        if (productSuppliers != null) {
            for (GenericValue productSupplier : productSuppliers) {
                if (!validPriceFound) {
                    price = ((BigDecimal) productSupplier.get("lastPrice"));
                    validPriceFound = true;
                }
                StringBuilder priceInfoDescription = new StringBuilder();
                priceInfoDescription.append(UtilProperties.getMessage(resource, "ProductSupplier", locale));
                priceInfoDescription.append(" [");
                priceInfoDescription.append(UtilProperties.getMessage(resource, "ProductSupplierMinimumOrderQuantity", locale));
                priceInfoDescription.append(productSupplier.getBigDecimal("minimumOrderQuantity"));
                priceInfoDescription.append(UtilProperties.getMessage(resource, "ProductSupplierLastPrice", locale));
                priceInfoDescription.append(productSupplier.getBigDecimal("lastPrice"));
                priceInfoDescription.append("]");
                GenericValue orderItemPriceInfo = delegator.makeValue("OrderItemPriceInfo");
                // truncate the description to fit the column size
                String priceInfoDescriptionString = priceInfoDescription.toString();
                if (priceInfoDescriptionString.length() > 250) {
                    priceInfoDescriptionString = priceInfoDescriptionString.substring(0, 250);
                }
                orderItemPriceInfo.set("description", priceInfoDescriptionString);
                orderItemPriceInfos.add(orderItemPriceInfo);
            }
        }
    }

    // fall back to ProductPrice records with the PURCHASE purpose
    if (!validPriceFound) {
        List<GenericValue> prices = null;
        try {
            prices = delegator.findByAnd("ProductPrice", UtilMisc.toMap("productId", productId, "productPricePurposeId", "PURCHASE"), UtilMisc.toList("-fromDate"));

            // if no prices are found, look up the parent product's prices
            if (UtilValidate.isEmpty(prices)) {
                GenericValue parentProduct = ProductWorker.getParentProduct(productId, delegator);
                if (parentProduct != null) {
                    String parentProductId = parentProduct.getString("productId");
                    prices = delegator.findByAnd("ProductPrice", UtilMisc.toMap("productId", parentProductId, "productPricePurposeId", "PURCHASE"), UtilMisc.toList("-fromDate"));
                }
            }
        } catch (GenericEntityException e) {
            Debug.logError(e, module);
            return ServiceUtil.returnError(e.getMessage());
        }

        // filter out expired prices, then prefer AVERAGE_COST, then DEFAULT_PRICE, then LIST_PRICE
        prices = EntityUtil.filterByDate(prices);
        List<GenericValue> pricesToUse = EntityUtil.filterByAnd(prices, UtilMisc.toMap("productPriceTypeId", "AVERAGE_COST"));
        if (UtilValidate.isEmpty(pricesToUse)) {
            pricesToUse = EntityUtil.filterByAnd(prices, UtilMisc.toMap("productPriceTypeId", "DEFAULT_PRICE"));
            if (UtilValidate.isEmpty(pricesToUse)) {
                pricesToUse = EntityUtil.filterByAnd(prices, UtilMisc.toMap("productPriceTypeId", "LIST_PRICE"));
            }
        }

        GenericValue thisPrice = EntityUtil.getFirst(pricesToUse);
        if (thisPrice != null) {
            price = thisPrice.getBigDecimal("price");
            validPriceFound = true;
        }
    }

    result.put("price", price);
    result.put("validPriceFound", Boolean.valueOf(validPriceFound));
    result.put("orderItemPriceInfos", orderItemPriceInfos);
    return result;
}
// Copyright 2020 <NAME> // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package org.finos.legend.pure.runtime.java.compiled.generation.processors.support; import org.eclipse.collections.api.RichIterable; import org.eclipse.collections.api.list.ImmutableList; import org.eclipse.collections.api.list.ListIterable; import org.eclipse.collections.api.map.MutableMap; import org.eclipse.collections.api.tuple.Pair; import org.eclipse.collections.impl.factory.Lists; import org.eclipse.collections.impl.map.mutable.UnifiedMap; import org.finos.legend.pure.m3.coreinstance.meta.pure.functions.collection.List; import org.finos.legend.pure.m3.coreinstance.meta.pure.metamodel.function.LambdaFunction; import org.finos.legend.pure.m3.coreinstance.meta.pure.metamodel.function.NativeFunction; import org.finos.legend.pure.m3.coreinstance.meta.pure.metamodel.type.FunctionType; import org.finos.legend.pure.m3.coreinstance.meta.pure.metamodel.valuespecification.InstanceValue; import org.finos.legend.pure.m3.coreinstance.meta.pure.metamodel.valuespecification.SimpleFunctionExpression; import org.finos.legend.pure.m3.coreinstance.meta.pure.metamodel.valuespecification.ValueSpecification; import org.finos.legend.pure.m3.coreinstance.meta.pure.metamodel.valuespecification.VariableExpression; import org.finos.legend.pure.m3.execution.ExecutionSupport; import org.finos.legend.pure.runtime.java.compiled.generation.processors.support.function.PureLambdaFunction; import org.finos.legend.pure.runtime.java.compiled.generation.processors.support.function.PureLambdaFunction0; import org.finos.legend.pure.runtime.java.compiled.generation.processors.support.function.PureLambdaFunction1; import org.finos.legend.pure.runtime.java.compiled.generation.processors.support.function.PureLambdaFunction2; import org.finos.legend.pure.runtime.java.compiled.generation.processors.support.function.PureLambdaFunction3; import org.finos.legend.pure.runtime.java.compiled.generation.processors.support.map.PureMap; import java.util.Objects; /* An implementation of PureLambdaFunction that implements the execution by dynamically looking at the expression sequence and reactivates / evaluates them individually to compose the overall result of the function. 
*/ public class DynamicPureLambdaFunctionImpl<T> implements PureLambdaFunction<T> { private final MutableMap<String, Object> openVariables; private final LambdaFunction func; private final Bridge bridge; public DynamicPureLambdaFunctionImpl( LambdaFunction func, MutableMap<String, Object> openVariables, Bridge bridge ) { Objects.requireNonNull(func, "func"); Objects.requireNonNull(openVariables, "openVariables"); this.func = func; this.openVariables = openVariables.asUnmodifiable(); this.bridge = bridge; } @Override public MutableMap<String, Object> getOpenVariables() { return openVariables; } @Override public T execute(ListIterable vars, ExecutionSupport es) { Objects.requireNonNull(vars, "vars"); Objects.requireNonNull(es, "es"); PureMap runningOpenVariablesMap = new PureMap(UnifiedMap.newMap()); for (Pair entry : openVariables.keyValuesView()) { runningOpenVariablesMap.getMap().put(entry.getOne(), createList()._valuesAddAll(CompiledSupport.toPureCollection(entry.getTwo()))); } FunctionType ft = (FunctionType)func._classifierGenericType()._typeArguments().getFirst()._rawType(); ImmutableList<? extends VariableExpression> parameters = Lists.immutable.withAll(ft._parameters()); for (int i = 0; i < parameters.size(); i++) { runningOpenVariablesMap.getMap().put(parameters.get(i)._name(), createList()._valuesAddAll(CompiledSupport.toPureCollection(vars.get(i)))); } Object finalResult = null; for (Object expressionSequenceItem : func._expressionSequence()) { ValueSpecification vs = (ValueSpecification)expressionSequenceItem; Object result = Reactivator.reactivateWithoutJavaCompilation(bridge, vs, runningOpenVariablesMap, es); if (vs instanceof SimpleFunctionExpression) { SimpleFunctionExpression sfe = (SimpleFunctionExpression)vs; if (sfe._func() instanceof NativeFunction && sfe._func()._name().equals("letFunction_String_1__T_m__T_m_")) { String varName = (String)((InstanceValue)sfe._parametersValues().getFirst())._values().getFirst(); runningOpenVariablesMap.getMap().put(varName, createList()._valuesAddAll(CompiledSupport.toPureCollection(result))); } } finalResult = result; } return (T)finalResult; } private List<Object> createList() { return this.bridge.listBuilder().value(); } @Override public String toString() { return com.google.common.base.MoreObjects.toStringHelper(this) .add("func", func) .add("openVariables", openVariables) .toString(); } private LambdaFunction lambdaFunction() { return this.func; } public static PureLambdaFunction<Object> createPureLambdaFunction( final LambdaFunction func, final MutableMap<String, Object> openVariables, Bridge bridge) { return createPureLambdaFunctionWrapper(new DynamicPureLambdaFunctionImpl<Object>(func, openVariables, bridge)); } public static <X> PureLambdaFunction<X> createPureLambdaFunctionWrapper(final DynamicPureLambdaFunctionImpl<X> inner) { LambdaFunction func = inner.lambdaFunction(); RichIterable<? 
extends VariableExpression> params = ((FunctionType)func._classifierGenericType()._typeArguments().getFirst()._rawType())._parameters(); if (params.size() == 0) { return new PureLambdaFunction0<X>() { @Override public MutableMap<String, Object> getOpenVariables() { return inner.getOpenVariables(); } @Override public X valueOf(ExecutionSupport executionSupport) { return execute(Lists.immutable.empty(), executionSupport); } @Override public X execute(ListIterable vars, ExecutionSupport es) { return inner.execute(vars, es); } @Override public String toString() { return com.google.common.base.MoreObjects.toStringHelper(this) .add("inner", inner) .toString(); } }; } else if (params.size() == 1) { return new PureLambdaFunction1<Object, X>() { @Override public MutableMap<String, Object> getOpenVariables() { return inner.getOpenVariables(); } @Override public X value(Object o, ExecutionSupport executionSupport) { return execute(Lists.immutable.with(o), executionSupport); } @Override public X execute(ListIterable vars, ExecutionSupport es) { return inner.execute(vars, es); } public String toString() { return com.google.common.base.MoreObjects.toStringHelper(this) .add("inner", inner) .toString(); } }; } else if (params.size() == 2) { return new PureLambdaFunction2<Object, Object, X>() { @Override public MutableMap<String, Object> getOpenVariables() { return inner.getOpenVariables(); } @Override public X value(Object o, Object o2, ExecutionSupport executionSupport) { return execute(Lists.immutable.with(o, o2), executionSupport); } @Override public X execute(ListIterable vars, ExecutionSupport es) { return inner.execute(vars, es); } public String toString() { return com.google.common.base.MoreObjects.toStringHelper(this) .add("inner", inner) .toString(); } }; } else if (params.size() == 3) { return new PureLambdaFunction3<Object, Object, Object, X>() { @Override public MutableMap<String, Object> getOpenVariables() { return inner.getOpenVariables(); } @Override public X value(Object o, Object o2, Object o3, ExecutionSupport executionSupport) { return execute(Lists.immutable.with(o, o2, o3), executionSupport); } @Override public X execute(ListIterable vars, ExecutionSupport es) { return inner.execute(vars, es); } public String toString() { return com.google.common.base.MoreObjects.toStringHelper(this) .add("inner", inner) .toString(); } }; } else { return inner; } } }
// expects, splitworld, name of var, constructor function for the reducer, any constructor params boost::python::object raw_addVariable(boost::python::tuple t, boost::python::dict kwargs) { int l=len(t); if (l<3) { throw SplitWorldException("Insufficient parameters to addVariable."); } extract<SplitWorld&> exw(t[0]); if (!exw.check()) { throw SplitWorldException("First parameter to addVariable must be a SplitWorld."); } SplitWorld& ws=exw(); object pname=t[1]; extract<std::string> ex2(pname); if (!ex2.check()) { throw SplitWorldException("Second parameter to addVariable must be a string"); } std::string name=ex2(); object creator=t[2]; tuple ntup=tuple(t.slice(3,l)); ws.addVariable(name, creator, ntup, kwargs); return object(); }
package uk.gov.digital.ho.hocs.casework.api.dto; import com.fasterxml.jackson.annotation.JsonProperty; import lombok.AccessLevel; import lombok.AllArgsConstructor; import lombok.Getter; import lombok.NoArgsConstructor; import java.util.Set; @NoArgsConstructor @AllArgsConstructor @Getter public class GetCorrespondentTypeResponse { @JsonProperty("correspondentTypes") Set<CorrespondentTypeDto> correspondentTypes; public static GetCorrespondentTypeResponse from(Set<CorrespondentTypeDto> correspondentTypeSet) { return new GetCorrespondentTypeResponse(correspondentTypeSet); } }
/**
 * Generic text sign with Base64 String output
 *
 * @param data the text to sign
 * @param webCryptoAPI whether the WebCrypto API signing variant should be used
 * @return the Base64-encoded signature of the data
 * @throws IOException
 * @throws SignatureException
 * @throws NoSuchAlgorithmException
 * @throws InvalidKeyException
 */
static String signBase64Challenge(final String data, final boolean webCryptoAPI)
        throws InvalidKeyException, NoSuchAlgorithmException, SignatureException, IOException {
    final Encoder base64 = Base64.getEncoder();
    final byte[] signature = signChallenge(data.getBytes(), webCryptoAPI);
    return base64.encodeToString(signature);
}
// Create will ensure that a namespace for the provided handle exists. func (t *Transaction) Create(handle Handle) error { t.mutex.Lock() defer t.mutex.Unlock() err := handle.Validate(true) if err != nil { return err } if handle[0] == Local { return fmt.Errorf("namespace local.* is read only") } if t.catalog.Namespaces[handle] != nil { return nil } t.catalog = t.catalog.Clone() t.catalog.Namespaces[handle] = mongokit.NewCollection(true) t.dirty = true return nil }
/****************************************************************************\
*                                                                            *
*  IFC (Iris Foundation Classes) Project                                    *
*  http://github.com/haoxingeng/ifc                                         *
*                                                                            *
*  Copyright 2008 HaoXinGeng (<EMAIL>)                                       *
*  All rights reserved.                                                     *
*                                                                            *
*  Licensed under the Apache License, Version 2.0 (the "License");          *
*  you may not use this file except in compliance with the License.         *
*  You may obtain a copy of the License at                                  *
*                                                                            *
*      http://www.apache.org/licenses/LICENSE-2.0                           *
*                                                                            *
*  Unless required by applicable law or agreed to in writing, software      *
*  distributed under the License is distributed on an "AS IS" BASIS,        *
*  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. *
*  See the License for the specific language governing permissions and      *
*  limitations under the License.                                           *
*                                                                            *
\****************************************************************************/

/// @file ifc_configmgr.cpp

#include "stdafx.h"
#include "ifc_configmgr.h"
#include "ifc_sysutils.h"

namespace ifc {

///////////////////////////////////////////////////////////////////////////////
// CBaseConfigMgr

//-----------------------------------------------------------------------------
CBaseConfigMgr::CBaseConfigMgr()
{
	m_Sections.SetCaseSensitive(false);
}

//-----------------------------------------------------------------------------
CBaseConfigMgr::~CBaseConfigMgr()
{
	Clear();
}

//-----------------------------------------------------------------------------
CNameValueList* CBaseConfigMgr::FindNameValueList(LPCTSTR lpszSection)
{
	CNameValueList *pResult = NULL;
	int i = m_Sections.IndexOf(lpszSection);
	if (i >= 0)
		pResult = (CNameValueList*)m_Sections.GetData(i);
	return pResult;
}

//-----------------------------------------------------------------------------
CNameValueList* CBaseConfigMgr::AddSection(LPCTSTR lpszSection)
{
	CNameValueList *pResult;
	pResult = FindNameValueList(lpszSection);
	if (pResult == NULL)
	{
		pResult = new CNameValueList();
		pResult->SetCaseSensitive(false);
		m_Sections.Add(lpszSection, pResult);
	}
	return pResult;
}

//-----------------------------------------------------------------------------
bool CBaseConfigMgr::GetValueStr(LPCTSTR lpszSection, LPCTSTR lpszName, CString& strValue)
{
	CNameValueList *pNVList = FindNameValueList(lpszSection);
	return (pNVList != NULL && pNVList->GetValue(lpszName, strValue));
}

//-----------------------------------------------------------------------------
void CBaseConfigMgr::Load(CConfigIO& IO)
{
	CConfigIO::CAutoUpdater AutoUpdater(IO);
	CStrList SectionList, KeyList;
	CString strSection, strKey, strValue;

	Clear();
	IO.GetSectionList(SectionList);
	for (int nSecIndex = 0; nSecIndex < SectionList.GetCount(); nSecIndex++)
	{
		strSection = SectionList[nSecIndex];
		KeyList.Clear();
		IO.GetKeyList(strSection, KeyList);
		for (int nKeyIndex = 0; nKeyIndex < KeyList.GetCount(); nKeyIndex++)
		{
			strKey = KeyList[nKeyIndex];
			strValue = IO.Read(strSection, strKey, TEXT(""));
			SetString(strSection, strKey, strValue);
		}
	}
}

//-----------------------------------------------------------------------------
void CBaseConfigMgr::Save(CConfigIO& IO)
{
	CConfigIO::CAutoUpdater AutoUpdater(IO);
	CString strSection, strKey, strValue;
	CNameValueList *pNVList;

	for (int nSecIndex = 0; nSecIndex < m_Sections.GetCount(); nSecIndex++)
	{
		strSection = m_Sections[nSecIndex];
		pNVList = (CNameValueList*)m_Sections.GetData(nSecIndex);
		IO.DeleteSection(strSection);
		for (int nKeyIndex = 0; nKeyIndex < pNVList->GetCount(); nKeyIndex++)
		{
			pNVList->GetItems(nKeyIndex, strKey, strValue);
IO.Write(strSection, strKey, strValue); } } } //----------------------------------------------------------------------------- CString CBaseConfigMgr::GetString(LPCTSTR lpszSection, LPCTSTR lpszName, LPCTSTR lpszDefault) { CString strResult; if (!GetValueStr(lpszSection, lpszName, strResult)) strResult = lpszDefault; return strResult; } //----------------------------------------------------------------------------- int CBaseConfigMgr::GetInteger(LPCTSTR lpszSection, LPCTSTR lpszName, int nDefault) { int nResult; CString strValue; if (GetValueStr(lpszSection, lpszName, strValue)) nResult = StrToInt(strValue, nDefault); else nResult = nDefault; return nResult; } //----------------------------------------------------------------------------- bool CBaseConfigMgr::GetBool(LPCTSTR lpszSection, LPCTSTR lpszName, bool bDefault) { bool bResult; CString strValue; if (GetValueStr(lpszSection, lpszName, strValue)) bResult = StrToBool(strValue, bDefault); else bResult = bDefault; return bResult; } //----------------------------------------------------------------------------- double CBaseConfigMgr::GetFloat(LPCTSTR lpszSection, LPCTSTR lpszName, double fDefault) { double fResult; CString strValue; if (GetValueStr(lpszSection, lpszName, strValue)) fResult = StrToFloat(strValue, fDefault); else fResult = fDefault; return fResult; } //----------------------------------------------------------------------------- int CBaseConfigMgr::GetBinaryData(LPCTSTR lpszSection, LPCTSTR lpszName, CBuffer& Value) { CString strValue = GetString(lpszSection, lpszName, TEXT("")); Value.Clear(); if (!strValue.IsEmpty()) DecodeBase16(strValue, Value); return Value.GetSize(); } //----------------------------------------------------------------------------- void CBaseConfigMgr::SetString(LPCTSTR lpszSection, LPCTSTR lpszName, LPCTSTR lpszValue) { CNameValueList *pNVList = AddSection(lpszSection); pNVList->Add(lpszName, lpszValue); } //----------------------------------------------------------------------------- void CBaseConfigMgr::SetInteger(LPCTSTR lpszSection, LPCTSTR lpszName, int nValue) { CNameValueList *pNVList = AddSection(lpszSection); pNVList->Add(lpszName, IntToStr(nValue)); } //----------------------------------------------------------------------------- void CBaseConfigMgr::SetBool(LPCTSTR lpszSection, LPCTSTR lpszName, bool bValue) { CNameValueList *pNVList = AddSection(lpszSection); pNVList->Add(lpszName, BoolToStr(bValue)); } //----------------------------------------------------------------------------- void CBaseConfigMgr::SetFloat(LPCTSTR lpszSection, LPCTSTR lpszName, double fValue) { CNameValueList *pNVList = AddSection(lpszSection); pNVList->Add(lpszName, FloatToStr(fValue)); } //----------------------------------------------------------------------------- void CBaseConfigMgr::SetBinaryData(LPCTSTR lpszSection, LPCTSTR lpszName, PVOID pDataBuf, int nDataSize) { CString strText; if (nDataSize > 0 && pDataBuf != NULL) strText = EncodeBase16((char*)pDataBuf, nDataSize); SetString(lpszSection, lpszName, strText); } //----------------------------------------------------------------------------- void CBaseConfigMgr::DeleteSection(LPCTSTR lpszSection) { // The items in m_Sections should not be removed, so that the Save() can erase // the sections from the file correctly. 
CNameValueList *pNVList = FindNameValueList(lpszSection); if (pNVList != NULL) pNVList->Clear(); } //----------------------------------------------------------------------------- void CBaseConfigMgr::GetSection(LPCTSTR lpszSection, CNameValueList& List) { List.Clear(); CNameValueList *pNVList = FindNameValueList(lpszSection); if (pNVList != NULL) List = *pNVList; } //----------------------------------------------------------------------------- void CBaseConfigMgr::LoadFromIniFile(LPCTSTR lpszFileName) { CIniConfigIO IO(lpszFileName); Load(IO); } //----------------------------------------------------------------------------- void CBaseConfigMgr::SaveToIniFile(LPCTSTR lpszFileName) { CIniConfigIO IO(lpszFileName); Save(IO); } //----------------------------------------------------------------------------- void CBaseConfigMgr::LoadFromRegistry(HKEY hRootKey, LPCTSTR lpszPath) { CRegConfigIO IO(hRootKey, lpszPath); Load(IO); } //----------------------------------------------------------------------------- void CBaseConfigMgr::SaveToRegistry(HKEY hRootKey, LPCTSTR lpszPath) { CRegConfigIO IO(hRootKey, lpszPath); Save(IO); } //----------------------------------------------------------------------------- void CBaseConfigMgr::Clear() { for (int i = 0; i < m_Sections.GetCount(); i++) delete (CNameValueList*)m_Sections.GetData(i); m_Sections.Clear(); } /////////////////////////////////////////////////////////////////////////////// // CIniConfigIO //----------------------------------------------------------------------------- CIniConfigIO::CIniConfigIO(LPCTSTR lpszFileName) : m_strFileName(lpszFileName), m_pIniFile(NULL) { // nothing } //----------------------------------------------------------------------------- void CIniConfigIO::BeginUpdate() { m_pIniFile = new CMemIniFile(m_strFileName); } //----------------------------------------------------------------------------- void CIniConfigIO::EndUpdate() { m_pIniFile->UpdateFile(); delete m_pIniFile; m_pIniFile = NULL; } //----------------------------------------------------------------------------- void CIniConfigIO::GetSectionList(CStrList& List) { m_pIniFile->ReadSectionNames(List); } //----------------------------------------------------------------------------- void CIniConfigIO::GetKeyList(LPCTSTR lpszSection, CStrList& List) { m_pIniFile->ReadKeyNames(lpszSection, List); } //----------------------------------------------------------------------------- CString CIniConfigIO::Read(LPCTSTR lpszSection, LPCTSTR lpszName, LPCTSTR lpszDefault) { return m_pIniFile->ReadString(lpszSection, lpszName, lpszDefault); } //----------------------------------------------------------------------------- void CIniConfigIO::Write(LPCTSTR lpszSection, LPCTSTR lpszName, LPCTSTR lpszValue) { m_pIniFile->WriteString(lpszSection, lpszName, lpszValue); } //----------------------------------------------------------------------------- void CIniConfigIO::DeleteSection(LPCTSTR lpszSection) { m_pIniFile->EraseSection(lpszSection); } /////////////////////////////////////////////////////////////////////////////// // CRegConfigIO //----------------------------------------------------------------------------- CRegConfigIO::CRegConfigIO(HKEY hRootKey, LPCTSTR lpszPath) : m_hRootKey(hRootKey), m_strPath(lpszPath), m_pRegistry(NULL) { m_strPath = PathWithSlash(m_strPath); } //----------------------------------------------------------------------------- void CRegConfigIO::BeginUpdate() { m_pRegistry = new CRegistry(); m_pRegistry->SetRootKey(m_hRootKey); } 
//----------------------------------------------------------------------------- void CRegConfigIO::EndUpdate() { delete m_pRegistry; m_pRegistry = NULL; } //----------------------------------------------------------------------------- void CRegConfigIO::GetSectionList(CStrList& List) { List.Clear(); if (m_pRegistry->OpenKey(m_strPath, false)) m_pRegistry->GetKeyNames(List); } //----------------------------------------------------------------------------- void CRegConfigIO::GetKeyList(LPCTSTR lpszSection, CStrList& List) { List.Clear(); if (m_pRegistry->OpenKey(m_strPath + lpszSection, false)) m_pRegistry->GetValueNames(List); } //----------------------------------------------------------------------------- CString CRegConfigIO::Read(LPCTSTR lpszSection, LPCTSTR lpszName, LPCTSTR lpszDefault) { if (m_pRegistry->OpenKey(m_strPath + lpszSection, false)) return m_pRegistry->ReadString(lpszName); else return lpszDefault; } //----------------------------------------------------------------------------- void CRegConfigIO::Write(LPCTSTR lpszSection, LPCTSTR lpszName, LPCTSTR lpszValue) { if (m_pRegistry->OpenKey(m_strPath + lpszSection, true)) m_pRegistry->WriteString(lpszName, lpszValue); } //----------------------------------------------------------------------------- void CRegConfigIO::DeleteSection(LPCTSTR lpszSection) { if (m_pRegistry->OpenKey(m_strPath, false)) m_pRegistry->DeleteKey(lpszSection); } /////////////////////////////////////////////////////////////////////////////// } // namespace ifc
/** * Tile for the {@link BlockPurifier}.. * @author rubensworks * */ public class TilePurifier extends TankInventoryTileEntity implements CyclopsTileEntity.ITickingTile { /** * The amount of slots. */ public static final int SLOTS = 2; /** * The purify item slot. */ public static final int SLOT_PURIFY = 0; /** * The additional slot. */ public static final int SLOT_ADDITIONAL = 1; /** * Duration in ticks to show the 'poof' animation. */ private static final int ANIMATION_FINISHED_DURATION = 2; @Delegate private final ITickingTile tickingTileComponent = new TickingTileComponent(this); @NBTPersist private Float randomRotation = 0F; @Getter private int tick = 0; public static final int MAX_BUCKETS = 3; /** * Book bounce tick count. */ @NBTPersist public Integer tickCount = 0; /** * The next additional item rotation. */ @NBTPersist public Float additionalRotation2 = 0F; /** * The previous additional item rotation. */ @NBTPersist public Float additionalRotationPrev = 0F; /** * The additional item rotation. */ @NBTPersist public Float additionalRotation = 0F; @NBTPersist private Integer finishedAnimation = 0; @NBTPersist @Getter private Integer currentAction = -1; /* Copied from EnchantingTableTileEntity */ public int field_195522_a; public float field_195523_f; public float field_195524_g; public float field_195525_h; public float field_195526_i; public float field_195527_j; public float field_195528_k; public float field_195529_l; public float field_195530_m; public float field_195531_n; /** * Make a new instance. */ public TilePurifier() { super(RegistryEntries.TILE_ENTITY_PURIFIER, SLOTS, 1, FluidHelpers.BUCKET_VOLUME * MAX_BUCKETS, RegistryEntries.FLUID_BLOOD); // Trigger render update client-side getInventory().addDirtyMarkListener(this::sendUpdate); } @Override protected SimpleInventory createInventory(int inventorySize, int stackSize) { return new SimpleInventory(inventorySize, stackSize) { @Override public boolean isItemValidForSlot(int i, ItemStack itemStack) { if(i == 0) { return itemStack.getCount() == 1 && getActions().isItemValidForMainSlot(itemStack); } else if(i == 1) { return itemStack.getCount() == 1 && getActions().isItemValidForAdditionalSlot(itemStack); } return false; } }; } @Override protected SingleUseTank createTank(int tankSize) { return new ImplicitFluidConversionTank(tankSize, BloodFluidConverter.getInstance()); } public IPurifierActionRegistry getActions() { return EvilCraft._instance.getRegistryManager().getRegistry(IPurifierActionRegistry.class); } @Override public void updateTileEntity() { super.updateTileEntity(); int actionId = currentAction; if(actionId < 0) { actionId = getActions().canWork(this); } if(actionId >= 0) { tick++; if(getActions().work(actionId, this)) { tick = 0; currentAction = -1; onActionFinished(); } } else { tick = 0; currentAction = -1; } // Animation tick/display. if(finishedAnimation > 0) { finishedAnimation--; if(world.isRemote()) { showEnchantedEffect(); } } updateAdditionalItem(); } public void onActionFinished() { finishedAnimation = ANIMATION_FINISHED_DURATION; } /** * Get the amount of contained buckets. * @return The amount of buckets. */ public int getBucketsFloored() { return (int) Math.floor(getTank().getFluidAmount() / (double) FluidHelpers.BUCKET_VOLUME); } /** * Get the rest of the fluid that can not fit in a bucket. * Use this in {@link TilePurifier#setBuckets(int, int)} as rest. * @return The rest of the fluid. 
*/ public int getBucketsRest() { return getTank().getFluidAmount() % FluidHelpers.BUCKET_VOLUME; } /** * Set the amount of contained buckets. This will also change the inner tank. * @param buckets The amount of buckets. * @param rest The rest of the fluid. */ public void setBuckets(int buckets, int rest) { getTank().setFluid(new FluidStack(RegistryEntries.FLUID_BLOOD, FluidHelpers.BUCKET_VOLUME * buckets + rest)); sendUpdate(); } /** * Set the maximum amount of contained buckets. * @return The maximum amount of buckets. */ public int getMaxBuckets() { return MAX_BUCKETS; } private void updateAdditionalItem() { this.additionalRotationPrev = this.additionalRotation2; this.additionalRotation += 0.02F; while (this.additionalRotation2 >= (float)Math.PI) { this.additionalRotation2 -= ((float)Math.PI * 2F); } while (this.additionalRotation2 < -(float)Math.PI) { this.additionalRotation2 += ((float)Math.PI * 2F); } while (this.additionalRotation >= (float)Math.PI) { this.additionalRotation -= ((float)Math.PI * 2F); } while (this.additionalRotation < -(float)Math.PI) { this.additionalRotation += ((float)Math.PI * 2F); } float baseNextRotation; for (baseNextRotation = this.additionalRotation - this.additionalRotation2; baseNextRotation >= (float)Math.PI; baseNextRotation -= ((float)Math.PI * 2F)) { } while (baseNextRotation < -(float)Math.PI) { baseNextRotation += ((float)Math.PI * 2F); } this.additionalRotation2 += baseNextRotation * 0.4F; ++this.tickCount; /* Copied from EnchantingTableTileEntity */ float f2; for(f2 = this.field_195531_n - this.field_195529_l; f2 >= (float)Math.PI; f2 -= ((float)Math.PI * 2F)) { ; } while(f2 < -(float)Math.PI) { f2 += ((float)Math.PI * 2F); } this.field_195529_l += f2 * 0.4F; this.field_195527_j = MathHelper.clamp(this.field_195527_j, 0.0F, 1.0F); ++this.field_195522_a; this.field_195524_g = this.field_195523_f; float f = (this.field_195525_h - this.field_195523_f) * 0.4F; float f3 = 0.2F; f = MathHelper.clamp(f, -0.2F, 0.2F); this.field_195526_i += (f - this.field_195526_i) * 0.9F; this.field_195523_f += this.field_195526_i; } /** * Get the purify item. * @return The purify item. */ public ItemStack getPurifyItem() { return getInventory().getStackInSlot(SLOT_PURIFY); } /** * Set the purify item. * @param itemStack The purify item. */ public void setPurifyItem(ItemStack itemStack) { this.randomRotation = world.rand.nextFloat() * 360; getInventory().setInventorySlotContents(SLOT_PURIFY, itemStack); } /** * Get the book item. * @return The book item. */ public ItemStack getAdditionalItem() { return getInventory().getStackInSlot(SLOT_ADDITIONAL); } /** * Set the book item. * @param itemStack The book item. 
*/ public void setAdditionalItem(ItemStack itemStack) { getInventory().setInventorySlotContents(SLOT_ADDITIONAL, itemStack); } @OnlyIn(Dist.CLIENT) public void showEffect() { for (int i=0; i < 1; i++) { double particleX = getPos().getX() + 0.2 + world.rand.nextDouble() * 0.6; double particleY = getPos().getY() + 0.2 + world.rand.nextDouble() * 0.6; double particleZ = getPos().getZ() + 0.2 + world.rand.nextDouble() * 0.6; float particleMotionX = -0.01F + world.rand.nextFloat() * 0.02F; float particleMotionY = 0.01F; float particleMotionZ = -0.01F + world.rand.nextFloat() * 0.02F; Minecraft.getInstance().worldRenderer.addParticle( RegistryEntries.PARTICLE_BLOOD_BUBBLE, false, particleX, particleY, particleZ, particleMotionX, particleMotionY, particleMotionZ); } } @OnlyIn(Dist.CLIENT) public void showEnchantingEffect() { if(world.rand.nextInt(10) == 0) { for (int i=0; i < 1; i++) { double particleX = getPos().getX() + 0.45 + world.rand.nextDouble() * 0.1; double particleY = getPos().getY() + 1.45 + world.rand.nextDouble() * 0.1; double particleZ = getPos().getZ() + 0.45 + world.rand.nextDouble() * 0.1; float particleMotionX = -0.4F + world.rand.nextFloat() * 0.8F; float particleMotionY = -world.rand.nextFloat(); float particleMotionZ = -0.4F + world.rand.nextFloat() * 0.8F; Minecraft.getInstance().worldRenderer.addParticle( ParticleTypes.ENCHANT, false, particleX, particleY, particleZ, particleMotionX, particleMotionY, particleMotionZ); } } } @OnlyIn(Dist.CLIENT) private void showEnchantedEffect() { for (int i=0; i < 100; i++) { double particleX = getPos().getX() + 0.45 + world.rand.nextDouble() * 0.1; double particleY = getPos().getY() + 1.45 + world.rand.nextDouble() * 0.1; double particleZ = getPos().getZ() + 0.45 + world.rand.nextDouble() * 0.1; float particleMotionX = -0.4F + world.rand.nextFloat() * 0.8F; float particleMotionY = -0.4F + world.rand.nextFloat() * 0.8F; float particleMotionZ = -0.4F + world.rand.nextFloat() * 0.8F; Minecraft.getInstance().worldRenderer.addParticle( RegistryEntries.PARTICLE_MAGIC_FINISH, false, particleX, particleY, particleZ, particleMotionX, particleMotionY, particleMotionZ); } } /** * Get the random rotation for displaying the item. * @return The random rotation. */ public float getRandomRotation() { return randomRotation; } @Override public void onTankChanged() { super.onTankChanged(); sendUpdate(); } }
// NewResourceHandler creates a new resource handler. func NewResourceHandler(server *Server, router *mux.Router) *ResourceHandler { resourceHandler := &ResourceHandler{ router: router, Server: server, } router.HandleFunc("/{protocol}/{host}/{port}/", resourceHandler.getResource) router.HandleFunc("/{protocol}/{host}/{port}/{path}", resourceHandler.getResource) return resourceHandler }
package com.intellij.struts.dom.tiles; import com.intellij.struts.StrutsTest; import com.intellij.testFramework.TestDataPath; /** * @author <NAME> */ @TestDataPath("$CONTENT_ROOT/../testData/tiles/highlighting") public class TilesHighlightingTest extends StrutsTest { @Override protected String getBasePath() { return "/tiles/highlighting"; } public void testTiles20() { myFixture.testHighlighting("tiles-20.xml"); } public void testTiles21() { myFixture.testHighlighting("tiles-21.xml"); } public void testTiles30() { myFixture.testHighlighting("tiles-30.xml"); } }
Rep. Jerrold Nadler, New York Democrat, said FBI Director James B. Comey’s 11th-hour announcements about Hillary Clinton’s emails may have cost her the election, and he called on President Obama to fire Mr. Comey immediately. “What Jim Comey did was so highly improper and wrong, from the very beginning in July,” Mr. Nadler said Monday on CNN’s “New Day.” “He was putting his thumb on the scales right then, and it’s unforgivable for a police agency to opine, frankly, publicly, about legal conduct,” he said. “And then to send that letter when he had nothing to say, violating the guidelines that you don’t comment on an ongoing investigation — you don’t intervene in an election within 60 days — may very well have cost her the election,” Mr. Nadler said. “But whether it did or not, it was unforgivable as a political intervention by the police into the election, and the people in the FBI who were leaking to [former New York City Mayor] Rudy Giuliani through former agents or not — that was also clearly illegal,” he said. “The president ought to fire Comey immediately, and he ought to initiate an investigation,” he said. “That may be why she lost. It certainly hurt,” Mr. Nadler said. Mr. Nadler was apparently referring to comments from Mr. Giuliani, a top Trump surrogate, in which Mr. Giuliani said ahead of the election on Fox News that “you’re darn right I heard about it.” He later said on CNN he was referring to “consternation within the FBI” over the situation and that Mr. Comey’s letter was a surprise to him. In July, Mr. Comey announced that Mrs. Clinton and her aides acted carelessly with her email set-up but declined to recommend criminal charges for mishandling classified information. On Oct. 28, Mr. Comey disclosed to Congress that the bureau found emails that appeared to be pertinent to its investigation into Mrs. Clinton’s emails. Then on the Sunday before the election, he said the bureau had not changed its conclusions from July. Some Republicans have said Mrs. Clinton has only herself to blame for the situation, and that she could have headed off anything approaching what Mr. Comey had announced had she used a traditional state.gov email address and a server that wasn’t run out of her own home to do government business.
class ResponseHandler: """ Class to listen for json responses on the queue and then call registered handlers based on search patterns. This class is used by :class:`~aioharmony.client.HarmonyClient`, there is no need to use this class. The :class:`~aioharmony.client.HarmonyClient` class exposes methods :meth:`~ResponseHandler.register_handler` and :meth:`~ResponseHandler.unregister_handler` for registering additional handlers. :param message_queue: queue to listen on for JSON messages :type message_queue: asyncio.Queue """ def __init__(self, message_queue: asyncio.Queue, name: str = None) -> None: """""" self._message_queue = message_queue self._name = name self._handler_list = [] self._callback_task = asyncio.ensure_future(self._callback_handler()) async def close(self): """Close all connections and tasks This should be called to ensure everything is stopped and cancelled out. """ # Stop callback task if self._callback_task and not self._callback_task.done(): self._callback_task.cancel() def register_handler(self, handler: Handler, msgid: str = None, expiration: Union[ datetime, timedelta] = None) -> str: """Register a handler. :param handler: Handler object to be registered :type handler: Handler :param msgid: Message ID to match upon. DEFAULT = None :type msgid: Optional[str] :param expiration: How long or when handler should be removed. When this is specified it will override what is set in the Handler object. If datetime is provided then UTC will be assumed if tzinfo of the object is None. DEFAULT = None :type expiration: Optional[Union[ datetime.datetime, datetime.timedelta]] :return: Handler UUID number, this is a unique number for this handler :rtype: str """ handler_uuid = str(uuid4()) if expiration is None: expiration = handler.expiration if isinstance(expiration, timedelta): expiration = datetime.now(timezone.utc) + expiration if expiration is None: _LOGGER.debug("%s: Registering handler %s with UUID %s", self._name, handler.handler_name, handler_uuid) else: if expiration.tzinfo is None: expiration = expiration.replace(tzinfo=timezone.utc) _LOGGER.debug("%s: Registering handler %s with UUID %s that will " "expire on %s", self._name, handler.handler_name, handler_uuid, expiration.astimezone()) self._handler_list.append(CallbackEntryType( handler_uuid=handler_uuid, msgid=msgid, expiration=expiration, handler=handler )) return handler_uuid def unregister_handler(self, handler_uuid: str) -> bool: """Unregister a handler. :param handler_uuid: Handler UUID, this is returned by register_handler when registering the handler :type handler_uuid: str :return: True if handler was found and thus deleted, False if it was not found :rtype: bool """ _LOGGER.debug("%s: Unregistering handler with UUID %s", self._name, handler_uuid) found_uuid = False for index in [index for index, element in enumerate(self._handler_list) if element.handler_uuid == handler_uuid]: del self._handler_list[index] found_uuid = True break return found_uuid # pylint: disable=too-many-return-statements def _handler_match(self, dict_list, message, key=None): if key is not None: message = message.get(key) value = dict_list.get(key) else: value = dict_list if message is None or value is None: return False if isinstance(value, (dict, list)): # If they're different types then it is no match. 
# pylint: disable=unidiomatic-typecheck if type(message) != type(value): return False for new_key in value: if not self._handler_match(dict_list=value, message=message, key=new_key): return False return True # value is a string or a pattern. If message is a dict or a list # then it is not a match. # Unable to check if message and value type are same when value is # not a list or dict as it can then be a string or a pattern whereas # message should be a string to do matching. if isinstance(message, (dict, list)): return False if isinstance(value, Pattern): if value.search(message) is None: return False return True return value == message def _get_handlers(self, message: dict) -> List[CallbackEntryType]: """ Find the handlers to be called for the JSON message received :param message: JSON message to use :type message: dict :return: List of Handler objects. :rtype: List[Handler] """ callback_list = [] for handler in self._handler_list: if handler.msgid is not None: if message.get('id') is None or \ message.get('id') != handler.msgid: _LOGGER.debug("%s: No match on msgid for %s", self._name, handler.handler.handler_name) continue if handler.handler.resp_json is not None: if not self._handler_match( dict_list=handler.handler.resp_json, message=message): _LOGGER.debug("%s: No match for handler %s", self._name, handler.handler.handler_name) continue _LOGGER.debug("%s: Match for %s", self._name, handler.handler.handler_name) callback_list.append(handler) return callback_list def _unregister_expired_handlers(self, single_handler: CallbackEntryType = None) -> bool: """ Unregisters any expired handlers based on their expiration datetime. Will check the handler dict instead of the list if provided :param single_handler: Handler dict as it is put in the handler list by register_handler. DEFAULT = NONE :type single_handler: dict :return: True if one or more handlers were unregistered, otherwise False :rtype: bool """ if single_handler is None: handler_list = self._handler_list else: handler_list = [single_handler] removed_expired = False for handler in handler_list: if handler.expiration is not None: if datetime.now(timezone.utc) > handler.expiration: _LOGGER.debug("%s: Handler %s with UUID %s has " "expired, removing: %s", self._name, handler.handler.handler_name, handler.handler_uuid, handler.expiration.astimezone()) self.unregister_handler(handler.handler_uuid) removed_expired = True return removed_expired # pylint: disable=broad-except async def _callback_handler(self) -> None: """ Listens on the queue for JSON messages and then processes them by calling any handler(s) """ _LOGGER.debug("%s: Callback handler started", self._name) while True: # Put everything here in a try block, we do not want this # to stop running out due to an exception. try: # Wait for something to appear on the queue. message = await self._message_queue.get() _LOGGER.debug("%s: Message received: %s", self._name, message) # Go through list and call for handler in self._get_handlers(message=message): # Make sure handler hasn't expired yet. if self._unregister_expired_handlers( single_handler=handler): # Was expired and now removed, go on with next one. continue call_callback( callback_handler=handler.handler.handler_obj, result=message, callback_uuid=handler.handler_uuid, callback_name=handler.handler.handler_name ) # Remove the handler from the list if it was only to be # called once. 
                    if handler.handler.once:
                        self.unregister_handler(handler.handler_uuid)

                # Go through all handlers and remove expired ones IF
                # currently nothing in the queue.
                if self._message_queue.empty():
                    # Go through list and remove all expired ones.
                    _LOGGER.debug("%s: Checking for expired handlers",
                                  self._name)
                    self._unregister_expired_handlers()
            except asyncio.CancelledError:
                _LOGGER.debug("%s: Received STOP for callback handler",
                              self._name)
                break
            # Need to catch everything here to prevent an issue in a
            # handler from causing the callback loop to exit.
            except Exception as exc:
                _LOGGER.exception("%s: Exception in callback handler: %s",
                                  self._name,
                                  exc)
                # Reset the queue.
                self._message_queue = asyncio.Queue()

        _LOGGER.debug("%s: Callback handler stopped.", self._name)
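To make the registration flow described in the class docstring concrete, here is a minimal, hypothetical usage sketch. The exact constructor of `Handler` is defined elsewhere in aioharmony, so the keyword arguments shown are assumptions based on the attributes this class reads (`handler_name`, `handler_obj`, `resp_json`, `once`); treat this as an illustration, not documented API:

```python
import asyncio
import re

async def demo():
    queue = asyncio.Queue()
    responses = ResponseHandler(message_queue=queue, name="demo")

    # Hypothetical handler: called once for any message whose "type"
    # field matches the pattern below. Field names are assumptions.
    handler = Handler(
        handler_name="on_connect",
        handler_obj=lambda message: print("matched:", message),
        resp_json={"type": re.compile("connect")},
        once=True,
    )
    uuid = responses.register_handler(handler)

    await queue.put({"id": "1", "type": "harmony.connect"})
    await asyncio.sleep(0.1)            # let the callback task run
    responses.unregister_handler(uuid)  # no-op if already removed (once=True)
    await responses.close()

asyncio.run(demo())
```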
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include "openvino/runtime/remote_tensor.hpp"

#include "ie_ngraph_utils.hpp"
#include "ie_remote_blob.hpp"

namespace ov {
namespace runtime {

void RemoteTensor::type_check(const Tensor& tensor, const std::map<std::string, std::vector<std::string>>& type_info) {
    OPENVINO_ASSERT(tensor, "Could not check empty tensor type");
    auto remote_tensor = static_cast<const RemoteTensor*>(&tensor);
    auto remote_impl = dynamic_cast<ie::RemoteBlob*>(remote_tensor->_impl.get());
    OPENVINO_ASSERT(remote_impl != nullptr, "Tensor was not initialized using remote implementation");
    if (!type_info.empty()) {
        auto params = remote_impl->getParams();
        for (auto&& type_info_value : type_info) {
            auto it_param = params.find(type_info_value.first);
            OPENVINO_ASSERT(it_param != params.end(), "Parameter with key ", type_info_value.first, " not found");
            if (!type_info_value.second.empty()) {
                auto param_value = it_param->second.as<std::string>();
                auto param_found = std::any_of(type_info_value.second.begin(),
                                               type_info_value.second.end(),
                                               [&](const std::string& param) {
                                                   return param == param_value;
                                               });
                OPENVINO_ASSERT(param_found, "Unexpected parameter value ", param_value);
            }
        }
    }
}

ie::ParamMap RemoteTensor::get_params() const {
    OPENVINO_ASSERT(_impl != nullptr, "Remote tensor was not initialized.");
    type_check(*this);
    auto remote_impl = static_cast<ie::RemoteBlob*>(_impl.get());
    try {
        ParamMap paramMap;
        for (auto&& param : remote_impl->getParams()) {
            paramMap.emplace(param.first, Any{param.second, _so});
        }
        return paramMap;
    } catch (const std::exception& ex) {
        throw ov::Exception(ex.what());
    } catch (...) {
        OPENVINO_ASSERT(false, "Unexpected exception");
    }
}

std::string RemoteTensor::get_device_name() const {
    OPENVINO_ASSERT(_impl != nullptr, "Remote tensor was not initialized.");
    type_check(*this);
    auto remote_impl = static_cast<ie::RemoteBlob*>(_impl.get());
    try {
        return remote_impl->getDeviceName();
    } catch (const std::exception& ex) {
        throw ov::Exception(ex.what());
    } catch (...) {
        OPENVINO_ASSERT(false, "Unexpected exception");
    }
}

}  // namespace runtime
}  // namespace ov
Three fourths of the nation’s voters don’t care. No really. They don’t. New Gallup research reveals that 75 percent of all registered voters say Mitt Romney’s personal worth of some $250 million makes “no difference” to them; this includes 89 percent of Republicans, 76 percent of independents and 62 percent of Democrats. “The Obama campaign is focusing on Romney’s wealth in an attempt to position him as the candidate whose policies will benefit the wealthy and increase the gap between rich and poor — juxtaposed against Mr. Obama’s positioning as the candidate who will do more for the middle class. Most Americans claim Mr. Romney’s wealth will not affect their vote, perhaps reflecting Gallup research showing that the majority of Americans believe the U.S. benefits from having a rich class and would themselves like to be rich,” observes Gallup director Frank Newport.

CARNEY’S FAVORITE PHRASE

“I would refer you to the campaign.” Such a tidy phrase. It’s White House press secretary Jay Carney’s current method to defuse troublesome questions from restless journalists during the daily briefing. Mr. Carney has taken to labeling such inquiries a “campaign-specific question” and skipping the answer. In recent days, he’s used the “campaign” excuse to deflect queries about President Obama’s college records, the presidential seal, outsourcing, White House transparency, civility and the NAACP annual convention, among other things. Needless to say, Obama campaign spokesman Benjamin Bolt is about to get a lot busier.

SAVE OUR SHIP

This is still a ship to be reckoned with. Sixty years ago, the SS United States was the fastest passenger ship ever built — an all-aluminum vessel that could carry 14,000 troops in wartime, yet it still hosted more than 1 million hoity-toity passengers enamored by its nifty moderne interiors. Almost 1,000 feet in length, this grand ship was 107 feet longer than the Titanic. And of particular interest: The federal government originally worked with the United States Lines to develop a “super ship to be part Cold War weapon and part luxury ocean liner,” according to historical records. It was considered a “top secret Pentagon project.” But alas. Time, jet travel and the economy took their toll: America’s magnificent flagship was retired in 1969, sold, stripped of interior fittings in 1984 and sealed by the U.S. Navy for safety reasons. The iconic red, white and blue funnels faded but proud, the liner was finally docked at Pier 82 in Philadelphia a dozen years later and saved from the scrapheap by philanthropists and hopeful developers several times. But history still calls. A determined conservancy group has hopes to sway the vessel’s fate with an aggressive fundraising effort — one can “sponsor” an inch of the ship - in hopes of resurrecting the half-million square feet of floating space as a self-sustaining retail and event site. Hey, why not? The Queen Mary is enjoying just such a retirement in California. The SS United States, incidentally, hosted countless movie stars; we’re talking John Wayne and Marilyn Monroe here, along with Presidents Harry S. Truman, Dwight D. Eisenhower, John F. Kennedy and Bill Clinton. And yes, the conservancy is taking ideas about redevelopment. Perhaps the ship should now be re-launched as a space-going tourist vehicle. See information about the liner — and a haunting, current photo — here: www.ssusc.org.
The animated, interactive fundraising site shows thousands of little spots on the ship funnels, hull and decks that already have been sponsored by kindly donors. See it all here: https://savetheunitedstates.org

CPAC AWAKENS

Those who pine to get in touch with their inner Ronald Reagan and think optimistic thoughts about the Republican Party, rejoice. The 40th annual Conservative Political Action Conference — affectionately known as CPAC — is awake, and ready to rumble. The American Conservative Union has opened registration for CPAC 2013, eight months before the extravaganza gets under way in mid-March at the spectacular Gaylord National Resort on the Potomac River. This is the largest combined hotel and conference center on the entire East Coast, which tells you a little something about the organization’s expectations for attendance. See the information here: www.conservative.org.

THE BIZ VET

“Transitioning service members are natural entrepreneurs, possessing the training, experience, and leadership skills to start businesses and create jobs.” So says the Small Business Administration, which has partnered with the Department of Veterans Affairs and the Defense Department to create “Operation Boots to Business,” an intensive training program for those who are, in military parlance, “short.” Coursework has been developed by the Whitman School of Management at Syracuse University; pilot programs are already being launched at four U.S. Marine Corps bases. The agencies point out that 15 percent of all U.S. business owners are vets. Find basic information here: www.sba.gov/bootstobusiness.

WEEKEND READING

Sunday is “Cost of Government Day.” It marks how long it took the average American to pay off the cost of federal, state and local government spending and regulation. Americans for Tax Reform researchers figured the whole thing out. See the 44-page “burden of government” here: www.costofgovernment.org.

POLL DU JOUR

• 64 percent of Americans approve of Arizona’s immigration law requiring police to verify the legal status of those who have been stopped or arrested.
• 89 percent of Republicans, 66 percent of independents and 39 percent of Democrats agree.
• 61 percent overall would like to see a similar law passed in their state.
• 87 percent of Republicans, 64 percent of independents and 37 percent of Democrats agree.
• 55 percent overall support a new White House policy that allows young illegal immigrants to obtain a work permit rather than be deported.
• 29 percent of Republicans, 55 percent of independents and 80 percent of Democrats agree.

Source: A Quinnipiac University poll of 2,722 registered U.S. voters conducted July 1 to 8.

• Tip line always open at [email protected].
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class JavaApplication32 {

    public static int n;
    public static String word;
    public static int len;
    public static char[] ch;

    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        // First line: the number of words to process.
        n = Integer.parseInt(br.readLine());
        for (int i = 0; i < n; i++) {
            word = br.readLine();
            len = word.length();
            if (len <= 10) {
                // Short words are printed unchanged.
                System.out.println(word);
            } else {
                // Long words are abbreviated as
                // <first letter><number of letters in between><last letter>,
                // e.g. "localization" becomes "l10n".
                ch = word.toCharArray();
                System.out.println(String.valueOf(ch[0]) + (ch.length - 2) + String.valueOf(ch[ch.length - 1]));
            }
        }
    }
}
Instance Selection for Online Automatic Post-Editing in a Multi-domain Scenario

In recent years, several end-to-end online translation systems have been proposed to successfully incorporate human post-editing feedback in the translation workflow. The performance of these systems in a multi-domain translation environment (involving different text genres, post-editing styles, machine translation systems) within the automatic post-editing (APE) task has not been thoroughly investigated yet. In this work, we show that when used in the APE framework the existing online systems are not robust to domain changes in the incoming data stream. In particular, these systems lack the capability to learn and use domain-specific post-editing rules from a pool of multi-domain data sets. To cope with this problem, we propose an online learning framework that generates more reliable translations with significantly better quality as compared with the existing online and batch systems. Our framework includes: i) an instance selection technique based on information retrieval that helps to build domain-specific APE systems, and ii) an optimization procedure to tune the feature weights of the log-linear model that allows the decoder to improve the post-editing quality.

Introduction

Nowadays, machine translation (MT) is a core element in the computer-assisted translation (CAT) framework. The motivation for integrating MT in the CAT framework lies in its capability to provide useful suggestions for unseen segments, which helps to increase the translators' productivity. However, it has been observed that MT is often prone to systematic errors that human post-editing has to correct before publication. The by-product of this "translation as post-editing" process is an increasing amount of parallel data consisting of MT output on one side and its corrected version on the other side. This data can be leveraged to develop automatic post-editing (APE) systems capable not only of spotting recurring MT errors, but also of correcting them. Thus, integrating an APE system inside the CAT framework can further improve the quality of the suggested segments, reduce the workload of human post-editors and increase the productivity of the translation industry. As pointed out in (Parton et al., 2012) and (Chatterjee et al., 2015b), from the application point of view APE components would make it possible to:

• Improve the MT output by exploiting information unavailable to the decoder, or by performing deeper text analysis that is too expensive at decoding stage;
• Cope with systematic errors of an MT system whose decoding process is not accessible;
• Provide professional translators with improved MT output quality to reduce (human) PE effort;
• Adapt the output of a general-purpose MT system to the lexicon/style requested in a specific application domain.

In the last decade, several works have shown that the quality of machine-translated text can be improved significantly by post-processing the translations with an APE system (Simard et al., 2007a; Dugast et al., 2007; Terumasa, 2007; Pilevar, 2011; Béchara et al., 2011; Chatterjee et al., 2015b, 2016). These systems mainly follow the phrase-based machine translation approach, where the MT outputs (optionally with the source sentence) are used as the source language corpus and the post-edits are used as the target language corpus.
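As a minimal sketch of this data arrangement (not any particular system's pipeline), an APE training corpus can be written out with the MT hypotheses on the source side and the human post-edits on the target side; the file names below are illustrative:

```python
# Minimal sketch: write an APE training corpus in which the MT output
# is treated as the source language and the post-edits as the target.
def build_ape_corpus(mt_segments, pe_segments,
                     src_path="train.mt", tgt_path="train.pe"):
    assert len(mt_segments) == len(pe_segments), "one post-edit per MT segment"
    with open(src_path, "w", encoding="utf-8") as src, \
         open(tgt_path, "w", encoding="utf-8") as tgt:
        for mt, pe in zip(mt_segments, pe_segments):
            src.write(mt.strip() + "\n")  # what the MT system produced
            tgt.write(pe.strip() + "\n")  # the human-corrected version
```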
A common trait of all these APE systems is that they were developed in batch mode, which consists of training the models over a batch of parallel sentences, optimizing the parameters over a development set, and then decoding the test data with the tuned parameters. Although these standard approaches showed promising results, they lack the ability to incorporate human feedback in a real-time translation workflow. This led to the development of online learning algorithms that can leverage the continuous streams of data arriving in the form of human post-editing feedback to dynamically update the models and tune the parameters on-the-fly within the CAT framework.

In recent years, several online systems have been proposed in MT (see Section 2 for more details) to address the problem of incremental training of the models or on-the-fly optimization of feature weights. A few online MT systems have also been applied to the APE scenario (Simard and Foster, 2013; Lagarda et al., 2015) in a controlled working environment in which the systems are trained and evaluated on homogeneous/coherent data, where the training and test sets share similar characteristics. Moving from this controlled lab environment to a real-world translation workflow, where training and test data can be produced by different MT systems, post-edited by various translators and belong to several text genres, makes the task more challenging, because the APE systems have to adapt to all these sources of variability in real time. We define this scenario as a multi-domain translation environment (MDTE), where a domain is made of segments belonging to the same text genre and the MT outputs are generated by the same MT system. To reproduce this scenario, in our experiments we run the online APE systems on the concatenation of two datasets belonging to different domains.

A preliminary evaluation in the MDTE scenario reveals that online systems are not robust enough to learn from and adapt to the dynamics of the data, mainly because they try to leverage all the seen data without considering the peculiarities of each domain. In the long run, these systems tend to become more and more generic, which may not be useful, and may even be harmful, when automatically post-editing domain-specific segments. To address this problem, for the first time, we propose an online APE system that is able to efficiently work in an MDTE scenario. Our intuition is that an online APE model trained on few but highly relevant data points (with respect to the segment to be post-edited) can be more reliable than one trained on all the available data as-is. To validate this intuition, we propose an online APE system based on an instance selection (IS) technique that is able to retrieve the most relevant training instances from a pool of multi-domain data for each segment to post-edit. The selected data are then used to train and tune the APE system on-the-fly. The relevance of a training sample is measured by a similarity score that takes into account the context of the segment to be post-edited. This technique allows our online APE system to be flexible enough to decide whether it has the correct knowledge for post-editing a sentence or whether it is safer to keep the MT output untouched, avoiding possible damage from corrections made with insufficient or unreliable knowledge.
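To illustrate this idea, and the receive/post-edit/incorporate workflow described in the next section, here is a minimal, hypothetical online loop with such a reliability guard. The retrieval function, the threshold value, the decoder and the human post-editing callback are placeholders (assumptions), not the actual components of the proposed system:

```python
SIM_THRESHOLD = 0.5  # illustrative value; would be tuned in practice

def online_ape_step(mt_segment, knowledge_base, retrieve, train_and_decode, post_edit):
    """One iteration of a (sketched) online APE loop with a reliability guard.

    retrieve(segment, kb)                -> list of ((mt, pe), similarity) pairs
    train_and_decode(instances, segment) -> automatically post-edited segment
    post_edit(segment)                   -> the human-corrected segment
    """
    matches = retrieve(mt_segment, knowledge_base)
    if matches and max(sim for _, sim in matches) >= SIM_THRESHOLD:
        # Train/tune on the relevant instances only, then decode.
        instances = [pair for pair, sim in matches if sim >= SIM_THRESHOLD]
        suggestion = train_and_decode(instances, mt_segment)
    else:
        # Not enough reliable knowledge: safer to keep the MT output untouched.
        suggestion = mt_segment
    corrected = post_edit(suggestion)               # human feedback
    knowledge_base.append((mt_segment, corrected))  # learn from it immediately
    return corrected
```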
The results of our experiments with various data sets show that our online learning approach based on IS is: i) able to outperform the batch and the other online APE techniques in the single-domain scenario, and ii) robust enough to work in a MDTE, generating reliable post-edits with significantly better performance than the existing online APE systems.

Online Translation Systems

Online translation systems aim to incorporate human post-editing feedback (or the corrected version of the MT output) into their models in real time, as soon as it becomes available. This feedback helps the system learn from the mistakes made in past translations and avoid repeating them in future translations. This continuous learning capability will eventually improve the quality of the translations and consequently increase the productivity of the translators/post-editors (Tatsumi, 2009) working with MT suggestions in a CAT environment. The basic workflow of an online translation system goes through the following steps repeatedly: i) the system receives an input segment; ii) the input segment is translated and provided to the post-editor to fix any errors in it; and iii) the human post-edited version of the translation is incorporated back into the system, by stepwise updating the underlying models and parameters. In the APE context, the input is a machine-translated segment (optionally with its corresponding source segment), which is processed by the online APE system to fix errors, and then verified by the post-editors.

Several online translation systems have been proposed over the years (Hardt and Elming, 2010; Bertoldi et al., 2013; Mathur et al., 2013; Simard and Foster, 2013; Ortiz-Martínez and Casacuberta, 2014; Denkowski et al., 2014; Wuebker et al., 2015, inter alia). In this section, we describe two online systems that have been used in the APE task (PEPr and Thot), and one in the MT scenario which is similar to our proposed system (Realtime cdec).

PEPr: Post-Edit Propagation. Simard and Foster (2013) proposed a method for post-edit propagation (PEPr), which learns post-editors' corrections and applies them on-the-fly to further MT output. Their proposal is based on a phrase-based SMT system, used in an APE setting with an online learning mechanism. To perform post-edit propagation, this system is trained incrementally using pairs of machine-translated (mt) and human post-edited (pe) segments as they are produced. When receiving a new pair (mt, pe), word alignments are obtained using the Damerau-Levenshtein distance. In the next step the phrase pairs are extracted and appended to the existing phrase table. The whole process is assumed to take place within the context of a single document: for every new document the APE system begins with an "empty" model. Since the post-editing rules are learned for a given document, they can be more precise and useful for that document, but the limitation is that knowledge gained after processing one document is not utilized for other similar documents. This limitation is addressed by our system (Section 3), which maintains one global knowledge base to store all the processed documents, while still being able to retrieve post-editing rules specific to the document to be translated.
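To make the PEPr-style update step concrete, here is a minimal Python sketch of learning phrase pairs from one (mt, pe) pair and appending them to a document-local table. Note the simplifications: the paper derives word alignments from the Damerau-Levenshtein edit operations, whereas this illustration uses difflib's edit-opcode alignment as a stand-in, and phrase extraction is reduced to aligned contiguous blocks:

# Simplified sketch of a PEPr-style online update: align an (mt, pe) pair and
# append the resulting phrase pairs to a document-local phrase table.
# difflib stands in for the Damerau-Levenshtein alignment used in the paper.
from difflib import SequenceMatcher

def extract_phrase_pairs(mt_tokens, pe_tokens, max_len=3):
    """Extract phrase pairs from the equal/replace blocks of an edit alignment."""
    pairs = []
    matcher = SequenceMatcher(a=mt_tokens, b=pe_tokens, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("equal", "replace") and (i2 - i1) <= max_len and (j2 - j1) <= max_len:
            pairs.append((tuple(mt_tokens[i1:i2]), tuple(pe_tokens[j1:j2])))
    return pairs

# Document-local phrase table: reset to empty for every new document, as in PEPr.
phrase_table = {}
mt = "the printer setting are saved".split()
pe = "the printer settings are saved".split()
for src, tgt in extract_phrase_pairs(mt, pe):
    phrase_table.setdefault(src, set()).add(tgt)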
Thot: The Thot toolkit (Ortiz-Martínez and Casacuberta, 2014) was developed to support fully automatic and interactive statistical machine translation. It was also used by Lagarda et al. (2015) in an online setting for the APE task, to perform large-scale experiments with several data sets for multiple language pairs, with base MT systems built using different technologies (rule-based MT, statistical MT). In the majority of their experiments online APE successfully improved the quality of the translations obtained from the base MT system by a significant margin. To update the underlying translation and language models with the user feedback, a set of sufficient statistics is maintained that can be incrementally updated. In the case of the language model, only the n-gram counts are required to maintain sufficient statistics. To update the translation model, an incremental version of the EM algorithm is used to first obtain word alignments, from which phrase pair counts are extracted to update the sufficient statistics. Other features like the source/target phrase-length models or the distortion model are implemented by means of geometric distributions with fixed parameters; the sentence length model is implemented by means of Gaussian distributions. However, the feature weights of the log-linear model are static throughout the online learning process, as opposed to our method, which updates the weights on-the-fly. Also, this method learns post-editing rules from all the data processed in real time, whereas our approach learns from the most relevant data points.
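The language-model part of this incremental update is easy to picture: since n-gram counts are the LM's sufficient statistics, each feedback segment can be folded in with a single counting pass. The sketch below is our own illustration of that idea, not Thot's actual implementation:

# Illustrative sketch of incrementally updatable n-gram sufficient statistics,
# in the spirit of Thot's language-model update (not its actual code).
from collections import Counter

class IncrementalNgramCounts:
    def __init__(self, order=3):
        self.order = order
        self.counts = Counter()

    def update(self, tokens):
        """Fold the n-gram counts of one post-edited segment into the statistics."""
        padded = ["<s>"] * (self.order - 1) + tokens + ["</s>"]
        for n in range(1, self.order + 1):
            for i in range(len(padded) - n + 1):
                self.counts[tuple(padded[i:i + n])] += 1

lm_stats = IncrementalNgramCounts(order=3)
lm_stats.update("die Einstellungen wurden gespeichert".split())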
Realtime cdec: Denkowski et al. (2014) proposed an online model adaptation method that leverages human post-editing feedback to improve the quality of an MT system in a real-time translation workflow. To build the translation models they use a static suffix array (Zhang and Vogel, 2005) to index the initial data (or seed corpus), and a dynamic lookup table to store information from the post-edited feedback. To decode a sentence, the statistics of the translation options are computed both from the suffix array and from the lookup table. An incremental language model is maintained and updated with each incoming human post-edit. To update the feature weights they use an extended version of the margin-infused relaxed algorithm (MIRA) (Chiang, 2012). Decoding is treated as simply the next iteration of MIRA: a segment is first translated, then its corresponding reference/post-edit is provided to the model, and MIRA updates the parameters. While this system was previously used in the context of MT, in this work we investigate its applicability to online APE. A key difference between this approach and ours is the sampling technique: the former uses suffix arrays to always retrieve the top k source phrases, whereas in our approach the number of samples (or training instances) is dynamically set to use only the most relevant ones. Another difference is visible in the parameter optimization step: Realtime cdec optimizes the feature weights of the log-linear model after decoding each segment, whereas our method optimizes the weights specifically for the segment to be post-edited.

Instance Selection for Online APE System

The online systems described in Section 2 compute and update the feature scores of the log-linear models based on all the previously seen data. This indicates that, in the long run, the model will tend to become more and more generic, since the data processed in the online scenario may belong to multiple domains, as explained in Section 1. Having a generic model might not be useful to retrieve the domain-specific post-editing rules needed to fix errors in a particular document. One solution is to build document-specific APE models as proposed by Simard and Foster (2013). In their approach, however, once the entire document is processed the models are reset back to their original state, so the knowledge gained from the current document is lost. To preserve all the knowledge gained in the online learning process, while still being able to apply specific post-editing rules when needed, we propose an instance selection technique for online APE. Our proposed framework, as shown in Figure 1, uses a global knowledge base to preserve all the data points seen in the online process, and has the ability to retrieve specific data points whose context is similar to the segment to be post-edited. These data points are used to build reliable APE models. When there are no reliable data points in the knowledge base, the MT output is kept untouched, as opposed to the existing APE systems, which tend to always "translate" the given input segment independently of the reliability of the applicable correction rules. This approach of post-editing with reliable information only makes our system more precise compared with others (see results in Section 5): that is, when a post-editing rule is applied, it is more likely to improve the quality of the translation. When no reliable knowledge is available for the correction, the MT output is left untouched.

Figure 1: Architecture of our online APE system

We propose online APE but we actually "emulate" it by processing the data points one at a time. Our proposed algorithm assumes the following data are available to run the online experiments: i) source (src); ii) MT output (mt); and iii) human post-edits (pe) of the MT output. At the beginning the knowledge base of our online APE system is empty; it is updated whenever a new instance (a tuple containing parallel segments from all the above-mentioned data) is processed. When the system receives an input (src, mt), it proceeds through the following steps.

Instance Selection. Initially, the system selects the most relevant training instances from the pool of multi-domain data stored in our knowledge base. This helps to build a reliable APE model for each input segment processed in real time. The relevance of the training instances with respect to the input segment is measured in terms of a similarity score based on term frequency-inverse document frequency (tf-idf), generally used in information retrieval. The larger the number of words in common between the training and the input sentences, the higher the score. In our system, these scores are computed using the Lucene library. Only those training instances that have a similarity score above a certain threshold (decided over a held-out development set) are used to build the system. In case there are no training instances available, we preserve the input segment as it is. Indeed, we assume that APE with unreliable information can damage the mt segment instead of improving the translation quality. This is one of the main outcomes of the first APE pilot task organized last year within the WMT initiative (Bojar et al., 2015) and, as we will see from our results, it represents a major problem for the approaches that always translate the given input segments. The proposed instance selection technique (or sampling mechanism) differs from the one proposed in Realtime cdec (Denkowski et al., 2014), which uses suffix arrays to select the top k instances. In our approach the sample size is dynamically set in order to select only the most similar instances. This allows us to build more reliable models (since the underlying data better resemble the test segment), and to gain speed when the sample size is small. The use of a tf-idf similarity measure was proposed before in the context of machine translation by Hildebrand et al. (2005) to create a pseudo in-domain corpus from a big out-of-domain corpus. Our work is the first to investigate it for the APE task in an online learning scenario.
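A minimal sketch of this selection step is given below. The paper computes the similarity scores with Lucene; here scikit-learn's tf-idf vectorizer with cosine similarity is an illustrative stand-in, and the threshold value is purely for demonstration:

# Illustrative instance selection: retrieve the training pairs whose mt#src side
# is similar to the incoming segment. The paper uses Lucene; scikit-learn's
# tf-idf + cosine similarity is a stand-in, and the threshold is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_instances(query, knowledge_base, threshold=0.8):
    """Return the (mt#src, pe) pairs whose similarity to `query` passes the threshold."""
    if not knowledge_base:
        return []  # empty knowledge base: the MT output will be left untouched
    sources = [src for src, _ in knowledge_base]
    vectorizer = TfidfVectorizer().fit(sources + [query])
    sims = cosine_similarity(vectorizer.transform([query]),
                             vectorizer.transform(sources))[0]
    return [pair for pair, s in zip(knowledge_base, sims) if s >= threshold]

An empty result triggers the "leave the MT output untouched" behavior described above, which is precisely what distinguishes this design from systems that always decode.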
Model Creation. From the selected instances we build several local models. The first is the language model: a trigram local language model is built over the target side of the training corpus with the IRSTLM toolkit (Federico et al., 2008). Since the selected training data closely resemble the input segment, we believe that the local LM can capture the peculiarities of the domain to which the input segment belongs. Along with the local LM we always use a trigram global LM, which is updated whenever a human post-edit (pe) is received. The other local models are the translation and the reordering models: these local models are built over the training instances retrieved from the knowledge base. Since the training instances are very similar to the input segment, the post-editing rules learned by these local models are more reliable for the test segment. These models are built with the Moses toolkit, and the word alignment of each sentence pair is computed using the incremental GIZA++ software.

Parameter Optimization. The parameters are optimized over a section of the selected instances (development set). The size of this development set is critical: if it is too large, the parameter optimization will be expensive; if it is too small, the tuned weights might not be reliable. To achieve fast optimization with reliably tuned weights, multiple instances of MIRA are run in parallel on several small development sets and all the resulting weights are then averaged. For this purpose, the data selected by the instance selection module are randomly split into training and development sets three times. A minimum number of selected sentence pairs is required to trigger the parameter optimization process. If this minimum value is not reached, the optimization step is skipped, because having few sentences might not yield reliable weights; in this case, the weights computed on the previous input segment are used. In our experiments, we observed that this solution is more reliable and efficient than the feature weights obtained with a single tuning, as previously proposed in (Cettolo et al., 2011). We believe that this procedure of optimizing the feature weights over a development set that closely resembles the test segment can help to obtain weights more suitable for the segment to be post-edited.
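The split-tune-average procedure can be summarized in a few lines. In this sketch `tune_mira` is a hypothetical stand-in for one MIRA tuning run, and the split sizes are illustrative assumptions rather than the paper's exact values:

# Sketch of the averaged tuning step: split the selected instances three times,
# tune feature weights on each small dev split (tune_mira is a hypothetical
# stand-in for a MIRA run), and average the resulting weight vectors.
import random

def averaged_weights(instances, tune_mira, n_splits=3, dev_fraction=0.2, min_size=20):
    if len(instances) < min_size:
        return None  # too few instances: reuse the previous segment's weights
    weight_vectors = []
    for _ in range(n_splits):
        shuffled = random.sample(instances, len(instances))
        cut = max(1, int(len(shuffled) * dev_fraction))
        dev, train = shuffled[:cut], shuffled[cut:]
        weight_vectors.append(tune_mira(train, dev))  # one MIRA run per split
    # arithmetic mean, feature by feature
    features = weight_vectors[0].keys()
    return {f: sum(w[f] for w in weight_vectors) / n_splits for f in features}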
Decode Test Segment. To decode the input segments, all the local models (language, translation, reordering) are built with all the selected instances. The log-linear feature weights are computed by taking the arithmetic mean of the tuned weights for the three data splits. The decoding process is performed with the Moses toolkit, recalling that the input segment is kept untouched when no reliable information is available in the knowledge base.

Update Global Repository. In a real translation workflow, the automatically post-edited version (or the MT output, if there were no training data available) is provided to a post-editor for correction, and the corrected version is incorporated back into the system. To avoid the unnecessary costs of involving human post-editors in the loop when running these experiments, we simulate this condition by using the human post-edits of the MT output (which are already available in the data set). Each newly processed instance is added to our knowledge base, and the global language model is updated with the post-edited segment.

Data

To examine the performance of the online APE systems in a multi-domain translation environment, we select two data sets for the English-German language pair belonging to the information technology (IT) domain. Although they come from the same domain (IT), they feature variability in terms of vocabulary coverage, MT errors, and post-editing style. The two data sets are respectively a subset of the Autodesk Post-Editing Data corpus and the resources used at the second round of the APE shared task at the First Conference on Machine Translation (WMT2016) (Bojar et al., 2016). The data sets are pre-processed to obtain a joint representation that links each source word with an MT word (mt#src). This representation was proposed in the context-aware APE approach by Béchara et al. (2011) and leverages the source information to disambiguate post-editing rules. Recently, Chatterjee et al. (2015b) also confirmed that this approach works better than translating from raw MT segments over multiple language pairs. The joint representation is used as the source corpus to train all the APE systems reported in this paper; it is obtained by first aligning the words of the source (src) and MT (mt) segments using MGIZA++ (Gao and Vogel, 2008), and then concatenating each mt word with its corresponding src words.

The Autodesk training, development, and test sets consist of 12,238, 1,948, and 1,956 segments respectively, while the WMT2016 data contains 12,000, 1,000, and 2,000 segments. Table 1 provides some additional statistics of the source (mt#src) and target (pe) training corpora, the repetition rate (RR) that measures the repetitiveness inside a text (Bertoldi et al., 2013), and the average TER score for both data sets (computed between MT and PE). It is interesting to note that the Autodesk data set has on average shorter segments compared with the WMT2016 corpus. This suggests that learning and applying post-editing rules on the Autodesk corpus can be easier than on the WMT2016 segments, because dealing with long segments generally increases the complexity of the rule extraction and decoding processes. Moreover, the WMT2016 data set has a repetition rate similar to the Autodesk one even though it has more tokens; this indicates that the data is more sparse, raising the difficulty of extracting reliable post-editing rules. Looking at the TER score, the smaller value of the WMT2016 data set compared with the Autodesk one suggests that the room for improvement is lower, because there are fewer corrections to perform and the chance of deteriorating the original MT output is larger.

The diversity of the two data sets is further measured by computing the vocabulary overlap between the two joint representations. This is performed internally to each data set (splitting the training data in two halves) and across them. As expected, in the first case the vocabulary overlap is much larger (> 40%) than in the second one (∼15%), which indicates that the two data sets are quite different and little information can be shared. All the aforementioned aspects show the large variability in the corpora, making them suitable to emulate the multi-domain translation environment.
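For concreteness, here is a small sketch of the mt#src joint representation described in this section. The word alignments would come from MGIZA++ in the actual pipeline; here they are supplied directly as (mt index, src index) pairs, and the example tokens are invented:

# Sketch of the mt#src joint representation: each MT token is concatenated with
# the source tokens it is aligned to. Alignments would come from MGIZA++; here
# they are given directly as (mt_index, src_index) pairs for illustration.
def joint_representation(mt_tokens, src_tokens, alignments):
    links = {}
    for mt_i, src_i in alignments:
        links.setdefault(mt_i, []).append(src_tokens[src_i])
    return " ".join(
        "#".join([tok] + links.get(i, []))  # unaligned MT tokens stay bare
        for i, tok in enumerate(mt_tokens)
    )

mt = ["die", "Einstellung", "speichern"]
src = ["save", "the", "settings"]
align = [(0, 1), (1, 2), (2, 0)]
print(joint_representation(mt, src, align))
# -> "die#the Einstellung#settings speichern#save"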
Evaluation metrics

The performance of the different APE systems is evaluated using three metrics: Translation Error Rate (TER) (Snover et al., 2006), BLEU (Papineni et al., 2002) and Precision (Chatterjee et al., 2015a). TER and BLEU measure the similarity between the MT outputs and their references by looking at n-gram overlap (TER at the word level, BLEU from 1 to 4 words). To give better insight into the APE performance, we also report Precision, computed as the ratio of the number of sentences an APE system improves (with respect to the MT output) over all the sentences it modifies. Values larger than 50% indicate that the APE system is able to improve the quality of most of the sentences it changes. Statistical significance tests are computed using the paired bootstrap resampling technique (Koehn, 2004) for the BLEU metric and the stratified approximate randomization test (Clark et al., 2011) for TER.
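The Precision metric defined above is simple enough to state in code. A minimal sketch, assuming per-sentence TER scores and a per-sentence flag marking whether the APE system actually modified the segment:

# Sketch of the Precision metric: among the sentences the APE system actually
# modified, the share whose TER improved over the original MT output.
def ape_precision(ter_mt, ter_ape, modified):
    """ter_mt/ter_ape: per-sentence TER lists; modified: per-sentence booleans."""
    changed = [(m, a) for m, a, mod in zip(ter_mt, ter_ape, modified) if mod]
    if not changed:
        return None  # nothing was modified: precision is undefined
    improved = sum(1 for m, a in changed if a < m)
    return 100.0 * improved / len(changed)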
Terms of comparison

We evaluate our online learning approach against four terms of comparison.

MT. Our baseline is the "do-nothing" system that simply returns the MT outputs without changing them. As discussed in (Bojar et al., 2015), this baseline can be particularly hard to beat when the repetition rate of the data is low, given the tendency of APE systems to over-correct the MT output.

Batch APE. This APE system is developed in batch mode following the approach proposed in Chatterjee et al. (2015b). It is similar to the context-aware method (Béchara et al., 2011), but it uses word alignments produced by the monolingual machine translation APE technique proposed in Simard et al. (2007b). Being a batch method, it cannot learn from the test set, but it leverages all the training points at the same time.

Online APE. We compare our approach against two online systems: i) the Thot toolkit, which had previously been used for the online APE task, and ii) Realtime cdec, which, among the other online MT systems, is the closest to our approach (i.e. it uses a data selection mechanism) but has never been tested in the APE scenario. Another online APE approach is PEPr, which was meant for document-level APE; since we are working with data sets that do not have any intrinsic document structure, we do not find it to be a suitable term of comparison.

Experiments and Results

Our preliminary objective is to examine whether the online learning methods are able to achieve results that are competitive with those of batch methods, which are potentially favored by the possibility of leveraging all the training data at the same time. For this test, all the algorithms are evaluated in the classic in-domain setting, where training, development, and test sets are sampled from the same data set or domain. All the online APE methods are run in two modes: i) batch, where the test set is not used in the learning process (for a fair comparison with the batch APE), and ii) online, where the test set is leveraged in the online learning process. The experiments are performed on both data sets (Autodesk and WMT2016), and the corresponding results are reported in Table 2 and Table 3 respectively. The parameters of our approach (i.e. the similarity score threshold and the minimum number of selected sentences) are optimized on the development set following a grid search strategy. We set the threshold values to 0.8 and 1 respectively for the Autodesk and WMT2016 datasets, and the minimum number of selected sentences to 20.

From the results of the in-domain experiments with the Autodesk data set it is evident that our proposed online APE method performs not only better than cdec and Thot (both in batch and online mode) but also better than the strong batch APE method. It achieves significant improvements of 0.54 BLEU, 1.26 TER, and 17.9% precision over the batch APE, which already beats the other online methods. The improvement of our system can be attributed to its ability to learn from the most relevant data and to avoid over-correction by leaving the test segment untouched when no reliable information is found in the knowledge base. As discussed in Section 4.1, several factors like sentence length, sparsity, and translation quality make the WMT2016 data set more challenging for all the online APE methods. In particular, due to the higher translation quality of the mt segments, the room for improvement is smaller and the chances of damaging the correct parts are higher. This is visible from the low precision scores reported in Table 3: all the APE methods (batch and online) damage the MT segments in the majority of the cases (precision lower than 50%). The only exception is our approach, which performs significantly better than the batch APE (in terms of TER) and is the only method that significantly improves the MT segments in the majority of the cases (61.46%). These experimental results confirm that our proposed online learning APE method, based on instance selection to learn only from the most relevant data, is sound and reliable.

Building on these results, the main goal of this research is to examine the performance of the online APE methods in a MDTE. This represents a more challenging condition, since the system has to adapt to the dynamics of the data processed in a real-time scenario. To emulate this environment, all the online learning methods are trained and tuned on one data set (or domain) and evaluated on the other data set, with the possibility to learn from it. In order to capture the peculiarities of the online learning methods over a long run with many data points, we use the training section of the second data set as a test set. The left side of Table 4 reports the performance of all the APE systems when they are trained and tuned on the WMT2016 data set and evaluated on the Autodesk data set; the results reported in the right side of Table 4 are obtained by using the Autodesk data set to train and tune, and the WMT2016 data set to evaluate. The parameters of our approach (i.e. the similarity score threshold and the minimum number of selected sentences) are the same as computed in the in-domain setting. In Table 4, the poor performance of the batch APE, which can only leverage the knowledge from the training domain, indicates that the post-editing rules extracted from the training domain are not portable to the test one (even though both datasets belong to IT). This suggests the need for APE approaches that are able to adapt themselves to the incoming data in real time. Comparing the performance of all the online approaches on both test sets, we notice that our system performs best, with significant gains in all the evaluation metrics.
This confirms that our APE system, based on instance selection, is robust enough to work in a MDTE thanks to its capability to leverage only the most relevant information from a pool of multi-domain segments. Similar to the results of the in-domain experiments, significant gains in performance are observed for the Autodesk test set. This does not happen for the WMT2016 test data, for which none of the online APE approaches is able to improve over the MT baseline. For this challenging data set, our approach has the minimal performance degradation (over MT), while the other online systems severely damage the MT segments, as confirmed by their low precision (7.37% and 14.20% respectively). One common observation, over both the Autodesk and the WMT2016 test sets, is the large difference in precision (17.92% and 27.17% respectively) between the best (our approach) and the second best (Thot) online APE system. This indicates that our approach is more conservative and more suitable to extract and apply domain-specific post-editing rules from a pool of multi-domain data sets, which makes it a more viable and appropriate solution to be deployed in a real-world CAT framework. In the next section, we present some findings on the performance trends of the different systems across the entire test set for the multi-domain scenario.

Performance Analysis

To understand and compare the behavior of the different online learning approaches in the long run, the plot in Figure 2 shows the moving average TER (window of 750 data points) at each segment of the Autodesk test set for the multi-domain experiment (Table 4). As can be seen, our approach successfully maintains the best performance across the entire test set. As expected, at the beginning of the test set the performance of the online systems is close to the MT system, since there is not much relevant data available to learn from. As time progresses and more segments are processed, a clear trend of performance improvement (with respect to MT) is visible for our method and for the Thot system. This does not hold in the case of cdec, possibly due to the sampling technique used in the suffix array, which is unable to retrieve relevant samples from the pool of multi-domain data to decode the test segments. For the WMT2016 test set the moving average TER is shown in Figure 3. As noted above, improving translation quality on this test set is more challenging, which is reflected in the graph. Although none of the systems is able to improve over the MT baseline, our system manages to stay consistently close to the MT performance throughout the test set, whereas all the other systems show significant drops. This confirms that our approach is more robust against domain shift and, even in this difficult scenario, is able to maintain stable performance close to the MT without large deterioration.

To gain further insight into the performance at the segment level, the plot in Figure 4 compares our approach against Thot for the first 300 segments of the Autodesk test set used in the multi-domain experiment. It shows the differences between the segment-level TER of the MT output (TER_MT) and of our approach (TER_OurApproach), and between the MT output and the Thot (TER_Thot) automatically post-edited segments. We notice that our approach modifies fewer segments compared with Thot, because it builds a model only if it finds relevant data in the knowledge base, and otherwise leaves the MT segment untouched. These untouched MT segments, when modified by Thot, often lead to deterioration rather than to improvements (as seen in the many negative peaks for Thot in Figure 4). This suggests that, compared with the other online approaches, the output obtained with our solution has a higher potential for being useful to human translators. Such usefulness comes not only in terms of a more pleasant post-editing activity, but also in terms of the time savings yielded by overall better suggestions.
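The moving-average TER curves in Figures 2 and 3 are just a sliding window over the per-segment scores. A minimal sketch of that computation, with the window size taken from the paper:

# Sketch of the moving-average TER used in Figures 2 and 3: a sliding window
# of 750 segments over per-segment TER scores, one curve per compared system.
def moving_average(values, window=750):
    out = []
    running = 0.0
    for i, v in enumerate(values):
        running += v
        if i >= window:
            running -= values[i - window]  # drop the value leaving the window
        out.append(running / min(i + 1, window))
    return out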
Conclusion

We addressed the problem of building robust online APE systems in a multi-domain translation environment, in which the system has to continuously adapt to the dynamics of diverse data processed in real time. Our evaluation revealed that online systems that leverage all the available data without considering the peculiarities of each domain are not robust enough to work in a multi-domain translation environment, because they are unable to learn domain-specific post-editing rules. To overcome this limitation, we proposed an online learning framework based on instance selection that has the capability to filter the most relevant information out of a pool of multi-domain data for learning domain-specific post-editing rules. When no reliable information is available, our system leaves the MT segments untouched; such segments, when automatically post-edited by other systems, are often found to deteriorate. Therefore, the APE suggestions provided by our system to the translators/post-editors are more reliable, with better translation quality.

From our experiments in a simulated multi-domain environment, we learn that post-editing rules are not portable across domains, as revealed by the poor performance of the batch APE system, which can leverage only the training data. Even for the online systems that also leverage the test set, it remains a challenging scenario (especially for the Autodesk-WMT2016 data set). Among all the online systems, our proposed approach achieves the highest improvement on the WMT2016-Autodesk data set, and the least degradation on the Autodesk-WMT2016 data set with respect to the MT quality. Experiments in the in-domain setting confirmed that our approach to instance selection is also useful in a single-domain scenario: it performed significantly better than the batch APE, which already beats cdec and Thot. One common observation from all the experiments, across different working scenarios and data sets, is that our system has the highest precision among all its competitors (MT, batch APE, cdec, and Thot). This indicates that when our system automatically post-edits MT segments, it is more likely to improve the quality of the MT output, which makes it a viable solution to be deployed in a real-world CAT framework.
package vader

import "net/http"

// Middleware is a type used to chain handlers together.
//
// A middleware takes a handler as input and returns another handler that can wrap functionality
// around the original handler
type Middleware func(http.Handler) http.Handler

// Chain is a convenient method to chain multiple handlers together
func Chain(outer Middleware, inner ...Middleware) Middleware {
	return func(next http.Handler) http.Handler {
		for i := len(inner) - 1; i >= 0; i-- {
			next = inner[i](next)
		}
		return outer(next)
	}
}

// Finalize is a convenient method to chain multiple middlewares with a handler
func Finalize(handler http.Handler, middlewares ...Middleware) http.Handler {
	mw := Chain(middlewares[len(middlewares)-1], middlewares[0:len(middlewares)-1]...)
	return mw(handler)
}
The Use of Botulinum Toxin in the Management of Headache Disorders. Headache disorders can be classified as episodic (< 15 headache days per month) or chronic (≥ 15 headache days per month for more than 3 months). Chronic migraine (CM) requires that headaches occur on 15 or more days a month for more than 3 months, and that these headaches be migraines on at least 8 days per month. There are seven botulinum toxin (BoNT) serotypes (A, B, C1, D, E, F, and G), with further subtypes such as A1, A2, and A3. All serotypes inhibit acetylcholine release, although their intracellular target proteins, physiochemical characteristics, and potencies differ. The mechanism of action of BoNT in pain is still being investigated. Botulinum toxin type A (BoNT-A) has been the most widely studied serotype for therapeutic purposes. A major clinical advantage of the type A toxin arises from its prolonged duration of action, due to the longevity of its protease (90 days in rats and probably much longer in human neurons). Clinical studies suggest that BoNT is a safe treatment and is efficacious for the prevention of some forms of migraine, such as CM, and perhaps high-frequency episodic migraine.
package rtpaac

import (
	"encoding/binary"
	"fmt"
	"math/rand"
	"time"

	"github.com/pion/rtp"
)

const (
	rtpVersion        = 0x02
	rtpPayloadMaxSize = 1460 // 1500 (mtu) - 20 (ip header) - 8 (udp header) - 12 (rtp header)
)

// Encoder is an RTP/AAC encoder.
type Encoder struct {
	payloadType    uint8
	clockRate      float64
	sequenceNumber uint16
	ssrc           uint32
	initialTs      uint32
}

// NewEncoder allocates an Encoder.
func NewEncoder(payloadType uint8, clockRate int, sequenceNumber *uint16, ssrc *uint32, initialTs *uint32) *Encoder {
	return &Encoder{
		payloadType: payloadType,
		clockRate:   float64(clockRate),
		sequenceNumber: func() uint16 {
			if sequenceNumber != nil {
				return *sequenceNumber
			}
			return uint16(rand.Uint32())
		}(),
		ssrc: func() uint32 {
			if ssrc != nil {
				return *ssrc
			}
			return rand.Uint32()
		}(),
		initialTs: func() uint32 {
			if initialTs != nil {
				return *initialTs
			}
			return rand.Uint32()
		}(),
	}
}

func (e *Encoder) encodeTimestamp(ts time.Duration) uint32 {
	return e.initialTs + uint32(ts.Seconds()*e.clockRate)
}

// Encode encodes an AU into an RTP/AAC packet.
func (e *Encoder) Encode(at *AUAndTimestamp) ([]byte, error) {
	if len(at.AU) > rtpPayloadMaxSize {
		return nil, fmt.Errorf("data is too big")
	}

	// AU-headers-length
	payload := []byte{0x00, 0x10}

	// AU-header
	header := make([]byte, 2)
	binary.BigEndian.PutUint16(header, uint16(len(at.AU))<<3)
	payload = append(payload, header...)

	payload = append(payload, at.AU...)

	rpkt := rtp.Packet{
		Header: rtp.Header{
			Version:        rtpVersion,
			PayloadType:    e.payloadType,
			SequenceNumber: e.sequenceNumber,
			Timestamp:      e.encodeTimestamp(at.Timestamp),
			SSRC:           e.ssrc,
		},
		Payload: payload,
	}
	e.sequenceNumber++

	rpkt.Header.Marker = true

	frame, err := rpkt.Marshal()
	if err != nil {
		return nil, err
	}

	return frame, nil
}
/**
 * Change to the new userId for the current AccountID; mainly used when the userId changes in account settings.
 * Needs to update the userID, userBareJid, accountUID, and mAccountProperties.USER_ID if the Account ID changed.
 *
 * @param userId new userId
 */
public void updateJabberAccountID(String userId)
{
    if (userId != null) {
        this.userID = userId;
        this.accountUID = getProtocolName() + ":" + userID;
        mAccountProperties.put(USER_ID, userId);

        try {
            userBareJid = JidCreate.bareFrom(userId);
        } catch (XmppStringprepException e) {
            Timber.e("Unable to create BareJid for user account: %s", userId);
        }
    }
}
import argparse

from ..version import __version__


def main():
    parser = _create_parser()
    args = parser.parse_args()
    if args.command == "create-agg":
        from .create_agg import cli

        cli(args)
    if args.command == "create-join":
        from .create_join import cli

        cli(args)


def _create_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-v",
        "--version",
        action="version",
        version=f"%(prog)s {__version__}",
        help="show version and exit",
    )
    subparsers = parser.add_subparsers(dest="command")
    _add_create_agg_command(subparsers)
    _add_create_join_command(subparsers)
    return parser


def _add_create_agg_command(subparsers):
    parser = subparsers.add_parser("create-agg")
    parser.add_argument("--schema", default="-")
    parser.add_argument("--output", default="-")


def _add_create_join_command(subparsers):
    parser = subparsers.add_parser("create-join")
    parser.add_argument("--schema", default="-")
    parser.add_argument("--output", default="-")
/**
 * Signals to the remote side that blocks are available on this machine to fetch.
 */
public void signalBlocksAvailable(
    String[] blockIds,
    int[] blockSizes,
    long taskAttemptId,
    int attemptNumber,
    String executorId,
    String host,
    int port) {
  BlocksAvailable message = new BlocksAvailable(
      blockIds, blockSizes, taskAttemptId, attemptNumber, executorId, host, port);
  channel.writeAndFlush(message);
}
//TODO: add support for IPv6
public class TcpServerConfiguration {
    private final Properties properties;

    private TcpServerConfiguration(final Properties properties) {
        this.properties = properties;
    }

    public static TcpServerConfiguration configuration(final int port, final LifeCycle lifeCycle) {
        return new TcpServerConfiguration(Properties.defaultProperties(props -> {
            props.port = inetPort(port);
            props.lifeCycle = lifeCycle;
        }));
    }

    public TcpServerConfiguration and(final Consumer<Properties> transformer) {
        return new TcpServerConfiguration(properties.copy(transformer));
    }

    public SocketAddressIn address() {
        return SocketAddressIn.create(properties.port, properties.address);
    }

    public SizeT backlogSize() {
        return properties.backlogSize;
    }

    public Set<SocketFlag> listenerFlags() {
        return properties.listenerFlags;
    }

    public Set<SocketFlag> acceptorFlags() {
        return properties.acceptorFlags;
    }

    public Set<SocketOption> listenerOptions() {
        return properties.listenerOptions;
    }

    public FN1<Promise<Unit>, IncomingConnectionContext> connectionHandler() {
        return properties.connectionHandler;
    }

    public LifeCycle lifeCycle() {
        return properties.lifeCycle;
    }

    public static final class Properties {
        public Inet4Address address = Inet4Address.INADDR_ANY;
        public InetPort port = inetPort(8081);
        public Set<SocketFlag> listenerFlags = SocketFlag.closeOnExec();
        public Set<SocketFlag> acceptorFlags = SocketFlag.closeOnExec();
        public Set<SocketOption> listenerOptions = SocketOption.reuseAll(); //TODO: do we actually need it???
        public FN1<Promise<Unit>, IncomingConnectionContext> connectionHandler = ActiveServerContext::defaultConnectionHandler;
        public SizeT backlogSize = sizeT(16);
        public LifeCycle lifeCycle = ReadWriteLifeCycle.readWrite(ReadWriteLifeCycle::echo);

        private Properties() {
        }

        private static Properties defaultProperties(final Consumer<Properties> propertiesConsumer) {
            return new Properties().copy(propertiesConsumer);
        }

        private Properties copy(final Consumer<Properties> propertiesConsumer) {
            final var copy = new Properties();
            copy.address = address;
            copy.port = port;
            copy.listenerFlags = listenerFlags;
            copy.acceptorFlags = acceptorFlags;
            copy.listenerOptions = listenerOptions;
            copy.backlogSize = backlogSize;
            copy.connectionHandler = connectionHandler;
            copy.lifeCycle = lifeCycle;
            propertiesConsumer.accept(copy);
            return copy;
        }
    }
}
//InitSuite performs common logic for Ginkgo's BeforeSuite
func InitSuite(suiteCtx *types.SuiteContext) {
	logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))

	By("bootstrapping test environment")
	useCluster := true
	suiteCtx.TestEnv = &envtest.Environment{
		UseExistingCluster:       &useCluster,
		AttachControlPlaneOutput: false,
	}

	var err error
	suiteCtx.Cfg, err = suiteCtx.TestEnv.Start()
	Expect(err).ToNot(HaveOccurred())
	Expect(suiteCtx.Cfg).ToNot(BeNil())

	err = apicurioScheme.AddToScheme(scheme.Scheme)
	Expect(err).NotTo(HaveOccurred())

	suiteCtx.PackageClient = pmversioned.NewForConfigOrDie(suiteCtx.Cfg)
	suiteCtx.OLMClient = olmapiversioned.NewForConfigOrDie(suiteCtx.Cfg)

	suiteCtx.K8sManager, err = ctrl.NewManager(suiteCtx.Cfg, ctrl.Options{
		Scheme: scheme.Scheme,
	})
	Expect(err).ToNot(HaveOccurred())

	go func() {
		err = suiteCtx.K8sManager.Start(ctrl.SetupSignalHandler())
		Expect(err).ToNot(HaveOccurred())
	}()

	suiteCtx.K8sClient = suiteCtx.K8sManager.GetClient()
	Expect(suiteCtx.K8sClient).ToNot(BeNil())

	suiteCtx.Clientset = kubernetes.NewForConfigOrDie(suiteCtx.Cfg)
	Expect(suiteCtx.Clientset).ToNot(BeNil())

	isocp, err := kubernetesutils.IsOCP(suiteCtx.Cfg)
	Expect(err).ToNot(HaveOccurred())
	suiteCtx.IsOpenshift = isocp
	if suiteCtx.IsOpenshift {
		log.Info("Openshift cluster detected")
		suiteCtx.OcpRouteClient = ocp_route_client.NewForConfigOrDie(suiteCtx.Cfg)
	}

	cmd := kubernetescli.Kubectl
	if suiteCtx.IsOpenshift {
		cmd = kubernetescli.Oc
	}
	suiteCtx.CLIKubernetesClient = kubernetescli.NewCLIKubernetesClient(cmd)
	Expect(suiteCtx.CLIKubernetesClient).ToNot(BeNil())

	selenium.DeploySeleniumIfNeeded(suiteCtx)
}
from anytree import LevelOrderIter, PreOrderIter  # imports needed for tree traversal


def match_one_end(sequence, reference_dict, end, hairpin_reference, max_add=3):
    # Seed the root of the 5'/3' tree with the query sequence.
    for node in LevelOrderIter(reference_dict['{}_prime_tree'.format(end)], maxlevel=1):
        node.sequence = sequence
    # Extend the sequence one base per tree level, up to max_add added bases.
    for node in PreOrderIter(reference_dict['{}_prime_tree'.format(end)], maxlevel=max_add + 1):
        if not node.is_root:
            if end == 5:
                node.sequence = ''.join([node.base, node.parent.sequence])
            elif end == 3:
                node.sequence = ''.join([node.parent.sequence, node.base])
            # Mark the node as a template if its sequence occurs in any hairpin.
            for hairpin in hairpin_reference:
                if node.sequence in hairpin:
                    node.is_template = True
                else:
                    continue
    return reference_dict
/**
 * Count all changes in a method version.
 *
 * @param version Method version containing changes to be counted
 * @return Array containing total counts of all types of changes
 */
public int[] countChanges(StructureEntityVersion version) {
    for (SourceCodeChange change : version.getSourceCodeChanges()) {
        this.countChange(change);
    }
    return this.getCounts();
}
Seattle is right to explore options to bring the NBA back to KeyArena. But it shouldn’t lease the venue to developers with no team in sight. First let Chris Hansen’s arena deal in Sodo expire, then wholeheartedly pursue the NBA at a better location such as KeyArena. THE plot around Seattle’s arena proposals is thickening, unnecessarily. Seattle’s City Council already has sent a clear message to Chris Hansen and his effort to develop a Sodo-area arena that would hurt a critical maritime and industrial corridor. Last May it refused to vacate a street dividing his site, effectively scuttling his project. Within Seattle, KeyArena is the better location for an NBA team. Its feasibility for both NBA and professional hockey was established by a 2015 city study. Seattle Mayor Ed Murray and the council initially kept that study quiet as they moved forward with Hansen. Now Murray appears to have changed course. He’s charging forward with a plan to let developers redevelop the arena and adjacent Seattle Center property. Proposals are due in late April and Murray expects to select one in June. Returning the NBA to the arena is a great goal. But this is a reckless schedule that limits public discussion needed before signing away a premier public asset — especially since Murray’s not requiring the arena developer to have a team. If the city’s primary objective is to return the NBA to the arena, it shouldn’t lease the arena and adjacent property to a developer without a team. Seattle must not be suckered by the “build it and they will come” siren song that’s led other cities to pour hundreds of millions into venues that don’t get pro teams. Really, the best thing about Hansen’s deal was that it prevented this possibility, by making the financial participation of Seattle and King County contingent on securing an NBA team. Murray must be more transparent about what he’s offering to give arena developers. The arena and parking garage he’s offering generate more than $1 million a year to fund Seattle Center maintenance and operations. That’s after entertainment giant AEG, which books arena shows, gets its cut of around 40 percent of profits. What would be the revenue share if AEG leases the facility outright? Murray’s proposal also offers arena naming rights, which are worth perhaps $5 million a year with an NBA team, according to the 2015 report. Also offered to the developers is a city block adjacent to the arena. It’s an area that Murray’s now rezoning, to potentially allow high-rise buildings. The block could be worth tens of millions. Then there are questions about whether arena developers will expect tax breaks or the use of any tax revenues for the project. The cost and benefits of privatizing this public space are major policy considerations that deserve more review than Murray’s fast-track schedule permits. Remember, there is no deadline forcing a hasty decision. The city is making such a generous offer there will always be developer interest. Wait at least until Hansen’s Sodo deal expires in November, which will free the city to directly pursue an NBA team at a better location. Then the city can do this right — and pursue an NBA team unhindered by its Hansen obligations or a developer deal limiting options at Seattle Center.
Daily rainfall forecasts through a quantitative precipitation forecasting (QPF) model over Thiruvananthapuram and Madras areas for the monsoons of 1992

Quantitative precipitation forecasting (QPF) of daily rainfall at Thiruvananthapuram and Madras, for June-September and October-December respectively of the year 1992, has been attempted. A mathematical QPF model based on the concept of conservation of specific humidity, with upper-air data from a network of stations as the input, has been employed. Nearly 66% and 72% correct forecasts were realised respectively for the two stations. The scope for further refinement is briefly discussed.
import string, re, sys, datetime
from .core import TomlError
from .utils import rfc3339_re, parse_rfc3339_re

if sys.version_info[0] == 2:
    _chr = unichr
else:
    _chr = chr


def load(fin, translate=lambda t, x, v: v, object_pairs_hook=dict):
    return loads(fin.read(), translate=translate, object_pairs_hook=object_pairs_hook,
                 filename=getattr(fin, 'name', repr(fin)))


def loads(s, filename='<string>', translate=lambda t, x, v: v, object_pairs_hook=dict):
    if isinstance(s, bytes):
        s = s.decode('utf-8')

    s = s.replace('\r\n', '\n')

    root = object_pairs_hook()
    tables = object_pairs_hook()
    scope = root

    src = _Source(s, filename=filename)
    ast = _p_toml(src, object_pairs_hook=object_pairs_hook)

    def error(msg):
        raise TomlError(msg, pos[0], pos[1], filename)

    def process_value(v, object_pairs_hook):
        kind, text, value, pos = v
        if kind == 'str' and value.startswith('\n'):
            value = value[1:]
        if kind == 'array':
            if value and any(k != value[0][0] for k, t, v, p in value[1:]):
                error('array-type-mismatch')
            value = [process_value(item, object_pairs_hook=object_pairs_hook) for item in value]
        elif kind == 'table':
            value = object_pairs_hook([(k, process_value(value[k], object_pairs_hook=object_pairs_hook)) for k in value])
        return translate(kind, text, value)

    for kind, value, pos in ast:
        if kind == 'kv':
            k, v = value
            if k in scope:
                error('duplicate_keys. Key "{0}" was used more than once.'.format(k))
            scope[k] = process_value(v, object_pairs_hook=object_pairs_hook)
        else:
            is_table_array = (kind == 'table_array')
            cur = tables
            for name in value[:-1]:
                if isinstance(cur.get(name), list):
                    d, cur = cur[name][-1]
                else:
                    d, cur = cur.setdefault(name, (None, object_pairs_hook()))

            scope = object_pairs_hook()
            name = value[-1]
            if name not in cur:
                if is_table_array:
                    cur[name] = [(scope, object_pairs_hook())]
                else:
                    cur[name] = (scope, object_pairs_hook())
            elif isinstance(cur[name], list):
                if not is_table_array:
                    error('table_type_mismatch')
                cur[name].append((scope, object_pairs_hook()))
            else:
                if is_table_array:
                    error('table_type_mismatch')
                old_scope, next_table = cur[name]
                if old_scope is not None:
                    error('duplicate_tables')
                cur[name] = (scope, next_table)

    def merge_tables(scope, tables):
        if scope is None:
            scope = object_pairs_hook()
        for k in tables:
            if k in scope:
                error('key_table_conflict')
            v = tables[k]
            if isinstance(v, list):
                scope[k] = [merge_tables(sc, tbl) for sc, tbl in v]
            else:
                scope[k] = merge_tables(v[0], v[1])
        return scope

    return merge_tables(root, tables)


class _Source:
    def __init__(self, s, filename=None):
        self.s = s
        self._pos = (1, 1)
        self._last = None
        self._filename = filename
        self.backtrack_stack = []

    def last(self):
        return self._last

    def pos(self):
        return self._pos

    def fail(self):
        return self._expect(None)

    def consume_dot(self):
        if self.s:
            self._last = self.s[0]
            self.s = self.s[1:]  # fixed: was `self.s = self[1:]`
            self._advance(self._last)
            return self._last
        return None

    def expect_dot(self):
        return self._expect(self.consume_dot())

    def consume_eof(self):
        if not self.s:
            self._last = ''
            return True
        return False

    def expect_eof(self):
        return self._expect(self.consume_eof())

    def consume(self, s):
        if self.s.startswith(s):
            self.s = self.s[len(s):]
            self._last = s
            self._advance(s)
            return True
        return False

    def expect(self, s):
        return self._expect(self.consume(s))

    def consume_re(self, re):
        m = re.match(self.s)
        if m:
            self.s = self.s[len(m.group(0)):]
            self._last = m
            self._advance(m.group(0))
            return m
        return None

    def expect_re(self, re):
        return self._expect(self.consume_re(re))

    def __enter__(self):
        self.backtrack_stack.append((self.s, self._pos))

    def __exit__(self, type, value, traceback):
        if type is None:
            self.backtrack_stack.pop()
        else:
            self.s, self._pos = self.backtrack_stack.pop()
        return type == TomlError

    def commit(self):
        self.backtrack_stack[-1] = (self.s, self._pos)

    def _expect(self, r):
        if not r:
            raise TomlError('msg', self._pos[0], self._pos[1], self._filename)
        return r

    def _advance(self, s):
        suffix_pos = s.rfind('\n')
        if suffix_pos == -1:
            self._pos = (self._pos[0], self._pos[1] + len(s))
        else:
            self._pos = (self._pos[0] + s.count('\n'), len(s) - suffix_pos)


_ews_re = re.compile(r'(?:[ \t]|#[^\n]*\n|#[^\n]*\Z|\n)*')
def _p_ews(s):
    s.expect_re(_ews_re)

_ws_re = re.compile(r'[ \t]*')
def _p_ws(s):
    s.expect_re(_ws_re)

_escapes = { 'b': '\b', 'n': '\n', 'r': '\r', 't': '\t', '"': '"', '\\': '\\', 'f': '\f' }

_basicstr_re = re.compile(r'[^"\\\000-\037]*')
_short_uni_re = re.compile(r'u([0-9a-fA-F]{4})')
_long_uni_re = re.compile(r'U([0-9a-fA-F]{8})')
_escapes_re = re.compile(r'[btnfr\"\\]')
_newline_esc_re = re.compile('\n[ \t\n]*')

def _p_basicstr_content(s, content=_basicstr_re):
    res = []
    while True:
        res.append(s.expect_re(content).group(0))
        if not s.consume('\\'):
            break
        if s.consume_re(_newline_esc_re):
            pass
        elif s.consume_re(_short_uni_re) or s.consume_re(_long_uni_re):
            v = int(s.last().group(1), 16)
            if 0xd800 <= v < 0xe000:
                s.fail()
            res.append(_chr(v))
        else:
            s.expect_re(_escapes_re)
            res.append(_escapes[s.last().group(0)])
    return ''.join(res)

_key_re = re.compile(r'[0-9a-zA-Z-_]+')
def _p_key(s):
    with s:
        s.expect('"')
        r = _p_basicstr_content(s, _basicstr_re)
        s.expect('"')
        return r
    if s.consume('\''):
        if s.consume('\'\''):
            r = s.expect_re(_litstr_ml_re).group(0)
            s.expect('\'\'\'')
        else:
            r = s.expect_re(_litstr_re).group(0)
            s.expect('\'')
        return r
    return s.expect_re(_key_re).group(0)

_float_re = re.compile(r'[+-]?(?:0|[1-9](?:_?\d)*)(?:\.\d(?:_?\d)*)?(?:[eE][+-]?(?:\d(?:_?\d)*))?')

_basicstr_ml_re = re.compile(r'(?:""?(?!")|[^"\\\000-\011\013-\037])*')
_litstr_re = re.compile(r"[^'\000\010\012-\037]*")
_litstr_ml_re = re.compile(r"(?:(?:|'|'')(?:[^'\000-\010\013-\037]))*")

def _p_value(s, object_pairs_hook):
    pos = s.pos()

    if s.consume('true'):
        return 'bool', s.last(), True, pos
    if s.consume('false'):
        return 'bool', s.last(), False, pos

    if s.consume('"'):
        if s.consume('""'):
            r = _p_basicstr_content(s, _basicstr_ml_re)
            s.expect('"""')
        else:
            r = _p_basicstr_content(s, _basicstr_re)
            s.expect('"')
        return 'str', r, r, pos

    if s.consume('\''):
        if s.consume('\'\''):
            r = s.expect_re(_litstr_ml_re).group(0)
            s.expect('\'\'\'')
        else:
            r = s.expect_re(_litstr_re).group(0)
            s.expect('\'')
        return 'str', r, r, pos

    if s.consume_re(rfc3339_re):
        m = s.last()
        return 'datetime', m.group(0), parse_rfc3339_re(m), pos

    if s.consume_re(_float_re):
        m = s.last().group(0)
        r = m.replace('_', '')
        if '.' in m or 'e' in m or 'E' in m:
            return 'float', m, float(r), pos
        else:
            return 'int', m, int(r, 10), pos

    if s.consume('['):
        items = []
        with s:
            while True:
                _p_ews(s)
                items.append(_p_value(s, object_pairs_hook=object_pairs_hook))
                s.commit()
                _p_ews(s)
                s.expect(',')
                s.commit()
        _p_ews(s)
        s.expect(']')
        return 'array', None, items, pos

    if s.consume('{'):
        _p_ws(s)
        items = object_pairs_hook()
        if not s.consume('}'):
            k = _p_key(s)
            _p_ws(s)
            s.expect('=')
            _p_ws(s)
            items[k] = _p_value(s, object_pairs_hook=object_pairs_hook)
            _p_ws(s)
            while s.consume(','):
                _p_ws(s)
                k = _p_key(s)
                _p_ws(s)
                s.expect('=')
                _p_ws(s)
                items[k] = _p_value(s, object_pairs_hook=object_pairs_hook)
                _p_ws(s)
            s.expect('}')
        return 'table', None, items, pos

    s.fail()

def _p_stmt(s, object_pairs_hook):
    pos = s.pos()
    if s.consume('['):
        is_array = s.consume('[')
        _p_ws(s)
        keys = [_p_key(s)]
        _p_ws(s)
        while s.consume('.'):
            _p_ws(s)
            keys.append(_p_key(s))
            _p_ws(s)
        s.expect(']')
        if is_array:
            s.expect(']')
        return 'table_array' if is_array else 'table', keys, pos

    key = _p_key(s)
    _p_ws(s)
    s.expect('=')
    _p_ws(s)
    value = _p_value(s, object_pairs_hook=object_pairs_hook)
    return 'kv', (key, value), pos

_stmtsep_re = re.compile(r'(?:[ \t]*(?:#[^\n]*)?\n)+[ \t]*')
def _p_toml(s, object_pairs_hook):
    stmts = []
    _p_ews(s)
    with s:
        stmts.append(_p_stmt(s, object_pairs_hook=object_pairs_hook))
        while True:
            s.commit()
            s.expect_re(_stmtsep_re)
            stmts.append(_p_stmt(s, object_pairs_hook=object_pairs_hook))
    _p_ews(s)
    s.expect_eof()
    return stmts
// GetGeocodeByIp returns the area code of the given IP address.
func (i *IpStore) GetGeocodeByIp(ipSearch string) (uint64, error) {
	row, err := i.searchIpRow(ipSearch)
	if err != nil {
		return 0, err
	}

	areacode := i.getGeocodeByRow(row)
	codeUint64, err := strconv.ParseUint(areacode, 10, 64)
	if err != nil {
		return 0, err
	}
	return codeUint64, nil
}
The effect of angiotensin receptor type 2 inhibition and estrogen on experimental traumatic brain injury

Background: Increasing evidence suggests that estrogen interferes with the renin-angiotensin system (RAS), for example by decreasing angiotensin receptor levels in the brain. Objectives: This study investigated the mutual interaction between estrogen and candesartan (an angiotensin receptor blocker), i.e., whether they inhibit or amplify each other's neuroprotective effects after traumatic brain injury (TBI). Materials and Methods: Female rats were divided into 11 groups, and the ovaries were removed in nine of them. Study groups included sham, TBI, oil, vehicle (Veh), a low dose (LC) and a high dose (HC) of candesartan, estrogen (E2), Veh + Veh, and a combination of estrogen with a low dose (E2 + LC) and a high dose (E2 + HC) of candesartan. TBI was induced by Marmarou's method. Brain edema and the integrity of the blood–brain barrier (BBB) were assayed by calculating the brain water content (BWC) and the Evans blue content, respectively. The neurological outcome was evaluated using the veterinary coma scale (VCS). Results: The results showed that the BWC in the E2 group was less than that of the oil group (P < 0.01), and in the HC group it was less than that of the Veh group (P < 0.05). Posttraumatic Evans blue content in the TBI, oil, and Veh groups was higher than that in the E2 (P < 0.001) and HC (P < 0.001) groups. Although there was no significant difference in the above indicators between the LC and Veh groups, both the BWC and the Evans blue content in the E2 + LC group were lower compared to the oil + Veh group (P < 0.001). In addition, the VCS increased in the E2, HC, and combined groups after TBI (P < 0.01). Conclusion: Estrogen alone, a high dose of candesartan, and a low dose of candesartan combined with estrogen each have a neuroprotective effect on brain edema, the permeability of the BBB, and neurological scores. This may suggest that estrogen and candesartan (especially at a low dose) act via similar paths.
package config

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func Test_ReadJSON_Success(t *testing.T) {
	config_object := NewEmptyConfig()
	config_object.ReadFromJSON("config_success.json")

	require.Equal(t, config_object.GetIP(), "127.0.0.1")
	require.Equal(t, config_object.GetTcpPort(), 100)
	require.Equal(t, config_object.GetUdpPort(), 101)
	require.Equal(t, config_object.GetTopicArray(), []string{"BeaconBlock"})
	require.Equal(t, config_object.GetNetwork(), "testnet")
	require.Equal(t, config_object.GetEth2Endpoint(), "https://infura.test.endpoint")
	require.Equal(t, config_object.GetForkDigest(), "0xdlskgfn")
	require.Equal(t, config_object.GetUserAgent(), "bsc_test")
	require.Equal(t, config_object.GetPrivKey(), "<KEY>")
}

func Test_ReadJSON_Fail(t *testing.T) {
	config_object := NewEmptyConfig()
	config_object.ReadFromJSON("config_fail.json")

	require.NotEqual(t, config_object.GetIP(), "127.0.0.1")
	require.NotEqual(t, config_object.GetTcpPort(), 100)
	require.NotEqual(t, config_object.GetUdpPort(), 101)
	require.NotEqual(t, len(config_object.GetTopicArray()), 2)
	require.NotEqual(t, config_object.GetNetwork(), "testnet")
	require.NotEqual(t, config_object.GetEth2Endpoint(), "https://infura.test.endpoint")
	require.NotEqual(t, config_object.GetForkDigest(), "Altair")
	require.NotEqual(t, config_object.GetUserAgent(), "bsc_test")
}
#ifndef SPATIAL_DISCRETIZATION_FE_H
#define SPATIAL_DISCRETIZATION_FE_H

#include "ChiMath/SpatialDiscretization/spatial_discretization.h"
#include "ChiMath/UnknownManager/unknown_manager.h"
#include "ChiMath/SpatialDiscretization/FiniteElement/finite_element.h"

//###################################################################
/**Base Finite Element spatial discretization class.*/
class SpatialDiscretization_FE : public SpatialDiscretization
{
protected:
  typedef chi_math::finite_element::UnitIntegralData UIData;
  typedef chi_math::finite_element::InternalQuadraturePointData QPDataVol;
  typedef chi_math::finite_element::FaceQuadraturePointData QPDataFace;

  std::vector<UIData> fe_unit_integrals;
  std::vector<QPDataVol> fe_vol_qp_data;
  std::vector<std::vector<QPDataFace>> fe_srf_qp_data;

  bool integral_data_initialized = false;
  bool qp_data_initialized = false;

  const chi_math::finite_element::SetupFlags setup_flags;

protected:
  SpatialDiscretization_FE(int dim,
                           chi_mesh::MeshContinuumPtr in_grid,
                           SDMType in_type = SDMType::UNDEFINED,
                           chi_math::finite_element::SetupFlags in_setup_flags =
                             chi_math::finite_element::SetupFlags::NO_FLAGS_SET)
    : SpatialDiscretization(dim, in_grid, in_type),
      setup_flags(in_setup_flags)
  {}

public:
  virtual const chi_math::finite_element::UnitIntegralData&
  GetUnitIntegrals(const chi_mesh::Cell& cell)
  {
    if (not integral_data_initialized)
      throw std::invalid_argument("SpatialDiscretization_FE::GetUnitIntegrals "
                                  "called without integrals being initialized."
                                  " Set flag COMPUTE_UNIT_INTEGRALS.");
    return fe_unit_integrals[cell.local_id];
  }

  virtual const chi_math::finite_element::InternalQuadraturePointData&
  GetQPData_Volumetric(const chi_mesh::Cell& cell)
  {
    if (not qp_data_initialized)
      throw std::invalid_argument("SpatialDiscretization_FE::GetQPData_Volumetric "
                                  "called without integrals being initialized."
                                  " Set flag INIT_QP_DATA.");
    return fe_vol_qp_data[cell.local_id];
  }

  virtual const chi_math::finite_element::FaceQuadraturePointData&
  GetQPData_Surface(const chi_mesh::Cell& cell, const unsigned int face)
  {
    if (not qp_data_initialized)
      throw std::invalid_argument("SpatialDiscretization_FE::GetQPData_Surface "
                                  "called without integrals being initialized."
                                  " Set flag INIT_QP_DATA.");
    return fe_srf_qp_data[cell.local_id][face];
  }

  virtual ~SpatialDiscretization_FE() = default;
};

#endif
#include <Python.h>
#include <plinkio/plinkio.h>

#include "snparray.h"

/**
 * Wrapper object for a plink file. In python it will
 * act as a handle to the file.
 */
typedef struct
{
    PyObject_HEAD

    /**
     * The plink file.
     */
    struct pio_file_t file;

    /**
     * Buffer for reading a row.
     */
    snp_t *row;

    /**
     * Length of the row.
     */
    size_t row_length;
} c_plink_file_t;

/**
 * Deallocates a Python CPlinkFile object.
 *
 * @param self Pointer to a c_plink_file_t.
 */
void
cplinkfile_dealloc(c_plink_file_t *self)
{
    /* Close the file and free the row buffer only if allocated, but always
     * free the Python object itself. */
    if( self->row != NULL )
    {
        pio_close( &self->file );
        free( self->row );
        self->row = NULL;
        self->row_length = 0;
    }

    Py_TYPE( self )->tp_free( ( PyObject * ) self );
}

#if PY_MAJOR_VERSION >= 3

/**
 * Python type of the above.
 */
static PyTypeObject c_plink_file_prototype =
{
    PyVarObject_HEAD_INIT( NULL, 0 )
    "plinkio.CPlinkFile",                       /* tp_name */
    sizeof( c_plink_file_t ),                   /* tp_basicsize */
    0,                                          /* tp_itemsize */
    (destructor) cplinkfile_dealloc,            /* tp_dealloc */
    0,                                          /* tp_print */
    0,                                          /* tp_getattr */
    0,                                          /* tp_setattr */
    0,                                          /* tp_compare */
    0,                                          /* tp_repr */
    0,                                          /* tp_as_number */
    0,                                          /* tp_as_sequence */
    0,                                          /* tp_as_mapping */
    0,                                          /* tp_hash */
    0,                                          /* tp_call */
    0,                                          /* tp_str */
    0,                                          /* tp_getattro */
    0,                                          /* tp_setattro */
    0,                                          /* tp_as_buffer */
    Py_TPFLAGS_DEFAULT,                         /* tp_flags */
    "Contains the pio_file_t struct for interfacing libplinkio.", /* tp_doc */
};

#define PyInt_FromLong(x) (PyLong_FromLong((x)))
#define PyInt_AsLong(x) (PyLong_AsLong((x)))
#define PyString_AsString(x) (PyUnicode_AsUTF8(x))
#define PyString_Size(x) (PyObject_Size(x))

#else

/**
 * Python type of the above.
 */
static PyTypeObject c_plink_file_prototype =
{
    PyObject_HEAD_INIT( NULL )
    0,
    "plinkio.CPlinkFile",                       /* tp_name */
    sizeof( c_plink_file_t ),                   /* tp_basicsize */
    0,                                          /* tp_itemsize */
    (destructor) cplinkfile_dealloc,            /* tp_dealloc */
    0,                                          /* tp_print */
    0,                                          /* tp_getattr */
    0,                                          /* tp_setattr */
    0,                                          /* tp_compare */
    0,                                          /* tp_repr */
    0,                                          /* tp_as_number */
    0,                                          /* tp_as_sequence */
    0,                                          /* tp_as_mapping */
    0,                                          /* tp_hash */
    0,                                          /* tp_call */
    0,                                          /* tp_str */
    0,                                          /* tp_getattro */
    0,                                          /* tp_setattro */
    0,                                          /* tp_as_buffer */
    Py_TPFLAGS_DEFAULT,                         /* tp_flags */
    "Contains the pio_file_t struct for interfacing libplinkio.", /* tp_doc */
};

#endif

/**
 * Opens a plink file and returns a handle to it.
 *
 * @param self -
 * @param args First argument is a path to the plink file.
 *
 * @return A handle to the plink file, or throws an IOError.
 */
static PyObject *
plinkio_open(PyObject *self, PyObject *args)
{
    const char *path;
    struct pio_file_t plink_file;
    c_plink_file_t *c_plink_file;
    int pio_open_status;

    if( !PyArg_ParseTuple( args, "s", &path ) )
    {
        return NULL;
    }

    pio_open_status = pio_open( &plink_file, path );
    if( pio_open_status != PIO_OK )
    {
        if( pio_open_status == P_FAM_IO_ERROR )
        {
            PyErr_SetString( PyExc_IOError, "Error while trying to open the FAM plink file." );
        }
        else if( pio_open_status == P_BIM_IO_ERROR )
        {
            PyErr_SetString( PyExc_IOError, "Error while trying to open the BIM plink file." );
        }
        else if( pio_open_status == P_BED_IO_ERROR )
        {
            PyErr_SetString( PyExc_IOError, "Error while trying to open the BED plink file." );
        }
        else
        {
            PyErr_SetString( PyExc_IOError, "Error while trying to open plink file."
);
        }

        return NULL;
    }

    c_plink_file = (c_plink_file_t *) c_plink_file_prototype.tp_alloc( &c_plink_file_prototype, 0 );
    c_plink_file->file = plink_file;
    c_plink_file->row = (snp_t *) malloc( pio_row_size( &plink_file ) );
    c_plink_file->row_length = pio_num_samples( &plink_file );
    if( !pio_one_locus_per_row( &plink_file ) )
    {
        c_plink_file->row_length = pio_num_loci( &plink_file );
    }

    return (PyObject *) c_plink_file;
}

/**
 * Converts a python sample object into a c sample object.
 *
 * @param py_sample A python sample object that will be read.
 * @param sample A C sample object that will be written.
 *
 * @return 1 on success, 0 on failure in which case the error
 *         string has been set.
 */
int
parse_sample(PyObject *py_sample, struct pio_sample_t *sample)
{
    int sex, affection;
    float phenotype;
    PyObject *fid_object;
    PyObject *iid_object;
    PyObject *father_iid_object;
    PyObject *mother_iid_object;
    PyObject *phenotype_object;
    PyObject *sex_object;
    PyObject *affection_object;
    PyObject *fid_string;
    PyObject *iid_string;
    PyObject *father_iid_string;
    PyObject *mother_iid_string;
    int ret = 1;

    /* Get attributes */
    fid_object = PyObject_GetAttrString( py_sample, "fid" );
    iid_object = PyObject_GetAttrString( py_sample, "iid" );
    father_iid_object = PyObject_GetAttrString( py_sample, "father_iid" );
    mother_iid_object = PyObject_GetAttrString( py_sample, "mother_iid" );
    sex_object = PyObject_GetAttrString( py_sample, "sex" );
    affection_object = PyObject_GetAttrString( py_sample, "affection" );
    phenotype_object = PyObject_GetAttrString( py_sample, "phenotype" );

    /* Convert strings */
    fid_string = PyObject_Str( fid_object );
    iid_string = PyObject_Str( iid_object );
    father_iid_string = PyObject_Str( father_iid_object );
    mother_iid_string = PyObject_Str( mother_iid_object );
    sex = PyInt_AsLong( sex_object );
    affection = PyInt_AsLong( affection_object );
    phenotype = PyFloat_AsDouble( phenotype_object );

    if( fid_string == NULL || iid_string == NULL || father_iid_string == NULL || mother_iid_string == NULL )
    {
        PyErr_SetString( PyExc_TypeError, "Error all iid fields must be convertible to a string." );
        ret = 0;
    }
    else if( sex == -1 && PyErr_Occurred( ) )
    {
        PyErr_SetString( PyExc_TypeError, "Error sex field must be an integer." );
        ret = 0;
    }
    else if( affection == -1 && PyErr_Occurred( ) )
    {
        PyErr_SetString( PyExc_TypeError, "Error affection field must be an integer." );
        ret = 0;
    }
    else if( phenotype == -1.0f && PyErr_Occurred( ) )
    {
        PyErr_SetString( PyExc_TypeError, "Error phenotype field must be a float."
);
        ret = 0;
    }

    if( ret == 0 )
    {
        goto sample_error;
    }

    /* Assign strings and other values, these will be copied in libplinkio
       so we don't make a copy here */
    sample->fid = (char *) PyString_AsString( fid_string );
    sample->iid = (char *) PyString_AsString( iid_string );
    sample->father_iid = (char *) PyString_AsString( father_iid_string );
    sample->mother_iid = (char *) PyString_AsString( mother_iid_string );
    sample->phenotype = phenotype;

    if( sex == 0 )
    {
        sample->sex = PIO_FEMALE;
    }
    else if( sex == 1 )
    {
        sample->sex = PIO_MALE;
    }
    else
    {
        sample->sex = PIO_UNKNOWN;
    }

    if( affection == 0 )
    {
        sample->affection = PIO_CONTROL;
    }
    else if( affection == 1 )
    {
        sample->affection = PIO_CASE;
    }
    else if( affection == -9 )
    {
        sample->affection = PIO_MISSING;
    }
    else
    {
        sample->affection = PIO_CONTINUOUS;
    }

    /* Lower refcount for both accessed attributes and objects. The converted
       strings may be NULL when we arrive here via the error path, so they
       are released with Py_XDECREF. */
sample_error:
    Py_XDECREF( fid_string );
    Py_XDECREF( iid_string );
    Py_XDECREF( mother_iid_string );
    Py_XDECREF( father_iid_string );

    Py_DECREF( iid_object );
    Py_DECREF( fid_object );
    Py_DECREF( father_iid_object );
    Py_DECREF( mother_iid_object );
    Py_DECREF( sex_object );
    Py_DECREF( affection_object );
    Py_DECREF( phenotype_object );

    return ret;
}

/**
 * Creates a plink file and returns a handle to it.
 *
 * @param self -
 * @param args First argument is a path to the plink file. Second argument
 *             is a list of Sample objects.
 *
 * @return A handle to the plink file, or throws an IOError.
 */
static PyObject *
plinkio_create(PyObject *self, PyObject *args)
{
    int i;
    const char *path;
    struct pio_sample_t *samples;
    struct pio_file_t plink_file;
    c_plink_file_t *c_plink_file;
    PyObject *sample_list;
    PyObject *sample_object;
    PyObject *i_object;
    int is_ok;
    int pio_create_status;
    size_t num_samples;

    if( !PyArg_ParseTuple( args, "sO", &path, &sample_list ) )
    {
        return NULL;
    }

    /* Parse samples from object list */
    num_samples = PyObject_Size( sample_list );
    samples = ( struct pio_sample_t * ) malloc( sizeof( struct pio_sample_t ) * num_samples );
    for(i = 0; i < PyObject_Size( sample_list ); i++)
    {
        i_object = PyInt_FromLong( i );
        sample_object = PyObject_GetItem( sample_list, i_object );
        is_ok = parse_sample( sample_object, &samples[ i ] );
        Py_DECREF( i_object );
        Py_DECREF( sample_object );

        if( !is_ok )
        {
            free( samples );
            return NULL;
        }
    }

    pio_create_status = pio_create( &plink_file, path, samples, num_samples );
    free(samples);

    /* Check for errors */
    if( pio_create_status != PIO_OK )
    {
        if( pio_create_status == P_FAM_IO_ERROR )
        {
            PyErr_SetString( PyExc_IOError, "Error while trying to create FAM file." );
        }
        else if( pio_create_status == P_BIM_IO_ERROR )
        {
            PyErr_SetString( PyExc_IOError, "Error while trying to create BIM file." );
        }
        else if( pio_create_status == P_BED_IO_ERROR )
        {
            PyErr_SetString( PyExc_IOError, "Error while trying to create BED file." );
        }
        else
        {
            PyErr_SetString( PyExc_IOError, "Error while trying to create plink file." );
        }

        return NULL;
    }

    c_plink_file = (c_plink_file_t *) c_plink_file_prototype.tp_alloc( &c_plink_file_prototype, 0 );
    c_plink_file->file = plink_file;
    c_plink_file->row = (snp_t *) malloc( pio_row_size( &plink_file ) );
    c_plink_file->row_length = pio_num_samples( &plink_file );

    return (PyObject *) c_plink_file;
}

/**
 * Converts a python locus object into a c locus object.
 *
 * @param py_locus A python locus object that will be read.
 * @param locus A C locus object that will be written.
 *
 * @return 1 on success, 0 on failure in which case the error
 *         string has been set.
 */
int
parse_locus(PyObject *py_locus, struct pio_locus_t *locus)
{
    PyObject *chromosome_object;
    PyObject *name_object;
    PyObject *position_object;
    PyObject *bp_position_object;
    PyObject *allele1_object;
    PyObject *allele2_object;
    PyObject *name_string;
    PyObject *allele1_string;
    PyObject *allele2_string;
    int chromosome;
    float position;
    int bp_position;
    int ret = 1;

    chromosome_object = PyObject_GetAttrString( py_locus, "chromosome" );
    name_object = PyObject_GetAttrString( py_locus, "name" );
    position_object = PyObject_GetAttrString( py_locus, "position" );
    bp_position_object = PyObject_GetAttrString( py_locus, "bp_position" );
    allele1_object = PyObject_GetAttrString( py_locus, "allele1" );
    allele2_object = PyObject_GetAttrString( py_locus, "allele2" );

    chromosome = PyInt_AsLong( chromosome_object );
    name_string = PyObject_Str( name_object );
    position = PyFloat_AsDouble( position_object );
    bp_position = PyInt_AsLong( bp_position_object );
    allele1_string = PyObject_Str( allele1_object );
    allele2_string = PyObject_Str( allele2_object );

    if( chromosome == -1 && PyErr_Occurred( ) )
    {
        PyErr_SetString( PyExc_TypeError, "Error chromosome field must be an integer." );
        ret = 0;
    }
    else if( name_string == NULL )
    {
        PyErr_SetString( PyExc_TypeError, "Error name field must be a string." );
        ret = 0;
    }
    else if( position == -1.0f && PyErr_Occurred( ) )
    {
        PyErr_SetString( PyExc_TypeError, "Error position field must be a float." );
        ret = 0;
    }
    else if( bp_position == -1 && PyErr_Occurred( ) )
    {
        PyErr_SetString( PyExc_TypeError, "Error bp_position field must be an integer." );
        ret = 0;
    }

    if( allele1_string == NULL || allele2_string == NULL )
    {
        PyErr_SetString( PyExc_TypeError, "Error allele fields must be strings." );
        ret = 0;
    }

    if( ret == 0 )
    {
        goto locus_error;
    }

    /* The strings won't get freed by plinkio so remove const qualifier */
    locus->chromosome = PyInt_AsLong( chromosome_object );
    locus->name = (char *) PyString_AsString( name_string );
    locus->position = PyFloat_AsDouble( position_object );
    locus->bp_position = PyInt_AsLong( bp_position_object );
    locus->allele1 = (char *) PyString_AsString( allele1_string );
    locus->allele2 = (char *) PyString_AsString( allele2_string );

    /* The converted strings may be NULL when we arrive here via the error
       path, so they are released with Py_XDECREF. */
locus_error:
    Py_XDECREF( name_string );
    Py_XDECREF( allele1_string );
    Py_XDECREF( allele2_string );

    Py_DECREF( chromosome_object );
    Py_DECREF( name_object );
    Py_DECREF( position_object );
    Py_DECREF( bp_position_object );
    Py_DECREF( allele1_object );
    Py_DECREF( allele2_object );

    return ret;
}

/**
 * Writes a row to a created plink file.
 *
 * @param self -
 * @param args First argument is the plink file. Second argument
 *             is a Locus object. Third argument is a list of genotypes.
 *
 * @return A handle to the plink file, or throws an IOError.
 */
static PyObject *
plinkio_write_row(PyObject *self, PyObject *args)
{
    PyObject *plink_file;
    c_plink_file_t *c_plink_file;
    PyObject *locus_object;
    PyObject *genotypes;
    PyObject *i_object;
    PyObject *genotype_object;
    struct pio_locus_t locus;
    size_t i;
    int write_status;

    if( !PyArg_ParseTuple( args, "O!OO", &c_plink_file_prototype, &plink_file, &locus_object, &genotypes ) )
    {
        return NULL;
    }

    c_plink_file = (c_plink_file_t *) plink_file;
    if( PyObject_Size( genotypes ) != (ssize_t) c_plink_file->row_length )
    {
        PyErr_SetString( PyExc_ValueError, "Error, wrong number of genotypes given."
);
        return NULL;
    }

    if( !parse_locus( locus_object, &locus ) )
    {
        return NULL;
    }

    for(i = 0; i < c_plink_file->row_length; i++)
    {
        i_object = PyInt_FromLong( i );
        genotype_object = PyObject_GetItem( genotypes, i_object );
        c_plink_file->row[ i ] = (snp_t) PyInt_AsLong( genotype_object );
        Py_DECREF( genotype_object );
        Py_DECREF( i_object );
    }

    write_status = pio_write_row( &c_plink_file->file, &locus, c_plink_file->row );
    if( write_status != PIO_OK )
    {
        PyErr_SetString( PyExc_IOError, "Error while writing to plink file." );
        return NULL;
    }

    Py_RETURN_NONE;
}

/**
 * Reads a row of SNPs from the bed, advances the file pointer,
 * returns the snps as a list, where the SNPs are encoded as in
 * pio_next_row.
 *
 * @param self -
 * @param args - First argument is a handle to an opened file.
 *
 * @return List of snps, or None if we are at the end. Throws IOError
 *         if an error occurred.
 */
static PyObject *
plinkio_next_row(PyObject *self, PyObject *args)
{
    PyObject *plink_file;
    c_plink_file_t *c_plink_file;
    snp_t *row;
    int status;

    if( !PyArg_ParseTuple( args, "O!", &c_plink_file_prototype, &plink_file ) )
    {
        return NULL;
    }

    c_plink_file = (c_plink_file_t *) plink_file;
    row = c_plink_file->row;
    status = pio_next_row( &c_plink_file->file, row );
    if( status == PIO_END )
    {
        Py_RETURN_NONE;
    }
    else if( status == PIO_ERROR )
    {
        PyErr_SetString( PyExc_IOError, "Error while reading from plink file." );
        return NULL;
    }

    return (PyObject *) snparray_from_array( &py_snp_array_prototype, row, c_plink_file->row_length );
}

/**
 * Moves the file pointer to the first row, so that the next
 * call to pio_next_row returns the first row.
 *
 * @param self -
 * @param args - First argument is a handle to an opened file.
 */
static PyObject *
plinkio_reset_row(PyObject *self, PyObject *args)
{
    PyObject *plink_file;
    c_plink_file_t *c_plink_file;

    if( !PyArg_ParseTuple( args, "O!", &c_plink_file_prototype, &plink_file ) )
    {
        return NULL;
    }

    c_plink_file = (c_plink_file_t *) plink_file;
    pio_reset_row( &c_plink_file->file );

    Py_RETURN_NONE;
}

/**
 * Returns a list of loci and their locations within the
 * genome that are contained in the file.
 *
 * @param self -
 * @param args - First argument is a handle to an opened file.
 *
 * @return List of loci.
 */
static PyObject *
plinkio_get_loci(PyObject *self, PyObject *args)
{
    PyObject *plink_file;
    c_plink_file_t *c_plink_file;
    size_t i;
    PyObject *module;
    PyObject *locusClass;
    PyObject *loci_list;

    if( !PyArg_ParseTuple( args, "O!", &c_plink_file_prototype, &plink_file ) )
    {
        return NULL;
    }

    c_plink_file = (c_plink_file_t *) plink_file;
    module = PyImport_ImportModule( "plinkio.plinkfile" );
    if( module == NULL )
    {
        return NULL;
    }

    locusClass = PyObject_GetAttrString( module, "Locus" );
    if( locusClass == NULL )
    {
        return NULL;
    }

    loci_list = PyList_New( pio_num_loci( &c_plink_file->file ) );
    for(i = 0; i < pio_num_loci( &c_plink_file->file ); i++)
    {
        struct pio_locus_t *locus = pio_get_locus( &c_plink_file->file, i );

        PyObject *args = Py_BuildValue( "BsfLss",
                                        locus->chromosome,
                                        locus->name,
                                        locus->position,
                                        locus->bp_position,
                                        locus->allele1,
                                        locus->allele2 );
        PyObject *pyLocus = PyObject_CallObject( locusClass, args );

        /* Steals the pyLocus reference */
        PyList_SetItem( loci_list, i, pyLocus );
        Py_DECREF( args );
    }

    Py_DECREF( module );
    Py_DECREF( locusClass );

    return loci_list;
}

/**
 * Returns a list of samples and associated information
 * that are contained in the file.
 *
 * @param self -
 * @param args - First argument is a handle to an opened file.
 *
 * @return List of samples.
*/ static PyObject * plinkio_get_samples(PyObject *self, PyObject *args) { PyObject *plink_file; c_plink_file_t *c_plink_file; size_t i; int sex, affection; PyObject *module; PyObject *sample_list; PyObject *sample_class; if( !PyArg_ParseTuple( args, "O!", &c_plink_file_prototype, &plink_file ) ) { return NULL; } c_plink_file = (c_plink_file_t *) plink_file; module = PyImport_ImportModule( "plinkio.plinkfile" ); if( module == NULL ) { return NULL; } sample_class = PyObject_GetAttrString( module, "Sample" ); if( sample_class == NULL ) { return NULL; } sample_list = PyList_New( pio_num_samples( &c_plink_file->file ) ); for(i = 0; i < pio_num_samples( &c_plink_file->file ); i++) { struct pio_sample_t *sample = pio_get_sample( &c_plink_file->file, i ); PyObject *args; PyObject *pySample; sex = 0; if( sample->sex == PIO_MALE ) { sex = 1; } else if( sample->sex != PIO_FEMALE ) { sex = -9; } affection = 0; if( sample->affection == PIO_CASE ) { affection = 1; } else if( sample->affection != PIO_CONTROL ) { affection = -9; } args = Py_BuildValue( "ssssiif", sample->fid, sample->iid, sample->father_iid, sample->mother_iid, sex, affection, sample->phenotype ); pySample = PyObject_CallObject( sample_class, args ); /* Steals the pySample reference */ PyList_SetItem( sample_list, i, pySample ); Py_DECREF( args ); } Py_DECREF( module ); Py_DECREF( sample_class ); return sample_list; } /** * Determines whether samples are stored row-wise or column-wise. * * @param self - * @param args - First argument is a handle to an opened file. * * @return True if one row contains the genotypes for a single locus. */ static PyObject * plinkio_one_locus_per_row(PyObject *self, PyObject *args) { PyObject *plink_file; c_plink_file_t *c_plink_file; if( !PyArg_ParseTuple( args, "O!", &c_plink_file_prototype, &plink_file ) ) { return NULL; } c_plink_file = (c_plink_file_t *) plink_file; return PyBool_FromLong( (long) pio_one_locus_per_row( &c_plink_file->file ) ); } /** * Closes the given plink file. * * Note: Releases some memory allocated in pio_open. * * @param self - * @param args - First argument is a handle to an opened file. */ static PyObject * plinkio_close(PyObject *self, PyObject *args) { PyObject *plink_file; c_plink_file_t *c_plink_file; if( !PyArg_ParseTuple( args, "O!", &c_plink_file_prototype, &plink_file ) ) { return NULL; } c_plink_file = (c_plink_file_t *) plink_file; pio_close( &c_plink_file->file ); Py_RETURN_NONE; } /** * Transposes the given plink file. * * @param self - * @param args First argument is a path to the plink file to transpose, and * the second argument is the path to the transposed plink file. * * @return True if the file could be transposed, false otherwise. */ static PyObject * plinkio_transpose(PyObject *self, PyObject *args) { const char *old_path; const char *new_path; if( !PyArg_ParseTuple( args, "ss", &old_path, &new_path ) ) { return NULL; } return PyBool_FromLong( (long) ( pio_transpose( old_path, new_path ) == PIO_OK ) ); } static PyMethodDef plinkio_methods[] = { { "open", plinkio_open, METH_VARARGS, "Opens a plink file." }, { "next_row", plinkio_next_row, METH_VARARGS, "Reads the next row of a plink file." }, { "reset_row", plinkio_reset_row, METH_VARARGS, "Resets reading of the plink file to the first row." }, { "get_loci", plinkio_get_loci, METH_VARARGS, "Returns the list of loci." }, { "get_samples", plinkio_get_samples, METH_VARARGS, "Returns the list of samples." 
}, { "one_locus_per_row", plinkio_one_locus_per_row, METH_VARARGS, "Returns true if a row contains the snps for a single locus." }, { "close", plinkio_close, METH_VARARGS, "Close a plink file." }, { "transpose", plinkio_transpose, METH_VARARGS, "Transposes the plink file." }, { "create", plinkio_create, METH_VARARGS, "Creates a new plink file." }, { "write_row", plinkio_write_row, METH_VARARGS, "Writes genotypes to a created plink file." }, { NULL } }; #ifndef PyMODINIT_FUNC #define PyMODINIT_FUNC void #endif #if PY_MAJOR_VERSION >= 3 static PyModuleDef moduledef = { PyModuleDef_HEAD_INIT, "cplinkio", "Wrapper module for the libplinkio c functions.", -1, plinkio_methods, NULL, NULL, NULL, NULL }; PyMODINIT_FUNC PyInit_cplinkio(void) { PyObject *module; c_plink_file_prototype.tp_new = PyType_GenericNew; if( PyType_Ready( &c_plink_file_prototype ) < 0 ) { return NULL; } py_snp_array_prototype.tp_new = PyType_GenericNew; if( PyType_Ready( &py_snp_array_prototype ) < 0 ) { return NULL; } module = PyModule_Create( &moduledef ); if( module == NULL ) { return NULL; } Py_INCREF( &c_plink_file_prototype ); PyModule_AddObject( module, "CPlinkFile", (PyObject *) &c_plink_file_prototype ); Py_INCREF( &py_snp_array_prototype ); PyModule_AddObject( module, "SnpArray", (PyObject *) &py_snp_array_prototype ); return module; } #else PyMODINIT_FUNC initcplinkio(void) { PyObject *m; c_plink_file_prototype.tp_new = PyType_GenericNew; if( PyType_Ready( &c_plink_file_prototype ) < 0 ) { return; } py_snp_array_prototype.tp_new = PyType_GenericNew; if( PyType_Ready( &py_snp_array_prototype ) < 0 ) { return; } m = Py_InitModule3( "cplinkio", plinkio_methods, "Wrapper module for the libplinkio c functions." ); Py_INCREF( &c_plink_file_prototype ); PyModule_AddObject( m, "CPlinkFile", (PyObject *) &c_plink_file_prototype ); Py_INCREF( &py_snp_array_prototype ); PyModule_AddObject( m, "SnpArray", (PyObject *) &py_snp_array_prototype ); } #endif
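/*
 * Hedged usage sketch (not part of this file): this extension is normally
 * driven through the higher-level plinkio.plinkfile Python module rather
 * than by calling cplinkio directly. Under that assumption, typical read
 * access looks roughly like:
 *
 *   from plinkio import plinkfile
 *
 *   pf = plinkfile.open("/path/to/basename")   # basename of .bed/.bim/.fam
 *   samples = pf.get_samples()
 *   loci = pf.get_loci()
 *   for locus, row in zip(loci, pf):           # one row per locus
 *       print(locus.name, list(row))
 *   pf.close()
 */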
/**
 * Combined finite element and blade momentum theory for the calculation of a blade:
 *  - thrust
 *  - power
 *  - torque
 *
 * Source: Keys, Pgs 96-98; Proudy, Pg 96
 *
 * @param/return {struct} blade parameters
 **/
void F_ROTOR_CALCULATION(struct blade_element_def *pBe)
{
    double _dR, _r2R, _r, _thetaLocal, _theta0, _omegaLocal, _alphaLocal,
           _inducedVelocity[NUMSTATIONS], _thrustIncrement[NUMSTATIONS],
           _profileDragIncrement[NUMSTATIONS], _torqueIncrement[NUMSTATIONS],
           _powerIncrement[NUMSTATIONS], _vv;
    unsigned int n, i;

    pBe->T = 0.0;
    pBe->Q = 0.0;
    pBe->P = 0.0;
    pBe->avg_v1 = 0.0;

    _vv = 0.5 * pBe->a * pBe->b * pBe->c * pBe->omega + 4.0 * C_PI * pBe->Vperp;

    /* blade station width calculation (= (R - R0) / 100) */
    _dR = (pBe->R - pBe->R0) / 100.0;

    _theta0 = M_ABS(pBe->collective) - pBe->twst * (0.75 - pBe->R0 / pBe->R);

    for(n = 1; n <= NUMSTATIONS; ++n)
    {
        i = n - 1;

        /* local radius to total radius ratio (= (R0 + n * dR) / R) */
        _r2R = (pBe->R0 + (double)(n) * _dR) / pBe->R;

        /* local radius ( = _r2R * R) */
        _r = _r2R * pBe->R;

        /* local blade pitch ( = root collective + twist * _r2R) */
        _thetaLocal = _theta0 + _r2R * pBe->twst;

        _omegaLocal = pBe->omega * _r;

        /* local angle of attack (= collective angle - perpendicular velocity / angular velocity) */
        _alphaLocal = _thetaLocal - pBe->Vperp / _omegaLocal;

        /* induced velocity (Proudy pg 96) */
        _inducedVelocity[i] = (sqrt(M_ABS(_vv * _vv + C_EIGHT_PI * pBe->b * pBe->omega * pBe->omega * pBe->a * pBe->c * _r * _alphaLocal)) - _vv) / C_EIGHT_PI;

        _thrustIncrement[i] = 4.0*C_PI*pBe->rho*(pBe->Vperp + _inducedVelocity[i])*_inducedVelocity[i]*_r*_dR;
        _profileDragIncrement[i] = 0.5*pBe->Cd0*pBe->rho*M_SQR(_omegaLocal)*pBe->c*_dR;
        _torqueIncrement[i] = (_thrustIncrement[i]*(pBe->Vperp + _inducedVelocity[i])/_omegaLocal + _profileDragIncrement[i])*_r;
        _powerIncrement[i] = _torqueIncrement[i] * pBe->omega;
    }

    for(n = 0; n < NUMSTATIONS; ++n)
    {
        pBe->T += _thrustIncrement[n];
        pBe->Q += _torqueIncrement[n];
        pBe->P += _powerIncrement[n];
        pBe->avg_v1 += _inducedVelocity[n];
    }

    pBe->avg_v1 = pBe->avg_v1 / (double)(NUMSTATIONS);

    if(pBe->collective < 0.0)
    {
        pBe->T *= -1.0;
        pBe->avg_v1 *= -1.0;
    }
}
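// Hedged usage sketch (not part of the original source): the struct fields
// referenced above imply a call pattern like the one below. Field names match
// those used inside the function; the numeric values are illustrative only.
//
//   struct blade_element_def be = {0};
//   be.R = 0.72;    be.R0 = 0.08;           // blade radius / root cutout [m]
//   be.b = 2.0;     be.c = 0.05;            // blade count, chord [m]
//   be.a = 5.7;     be.Cd0 = 0.011;         // lift-curve slope, profile drag coeff.
//   be.omega = 170.0;                       // rotor angular speed [rad/s]
//   be.Vperp = 0.0;                         // perpendicular inflow [m/s]
//   be.rho = 1.225;                         // air density [kg/m^3]
//   be.collective = 0.14;  be.twst = -0.08; // collective and twist [rad]
//   F_ROTOR_CALCULATION(&be);
//   // be.T, be.Q, be.P and be.avg_v1 now hold thrust, torque, power and the
//   // mean induced velocity summed over the NUMSTATIONS blade stations.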
Martin O'Neill was last out of the Irish dressing-room on Tuesday night. Defeats still wound him deeply despite all his decades in the game or, more accurately, because of them.

"He'll be devastated," notes one of his former players, Robbie Savage. "Absolutely devastated. He'll be thinking about it and trying to put it right, feeling like the worst guy."

The Derryman knows that the buck will stop at him if Ireland fail in their now-dwindling efforts to reach the World Cup.

The next Euros will be just as easy to qualify for as the last ones were but, quite apart from whether the FAI might want to stick with him for another two years, you wonder whether O'Neill would want to remain with the FAI.

And given the inordinate difficulty Ireland made of what had seemed a relatively easy task last time around, it doesn't take a genius to work out that both sides would probably end up meeting in the middle before shaking hands.

For the past week, O'Neill has grown increasingly mournful about what his team has not rather than what it has, whether it is their crocked captain Seamus Coleman or the erstwhile goal-hungry abilities of a 27-year-old Robbie Keane.

O'Neill's assistant, Roy Keane, visited the players' lounge afterwards and one word lingered amongst every group of people with whom he interacted. "Quality," he repeatedly muttered.

Ireland simply didn't have it and, even if most still believe its best representation lies amidst the creaking bones of a 35-year-old Wes Hoolahan, it was not available to them when they needed it most. Never mind a 27-year-old Robbie Keane, what would Ireland give for a 27-year-old Wes Hoolahan?

And so Ireland face a familiar short-term crisis but one fundamentally based on a far more long-term chaos. The cracks in the sport are only revealed when Ireland fail to party in the summer amongst the nations of the world.

Even as supporters changed tyres and serenaded nuns in France and Ireland stumbled upon a football formula they have sadly not sustained, the cracks were as evident then as they are now. It's just convenient to disregard the fact your house is in decay when you're having the holiday of a lifetime.

This has always been the case ever since Jack Charlton convinced the nation that they need not have a guilty conscience about the fact that the football industry here was in utter disrepair.

Change is happening, glacially. "I coach the Irish underage teams and there are really, really good footballers," says former international Keith Andrews. Things have improved, for sure, but anyone waiting for the next Robbie Keane - or, say, Ben Woodburn - will require patience. Lots of it.

"They're not ready now and it depends on what happens to them at their clubs. One of our best U-17s is at Manchester United but he is up against the best South Americans and Europeans in his year.

"So it's having the pathway to make sure they can still be playing after they move on from the under-age teams. The development plan is probably as good as it has ever been, but we are still relying on them doing something, preferably in England. It's difficult and takes time."

Time is not a commodity that has ever been available to O'Neill, though; his job is to get results now. He got enough of them to let the nation party last summer but he may not do so this time.
"A Robbie Keane, a 27-year-old Robbie Keane, would have absolutely loved that situation - he would have loved to be the hero, to score the goal," he tells us, longingly, a few hours after the 1-0 defeat to Serbia which confirmed Ireland's alarming decline in the second half of this campaign. "Which I think he could have done. We don't have that real cutting edge and we've had to try and win games without that cutting edge. "Without that Gareth Bale in your team, without that world-class player. Our world-class player is, unfortunately, injured at this moment. And that is not demeaning to my team. My team were fantastic tonight." And they have been 'fantastic' before but never on a consistent basis and, as if aping a national stereotype, only when in response to catastrophe, whether a caning by Belgium or brutal ineptitude against Georgia. Then again, the stark reality is that, even if Ireland located somewhere near their 'fantastic' selves against Serbia's 11, 10 and then nine men, it still wasn't good enough to eke out a result. Now O'Neill finds himself in a position where tinkering with diamonds and players and formations become irrelevant; he is in the role that suits him best now. Mr Motivator. The fighter on the ropes hoping to dish out a bloody nose despite an aching lack of resources. O'Neill relished the immediacy of league football but has often seemed stranded by the yawning time gaps between games at international level as constant dithering and inconsistency of style and formation have shown. I asked him was this inconsistency a worry and he immediately demurred. "No, it's not. I don't judge it like that." For his professional existence lives or dies on the end result, not how it is achieved. "This is the first time that we've been beaten here in this tournament. First time we've been beaten at home in my time as well. "Beaten by a very, very decent Serbian side that we had the better of during the course of the game. "If you'd been speaking to the Welsh manager before their game, he would have taken a one-nil win in Moldova, delighted to get the points on the board. "I just keep getting back to the point. We didn't play well in the first half in Georgia, absolutely. We scored a goal and then we couldn't get the ball for periods. "We had to do something about that. Tonight we attempted to rectify that and we did do. But we still didn't get a goal. And that's obviously the big concern. "We came out of Georgia and got something out of the game and remained unbeaten. "It would have been great to have won and the irony of it all is the fact that even though we played poorly in the game, we actually created more chances than perhaps we normally do, even with an excellent performance. We could have scored four goals out in Georgia. Whether we deserved to do that is another thing." As it stands, the most important person in the FAI might be the international secretary hoping to smooth the passage of Scott Hogan into becoming eligible to play for Ireland. The country's hopes may now rest on a player struggling for Aston Villa in the Championship who hasn't exactly seemed delirious about the prospect of throwing in his lot with the Irish team. "While you'd like some people like himself and young (Sean) Maguire to come into the squad and maybe have a little look round for a while, it's asking a lot to go in. 
"But we'll see, you never know what the month might bring in terms of players playing a wee bit of extra football at club level, even in the Championship, and maybe just be ready for it." It all smacks of desperation. "We're still fighting. It's not big talk from me. We can win these last two games. The players want to win them." Everyone wants them to succeed. But whether they deserve to do so is quite another thing. Should Ireland fail in the attempt, the fallout will be predictable. A big-name manager getting a filleting from all corners. Another big-name manager being catapulted in as a seeming saviour. And all the while nobody looking at the big picture. Irish Independent
package WayofTime.alchemicalWizardry.common.rituals;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import net.minecraft.entity.EntityLivingBase;
import net.minecraft.entity.player.EntityPlayer;
import net.minecraft.potion.Potion;
import net.minecraft.potion.PotionEffect;
import net.minecraft.server.MinecraftServer;
import net.minecraft.util.AxisAlignedBB;
import net.minecraft.world.World;
import WayofTime.alchemicalWizardry.api.rituals.IMasterRitualStone;
import WayofTime.alchemicalWizardry.api.rituals.RitualComponent;
import WayofTime.alchemicalWizardry.api.rituals.RitualEffect;
import WayofTime.alchemicalWizardry.api.soulNetwork.LifeEssenceNetwork;

public class RitualEffectHealing extends RitualEffect
{
    /** Ticks between refreshes of the healing effect. */
    public final int timeDelay = 50;

    @Override
    public void performEffect(IMasterRitualStone ritualStone)
    {
        String owner = ritualStone.getOwner();
        World worldSave = MinecraftServer.getServer().worldServers[0];
        LifeEssenceNetwork data = (LifeEssenceNetwork) worldSave.loadItemData(LifeEssenceNetwork.class, owner);

        if (data == null)
        {
            data = new LifeEssenceNetwork(owner);
            worldSave.setItemData(owner, data);
        }

        int currentEssence = data.currentEssence;
        World world = ritualStone.getWorld();
        int x = ritualStone.getXCoord();
        int y = ritualStone.getYCoord();
        int z = ritualStone.getZCoord();

        if (world.getWorldTime() % this.timeDelay != 0)
        {
            return;
        }

        int d0 = 10;
        int vertRange = 10;
        AxisAlignedBB axisalignedbb = AxisAlignedBB.getAABBPool().getAABB((double) x, (double) y, (double) z, (double) (x + 1), (double) (y + 1), (double) (z + 1)).expand(d0, vertRange, d0);
        List list = world.getEntitiesWithinAABB(EntityLivingBase.class, axisalignedbb);
        Iterator iterator1 = list.iterator();
        EntityLivingBase entity;
        int entityCount = 0;

        /* First pass: estimate the cost. Players are weighted ten times as
         * heavily as other living entities. */
        while (iterator1.hasNext())
        {
            entity = (EntityLivingBase) iterator1.next();

            if (entity instanceof EntityPlayer)
            {
                entityCount += 10;
            }
            else
            {
                entityCount++;
            }
        }

        if (currentEssence < this.getCostPerRefresh() * entityCount)
        {
            EntityPlayer entityOwner = MinecraftServer.getServer().getConfigurationManager().getPlayerForUsername(owner);

            if (entityOwner == null)
            {
                return;
            }

            entityOwner.addPotionEffect(new PotionEffect(Potion.confusion.id, 80));
        }
        else
        {
            /* Second pass: apply regeneration and recount, charging only for
             * entities that actually needed healing. */
            Iterator iterator2 = list.iterator();
            entityCount = 0;

            while (iterator2.hasNext())
            {
                entity = (EntityLivingBase) iterator2.next();

                if (entity.getHealth() + 0.1f < entity.getMaxHealth())
                {
                    entity.addPotionEffect(new PotionEffect(Potion.regeneration.id, timeDelay + 2, 0));

                    if (entity instanceof EntityPlayer)
                    {
                        entityCount += 10;
                    }
                    else
                    {
                        entityCount++;
                    }
                }
            }

            data.currentEssence = currentEssence - this.getCostPerRefresh() * entityCount;
            data.markDirty();
        }
    }

    @Override
    public int getCostPerRefresh()
    {
        return 20;
    }

    @Override
    public List<RitualComponent> getRitualComponentList()
    {
        ArrayList<RitualComponent> healingRitual = new ArrayList<RitualComponent>();
        healingRitual.add(new RitualComponent(4, 0, 0, RitualComponent.AIR));
        healingRitual.add(new RitualComponent(5, 0, -1,
RitualComponent.AIR)); healingRitual.add(new RitualComponent(5, 0, 1, RitualComponent.AIR)); healingRitual.add(new RitualComponent(-4, 0, 0, RitualComponent.AIR)); healingRitual.add(new RitualComponent(-5, 0, -1, RitualComponent.AIR)); healingRitual.add(new RitualComponent(-5, 0, 1, RitualComponent.AIR)); healingRitual.add(new RitualComponent(0, 0, 4, RitualComponent.FIRE)); healingRitual.add(new RitualComponent(-1, 0, 5, RitualComponent.FIRE)); healingRitual.add(new RitualComponent(1, 0, 5, RitualComponent.FIRE)); healingRitual.add(new RitualComponent(0, 0, -4, RitualComponent.FIRE)); healingRitual.add(new RitualComponent(-1, 0, -5, RitualComponent.FIRE)); healingRitual.add(new RitualComponent(1, 0, -5, RitualComponent.FIRE)); healingRitual.add(new RitualComponent(3, 0, 5, RitualComponent.WATER)); healingRitual.add(new RitualComponent(5, 0, 3, RitualComponent.WATER)); healingRitual.add(new RitualComponent(3, 0, -5, RitualComponent.WATER)); healingRitual.add(new RitualComponent(5, 0, -3, RitualComponent.WATER)); healingRitual.add(new RitualComponent(-3, 0, 5, RitualComponent.WATER)); healingRitual.add(new RitualComponent(-5, 0, 3, RitualComponent.WATER)); healingRitual.add(new RitualComponent(-3, 0, -5, RitualComponent.WATER)); healingRitual.add(new RitualComponent(-5, 0, -3, RitualComponent.WATER)); healingRitual.add(new RitualComponent(-3, 0, -3, RitualComponent.DUSK)); healingRitual.add(new RitualComponent(-3, 0, 3, RitualComponent.DUSK)); healingRitual.add(new RitualComponent(3, 0, -3, RitualComponent.DUSK)); healingRitual.add(new RitualComponent(3, 0, 3, RitualComponent.DUSK)); healingRitual.add(new RitualComponent(4, 0, 5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(4, -1, 5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(5, 0, 4, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(5, -1, 4, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(5, 0, 5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(4, 0, -5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(4, -1, -5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(5, 0, -4, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(5, -1, -4, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(5, 0, -5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-4, 0, 5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-4, -1, 5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-5, 0, 4, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-5, -1, 4, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-5, 0, 5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-4, 0, -5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-4, -1, -5, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-5, 0, -4, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-5, -1, -4, RitualComponent.EARTH)); healingRitual.add(new RitualComponent(-5, 0, -5, RitualComponent.EARTH)); return healingRitual; } }
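// Worked cost example (illustrative, derived from the code above): every
// timeDelay (50 ticks) the ritual counts living entities in the 21x21x21
// region around the master stone, weighting players as 10 and any other
// EntityLivingBase as 1. With getCostPerRefresh() == 20 LP, healing e.g.
// 2 players and 3 mobs that are below max health costs
// (2*10 + 3*1) * 20 = 460 LP per refresh, drawn from the owner's
// LifeEssenceNetwork; if the network cannot cover the first-pass estimate,
// the owner is given a confusion effect instead of any healing.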
// Code generated by go-swagger; DO NOT EDIT. package inventory_in_store_pickup_api_get_pickup_locations_v1 // This file was generated by the swagger tool. // Editing this file might prove futile when you re-run the swagger generate command import ( "context" "net/http" "time" "github.com/go-openapi/errors" "github.com/go-openapi/runtime" cr "github.com/go-openapi/runtime/client" "github.com/go-openapi/strfmt" "github.com/go-openapi/swag" ) // NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams creates a new InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams object, // with the default timeout for this client. // // Default values are not hydrated, since defaults are normally applied by the API server side. // // To enforce default values in parameter, use SetDefaults or WithDefaults. func NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams() *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { return &InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams{ timeout: cr.DefaultTimeout, } } // NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParamsWithTimeout creates a new InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams object // with the ability to set a timeout on a request. func NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParamsWithTimeout(timeout time.Duration) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { return &InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams{ timeout: timeout, } } // NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParamsWithContext creates a new InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams object // with the ability to set a context for a request. func NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParamsWithContext(ctx context.Context) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { return &InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams{ Context: ctx, } } // NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParamsWithHTTPClient creates a new InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams object // with the ability to set a custom HTTPClient for a request. func NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParamsWithHTTPClient(client *http.Client) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { return &InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams{ HTTPClient: client, } } /* InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams contains all the parameters to send to the API endpoint for the inventory in store pickup Api get pickup locations v1 execute get operation. Typically these are written to a http.Request. */ type InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams struct { /* SearchRequestAreaRadius. Search radius in KM. */ SearchRequestAreaRadius *int64 /* SearchRequestAreaSearchTerm. Search term string. */ SearchRequestAreaSearchTerm *string /* SearchRequestCurrentPage. Current page. */ SearchRequestCurrentPage *int64 /* SearchRequestExtensionAttributesProductsInfo0Sku. Product SKU. */ SearchRequestExtensionAttributesProductsInfo0Sku *string /* SearchRequestFiltersCityConditionType. Condition Type. */ SearchRequestFiltersCityConditionType *string /* SearchRequestFiltersCityValue. Value. */ SearchRequestFiltersCityValue *string /* SearchRequestFiltersCountryConditionType. Condition Type. */ SearchRequestFiltersCountryConditionType *string /* SearchRequestFiltersCountryValue. Value. 
*/ SearchRequestFiltersCountryValue *string /* SearchRequestFiltersNameConditionType. Condition Type. */ SearchRequestFiltersNameConditionType *string /* SearchRequestFiltersNameValue. Value. */ SearchRequestFiltersNameValue *string /* SearchRequestFiltersPickupLocationCodeConditionType. Condition Type. */ SearchRequestFiltersPickupLocationCodeConditionType *string /* SearchRequestFiltersPickupLocationCodeValue. Value. */ SearchRequestFiltersPickupLocationCodeValue *string /* SearchRequestFiltersPostcodeConditionType. Condition Type. */ SearchRequestFiltersPostcodeConditionType *string /* SearchRequestFiltersPostcodeValue. Value. */ SearchRequestFiltersPostcodeValue *string /* SearchRequestFiltersRegionIDConditionType. Condition Type. */ SearchRequestFiltersRegionIDConditionType *string /* SearchRequestFiltersRegionIDValue. Value. */ SearchRequestFiltersRegionIDValue *string /* SearchRequestFiltersRegionConditionType. Condition Type. */ SearchRequestFiltersRegionConditionType *string /* SearchRequestFiltersRegionValue. Value. */ SearchRequestFiltersRegionValue *string /* SearchRequestFiltersStreetConditionType. Condition Type. */ SearchRequestFiltersStreetConditionType *string /* SearchRequestFiltersStreetValue. Value. */ SearchRequestFiltersStreetValue *string /* SearchRequestPageSize. Page size. */ SearchRequestPageSize *int64 /* SearchRequestScopeCode. Sales Channel code. */ SearchRequestScopeCode *string /* SearchRequestScopeType. Sales Channel Type. */ SearchRequestScopeType *string /* SearchRequestSort0Direction. Sorting direction. */ SearchRequestSort0Direction *string /* SearchRequestSort0Field. Sorting field. */ SearchRequestSort0Field *string timeout time.Duration Context context.Context HTTPClient *http.Client } // WithDefaults hydrates default values in the inventory in store pickup Api get pickup locations v1 execute get params (not the query body). // // All values with no default are reset to their zero value. func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithDefaults() *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetDefaults() return o } // SetDefaults hydrates default values in the inventory in store pickup Api get pickup locations v1 execute get params (not the query body). // // All values with no default are reset to their zero value. 
func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetDefaults() { // no default values defined for this parameter } // WithTimeout adds the timeout to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithTimeout(timeout time.Duration) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetTimeout(timeout) return o } // SetTimeout adds the timeout to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetTimeout(timeout time.Duration) { o.timeout = timeout } // WithContext adds the context to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithContext(ctx context.Context) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetContext(ctx) return o } // SetContext adds the context to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetContext(ctx context.Context) { o.Context = ctx } // WithHTTPClient adds the HTTPClient to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithHTTPClient(client *http.Client) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetHTTPClient(client) return o } // SetHTTPClient adds the HTTPClient to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetHTTPClient(client *http.Client) { o.HTTPClient = client } // WithSearchRequestAreaRadius adds the searchRequestAreaRadius to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestAreaRadius(searchRequestAreaRadius *int64) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestAreaRadius(searchRequestAreaRadius) return o } // SetSearchRequestAreaRadius adds the searchRequestAreaRadius to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestAreaRadius(searchRequestAreaRadius *int64) { o.SearchRequestAreaRadius = searchRequestAreaRadius } // WithSearchRequestAreaSearchTerm adds the searchRequestAreaSearchTerm to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestAreaSearchTerm(searchRequestAreaSearchTerm *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestAreaSearchTerm(searchRequestAreaSearchTerm) return o } // SetSearchRequestAreaSearchTerm adds the searchRequestAreaSearchTerm to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestAreaSearchTerm(searchRequestAreaSearchTerm *string) { o.SearchRequestAreaSearchTerm = searchRequestAreaSearchTerm } // WithSearchRequestCurrentPage adds the searchRequestCurrentPage to the inventory in store pickup Api get pickup locations v1 execute get params func (o 
*InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestCurrentPage(searchRequestCurrentPage *int64) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestCurrentPage(searchRequestCurrentPage) return o } // SetSearchRequestCurrentPage adds the searchRequestCurrentPage to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestCurrentPage(searchRequestCurrentPage *int64) { o.SearchRequestCurrentPage = searchRequestCurrentPage } // WithSearchRequestExtensionAttributesProductsInfo0Sku adds the searchRequestExtensionAttributesProductsInfo0Sku to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestExtensionAttributesProductsInfo0Sku(searchRequestExtensionAttributesProductsInfo0Sku *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestExtensionAttributesProductsInfo0Sku(searchRequestExtensionAttributesProductsInfo0Sku) return o } // SetSearchRequestExtensionAttributesProductsInfo0Sku adds the searchRequestExtensionAttributesProductsInfo0Sku to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestExtensionAttributesProductsInfo0Sku(searchRequestExtensionAttributesProductsInfo0Sku *string) { o.SearchRequestExtensionAttributesProductsInfo0Sku = searchRequestExtensionAttributesProductsInfo0Sku } // WithSearchRequestFiltersCityConditionType adds the searchRequestFiltersCityConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersCityConditionType(searchRequestFiltersCityConditionType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersCityConditionType(searchRequestFiltersCityConditionType) return o } // SetSearchRequestFiltersCityConditionType adds the searchRequestFiltersCityConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersCityConditionType(searchRequestFiltersCityConditionType *string) { o.SearchRequestFiltersCityConditionType = searchRequestFiltersCityConditionType } // WithSearchRequestFiltersCityValue adds the searchRequestFiltersCityValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersCityValue(searchRequestFiltersCityValue *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersCityValue(searchRequestFiltersCityValue) return o } // SetSearchRequestFiltersCityValue adds the searchRequestFiltersCityValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersCityValue(searchRequestFiltersCityValue *string) { o.SearchRequestFiltersCityValue = searchRequestFiltersCityValue } // WithSearchRequestFiltersCountryConditionType adds the searchRequestFiltersCountryConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o 
*InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersCountryConditionType(searchRequestFiltersCountryConditionType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersCountryConditionType(searchRequestFiltersCountryConditionType) return o } // SetSearchRequestFiltersCountryConditionType adds the searchRequestFiltersCountryConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersCountryConditionType(searchRequestFiltersCountryConditionType *string) { o.SearchRequestFiltersCountryConditionType = searchRequestFiltersCountryConditionType } // WithSearchRequestFiltersCountryValue adds the searchRequestFiltersCountryValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersCountryValue(searchRequestFiltersCountryValue *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersCountryValue(searchRequestFiltersCountryValue) return o } // SetSearchRequestFiltersCountryValue adds the searchRequestFiltersCountryValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersCountryValue(searchRequestFiltersCountryValue *string) { o.SearchRequestFiltersCountryValue = searchRequestFiltersCountryValue } // WithSearchRequestFiltersNameConditionType adds the searchRequestFiltersNameConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersNameConditionType(searchRequestFiltersNameConditionType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersNameConditionType(searchRequestFiltersNameConditionType) return o } // SetSearchRequestFiltersNameConditionType adds the searchRequestFiltersNameConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersNameConditionType(searchRequestFiltersNameConditionType *string) { o.SearchRequestFiltersNameConditionType = searchRequestFiltersNameConditionType } // WithSearchRequestFiltersNameValue adds the searchRequestFiltersNameValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersNameValue(searchRequestFiltersNameValue *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersNameValue(searchRequestFiltersNameValue) return o } // SetSearchRequestFiltersNameValue adds the searchRequestFiltersNameValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersNameValue(searchRequestFiltersNameValue *string) { o.SearchRequestFiltersNameValue = searchRequestFiltersNameValue } // WithSearchRequestFiltersPickupLocationCodeConditionType adds the searchRequestFiltersPickupLocationCodeConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o 
*InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersPickupLocationCodeConditionType(searchRequestFiltersPickupLocationCodeConditionType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersPickupLocationCodeConditionType(searchRequestFiltersPickupLocationCodeConditionType) return o } // SetSearchRequestFiltersPickupLocationCodeConditionType adds the searchRequestFiltersPickupLocationCodeConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersPickupLocationCodeConditionType(searchRequestFiltersPickupLocationCodeConditionType *string) { o.SearchRequestFiltersPickupLocationCodeConditionType = searchRequestFiltersPickupLocationCodeConditionType } // WithSearchRequestFiltersPickupLocationCodeValue adds the searchRequestFiltersPickupLocationCodeValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersPickupLocationCodeValue(searchRequestFiltersPickupLocationCodeValue *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersPickupLocationCodeValue(searchRequestFiltersPickupLocationCodeValue) return o } // SetSearchRequestFiltersPickupLocationCodeValue adds the searchRequestFiltersPickupLocationCodeValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersPickupLocationCodeValue(searchRequestFiltersPickupLocationCodeValue *string) { o.SearchRequestFiltersPickupLocationCodeValue = searchRequestFiltersPickupLocationCodeValue } // WithSearchRequestFiltersPostcodeConditionType adds the searchRequestFiltersPostcodeConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersPostcodeConditionType(searchRequestFiltersPostcodeConditionType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersPostcodeConditionType(searchRequestFiltersPostcodeConditionType) return o } // SetSearchRequestFiltersPostcodeConditionType adds the searchRequestFiltersPostcodeConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersPostcodeConditionType(searchRequestFiltersPostcodeConditionType *string) { o.SearchRequestFiltersPostcodeConditionType = searchRequestFiltersPostcodeConditionType } // WithSearchRequestFiltersPostcodeValue adds the searchRequestFiltersPostcodeValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersPostcodeValue(searchRequestFiltersPostcodeValue *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersPostcodeValue(searchRequestFiltersPostcodeValue) return o } // SetSearchRequestFiltersPostcodeValue adds the searchRequestFiltersPostcodeValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) 
SetSearchRequestFiltersPostcodeValue(searchRequestFiltersPostcodeValue *string) { o.SearchRequestFiltersPostcodeValue = searchRequestFiltersPostcodeValue } // WithSearchRequestFiltersRegionIDConditionType adds the searchRequestFiltersRegionIDConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersRegionIDConditionType(searchRequestFiltersRegionIDConditionType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersRegionIDConditionType(searchRequestFiltersRegionIDConditionType) return o } // SetSearchRequestFiltersRegionIDConditionType adds the searchRequestFiltersRegionIdConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersRegionIDConditionType(searchRequestFiltersRegionIDConditionType *string) { o.SearchRequestFiltersRegionIDConditionType = searchRequestFiltersRegionIDConditionType } // WithSearchRequestFiltersRegionIDValue adds the searchRequestFiltersRegionIDValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersRegionIDValue(searchRequestFiltersRegionIDValue *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersRegionIDValue(searchRequestFiltersRegionIDValue) return o } // SetSearchRequestFiltersRegionIDValue adds the searchRequestFiltersRegionIdValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersRegionIDValue(searchRequestFiltersRegionIDValue *string) { o.SearchRequestFiltersRegionIDValue = searchRequestFiltersRegionIDValue } // WithSearchRequestFiltersRegionConditionType adds the searchRequestFiltersRegionConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersRegionConditionType(searchRequestFiltersRegionConditionType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersRegionConditionType(searchRequestFiltersRegionConditionType) return o } // SetSearchRequestFiltersRegionConditionType adds the searchRequestFiltersRegionConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersRegionConditionType(searchRequestFiltersRegionConditionType *string) { o.SearchRequestFiltersRegionConditionType = searchRequestFiltersRegionConditionType } // WithSearchRequestFiltersRegionValue adds the searchRequestFiltersRegionValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersRegionValue(searchRequestFiltersRegionValue *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersRegionValue(searchRequestFiltersRegionValue) return o } // SetSearchRequestFiltersRegionValue adds the searchRequestFiltersRegionValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o 
*InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersRegionValue(searchRequestFiltersRegionValue *string) { o.SearchRequestFiltersRegionValue = searchRequestFiltersRegionValue } // WithSearchRequestFiltersStreetConditionType adds the searchRequestFiltersStreetConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersStreetConditionType(searchRequestFiltersStreetConditionType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersStreetConditionType(searchRequestFiltersStreetConditionType) return o } // SetSearchRequestFiltersStreetConditionType adds the searchRequestFiltersStreetConditionType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersStreetConditionType(searchRequestFiltersStreetConditionType *string) { o.SearchRequestFiltersStreetConditionType = searchRequestFiltersStreetConditionType } // WithSearchRequestFiltersStreetValue adds the searchRequestFiltersStreetValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestFiltersStreetValue(searchRequestFiltersStreetValue *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestFiltersStreetValue(searchRequestFiltersStreetValue) return o } // SetSearchRequestFiltersStreetValue adds the searchRequestFiltersStreetValue to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestFiltersStreetValue(searchRequestFiltersStreetValue *string) { o.SearchRequestFiltersStreetValue = searchRequestFiltersStreetValue } // WithSearchRequestPageSize adds the searchRequestPageSize to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestPageSize(searchRequestPageSize *int64) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestPageSize(searchRequestPageSize) return o } // SetSearchRequestPageSize adds the searchRequestPageSize to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestPageSize(searchRequestPageSize *int64) { o.SearchRequestPageSize = searchRequestPageSize } // WithSearchRequestScopeCode adds the searchRequestScopeCode to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestScopeCode(searchRequestScopeCode *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestScopeCode(searchRequestScopeCode) return o } // SetSearchRequestScopeCode adds the searchRequestScopeCode to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestScopeCode(searchRequestScopeCode *string) { o.SearchRequestScopeCode = searchRequestScopeCode } // WithSearchRequestScopeType adds the searchRequestScopeType to the inventory in store pickup Api get pickup locations v1 execute get 
params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestScopeType(searchRequestScopeType *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestScopeType(searchRequestScopeType) return o } // SetSearchRequestScopeType adds the searchRequestScopeType to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestScopeType(searchRequestScopeType *string) { o.SearchRequestScopeType = searchRequestScopeType } // WithSearchRequestSort0Direction adds the searchRequestSort0Direction to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestSort0Direction(searchRequestSort0Direction *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestSort0Direction(searchRequestSort0Direction) return o } // SetSearchRequestSort0Direction adds the searchRequestSort0Direction to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestSort0Direction(searchRequestSort0Direction *string) { o.SearchRequestSort0Direction = searchRequestSort0Direction } // WithSearchRequestSort0Field adds the searchRequestSort0Field to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WithSearchRequestSort0Field(searchRequestSort0Field *string) *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams { o.SetSearchRequestSort0Field(searchRequestSort0Field) return o } // SetSearchRequestSort0Field adds the searchRequestSort0Field to the inventory in store pickup Api get pickup locations v1 execute get params func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) SetSearchRequestSort0Field(searchRequestSort0Field *string) { o.SearchRequestSort0Field = searchRequestSort0Field } // WriteToRequest writes these params to a swagger request func (o *InventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams) WriteToRequest(r runtime.ClientRequest, reg strfmt.Registry) error { if err := r.SetTimeout(o.timeout); err != nil { return err } var res []error if o.SearchRequestAreaRadius != nil { // query param searchRequest[area][radius] var qrSearchRequestAreaRadius int64 if o.SearchRequestAreaRadius != nil { qrSearchRequestAreaRadius = *o.SearchRequestAreaRadius } qSearchRequestAreaRadius := swag.FormatInt64(qrSearchRequestAreaRadius) if qSearchRequestAreaRadius != "" { if err := r.SetQueryParam("searchRequest[area][radius]", qSearchRequestAreaRadius); err != nil { return err } } } if o.SearchRequestAreaSearchTerm != nil { // query param searchRequest[area][searchTerm] var qrSearchRequestAreaSearchTerm string if o.SearchRequestAreaSearchTerm != nil { qrSearchRequestAreaSearchTerm = *o.SearchRequestAreaSearchTerm } qSearchRequestAreaSearchTerm := qrSearchRequestAreaSearchTerm if qSearchRequestAreaSearchTerm != "" { if err := r.SetQueryParam("searchRequest[area][searchTerm]", qSearchRequestAreaSearchTerm); err != nil { return err } } } if o.SearchRequestCurrentPage != nil { // query param searchRequest[currentPage] var qrSearchRequestCurrentPage int64 if o.SearchRequestCurrentPage != nil { qrSearchRequestCurrentPage = *o.SearchRequestCurrentPage } qSearchRequestCurrentPage := 
swag.FormatInt64(qrSearchRequestCurrentPage) if qSearchRequestCurrentPage != "" { if err := r.SetQueryParam("searchRequest[currentPage]", qSearchRequestCurrentPage); err != nil { return err } } } if o.SearchRequestExtensionAttributesProductsInfo0Sku != nil { // query param searchRequest[extensionAttributes][productsInfo][0][sku] var qrSearchRequestExtensionAttributesProductsInfo0Sku string if o.SearchRequestExtensionAttributesProductsInfo0Sku != nil { qrSearchRequestExtensionAttributesProductsInfo0Sku = *o.SearchRequestExtensionAttributesProductsInfo0Sku } qSearchRequestExtensionAttributesProductsInfo0Sku := qrSearchRequestExtensionAttributesProductsInfo0Sku if qSearchRequestExtensionAttributesProductsInfo0Sku != "" { if err := r.SetQueryParam("searchRequest[extensionAttributes][productsInfo][0][sku]", qSearchRequestExtensionAttributesProductsInfo0Sku); err != nil { return err } } } if o.SearchRequestFiltersCityConditionType != nil { // query param searchRequest[filters][city][conditionType] var qrSearchRequestFiltersCityConditionType string if o.SearchRequestFiltersCityConditionType != nil { qrSearchRequestFiltersCityConditionType = *o.SearchRequestFiltersCityConditionType } qSearchRequestFiltersCityConditionType := qrSearchRequestFiltersCityConditionType if qSearchRequestFiltersCityConditionType != "" { if err := r.SetQueryParam("searchRequest[filters][city][conditionType]", qSearchRequestFiltersCityConditionType); err != nil { return err } } } if o.SearchRequestFiltersCityValue != nil { // query param searchRequest[filters][city][value] var qrSearchRequestFiltersCityValue string if o.SearchRequestFiltersCityValue != nil { qrSearchRequestFiltersCityValue = *o.SearchRequestFiltersCityValue } qSearchRequestFiltersCityValue := qrSearchRequestFiltersCityValue if qSearchRequestFiltersCityValue != "" { if err := r.SetQueryParam("searchRequest[filters][city][value]", qSearchRequestFiltersCityValue); err != nil { return err } } } if o.SearchRequestFiltersCountryConditionType != nil { // query param searchRequest[filters][country][conditionType] var qrSearchRequestFiltersCountryConditionType string if o.SearchRequestFiltersCountryConditionType != nil { qrSearchRequestFiltersCountryConditionType = *o.SearchRequestFiltersCountryConditionType } qSearchRequestFiltersCountryConditionType := qrSearchRequestFiltersCountryConditionType if qSearchRequestFiltersCountryConditionType != "" { if err := r.SetQueryParam("searchRequest[filters][country][conditionType]", qSearchRequestFiltersCountryConditionType); err != nil { return err } } } if o.SearchRequestFiltersCountryValue != nil { // query param searchRequest[filters][country][value] var qrSearchRequestFiltersCountryValue string if o.SearchRequestFiltersCountryValue != nil { qrSearchRequestFiltersCountryValue = *o.SearchRequestFiltersCountryValue } qSearchRequestFiltersCountryValue := qrSearchRequestFiltersCountryValue if qSearchRequestFiltersCountryValue != "" { if err := r.SetQueryParam("searchRequest[filters][country][value]", qSearchRequestFiltersCountryValue); err != nil { return err } } } if o.SearchRequestFiltersNameConditionType != nil { // query param searchRequest[filters][name][conditionType] var qrSearchRequestFiltersNameConditionType string if o.SearchRequestFiltersNameConditionType != nil { qrSearchRequestFiltersNameConditionType = *o.SearchRequestFiltersNameConditionType } qSearchRequestFiltersNameConditionType := qrSearchRequestFiltersNameConditionType if qSearchRequestFiltersNameConditionType != "" { if err := 
r.SetQueryParam("searchRequest[filters][name][conditionType]", qSearchRequestFiltersNameConditionType); err != nil { return err } } } if o.SearchRequestFiltersNameValue != nil { // query param searchRequest[filters][name][value] var qrSearchRequestFiltersNameValue string if o.SearchRequestFiltersNameValue != nil { qrSearchRequestFiltersNameValue = *o.SearchRequestFiltersNameValue } qSearchRequestFiltersNameValue := qrSearchRequestFiltersNameValue if qSearchRequestFiltersNameValue != "" { if err := r.SetQueryParam("searchRequest[filters][name][value]", qSearchRequestFiltersNameValue); err != nil { return err } } } if o.SearchRequestFiltersPickupLocationCodeConditionType != nil { // query param searchRequest[filters][pickupLocationCode][conditionType] var qrSearchRequestFiltersPickupLocationCodeConditionType string if o.SearchRequestFiltersPickupLocationCodeConditionType != nil { qrSearchRequestFiltersPickupLocationCodeConditionType = *o.SearchRequestFiltersPickupLocationCodeConditionType } qSearchRequestFiltersPickupLocationCodeConditionType := qrSearchRequestFiltersPickupLocationCodeConditionType if qSearchRequestFiltersPickupLocationCodeConditionType != "" { if err := r.SetQueryParam("searchRequest[filters][pickupLocationCode][conditionType]", qSearchRequestFiltersPickupLocationCodeConditionType); err != nil { return err } } } if o.SearchRequestFiltersPickupLocationCodeValue != nil { // query param searchRequest[filters][pickupLocationCode][value] var qrSearchRequestFiltersPickupLocationCodeValue string if o.SearchRequestFiltersPickupLocationCodeValue != nil { qrSearchRequestFiltersPickupLocationCodeValue = *o.SearchRequestFiltersPickupLocationCodeValue } qSearchRequestFiltersPickupLocationCodeValue := qrSearchRequestFiltersPickupLocationCodeValue if qSearchRequestFiltersPickupLocationCodeValue != "" { if err := r.SetQueryParam("searchRequest[filters][pickupLocationCode][value]", qSearchRequestFiltersPickupLocationCodeValue); err != nil { return err } } } if o.SearchRequestFiltersPostcodeConditionType != nil { // query param searchRequest[filters][postcode][conditionType] var qrSearchRequestFiltersPostcodeConditionType string if o.SearchRequestFiltersPostcodeConditionType != nil { qrSearchRequestFiltersPostcodeConditionType = *o.SearchRequestFiltersPostcodeConditionType } qSearchRequestFiltersPostcodeConditionType := qrSearchRequestFiltersPostcodeConditionType if qSearchRequestFiltersPostcodeConditionType != "" { if err := r.SetQueryParam("searchRequest[filters][postcode][conditionType]", qSearchRequestFiltersPostcodeConditionType); err != nil { return err } } } if o.SearchRequestFiltersPostcodeValue != nil { // query param searchRequest[filters][postcode][value] var qrSearchRequestFiltersPostcodeValue string if o.SearchRequestFiltersPostcodeValue != nil { qrSearchRequestFiltersPostcodeValue = *o.SearchRequestFiltersPostcodeValue } qSearchRequestFiltersPostcodeValue := qrSearchRequestFiltersPostcodeValue if qSearchRequestFiltersPostcodeValue != "" { if err := r.SetQueryParam("searchRequest[filters][postcode][value]", qSearchRequestFiltersPostcodeValue); err != nil { return err } } } if o.SearchRequestFiltersRegionIDConditionType != nil { // query param searchRequest[filters][regionId][conditionType] var qrSearchRequestFiltersRegionIDConditionType string if o.SearchRequestFiltersRegionIDConditionType != nil { qrSearchRequestFiltersRegionIDConditionType = *o.SearchRequestFiltersRegionIDConditionType } qSearchRequestFiltersRegionIDConditionType := qrSearchRequestFiltersRegionIDConditionType 
if qSearchRequestFiltersRegionIDConditionType != "" { if err := r.SetQueryParam("searchRequest[filters][regionId][conditionType]", qSearchRequestFiltersRegionIDConditionType); err != nil { return err } } } if o.SearchRequestFiltersRegionIDValue != nil { // query param searchRequest[filters][regionId][value] var qrSearchRequestFiltersRegionIDValue string if o.SearchRequestFiltersRegionIDValue != nil { qrSearchRequestFiltersRegionIDValue = *o.SearchRequestFiltersRegionIDValue } qSearchRequestFiltersRegionIDValue := qrSearchRequestFiltersRegionIDValue if qSearchRequestFiltersRegionIDValue != "" { if err := r.SetQueryParam("searchRequest[filters][regionId][value]", qSearchRequestFiltersRegionIDValue); err != nil { return err } } } if o.SearchRequestFiltersRegionConditionType != nil { // query param searchRequest[filters][region][conditionType] var qrSearchRequestFiltersRegionConditionType string if o.SearchRequestFiltersRegionConditionType != nil { qrSearchRequestFiltersRegionConditionType = *o.SearchRequestFiltersRegionConditionType } qSearchRequestFiltersRegionConditionType := qrSearchRequestFiltersRegionConditionType if qSearchRequestFiltersRegionConditionType != "" { if err := r.SetQueryParam("searchRequest[filters][region][conditionType]", qSearchRequestFiltersRegionConditionType); err != nil { return err } } } if o.SearchRequestFiltersRegionValue != nil { // query param searchRequest[filters][region][value] var qrSearchRequestFiltersRegionValue string if o.SearchRequestFiltersRegionValue != nil { qrSearchRequestFiltersRegionValue = *o.SearchRequestFiltersRegionValue } qSearchRequestFiltersRegionValue := qrSearchRequestFiltersRegionValue if qSearchRequestFiltersRegionValue != "" { if err := r.SetQueryParam("searchRequest[filters][region][value]", qSearchRequestFiltersRegionValue); err != nil { return err } } } if o.SearchRequestFiltersStreetConditionType != nil { // query param searchRequest[filters][street][conditionType] var qrSearchRequestFiltersStreetConditionType string if o.SearchRequestFiltersStreetConditionType != nil { qrSearchRequestFiltersStreetConditionType = *o.SearchRequestFiltersStreetConditionType } qSearchRequestFiltersStreetConditionType := qrSearchRequestFiltersStreetConditionType if qSearchRequestFiltersStreetConditionType != "" { if err := r.SetQueryParam("searchRequest[filters][street][conditionType]", qSearchRequestFiltersStreetConditionType); err != nil { return err } } } if o.SearchRequestFiltersStreetValue != nil { // query param searchRequest[filters][street][value] var qrSearchRequestFiltersStreetValue string if o.SearchRequestFiltersStreetValue != nil { qrSearchRequestFiltersStreetValue = *o.SearchRequestFiltersStreetValue } qSearchRequestFiltersStreetValue := qrSearchRequestFiltersStreetValue if qSearchRequestFiltersStreetValue != "" { if err := r.SetQueryParam("searchRequest[filters][street][value]", qSearchRequestFiltersStreetValue); err != nil { return err } } } if o.SearchRequestPageSize != nil { // query param searchRequest[pageSize] var qrSearchRequestPageSize int64 if o.SearchRequestPageSize != nil { qrSearchRequestPageSize = *o.SearchRequestPageSize } qSearchRequestPageSize := swag.FormatInt64(qrSearchRequestPageSize) if qSearchRequestPageSize != "" { if err := r.SetQueryParam("searchRequest[pageSize]", qSearchRequestPageSize); err != nil { return err } } } if o.SearchRequestScopeCode != nil { // query param searchRequest[scopeCode] var qrSearchRequestScopeCode string if o.SearchRequestScopeCode != nil { qrSearchRequestScopeCode = 
*o.SearchRequestScopeCode } qSearchRequestScopeCode := qrSearchRequestScopeCode if qSearchRequestScopeCode != "" { if err := r.SetQueryParam("searchRequest[scopeCode]", qSearchRequestScopeCode); err != nil { return err } } } if o.SearchRequestScopeType != nil { // query param searchRequest[scopeType] var qrSearchRequestScopeType string if o.SearchRequestScopeType != nil { qrSearchRequestScopeType = *o.SearchRequestScopeType } qSearchRequestScopeType := qrSearchRequestScopeType if qSearchRequestScopeType != "" { if err := r.SetQueryParam("searchRequest[scopeType]", qSearchRequestScopeType); err != nil { return err } } } if o.SearchRequestSort0Direction != nil { // query param searchRequest[sort][0][direction] var qrSearchRequestSort0Direction string if o.SearchRequestSort0Direction != nil { qrSearchRequestSort0Direction = *o.SearchRequestSort0Direction } qSearchRequestSort0Direction := qrSearchRequestSort0Direction if qSearchRequestSort0Direction != "" { if err := r.SetQueryParam("searchRequest[sort][0][direction]", qSearchRequestSort0Direction); err != nil { return err } } } if o.SearchRequestSort0Field != nil { // query param searchRequest[sort][0][field] var qrSearchRequestSort0Field string if o.SearchRequestSort0Field != nil { qrSearchRequestSort0Field = *o.SearchRequestSort0Field } qSearchRequestSort0Field := qrSearchRequestSort0Field if qSearchRequestSort0Field != "" { if err := r.SetQueryParam("searchRequest[sort][0][field]", qSearchRequestSort0Field); err != nil { return err } } } if len(res) > 0 { return errors.CompositeValidationError(res...) } return nil }
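For readers who want to see how these generated setters fit together: the With... methods all return the params object, so they chain. Below is a minimal usage sketch, with the caveat that the New...Params constructor named here is the one go-swagger conventionally generates alongside these methods and is assumed rather than shown above; swag.String and swag.Int64 are the go-openapi pointer helpers this file already imports.

// Sketch only: query pickup locations in Austria, ten per page, sorted by name.
params := NewInventoryInStorePickupAPIGetPickupLocationsV1ExecuteGetParams().
	WithSearchRequestFiltersCountryConditionType(swag.String("eq")).
	WithSearchRequestFiltersCountryValue(swag.String("AT")).
	WithSearchRequestPageSize(swag.Int64(10)).
	WithSearchRequestSort0Field(swag.String("name")).
	WithSearchRequestSort0Direction(swag.String("ASC"))
// When the operation executes, WriteToRequest (above) serializes every non-nil
// field, producing query parameters such as
//   searchRequest[filters][country][value]=AT&searchRequest[pageSize]=10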
/*
 * USB PhidgetServo driver 1.0
 *
 * Copyright (C) 2004, 2006 Sean Young <[email protected]>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This is a driver for the USB PhidgetServo version 2.0 and 3.0 servo
 * controllers available at: http://www.phidgets.com/
 *
 * Note that the driver takes input as: degrees.minutes
 *
 * CAUTION: Generally you should use 0 < degrees < 180 as anything else
 * is probably beyond the range of your servo and may damage it.
 */

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/usb.h>

#include "phidget.h"

#define DRIVER_AUTHOR "Sean Young <[email protected]>"
#define DRIVER_DESC "USB PhidgetServo Driver"

#define VENDOR_ID_GLAB				0x06c2
#define DEVICE_ID_GLAB_PHIDGETSERVO_QUAD	0x0038
#define DEVICE_ID_GLAB_PHIDGETSERVO_UNI		0x0039

#define VENDOR_ID_WISEGROUP			0x0925
#define VENDOR_ID_WISEGROUP_PHIDGETSERVO_QUAD	0x8101
#define VENDOR_ID_WISEGROUP_PHIDGETSERVO_UNI	0x8104

#define SERVO_VERSION_30	0x01
#define SERVO_COUNT_QUAD	0x02

static struct usb_device_id id_table[] = {
	{
		USB_DEVICE(VENDOR_ID_GLAB, DEVICE_ID_GLAB_PHIDGETSERVO_QUAD),
		.driver_info = SERVO_VERSION_30 | SERVO_COUNT_QUAD
	},
	{
		USB_DEVICE(VENDOR_ID_GLAB, DEVICE_ID_GLAB_PHIDGETSERVO_UNI),
		.driver_info = SERVO_VERSION_30
	},
	{
		USB_DEVICE(VENDOR_ID_WISEGROUP,
			   VENDOR_ID_WISEGROUP_PHIDGETSERVO_QUAD),
		.driver_info = SERVO_COUNT_QUAD
	},
	{
		USB_DEVICE(VENDOR_ID_WISEGROUP,
			   VENDOR_ID_WISEGROUP_PHIDGETSERVO_UNI),
		.driver_info = 0
	},
	{}
};
MODULE_DEVICE_TABLE(usb, id_table);

static int unsigned long device_no;

struct phidget_servo {
	struct usb_device *udev;
	struct device *dev;
	int dev_no;
	ulong type;
	int pulse[4];
	int degrees[4];
	int minutes[4];
};

static int
change_position_v30(struct phidget_servo *servo, int servo_no, int degrees,
		    int minutes)
{
	int retval;
	unsigned char *buffer;

	if (degrees < -23 || degrees > 362)
		return -EINVAL;

	buffer = kmalloc(6, GFP_KERNEL);
	if (!buffer) {
		dev_err(&servo->udev->dev, "%s - out of memory\n",
			__FUNCTION__);
		return -ENOMEM;
	}

	/*
	 * pulse = 0 - 4095
	 * angle = 0 - 180 degrees
	 *
	 * pulse = angle * 10.6 + 243.8
	 */
	servo->pulse[servo_no] = ((degrees*60 + minutes)*106 + 2438*60)/600;
	servo->degrees[servo_no] = degrees;
	servo->minutes[servo_no] = minutes;

	/*
	 * The PhidgetServo v3.0 is controlled by sending 6 bytes,
	 * 4 * 12 bits for each servo.
	 *
	 * low = lower 8 bits pulse
	 * high = higher 4 bits pulse
	 *
	 * offset     bits
	 * +---+-----------------+
	 * | 0 |      low 0      |
	 * +---+--------+--------+
	 * | 1 | high 1 | high 0 |
	 * +---+--------+--------+
	 * | 2 |      low 1      |
	 * +---+-----------------+
	 * | 3 |      low 2      |
	 * +---+--------+--------+
	 * | 4 | high 3 | high 2 |
	 * +---+--------+--------+
	 * | 5 |      low 3      |
	 * +---+-----------------+
	 */
	buffer[0] = servo->pulse[0] & 0xff;
	buffer[1] = (servo->pulse[0] >> 8 & 0x0f)
		  | (servo->pulse[1] >> 4 & 0xf0);
	buffer[2] = servo->pulse[1] & 0xff;
	buffer[3] = servo->pulse[2] & 0xff;
	buffer[4] = (servo->pulse[2] >> 8 & 0x0f)
		  | (servo->pulse[3] >> 4 & 0xf0);
	buffer[5] = servo->pulse[3] & 0xff;

	dev_dbg(&servo->udev->dev, "data: %02x %02x %02x %02x %02x %02x\n",
		buffer[0], buffer[1], buffer[2],
		buffer[3], buffer[4], buffer[5]);

	retval = usb_control_msg(servo->udev,
				 usb_sndctrlpipe(servo->udev, 0),
				 0x09, 0x21, 0x0200, 0x0000, buffer, 6, 2000);

	kfree(buffer);

	return retval;
}

static int
change_position_v20(struct phidget_servo *servo, int servo_no, int degrees,
		    int minutes)
{
	int retval;
	unsigned char *buffer;

	if (degrees < -23 || degrees > 278)
		return -EINVAL;

	buffer = kmalloc(2, GFP_KERNEL);
	if (!buffer) {
		dev_err(&servo->udev->dev, "%s - out of memory\n",
			__FUNCTION__);
		return -ENOMEM;
	}

	/*
	 * angle = 0 - 180 degrees
	 * pulse = angle + 23
	 */
	servo->pulse[servo_no] = degrees + 23;
	servo->degrees[servo_no] = degrees;
	servo->minutes[servo_no] = 0;

	/*
	 * The PhidgetServo v2.0 is controlled by sending two bytes. The
	 * first byte is the servo number xor'ed with 2:
	 *
	 * servo 0 = 2
	 * servo 1 = 3
	 * servo 2 = 0
	 * servo 3 = 1
	 *
	 * The second byte is the position.
	 */
	buffer[0] = servo_no ^ 2;
	buffer[1] = servo->pulse[servo_no];

	dev_dbg(&servo->udev->dev, "data: %02x %02x\n", buffer[0], buffer[1]);

	retval = usb_control_msg(servo->udev,
				 usb_sndctrlpipe(servo->udev, 0),
				 0x09, 0x21, 0x0200, 0x0000, buffer, 2, 2000);

	kfree(buffer);

	return retval;
}

#define show_set(value)	\
static ssize_t set_servo##value (struct device *dev,			\
					struct device_attribute *attr,	\
					const char *buf, size_t count)	\
{									\
	int degrees, minutes, retval;					\
	struct phidget_servo *servo = dev_get_drvdata(dev);		\
									\
	minutes = 0;							\
	/* must at least convert degrees */				\
	if (sscanf(buf, "%d.%d", &degrees, &minutes) < 1) {		\
		return -EINVAL;						\
	}								\
									\
	if (minutes < 0 || minutes > 59)				\
		return -EINVAL;						\
									\
	if (servo->type & SERVO_VERSION_30)				\
		retval = change_position_v30(servo, value, degrees,	\
					     minutes);			\
	else								\
		retval = change_position_v20(servo, value, degrees,	\
					     minutes);			\
									\
	return retval < 0 ? retval : count;				\
}									\
									\
static ssize_t show_servo##value (struct device *dev,			\
					struct device_attribute *attr,	\
					char *buf)			\
{									\
	struct phidget_servo *servo = dev_get_drvdata(dev);		\
									\
	return sprintf(buf, "%d.%02d\n", servo->degrees[value],	\
		       servo->minutes[value]);				\
}

#define servo_attr(value)						\
	__ATTR(servo##value, S_IWUGO | S_IRUGO,				\
	       show_servo##value, set_servo##value)

show_set(0);
show_set(1);
show_set(2);
show_set(3);

static struct device_attribute dev_attrs[] = {
	servo_attr(0), servo_attr(1), servo_attr(2), servo_attr(3)
};

static int
servo_probe(struct usb_interface *interface, const struct usb_device_id *id)
{
	struct usb_device *udev = interface_to_usbdev(interface);
	struct phidget_servo *dev;
	int bit, value, rc;
	int servo_count, i;

	dev = kzalloc(sizeof (struct phidget_servo), GFP_KERNEL);
	if (dev == NULL) {
		dev_err(&interface->dev, "%s - out of memory\n",
			__FUNCTION__);
		rc = -ENOMEM;
		goto out;
	}

	dev->udev = usb_get_dev(udev);
	dev->type = id->driver_info;
	dev->dev_no = -1;
	usb_set_intfdata(interface, dev);

	do {
		bit = find_first_zero_bit(&device_no, sizeof(device_no));
		value = test_and_set_bit(bit, &device_no);
	} while (value);
	dev->dev_no = bit;

	dev->dev = device_create(phidget_class, &dev->udev->dev, 0,
				 "servo%d", dev->dev_no);
	if (IS_ERR(dev->dev)) {
		rc = PTR_ERR(dev->dev);
		dev->dev = NULL;
		goto out;
	}
	dev_set_drvdata(dev->dev, dev);

	servo_count = dev->type & SERVO_COUNT_QUAD ? 4 : 1;

	for (i = 0; i < servo_count; i++) {
		rc = device_create_file(dev->dev, &dev_attrs[i]);
		if (rc)
			goto out2;
	}

	dev_info(&interface->dev, "USB %d-Motor PhidgetServo v%d.0 attached\n",
		 servo_count, dev->type & SERVO_VERSION_30 ? 3 : 2);

	if (!(dev->type & SERVO_VERSION_30))
		dev_info(&interface->dev,
			 "WARNING: v2.0 not tested! Please report if it works.\n");

	return 0;

out2:
	while (i-- > 0)
		device_remove_file(dev->dev, &dev_attrs[i]);
out:
	if (dev) {
		if (dev->dev)
			device_unregister(dev->dev);
		if (dev->dev_no >= 0)
			clear_bit(dev->dev_no, &device_no);
		kfree(dev);
	}

	return rc;
}

static void
servo_disconnect(struct usb_interface *interface)
{
	struct phidget_servo *dev;
	int servo_count, i;

	dev = usb_get_intfdata(interface);
	usb_set_intfdata(interface, NULL);

	if (!dev)
		return;

	servo_count = dev->type & SERVO_COUNT_QUAD ? 4 : 1;

	for (i = 0; i < servo_count; i++)
		device_remove_file(dev->dev, &dev_attrs[i]);

	device_unregister(dev->dev);
	usb_put_dev(dev->udev);

	dev_info(&interface->dev, "USB %d-Motor PhidgetServo v%d.0 detached\n",
		 servo_count, dev->type & SERVO_VERSION_30 ? 3 : 2);

	clear_bit(dev->dev_no, &device_no);
	kfree(dev);
}

static struct usb_driver servo_driver = {
	.name = "phidgetservo",
	.probe = servo_probe,
	.disconnect = servo_disconnect,
	.id_table = id_table
};

static int __init phidget_servo_init(void)
{
	int retval;

	retval = usb_register(&servo_driver);
	if (retval)
		err("usb_register failed. Error number %d", retval);

	return retval;
}

static void __exit phidget_servo_exit(void)
{
	usb_deregister(&servo_driver);
}

module_init(phidget_servo_init);
module_exit(phidget_servo_exit);

MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_LICENSE("GPL");
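The 12-bit packing in change_position_v30 is the trickiest part of the driver, so here is the same byte layout re-expressed as a small standalone Go program. This is only an illustration of the scheme described in the driver's comment, not code from the driver. The pulse value 1203 used below is what the driver's integer formula ((90*60 + 30)*106 + 2438*60)/600 yields for 90 degrees 30 minutes, which matches the comment's pulse = angle * 10.6 + 243.8 (90.5 degrees gives 1203.1).

package main

import "fmt"

// packPulses packs four 12-bit pulse values into the 6-byte report that the
// PhidgetServo v3.0 expects, following the offset table in the driver comment.
func packPulses(pulse [4]int) [6]byte {
	var b [6]byte
	b[0] = byte(pulse[0])                                  // low 8 bits of servo 0
	b[1] = byte(pulse[0]>>8&0x0f) | byte(pulse[1]>>4&0xf0) // high 4 bits of servos 1 and 0
	b[2] = byte(pulse[1])                                  // low 8 bits of servo 1
	b[3] = byte(pulse[2])                                  // low 8 bits of servo 2
	b[4] = byte(pulse[2]>>8&0x0f) | byte(pulse[3]>>4&0xf0) // high 4 bits of servos 3 and 2
	b[5] = byte(pulse[3])                                  // low 8 bits of servo 3
	return b
}

func main() {
	b := packPulses([4]int{1203, 0, 0, 0}) // 1203 = 0x4b3
	fmt.Printf("% x\n", b[:])              // prints: b3 04 00 00 00 00
}

As a usage note, once the driver binds, each motor is exposed as a sysfs attribute (servo0 through servo3 on quad controllers) that accepts writes in the degrees.minutes form described in the header comment.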
// Private is an xmlstream.Transformer that excludes all top level <message/> elements from
// being forwarded to other Carbons-enabled resources, by adding a <private/> element
// and a <no-copy/> hint.
func Private(r xml.TokenReader) xml.TokenReader {
	return xmlstream.InsertFunc(
		func(start xml.StartElement, level uint64, w xmlstream.TokenWriter) error {
			if level == 1 && start.Name.Local == "message" &&
				(start.Name.Space == stanza.NSClient || start.Name.Space == stanza.NSServer) {
				_, err := xmlstream.Copy(w, xmlstream.MultiReader(
					xmlstream.Wrap(nil, xml.StartElement{
						Name: xml.Name{Space: NS, Local: "private"},
					}),
					xmlstream.Wrap(nil, xml.StartElement{
						Name: xml.Name{Space: "urn:xmpp:hints", Local: "no-copy"},
					}),
				))
				return err
			}
			return nil
		},
	)(r)
}
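A minimal sketch of applying this transformer, assuming it runs in the package defined above with encoding/xml, strings, and os imported; the sample stanza and recipient address are made up for illustration:

// exampleUsage pushes one outgoing message through Private and re-encodes it.
func exampleUsage() error {
	d := xml.NewDecoder(strings.NewReader(
		`<message xmlns="jabber:client" to="[email protected]"><body>hi</body></message>`,
	))
	e := xml.NewEncoder(os.Stdout)
	if _, err := xmlstream.Copy(e, Private(d)); err != nil {
		return err
	}
	// The re-encoded message should now carry <private/> (in the carbons
	// namespace NS refers to) and <no-copy xmlns="urn:xmpp:hints"/> as extra
	// children, telling carbons-enabled servers not to copy it to the user's
	// other online resources.
	return e.Flush()
}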
/** * Utilities class containing methods for restricting {@link VariantContext} and {@link GenotypesContext} objects to a * reduced set of alleles, as well as for choosing the best set of alleles to keep and for cleaning up annotations and * genotypes after subsetting. * * @author David Benjamin &lt;[email protected]&gt; */ public final class AlleleSubsettingUtils { private AlleleSubsettingUtils() {} // prevent instantiation private static final int PL_INDEX_OF_HOM_REF = 0; public static final int NUM_OF_STRANDS = 2; // forward and reverse strands private static final GenotypeLikelihoodCalculators GL_CALCS = new GenotypeLikelihoodCalculators(); /** * Create the new GenotypesContext with the subsetted PLs and ADs * * Will reorder subsetted alleles according to the ordering provided by the list allelesToKeep * * @param originalGs the original GenotypesContext * @param originalAlleles the original alleles * @param allelesToKeep the subset of alleles to use with the new Genotypes * @param assignmentMethod assignment strategy for the (subsetted) PLs * @param depth the original variant DP or 0 if there was no DP * @return a new non-null GenotypesContext */ public static GenotypesContext subsetAlleles(final GenotypesContext originalGs, final int defaultPloidy, final List<Allele> originalAlleles, final List<Allele> allelesToKeep, final GenotypeAssignmentMethod assignmentMethod, final int depth) { Utils.nonNull(originalGs, "original GenotypesContext must not be null"); Utils.nonNull(allelesToKeep, "allelesToKeep is null"); Utils.nonEmpty(allelesToKeep, "must keep at least one allele"); Utils.validateArg(allelesToKeep.get(0).isReference(), "First allele must be the reference allele"); final GenotypesContext newGTs = GenotypesContext.create(originalGs.size()); final Permutation<Allele> allelePermutation = new IndexedAlleleList<>(originalAlleles).permutation(new IndexedAlleleList<>(allelesToKeep)); final Map<Integer, int[]> subsettedLikelihoodIndicesByPloidy = new TreeMap<>(); for (final Genotype g : originalGs) { final int ploidy = g.getPloidy() > 0 ? g.getPloidy() : defaultPloidy; if (!subsettedLikelihoodIndicesByPloidy.containsKey(ploidy)) { subsettedLikelihoodIndicesByPloidy.put(ploidy, subsettedPLIndices(ploidy, originalAlleles, allelesToKeep)); } final int[] subsettedLikelihoodIndices = subsettedLikelihoodIndicesByPloidy.get(ploidy); final int expectedNumLikelihoods = GenotypeLikelihoods.numLikelihoods(originalAlleles.size(), ploidy); // create the new likelihoods array from the alleles we are allowed to use double[] newLikelihoods = null; double newLog10GQ = -1; if (g.hasLikelihoods()) { final double[] originalLikelihoods = g.getLikelihoods().getAsVector(); newLikelihoods = originalLikelihoods.length == expectedNumLikelihoods ? 
MathUtils.scaleLogSpaceArrayForNumericalStability(Arrays.stream(subsettedLikelihoodIndices) .mapToDouble(idx -> originalLikelihoods[idx]).toArray()) : null; if (newLikelihoods != null) { final int PLindex = MathUtils.maxElementIndex(newLikelihoods); newLog10GQ = GenotypeLikelihoods.getGQLog10FromLikelihoods(PLindex, newLikelihoods); } } final boolean useNewLikelihoods = newLikelihoods != null && (depth != 0 || GATKVariantContextUtils.isInformative(newLikelihoods)); final GenotypeBuilder gb = new GenotypeBuilder(g); if (useNewLikelihoods) { final Map<String, Object> attributes = new HashMap<>(g.getExtendedAttributes()); gb.PL(newLikelihoods).log10PError(newLog10GQ); attributes.remove(GATKVCFConstants.PHRED_SCALED_POSTERIORS_KEY); gb.noAttributes().attributes(attributes); } else { gb.noPL().noGQ(); } GATKVariantContextUtils.makeGenotypeCall(g.getPloidy(), gb, assignmentMethod, newLikelihoods, allelesToKeep, g.getAlleles()); // restrict SAC to the new allele subset if (g.hasExtendedAttribute(GATKVCFConstants.STRAND_COUNT_BY_SAMPLE_KEY)) { final int[] newSACs = subsetSACAlleles(g, originalAlleles, allelesToKeep); gb.attribute(GATKVCFConstants.STRAND_COUNT_BY_SAMPLE_KEY, newSACs); } // restrict AD to the new allele subset if(g.hasAD()) { final int[] oldAD = g.getAD(); final int[] newAD = IntStream.range(0, allelesToKeep.size()).map(n -> oldAD[allelePermutation.fromIndex(n)]).toArray(); final int nonRefIndex = allelesToKeep.indexOf(Allele.NON_REF_ALLELE); if (nonRefIndex != -1 && nonRefIndex < newAD.length) { newAD[nonRefIndex] = 0; //we will "lose" coverage here, but otherwise merging NON_REF AD counts with other alleles "creates" reads } gb.AD(newAD); } newGTs.add(gb.make()); } return newGTs; } /** * Remove alternate alleles from a set of genotypes turning removed alleles to no-call and dropping other per-allele attributes * * @param outputHeader header for the final output VCF, used to validate annotation counts and types * @param originalGs genotypes with full set of alleles * @param allelesToKeep contains the reference allele and may contain the NON_REF * @param relevantIndices indices of allelesToKeep w.r.t. the original VC (including ref and possibly NON_REF) * @return */ public static GenotypesContext subsetSomaticAlleles(final VCFHeader outputHeader, final GenotypesContext originalGs, final List<Allele> allelesToKeep, final int[] relevantIndices) { final GenotypesContext newGTs = GenotypesContext.create(originalGs.size()); GenotypeBuilder gb; for (final Genotype g : originalGs) { gb = new GenotypeBuilder(g); gb.noAttributes(); List<Allele> keepGTAlleles = new ArrayList<>(g.getAlleles()); //keep the "ploidy", (i.e. 
number of different called alleles) the same, but no-call the ones we drop for (Allele a : keepGTAlleles) { if (!allelesToKeep.contains(a)) { keepGTAlleles.set(keepGTAlleles.indexOf(a), Allele.NO_CALL); } } gb.alleles(keepGTAlleles); gb.AD(generateAD(g.getAD(), relevantIndices)); Set<String> keys = g.getExtendedAttributes().keySet(); for (final String key : keys) { final VCFFormatHeaderLine headerLine = outputHeader.getFormatHeaderLine(key); gb.attribute(key, ReferenceConfidenceVariantContextMerger.generateAnnotationValueVector(headerLine.getCountType(), VariantContextGetters.attributeToList(g.getAnyAttribute(key)), relevantIndices)); } newGTs.add(gb.make()); } return newGTs; } /** * Add the VCF INFO field annotations for the used alleles when subsetting * * @param vc original variant context * @param builder variant context builder with subset of original variant context's alleles * @param keepOriginalChrCounts keep the original chromosome counts before subsetting * @return variant context builder with updated INFO field attribute values */ public static void addInfoFieldAnnotations(final VariantContext vc, final VariantContextBuilder builder, final boolean keepOriginalChrCounts) { Utils.nonNull(vc); Utils.nonNull(builder); Utils.nonNull(builder.getAlleles()); final List<Allele> alleles = builder.getAlleles(); if (alleles.size() < 2) throw new IllegalArgumentException("the variant context builder must contain at least 2 alleles"); // don't have to subset, the original vc has the same number and hence, the same alleles boolean keepOriginal = (vc.getAlleles().size() == alleles.size()); List<Integer> alleleIndices = builder.getAlleles().stream().map(vc::getAlleleIndex).collect(Collectors.toList()); if (keepOriginalChrCounts) { if (vc.hasAttribute(VCFConstants.ALLELE_COUNT_KEY)) builder.attribute(GATKVCFConstants.ORIGINAL_AC_KEY, keepOriginal ? vc.getAttribute(VCFConstants.ALLELE_COUNT_KEY) : alleleIndices.stream().filter(i -> i > 0).map(j -> vc.getAttributeAsList(VCFConstants.ALLELE_COUNT_KEY).get(j - 1)).collect(Collectors.toList()).get(0)); if (vc.hasAttribute(VCFConstants.ALLELE_FREQUENCY_KEY)) builder.attribute(GATKVCFConstants.ORIGINAL_AF_KEY, keepOriginal ? 
vc.getAttribute(VCFConstants.ALLELE_FREQUENCY_KEY) : alleleIndices.stream().filter(i -> i > 0).map(j -> vc.getAttributeAsList(VCFConstants.ALLELE_FREQUENCY_KEY).get(j - 1)).collect(Collectors.toList()).get(0)); if (vc.hasAttribute(VCFConstants.ALLELE_NUMBER_KEY)) { builder.attribute(GATKVCFConstants.ORIGINAL_AN_KEY, vc.getAttribute(VCFConstants.ALLELE_NUMBER_KEY)); } } VariantContextUtils.calculateChromosomeCounts(builder, true); } /** * From a given genotype, extract a given subset of alleles and return the new SACs * * @param g genotype to subset * @param originalAlleles the original alleles before subsetting * @param allelesToUse alleles to use in subset * @return the subsetted SACs */ private static int[] subsetSACAlleles(final Genotype g, final List<Allele> originalAlleles, final List<Allele> allelesToUse) { // Keep original SACs if using all of the alleles if ( originalAlleles.size() == allelesToUse.size() ) { return getSACs(g); } else { return makeNewSACs(g, originalAlleles, allelesToUse); } } /** * Make a new SAC array from the a subset of the genotype's original SAC * * @param g the genotype * @param originalAlleles the original alleles before subsetting * @param allelesToUse alleles to use in subset * @return subset of SACs from the original genotype, the original SACs if sacIndicesToUse is null */ private static int[] makeNewSACs(final Genotype g, final List<Allele> originalAlleles, final List<Allele> allelesToUse) { final int[] oldSACs = getSACs(g); final int[] newSACs = new int[NUM_OF_STRANDS * allelesToUse.size()]; int newIndex = 0; for (int alleleIndex = 0; alleleIndex < originalAlleles.size(); alleleIndex++) { if (allelesToUse.contains(originalAlleles.get(alleleIndex))) { newSACs[NUM_OF_STRANDS * newIndex] = oldSACs[NUM_OF_STRANDS * alleleIndex]; newSACs[NUM_OF_STRANDS * newIndex + 1] = oldSACs[NUM_OF_STRANDS * alleleIndex + 1]; newIndex++; } } return newSACs; } /** * Get the genotype SACs * * @param g the genotype * @return an arrays of SACs * @throws IllegalArgumentException if the genotype does not have an SAC attribute * @throws GATKException if the type of the SACs is unexpected */ private static int[] getSACs(final Genotype g) { if ( !g.hasExtendedAttribute(GATKVCFConstants.STRAND_COUNT_BY_SAMPLE_KEY) ) { throw new IllegalArgumentException("Genotype must have SAC"); } Class<?> clazz = g.getExtendedAttributes().get(GATKVCFConstants.STRAND_COUNT_BY_SAMPLE_KEY).getClass(); if ( clazz.equals(String.class) ) { final String SACsString = (String) g.getExtendedAttributes().get(GATKVCFConstants.STRAND_COUNT_BY_SAMPLE_KEY); String[] stringSACs = SACsString.split(","); final int[] intSACs = new int[stringSACs.length]; int i = 0; for (String sac : stringSACs) { intSACs[i++] = Integer.parseInt(sac); } return intSACs; } else if ( clazz.equals(int[].class) ) { return (int[]) g.getExtendedAttributes().get(GATKVCFConstants.STRAND_COUNT_BY_SAMPLE_KEY); } else { throw new GATKException("Unexpected SAC type"); } } /** * Returns the new set of alleles to use based on a likelihood score: alleles' scores are the sum of their counts in * sample genotypes, weighted by the confidence in the genotype calls. * * In the case of ties, the alleles will be chosen from lowest index to highest index. * * @param vc target variant context. * @param numAltAllelesToKeep number of alt alleles to keep. 
* @return the list of alleles to keep, including the reference and {@link Allele#NON_REF_ALLELE} if present * */ public static List<Allele> calculateMostLikelyAlleles(final VariantContext vc, final int defaultPloidy, final int numAltAllelesToKeep) { Utils.nonNull(vc, "vc is null"); Utils.validateArg(defaultPloidy > 0, () -> "default ploidy must be > 0 but defaultPloidy=" + defaultPloidy); Utils.validateArg(numAltAllelesToKeep > 0, () -> "numAltAllelesToKeep must be > 0, but numAltAllelesToKeep=" + numAltAllelesToKeep); final boolean hasSymbolicNonRef = vc.hasAllele(Allele.NON_REF_ALLELE); final int numberOfAllelesThatArentProperAlts = hasSymbolicNonRef ? 2 : 1; final int numberOfProperAltAlleles = vc.getNAlleles() - numberOfAllelesThatArentProperAlts; if (numAltAllelesToKeep >= numberOfProperAltAlleles) { return vc.getAlleles(); } final double[] likelihoodSums = calculateLikelihoodSums(vc, defaultPloidy); return filterToMaxNumberOfAltAllelesBasedOnScores(numAltAllelesToKeep, vc.getAlleles(), likelihoodSums); } /** * @param alleles a list of alleles including the reference and possible the NON_REF * @return a list of the best proper alt alleles based on the likelihood sums, keeping the reference allele and {@link Allele#NON_REF_ALLELE} * the list will include no more than {@code numAltAllelesToKeep + 2} alleles and will maintain the order of the original alleles in {@code vc} * */ public static List<Allele> filterToMaxNumberOfAltAllelesBasedOnScores(int numAltAllelesToKeep, List<Allele> alleles, double[] likelihoodSums) { final int nonRefAltAlleleIndex = alleles.indexOf(Allele.NON_REF_ALLELE); final int numAlleles = alleles.size(); final Set<Integer> properAltIndexesToKeep = IntStream.range(1, numAlleles).filter(n -> n != nonRefAltAlleleIndex).boxed() .sorted(Comparator.comparingDouble((Integer n) -> likelihoodSums[n]).reversed()) .limit(numAltAllelesToKeep) .collect(Collectors.toSet()); return IntStream.range(0, numAlleles) .filter( i -> i == 0 || i == nonRefAltAlleleIndex || properAltIndexesToKeep.contains(i) ) .mapToObj(alleles::get) .collect(Collectors.toList()); } /** the likelihood sum for an alt allele is the sum over all samples whose likeliest genotype contains that allele of * the GL difference between the most likely genotype and the hom ref genotype * * Since GLs are log likelihoods, this quantity is thus * SUM_{samples whose likeliest genotype contains this alt allele} log(likelihood alt / likelihood hom ref) */ @VisibleForTesting static double[] calculateLikelihoodSums(final VariantContext vc, final int defaultPloidy) { final double[] likelihoodSums = new double[vc.getNAlleles()]; for ( final Genotype genotype : vc.getGenotypes().iterateInSampleNameOrder() ) { final GenotypeLikelihoods gls = genotype.getLikelihoods(); if (gls == null) { continue; } final double[] glsVector = gls.getAsVector(); final int indexOfMostLikelyGenotype = MathUtils.maxElementIndex(glsVector); final double GLDiffBetweenRefAndBest = glsVector[indexOfMostLikelyGenotype] - glsVector[PL_INDEX_OF_HOM_REF]; final int ploidy = genotype.getPloidy() > 0 ? 
genotype.getPloidy() : defaultPloidy; final int[] alleleCounts = new GenotypeLikelihoodCalculators() .getInstance(ploidy, vc.getNAlleles()).genotypeAlleleCountsAt(indexOfMostLikelyGenotype) .alleleCountsByIndex(vc.getNAlleles() - 1); for (int allele = 1; allele < alleleCounts.length; allele++) { if (alleleCounts[allele] > 0) { likelihoodSums[allele] += GLDiffBetweenRefAndBest; } } } return likelihoodSums; } /** * Given a list of original alleles and a subset of new alleles to retain, find the array of old PL indices that correspond * to new PL indices i.e. result[7] = old PL index of genotype containing same alleles as the new genotype with PL index 7. * * This method is written in terms f indices rather than subsetting PLs directly in order to produce output that can be * recycled from sample to sample, provided that the ploidy is the same. * * @param ploidy Ploidy (number of chromosomes describing PL's) * @param originalAlleles List of original alleles * @param newAlleles New alleles -- must be a subset of {@code originalAlleles} * @return old PL indices of new genotypes */ public static int[] subsettedPLIndices(final int ploidy, final List<Allele> originalAlleles, final List<Allele> newAlleles) { final int[] result = new int[GenotypeLikelihoods.numLikelihoods(newAlleles.size(), ploidy)]; final Permutation<Allele> allelePermutation = new IndexedAlleleList<>(originalAlleles).permutation(new IndexedAlleleList<>(newAlleles)); final GenotypeLikelihoodCalculator glCalc = GL_CALCS.getInstance(ploidy, originalAlleles.size()); for (int oldPLIndex = 0; oldPLIndex < glCalc.genotypeCount(); oldPLIndex++) { final GenotypeAlleleCounts oldAlleleCounts = glCalc.genotypeAlleleCountsAt(oldPLIndex); final boolean containsOnlyNewAlleles = IntStream.range(0, oldAlleleCounts.distinctAlleleCount()) .map(oldAlleleCounts::alleleIndexAt).allMatch(allelePermutation::isKept); if (containsOnlyNewAlleles) { // make an array in the format described in {@link GenotypeAlleleCounts}: // [(new) index of first allele, count of first allele, (new) index of second allele, count of second allele. . .] final int[] newAlleleCounts = IntStream.range(0, newAlleles.size()).flatMap(newAlleleIndex -> IntStream.of(newAlleleIndex, oldAlleleCounts.alleleCountFor(allelePermutation.fromIndex(newAlleleIndex)))).toArray(); final int newPLIndex = glCalc.alleleCountsToIndex(newAlleleCounts); result[newPLIndex] = oldPLIndex; } } return result; } /** * Determines the allele mapping from myAlleles to the targetAlleles, substituting the generic "<NON_REF>" as appropriate. * If the remappedAlleles set does not contain "<NON_REF>" as an allele, it throws an exception. 
* * @param remappedAlleles the list of alleles to evaluate * @param targetAlleles the target list of alleles * @param position position to output error info * @param g genotype from which targetAlleles are derived * @return non-null array of ints representing indexes */ public static int[] getIndexesOfRelevantAllelesForGVCF(final List<Allele> remappedAlleles, final List<Allele> targetAlleles, final int position, final Genotype g, final boolean doSomaticMerge) { Utils.nonEmpty(remappedAlleles); Utils.nonEmpty(targetAlleles); if ( !remappedAlleles.contains(Allele.NON_REF_ALLELE) ) { throw new UserException("The list of input alleles must contain " + Allele.NON_REF_ALLELE + " as an allele but that is not the case at position " + position + "; please use the Haplotype Caller with gVCF output to generate appropriate records"); } final int indexOfNonRef = remappedAlleles.indexOf(Allele.NON_REF_ALLELE); final int[] indexMapping = new int[targetAlleles.size()]; // the reference likelihoods should always map to each other (even if the alleles don't) indexMapping[0] = 0; // create the index mapping, using the <NON-REF> allele whenever such a mapping doesn't exist for ( int i = 1; i < targetAlleles.size(); i++ ) { // if there's more than 1 spanning deletion (*) allele then we need to use the best one if (targetAlleles.get(i) == Allele.SPAN_DEL && !doSomaticMerge && g.hasPL()) { final int occurrences = Collections.frequency(remappedAlleles, Allele.SPAN_DEL); if (occurrences > 1) { final int indexOfBestDel = indexOfBestDel(remappedAlleles, g.getPL(), g.getPloidy()); indexMapping[i] = (indexOfBestDel == -1 ? indexOfNonRef : indexOfBestDel); continue; } } final int indexOfRemappedAllele = remappedAlleles.indexOf(targetAlleles.get(i)); indexMapping[i] = indexOfRemappedAllele == -1 ? 
indexOfNonRef : indexOfRemappedAllele; } return indexMapping; } public static int[] getIndexesOfRelevantAlleles(final List<Allele> remappedAlleles, final List<Allele> targetAlleles, final int position, final Genotype g) { Utils.nonEmpty(remappedAlleles); Utils.nonEmpty(targetAlleles); final int[] indexMapping = new int[targetAlleles.size()]; // the reference likelihoods should always map to each other (even if the alleles don't) indexMapping[0] = 0; for ( int i = 1; i < targetAlleles.size(); i++ ) { // if there's more than 1 spanning deletion (*) allele then we need to use the best one if (targetAlleles.get(i) == Allele.SPAN_DEL && g.hasPL()) { final int occurrences = Collections.frequency(remappedAlleles, Allele.SPAN_DEL); if (occurrences > 1) { final int indexOfBestDel = indexOfBestDel(remappedAlleles, g.getPL(), g.getPloidy()); if (indexOfBestDel == -1) { throw new IllegalArgumentException("At position " + position + " targetAlleles contains a spanning deletion, but remappedAlleles does not."); } indexMapping[i] = indexOfBestDel; continue; } } final int indexOfRemappedAllele = remappedAlleles.indexOf(targetAlleles.get(i)); if (indexOfRemappedAllele == -1) { throw new IllegalArgumentException("At position " + position + " targetAlleles contains a " + targetAlleles.get(i) + " allele, but remappedAlleles does not."); } indexMapping[i] = indexOfRemappedAllele; } return indexMapping; } /** * Returns the index of the best spanning deletion allele based on AD counts * * @param alleles the list of alleles * @param PLs the list of corresponding PL values * @param ploidy the ploidy of the sample * @return the best index or -1 if not found */ private static int indexOfBestDel(final List<Allele> alleles, final int[] PLs, final int ploidy) { int bestIndex = -1; int bestPL = Integer.MAX_VALUE; for ( int i = 0; i < alleles.size(); i++ ) { if ( alleles.get(i) == Allele.SPAN_DEL ) { final int homAltIndex = findHomIndex(GL_CALCS.getInstance(ploidy, alleles.size()), i, ploidy); final int PL = PLs[homAltIndex]; if ( PL < bestPL ) { bestIndex = i; bestPL = PL; } } } return bestIndex; } /** //TODO simplify these methods * Returns the index of the PL that represents the homozygous genotype of the given i'th allele * * @param i the index of the allele with the list of alleles * @param ploidy the ploidy of the sample * @return the hom index */ private static int findHomIndex(final GenotypeLikelihoodCalculator calculator, final int i, final int ploidy) { // some quick optimizations for the common case if ( ploidy == 2 ) return GenotypeLikelihoods.calculatePLindex(i, i); if ( ploidy == 1 ) return i; final int[] alleleIndexes = new int[ploidy]; Arrays.fill(alleleIndexes, i); return calculator.allelesToIndex(alleleIndexes); } /** * Generates a new AD array by adding zeros for missing alleles given the set of indexes of the Genotype's current * alleles from the original AD. 
* * @param originalAD the original AD to extend * @param indexesOfRelevantAlleles the indexes of the original alleles corresponding to the new alleles * @return non-null array of new AD values */ public static int[] generateAD(final int[] originalAD, final int[] indexesOfRelevantAlleles) { final List<Integer> adList = remapRLengthList(Arrays.stream(originalAD).boxed().collect(Collectors.toList()), indexesOfRelevantAlleles, 0); return Ints.toArray(adList); } /** * Generates a new AF (allele fraction) array * @param originalAF * @param indexesOfRelevantAlleles * @return non-null array of new AFs */ public static double[] generateAF(final double[] originalAF, final int[] indexesOfRelevantAlleles) { final List<Double> afList = remapALengthList(Arrays.stream(originalAF).boxed().collect(Collectors.toList()), indexesOfRelevantAlleles, 0.0); return Doubles.toArray(afList); } /** * Given a list of per-allele attributes including the reference allele, subset to relevant alleles * @param originalList * @param indexesOfRelevantAlleles * @return */ public static <T> List<T> remapRLengthList(final List<T> originalList, final int[] indexesOfRelevantAlleles, T filler) { Utils.nonNull(originalList); Utils.nonNull(indexesOfRelevantAlleles); return remapList(originalList, indexesOfRelevantAlleles, 0, filler); } /** * Given a list of per-alt-allele attributes, subset to relevant alt alleles * @param originalList * @param indexesOfRelevantAlleles * @return */ public static <T> List<T> remapALengthList(final List<T> originalList, final int[] indexesOfRelevantAlleles, T filler) { Utils.nonNull(originalList); Utils.nonNull(indexesOfRelevantAlleles); return remapList(originalList, indexesOfRelevantAlleles, 1, filler); } /** * Subset a list of per-allele attributes * * @param originalList input per-allele attributes * @param indexesOfRelevantAlleles indexes of alleles to keep, including the reference * @param offset used to indicate whether to include the ref allele values in the output or not * @param filler default value to use if no value is mapped * @return a non-null List */ private static <T> List<T> remapList(final List<T> originalList, final int[] indexesOfRelevantAlleles, final int offset, T filler) { final int numValues = indexesOfRelevantAlleles.length - offset; //since these are log odds, this should just be alts final List<T> newValues = new ArrayList<>(); for ( int i = offset; i < numValues + offset; i++ ) { final int oldIndex = indexesOfRelevantAlleles[i]; if ( oldIndex >= originalList.size() + offset ) { newValues.add(i-offset, filler); } else { newValues.add(i-offset, originalList.get(oldIndex-offset)); } } return newValues; } }
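The index bookkeeping in subsettedPLIndices is easiest to see in the diploid special case. The sketch below is deliberately standalone Go rather than GATK Java, and relies only on the VCF specification's genotype ordering (the PL index of diploid genotype j/k with j <= k is k*(k+1)/2 + j); the GenotypeLikelihoodCalculator machinery above generalizes the same idea to arbitrary ploidy.

package main

import "fmt"

// plIndex returns the VCF-spec PL index of the diploid genotype j/k (j <= k).
func plIndex(j, k int) int { return k*(k+1)/2 + j }

func main() {
	// Old alleles: ref=0, alt1=1, alt2=2. The subset keeps ref and alt2.
	keep := []int{0, 2} // old indices of the kept alleles, in new order
	n := len(keep)
	mapping := make([]int, n*(n+1)/2) // new PL index -> old PL index
	for newK := 0; newK < n; newK++ {
		for newJ := 0; newJ <= newK; newJ++ {
			// The new genotype newJ/newK is built from old alleles
			// keep[newJ] and keep[newK].
			mapping[plIndex(newJ, newK)] = plIndex(keep[newJ], keep[newK])
		}
	}
	fmt.Println(mapping) // [0 3 5]: keep the old PLs of genotypes 0/0, 0/2, 2/2
}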
Kreisel Electric, a small Austria-based startup known for their impressive electric vehicle conversions, inaugurated their new research and development center this week, and with it they unveiled a new all-electric Hummer prototype.

Who better to unveil the vehicle than Arnold Schwarzenegger? The movie star and former Governor of California was an owner and a fan of the gas-guzzling Hummer before becoming an environmentalist. Earlier this year, Kreisel built a custom all-electric Mercedes G-Class for Schwarzenegger, and he was present again yesterday at the inauguration of Kreisel's new research and development center to unveil the electric Hummer.

"Kreisel Electric electrified my G-class last winter. And now a Hummer. If Kreisel keeps it up at this pace, I will soon be able to fly here from LA in an electric airplane," said Arnold Schwarzenegger.

Here are a few pictures from the event:

From left to right: Christian Schlögl (CEO Kreisel Electric), Arnold Schwarzenegger, and Patrick Knapp-Schwarzenegger, strategic partner at Kreisel Electric, opening the new Kreisel Electric high-tech research and development center. (Copyright: Martin Hesz / Kreisel Electric)

From left to right: Arnold Schwarzenegger with the Austrian Federal Chancellor Mag. Christian Kern and Vice Chancellor Univ.-Prof. Dr. Wolfgang Brandstetter. (Copyright: Martin Hesz / Kreisel Electric)

Arnold Schwarzenegger in front of the world's first electrified Hummer H1 at the opening of the Kreisel Electric high-tech research and development center. (Copyright: Martin Hesz / Kreisel Electric)

Kreisel's main business is developing battery packs for electric vehicles, and to showcase their tech they have made impressive EV conversions like a VW e-Golf with a 55 kWh battery pack and a $1 million classic Porsche 910. Here is how they describe their latest project:

Kreisel Electric developed an off-road prototype on the basis of the H1 model in just two months' time. It is equipped with high-performance batteries from Kreisel Electric featuring a 100 kWh capacity and two electric motors on the front and back axles, with a system output of 360 kW (490 PS). The vehicle can reach speeds of up to 120 km/h and has a range of about 300 kilometers and a total weight of 3,300 kg.

While the Hummer grabbed the attention, Kreisel's event was to inaugurate its new 7,000 m² (75,000 sq-ft) space in Rainbach. The company says that it will use the space for "a prototype workshop and a completely automated manufacturing line for Kreisel Electric battery storage devices for use in the small-batch production of passenger vehicles, utility vehicles, buses, boats and airplanes, as well as in storage solutions."
Instead, most of the news and commentary written about Amherst Uprising has focused on the group’s 11 demands. They’re what students chose to declare to the world. And they were extraordinarily ill-conceived. (Most have already been rejected.) There are exceptions. The students pushed for the school to distance itself from an unofficial mascot, Lord Jeff, who is said to have joined in giving blankets infected with smallpox to Native Americans. (On Wednesday, a majority of the faculty voted to change the school’s mascot; the athletic department had been quietly removing him from apparel since September.) And the activists believe that faculty, staff, and students who express agreement with their movement should not be punished. After all, people in an academic community shouldn’t be penalized for taking an earnest position in campus discourse. But the activists didn’t adhere to that same principle. Here’s the fifth demand in all its ignominy: President Martin must issue a statement to the Amherst College community at large that states we do not tolerate the actions of student(s) who posted the “All Lives Matter” posters, and the “Free Speech” posters that stated that “in memoriam of the true victim of the Missouri Protests: Free Speech.” Also let the student body know that it was racially insensitive to the students of color on our college campus and beyond who are victim to racial harassment and death threats; alert them that Student Affairs may require them to go through the Disciplinary Process if a formal complaint is filed, and that they will be required to attend extensive training for racial and cultural competency. Protestors were trying to punish counter-protests with an extensive, compulsory racial-reeducation program. Perhaps the curriculum could be issued in a little red book. The sixth demand, keeping to the illiberal theme, says: “President Martin must issue a statement of support for the revision of the Honor Code to reflect a zero-tolerance policy for racial insensitivity and hate speech.” One can assert, as the students do in their letter, that Amherst is a place of institutionalized white supremacy. Or one can believe that Amherst administrators should be in charge of deciding what’s racially insensitive and punishing it. To believe both things at once is incoherent. The 11th demand, that “Dean Epstein must encourage faculty to provide a space for students to discuss this week’s events during class time,” treads too close to impinging on academic freedom for my taste, although faculty are welcome to choose to teach the controversy. Of course, to teach a controversy requires dispassionate analysis and the airing of some opinions that student activists find insensitive. Those are the worst of the demands. And yet, the others are of interest too, for the insight they offer into the impulses of the students who created them. A bit of context is useful. Many of history’s most successful protest movements involved untold hours of careful planning. No one would expect college students to dash off equally thoughtful demands in an afternoon. But the students, unexpectedly occupying a library with only a few hours to decide what they ought to demand, might have Googled the most successful movement for racial equality in this country’s history and used the demands that it issued as a template. Relevant inspiration from the civil-rights era is readily available.
Before the March on Washington in 1963, the coalition of civil-rights organizers and activists who planned the protest published a list of ten demands in a brief program. Their aim was nothing less than securing full equality under the law and decent economic opportunity for a race that had never enjoyed either in the history of the country. In that sense, their demands were wildly ambitious. In another sense, however, the document was extremely pragmatic. All demands were at least within the realm of political possibility. Each pushed a very specific policy change with obvious relevance to the lives of black Americans, and was likely to directly benefit large swaths of the black community and beyond. And the benefits were concrete, not spiritual (as uplifting as it was when various items on the list were met). Students could no more equal that document in an afternoon than Amherst’s science faculty could gather one evening and recreate the Manhattan Project. But even with that caveat, it is instructive to compare the demands of the 1960s civil rights leaders to the demands of the student activists, who’ve taken something like an opposite approach. Their demands don’t just seem impractical; they’re also surprisingly lacking in ambition. Take the 10th demand: “The Office of Alumni and Parent Programs must send former students an email of current events on campus including a statement that Amherst College does not condone any racist or culturally insensitive reactions to this information.” Shouldn’t the activists demand the ability to send their own emails? They seem to believe that taking action entails demanding that authority figures take action. It is striking that the first four demands on their list were all calls for authority figures to issue statements of various sorts. And three of the four requested statements would serve no real function save for the students to have their world view validated by adults: President Martin must issue a statement of apology to students, alumni and former students, faculty, administration and staff who have been victims of several injustices including but not limited to our institutional legacy of white supremacy, colonialism, anti-black racism, anti-Latinx racism, anti-Native American racism, anti-Native/ indigenous racism, anti-Asian racism, anti-Middle Eastern racism, heterosexism, cis-sexism, xenophobia, anti-Semitism, ableism, mental health stigma, and classism. Also include that marginalized communities and their allies should feel safe at Amherst College. We demand Cullen Murphy ‘74, Chairman of the Board of Trustees, to issue a statement of apology to students, alumni and former students, faculty, administration, and staff who have been victims of several injustices including but not limited to our institutional legacy of white supremacy, colonialism, anti-black racism, anti-Latinx racism, anti-Native American racism, anti-Native/ indigenous racism, anti-Asian racism, anti-Middle Eastern racism, heterosexism, cis-sexism, xenophobia, anti-Semitism, ableism, mental health stigma, and classism Amherst College Police Department must issue a statement of protection and defense from any form of violence, threats, or retaliation of any kind resulting from this movement. President Martin must issue a statement of apology to faculty, staff and administrators of color as well as their allies, neither of whom were provided a safe space for them to thrive while at Amherst College. What good would coerced statements do?
And the only demand that seems to serve a functional purpose (that police declare that no force be used against their movement) seems at odds with the preamble. “If these goals are not initiated within the next 24 to 48 hours, and completed by November 18th, we will organize and respond in a radical manner, through civil disobedience,” Amherst Uprising declared right at the top. “If there is a continued failure to meet our demands, it will result in an escalation of our response.” How likely are the cops to pledge forbearance in response to a document that opens with an implicit threat of disorder? Campus social-justice activists frequently use words in ways that don’t correspond to their usual meanings, and while the students talked of a “radical” response and an “escalation,” it was most likely just campus-activist hyperbole. But imagine an Amherst lawyer advising the college president. Failing to act on the message as a warning of violence might expose the university to liability if a student or staff member were subsequently hurt. The statement, on the whole, seems more likely to provoke aggressive policing than to restrain it. The way that students framed their demands could only have had an effect opposite to the one they desired. But it was the contrast between the faux-militant preamble, with its stark warnings about radical measures and escalation, and the eighth demand on the list that actually made me laugh out loud: “Dean Epstein must ask faculty to excuse all students from all 5 College classes, work shifts, and assignments from November 12th, 2015 to November 13th, 2015 given their organization of and attendance at the Sit-In.” They’re posing as radicals, but still asking permission to skip class. The shortcomings of the 11 demands are a shame, and ought to provoke introspection among campus progressives. Why were these illiberal demands the go-to impulse? There are better ways to protest and solve problems. The testimony of black students in the school library evidently had a powerful effect. Their words should not be diminished by the missteps of the activists, but neither should the power of their testimony be invoked as if it made these demands more forgivable. Trying to subject classmates who disagree with you to punishment and compulsory reeducation is abhorrent. Other demands are obviously unmeetable or written as if their core objective were to secure rhetorical validation from authority figures. If one believes there’s nothing actually wrong on American college campuses today, that the widespread upset of black students is much ado about nothing, then it doesn’t really matter that activism aimed at improving the situation is so rife with intolerant ideology, opaque jargon, and faulty premises. The semester will end. If protests even restart next term, the movements keeping them alive will fade as students graduate and activist excesses cause supporters to fall away. The activism on display at Amherst isn’t going to win converts off campus.
/*
 * Copyright © 2017 camunda services GmbH (<EMAIL>)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.zeebe.servicecontainer;

import java.util.Collection;
import java.util.Map;
import java.util.function.BiConsumer;

public final class ServiceGroupReference<S> {
  @SuppressWarnings("rawtypes")
  private static final BiConsumer NOOP_CONSUMER =
      (n, v) -> {
        // ignore
      };

  protected BiConsumer<ServiceName<S>, S> addHandler;
  protected BiConsumer<ServiceName<S>, S> removeHandler;

  @SuppressWarnings("unchecked")
  private ServiceGroupReference() {
    this(NOOP_CONSUMER, NOOP_CONSUMER);
  }

  private ServiceGroupReference(
      BiConsumer<ServiceName<S>, S> addHandler, BiConsumer<ServiceName<S>, S> removeHandler) {
    this.addHandler = addHandler;
    this.removeHandler = removeHandler;
  }

  public BiConsumer<ServiceName<S>, S> getAddHandler() {
    return addHandler;
  }

  public BiConsumer<ServiceName<S>, S> getRemoveHandler() {
    return removeHandler;
  }

  public static <S> ServiceGroupReference<S> collection(Collection<S> collection) {
    final BiConsumer<ServiceName<S>, S> addHandler = (name, v) -> collection.add(v);
    final BiConsumer<ServiceName<S>, S> removeHandler = (name, v) -> collection.remove(v);

    return new ServiceGroupReference<>(addHandler, removeHandler);
  }

  public static <S> ServiceGroupReference<S> map(Map<ServiceName<S>, S> map) {
    final BiConsumer<ServiceName<S>, S> addHandler = (name, v) -> map.put(name, v);
    final BiConsumer<ServiceName<S>, S> removeHandler = (name, v) -> map.remove(name, v);

    return new ServiceGroupReference<>(addHandler, removeHandler);
  }

  public static <S> ReferenceBuilder<S> create() {
    return new ReferenceBuilder<>();
  }

  public static class ReferenceBuilder<S> {
    protected final ServiceGroupReference<S> referenceCollection = new ServiceGroupReference<>();

    public ReferenceBuilder<S> onRemove(BiConsumer<ServiceName<S>, S> removeConsumer) {
      referenceCollection.removeHandler = removeConsumer;
      return this;
    }

    public ReferenceBuilder<S> onAdd(BiConsumer<ServiceName<S>, S> addConsumer) {
      referenceCollection.addHandler = addConsumer;
      return this;
    }

    public ServiceGroupReference<S> build() {
      return referenceCollection;
    }
  }
}
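A hedged usage sketch of the factory methods and builder above. The calls are exactly those defined in the class; the MyService type and the handler bodies are invented for illustration, and how the reference is wired into the service container is out of scope here:

import io.zeebe.servicecontainer.ServiceGroupReference;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Placeholder for whatever service type the group contains.
interface MyService {}

class ServiceGroupReferenceExample {
  static void example() {
    // Collect group members into a thread-safe list as they are added and removed.
    List<MyService> members = new CopyOnWriteArrayList<>();
    ServiceGroupReference<MyService> asCollection = ServiceGroupReference.collection(members);

    // Or react to membership changes explicitly via the builder.
    ServiceGroupReference<MyService> withHandlers =
        ServiceGroupReference.<MyService>create()
            .onAdd((name, service) -> System.out.println("added " + name))
            .onRemove((name, service) -> System.out.println("removed " + name))
            .build();
  }
}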
// Type declarations for the `dtype` module: given a dtype string such as
// "uint8" or "float32", it returns the matching typed-array constructor,
// or undefined for an unrecognized dtype (hence `Function | void`).
export = Dtype;
declare function Dtype(dtype: string): Function | void;
/*
 * BayesClassifier.cpp
 *
 *  Created on: Mar 25, 2016
 *      Author: derek
 */

#include "BayesClassifier.h"
#include <cassert>
#include <cmath>

BayesClassifier::BayesClassifier(
		const std::vector<CovarianceMatrix>& cmInverses,
		const std::vector<Decimal>& cmDeterminants,
		const std::vector<RowVector>& meanVectors)
: cmInverses{cmInverses},
  cmDeterminants{cmDeterminants},
  meanVectors{meanVectors}
{
}

uint8_t BayesClassifier::classify(const RowVector& point) const
{
	for (std::size_t a = 0; a < meanVectors.size(); ++a) {
		auto allPositive = true;

		// Compare A to everything else to see if anything is better
		for (std::size_t b = 0; b < meanVectors.size(); ++b) {
			if (a == b) {
				continue;
			}
			auto value = std::log(cmDeterminants[b]) - std::log(cmDeterminants[a])
					+ (point - meanVectors[b]) * cmInverses[b] * (point - meanVectors[b]).transpose()
					- (point - meanVectors[a]) * cmInverses[a] * (point - meanVectors[a]).transpose();
			if (value < 0) {
				// The point was closer to the other mean, so reject class A
				allPositive = false;
				break;
			}
		}

		// If we didn't find anything better, A really is our best class
		if (allPositive) {
			return static_cast<uint8_t>(a + 1); // Types start at index 1, but vectors at index 0
		}
	}

	assert(false); // Failed to find a class the point was closest to
	return 0;
}
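The pairwise test in classify is, in effect, a comparison of Gaussian log-likelihoods. A sketch of the quantity being computed, up to shared constants and assuming equal class priors (which the code implies but does not state); note the row-vector convention matches the code:

\[
g_{ab}(x) = \ln\lvert\Sigma_b\rvert - \ln\lvert\Sigma_a\rvert
 + (x-\mu_b)\,\Sigma_b^{-1}(x-\mu_b)^{\mathsf T}
 - (x-\mu_a)\,\Sigma_a^{-1}(x-\mu_a)^{\mathsf T},
\]

and class \(a\) is accepted only if \(g_{ab}(x) \ge 0\) for every \(b \neq a\), i.e. only if no other class has a higher Gaussian log-density at \(x\).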
generateMathEquation :: [Integer] -> String
generateMathEquation [] = ""
generateMathEquation array = foldr (\x y -> concat ["(", x, " + ", y, ")"]) (show (last array)) (map show (init array))

generateMathEquationReversed :: [Integer] -> String
generateMathEquationReversed [] = ""
generateMathEquationReversed array = foldl (\x y -> concat ["(", x, " + ", y, ")"]) (show (head array)) (map show (tail array))

main = do
    let list = [1, 2, 3, 4, 5]
    print $ generateMathEquation list
    print $ generateMathEquationReversed list
/**
 * Created by Sylvain in 2022/01.
 */
@Service
@Slf4j
public class FetchBookService {

    @Autowired
    @Qualifier("restTemplate")
    private RestTemplate restTemplate;

    /**
     * Sets the image and text URLs on a book object and downloads
     * the text to a file under books/id.txt.
     * @param book the book object to complete
     * @return Future<Entry<Id of Book, Book>> a Future of the work on the book object,
     *         holding null if the book has no usable text
     */
    @Async("asyncTaskExecutor")
    public Future<Map.Entry<Integer, Book>> getBook(Book book){
        Format format = book.getFormats();
        String textURL = getTextURL(format);
        if (textURL == null)
            return new AsyncResult<>(null);
        if (format.getImage() != null){
            book.setImage(format.getImage().replace("small", "medium"));
        }
        book.setText(textURL);
        try {
            String text = restTemplate.getForObject(textURL, String.class);
            if (text == null)
                return new AsyncResult<>(null);
            // keep only books with more than 10,000 words
            if (text.split("\\s+").length > 10000){
                PrintWriter pw = new PrintWriter(new FileOutputStream("books/" + book.getId() + ".txt"));
                pw.println(text);
                pw.flush();
                pw.close();
                Map.Entry<Integer, Book> b = new AbstractMap.SimpleEntry<>(book.getId(), book);
                return new AsyncResult<>(b);
            }else {
                return new AsyncResult<>(null);
            }
        }catch (HttpClientErrorException | FileNotFoundException ignored){
        }
        return new AsyncResult<>(null);
    }

    /**
     * Gets the first valid URL from which to download the text of a book.
     * @param format the URLs given by the "gutendex" API
     * @return a URL to download plain text from, or null if none is available
     */
    private String getTextURL(Format format){
        if (format.getText1() != null){
            if (format.getText1().endsWith(".txt") || format.getText1().endsWith(".txt.utf-8"))
                return format.getText1();
        }
        if (format.getText2() != null) {
            if (format.getText2().endsWith(".txt") || format.getText2().endsWith(".txt.utf-8"))
                return format.getText2();
        }
        if (format.getText3() != null) {
            if (format.getText3().endsWith(".txt") || format.getText3().endsWith(".txt.utf-8"))
                return format.getText3();
        }
        return null;
    }
}
package com.devy.droidipc; import android.annotation.SuppressLint; import android.content.Intent; import android.os.Bundle; import android.os.IBinder; import android.os.Parcel; import android.os.RemoteException; import com.devy.droidipc.LogControler.Level; @SuppressLint("NewApi") public class RemoteClientCommandListener extends RemoteCommandListener { @Override protected void onHandleCommand(int cmd, Intent intent) { if(cmd == ServiceContext.CMD_GET_SERVER_SERVICE_MANAGER) { Bundle binders = intent.getBundleExtra(ServiceContext.EXTRA_BUNDLE); IBinder serverReadyBinder = binders.getBinder(ServiceContext.EXTRA_BUNDLE_SERVER_READY_BINDER); String packageName = binders.getString(ServiceContext.EXTRA_BUNDLE_PACKAGE_NAME); ServerReadyNotifier serverReadyNotifier = new ServerReadyNotifier(packageName,serverReadyBinder); serverReadyNotifier.runInCoreThread(); } } private class ServerReadyNotifier implements Runnable { private IBinder mClient; private String mPackageName; public ServerReadyNotifier(String packageName,IBinder client) { mPackageName = packageName; mClient = client; } public void runInCoreThread() { CoreThread.getHandler().post(this); } private void log(String log){ LogControler.print(Level.INFO, "[RemoteClientCommandListener] (" + mPackageName + ") " + log); } @Override public void run() { if(mClient!=null){ Parcel data = Parcel.obtain(); Parcel reply = Parcel.obtain(); try { log("package name = " + mPackageName); data.writeStrongBinder(ServiceManagerThread.getDefault()); mClient.transact(ServiceContext.SERVER_READY, data, reply, 0); int result = reply.readInt(); if(result == ServiceContext.SUCCESS) { log("ServerReadyNotifier success"); } else { log("ServerReadyNotifier error"); } } catch (RemoteException e) { e.printStackTrace(); log("getService exception"); } finally { data.recycle(); reply.recycle(); } } } } }
import { Component, ViewChild } from '@angular/core';
import { FilePondComponent } from './modules/filepond/filepond.component';
import { FilePondOptions } from 'filepond';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {

  @ViewChild('myPond') myPond: FilePondComponent;

  pondOptions: FilePondOptions = {
    allowMultiple: true,
    labelIdle: 'Drop files here...',
    // fake server to simulate loading a 'local' server file and processing a file
    server: {
      process: (fieldName, file, metadata, load) => {
        // simulates uploading a file; the server id must be a string,
        // so toString() is called rather than passed as a reference
        setTimeout(() => {
          load(Date.now().toString());
        }, 1500);
      },
      load: (source, load) => {
        // simulates loading a file from the server
        fetch(source).then(res => res.blob()).then(load);
      }
    }
  }

  pondFiles: FilePondOptions["files"] = [
    {
      source: 'assets/photo.jpeg',
      options: {
        type: 'local'
      }
    }
  ]

  pondHandleInit() {
    console.log('FilePond has initialised', this.myPond);
  }

  pondHandleAddFile(event: any) {
    console.log('A file was added', event);
  }

  pondHandleActivateFile(event: any) {
    console.log('A file was activated', event);
  }
}
package test import ( "encoding/json" "net/http/httptest" "testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) func TestSchemaArrayFloat(t *testing.T) { h := GetPetsIDsHandlerFunc(func(_ GetPetsIDsRequester) GetPetsIDsResponder { return GetPetsIDsResponse200JSON([]float64{0.8}) }) w := httptest.NewRecorder() h.ServeHTTP(w, httptest.NewRequest("GET", "/pets", nil)) require.Equal(t, 200, w.Code) var out []float64 err := json.Unmarshal(w.Body.Bytes(), &out) require.NoError(t, err) require.Len(t, out, 1) assert.Equal(t, float64(0.8), out[0]) }
// GetVolumeStatus retrieves an array of replica statuses.
func GetVolumeStatus(cStorVolume *apis.CStorVolume) (*apis.CVStatus, error) {
	statuses, err := UnixSockVar.SendCommand(util.IstgtReplicaCmd)
	if err != nil {
		glog.Errorf("Failed to list replicas.")
		return nil, err
	}
	stringResp := fmt.Sprintf("%s", statuses)
	// TODO: Find a better approach than scanning for the JSON object boundaries
	jsonBeginIndex := strings.Index(stringResp, "{")
	jsonEndIndex := strings.LastIndex(stringResp, "}")
	if jsonBeginIndex >= jsonEndIndex {
		return nil, nil
	}
	return extractReplicaStatusFromJSON(stringResp[jsonBeginIndex : jsonEndIndex+1])
}
// Only prepared statements needed for the reader private static void createReaderPreparedStatements() throws SQLException { sqlIdentifierNameQuery = connection.prepareStatement(IDENTIFIER_NAME_QUERY_STATEMENT); sqlIdentifierNameInsert = connection.prepareStatement( IDENTIFIER_NAME_INSERT_STATEMENT, Statement.RETURN_GENERATED_KEYS); sqlIdentifierNameBySpeciesQuery = connection.prepareStatement(IDENTIFIER_NAME_BY_SPECIES_QUERY); sqlComponentWordKeyQuery = connection.prepareStatement(COMPONENT_WORD_KEY_QUERY_STATEMENT); sqlComponentWordQuery = connection.prepareStatement(COMPONENT_WORD_QUERY); sqlComponentWordInsert = connection.prepareStatement( COMPONENT_WORD_INSERT_STATEMENT, Statement.RETURN_GENERATED_KEYS); sqlComponentWordXrefInsert = connection.prepareStatement( COMPONENT_WORD_XREF_INSERT_STATEMENT, Statement.RETURN_GENERATED_KEYS); sqlComponentWordsXrefQuery = connection.prepareStatement(COMPONENT_WORDS_XREF_QUERY); sqlProjectInsert = connection.prepareStatement( PROJECT_INSERT_STATEMENT, Statement.RETURN_GENERATED_KEYS); sqlPackageInsert = connection.prepareStatement( PACKAGE_INSERT_STATEMENT, Statement.RETURN_GENERATED_KEYS); sqlProjectsQuery = connection.prepareStatement(PROJECTS_QUERY_STATEMENT); sqlPackageNameKeysForProjectQuery = connection.prepareStatement(PACKAGE_NAME_KEYS_FOR_PROJECT_QUERY); sqlPackageNameQuery = connection.prepareStatement(PACKAGE_NAME_QUERY); sqlPackageNameKeyQuery = connection.prepareStatement( PACKAGE_NAME_KEY_QUERY ); sqlTypeNameQuery = connection.prepareStatement(TYPE_NAME_QUERY); sqlTypeNameIdentifierQuery = connection.prepareStatement( TYPE_NAME_IDENTIFIER_QUERY ); sqlSuperClassQuery = connection.prepareStatement(SUPER_CLASS_QUERY); sqlSuperTypeQuery = connection.prepareStatement(SUPER_TYPE_QUERY); sqlNamedPackageKeyQuery = connection.prepareStatement( NAMED_PACKAGE_KEY_QUERY ); sqlClassNameKeysForPackageInProjectQuery = connection.prepareStatement( CLASS_NAME_KEYS_FOR_PACKAGE_QUERY ); sqlModifiersXrefQuery = connection.prepareStatement(MODIFIER_KEYS_QUERY); sqlAllClassDataQuery = connection.prepareStatement( ALL_CLASS_DATA_FOR_PROJECT_QUERY ); sqlAllNamesForSpeciesQuery = connection.prepareStatement(ALL_NAMES_FOR_SPECIES_QUERY); sqlAllNamesForSpeciesByProjectQuery = connection.prepareStatement(ALL_NAMES_FOR_SPECIES_BY_PROJECT_QUERY); sqlAllNamesForProjectQuery = connection.prepareStatement(ALL_NAMES_FOR_PROJECT_QUERY); sqlAllNamesQuery = connection.prepareStatement(ALL_NAMES_QUERY); sqlAllIdentifierDataQuery = connection.prepareStatement(ALL_IDENTIFIER_DATA_FOR_PROJECT); sqlProgramEntitiesBySpeciesQuery = connection.prepareStatement( ALL_PROGRAM_ENTITIES_BY_SPECIES_FOR_PROJECT_QUERY ); sqlClassOrInterfaceForFqnQuery = connection.prepareStatement( CLASS_OR_INTERFACE_FOR_FQN_QUERY ); sqlEntityCandidatesForNameQuery = connection.prepareStatement( ENTITY_CANDIDATES_FOR_NAME_QUERY ); sqlTypeNameKeyByIdentifierNameKeyQuery = connection.prepareStatement( TYPE_NAME_KEY_BY_IDENTIFIER_NAME_KEY_QUERY ); sqlSubClassKeyQuery = connection.prepareStatement( SUB_CLASS_KEY_QUERY ); sqlSubTypeKeyQuery = connection.prepareStatement( SUB_TYPE_KEY_QUERY ); sqlInheritableProgramEntityQuery = connection.prepareStatement( INHERITABLE_ENTITY_BY_KEY_QUERY ); sqlProjectDetailsQuery = connection.prepareStatement( PROJECT_DETAILS_QUERY ); sqlAllEntitiesBySpeciesQuery = connection.prepareStatement( PROGRAM_ENTITY_BY_SPECIES_QUERY ); sqlAllEntitiesByProjectQuery = connection.prepareStatement( PROGRAM_ENTITY_BY_PROJECT_QUERY ); }
//sends message so main server knows we joined a room public boolean sendGSPJoinedRoom(int userId, int roomId) { ByteBuffer block = ByteBuffer.allocate(9); block.order(ByteOrder.LITTLE_ENDIAN); block.put((byte) 0x52); block.putInt(userId); block.putInt(roomId); byte[] array = block.array(); ByteBuffer lbuf = null; try { byte[] encrypted = crypt.aesEncrypt(array); lbuf = ByteBuffer.allocate(4 + encrypted.length); lbuf.order(ByteOrder.LITTLE_ENDIAN); lbuf.putShort((short) encrypted.length); lbuf.put((byte) 0); lbuf.put((byte) 1); lbuf.put(encrypted); } catch(Exception e) { Main.println(6, "[GInterface " + id + "] Encryption error in sendGSPJoinedRoom: " + e.getLocalizedMessage()); return false; } try { if(out != null) out.write(lbuf.array()); return true; } catch(IOException ioe) { Main.println(6, "[GInterface " + id + "] I/O error in sendGSPJoinedRoom: " + ioe.getLocalizedMessage()); return false; } }
// Copyright 2015-2016 Espressif Systems (Shanghai) PTE LTD // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. #include <stdlib.h> #include "esp_spi_flash.h" #include "esp_private/system_internal.h" #include "soc/soc_memory_layout.h" #include "soc/cpu.h" #include "soc/soc_caps.h" #include "soc/rtc.h" #include "hal/soc_hal.h" #include "hal/cpu_hal.h" #include "sdkconfig.h" #include "esp_rom_sys.h" #if CONFIG_IDF_TARGET_ESP32 #include "esp32/dport_access.h" #include "esp32/cache_err_int.h" #elif CONFIG_IDF_TARGET_ESP32S2 #include "esp32s2/memprot.h" #include "esp32s2/cache_err_int.h" #elif CONFIG_IDF_TARGET_ESP32S3 #include "esp32s3/memprot.h" #include "esp32s3/cache_err_int.h" #elif CONFIG_IDF_TARGET_ESP32C3 #include "esp32c3/memprot.h" #include "esp32c3/cache_err_int.h" #endif #include "esp_private/panic_internal.h" #include "esp_private/panic_reason.h" #include "hal/wdt_types.h" #include "hal/wdt_hal.h" extern int _invalid_pc_placeholder; extern void esp_panic_handler_reconfigure_wdts(void); extern void esp_panic_handler(panic_info_t *); static wdt_hal_context_t wdt0_context = {.inst = WDT_MWDT0, .mwdt_dev = &TIMERG0}; void *g_exc_frames[SOC_CPU_CORES_NUM] = {NULL}; /* Panic handlers; these get called when an unhandled exception occurs or the assembly-level task switching / interrupt code runs into an unrecoverable error. The default task stack overflow handler and abort handler are also in here. */ /* Note: The linker script will put everything in this file in IRAM/DRAM, so it also works with flash cache disabled. */ static void print_state_for_core(const void *f, int core) { /* On Xtensa (with Window ABI), register dump is not required for backtracing. * Don't print it on abort to reduce clutter. * On other architectures, register values need to be known for backtracing. */ #if defined(__XTENSA__) && defined(XCHAL_HAVE_WINDOWED) if (!g_panic_abort) { #else if (true) { #endif panic_print_registers(f, core); panic_print_str("\r\n"); } panic_print_backtrace(f, core); } static void print_state(const void *f) { #if !CONFIG_ESP_SYSTEM_SINGLE_CORE_MODE int err_core = f == g_exc_frames[0] ? 0 : 1; #else int err_core = 0; #endif print_state_for_core(f, err_core); panic_print_str("\r\n"); #if !CONFIG_ESP_SYSTEM_SINGLE_CORE_MODE // If there are other frame info, print them as well for (int i = 0; i < SOC_CPU_CORES_NUM; i++) { // `f` is the frame for the offending core, see note above. 
if (err_core != i && g_exc_frames[i] != NULL) {
            print_state_for_core(g_exc_frames[i], i);
            panic_print_str("\r\n");
        }
    }
#endif
}

static void frame_to_panic_info(void *frame, panic_info_t *info, bool pseudo_excause)
{
    info->core = cpu_hal_get_core_id();
    info->exception = PANIC_EXCEPTION_FAULT;
    info->details = NULL;
    info->reason = "Unknown";
    info->pseudo_excause = pseudo_excause;

    if (pseudo_excause) {
        panic_soc_fill_info(frame, info);
    } else {
        panic_arch_fill_info(frame, info);
    }

    info->state = print_state;
    info->frame = frame;
}

static void panic_handler(void *frame, bool pseudo_excause)
{
    panic_info_t info = { 0 };

    /*
     * Setup environment and perform necessary architecture/chip specific
     * steps here prior to the system panic handler.
     * */
    int core_id = cpu_hal_get_core_id();

    // If multiple cores arrive at panic handler, save frames for all of them
    g_exc_frames[core_id] = frame;

#if !CONFIG_ESP_SYSTEM_SINGLE_CORE_MODE
    // These are cases where both CPUs go into the panic handler. The following code ensures
    // only one core proceeds to the system panic handler.
    if (pseudo_excause) {
#define BUSY_WAIT_IF_TRUE(b) { if (b) while(1); }
        // For WDT expiry, pause the non-offending core - offending core handles panic
        BUSY_WAIT_IF_TRUE(panic_get_cause(frame) == PANIC_RSN_INTWDT_CPU0 && core_id == 1);
        BUSY_WAIT_IF_TRUE(panic_get_cause(frame) == PANIC_RSN_INTWDT_CPU1 && core_id == 0);

        // For cache error, pause the non-offending core - offending core handles panic
        if (panic_get_cause(frame) == PANIC_RSN_CACHEERR && core_id != esp_cache_err_get_cpuid()) {
            // Only print the backtrace for the offending core in case of the cache error
            g_exc_frames[core_id] = NULL;
            while (1) {
                ;
            }
        }
    }

    // Need to reconfigure WDTs before we stall any other CPU
    esp_panic_handler_reconfigure_wdts();

    esp_rom_delay_us(1);
    SOC_HAL_STALL_OTHER_CORES();
#endif

#if CONFIG_IDF_TARGET_ESP32
    esp_dport_access_int_abort();
#endif

#if !CONFIG_ESP_PANIC_HANDLER_IRAM
    // Re-enable CPU cache for current CPU if it was disabled
    if (!spi_flash_cache_enabled()) {
        spi_flash_enable_cache(core_id);
        panic_print_str("Re-enable cpu cache.\r\n");
    }
#endif

    if (esp_cpu_in_ocd_debug_mode()) {
#if __XTENSA__
        if (!(esp_ptr_executable(cpu_ll_pc_to_ptr(panic_get_address(frame))) && (panic_get_address(frame) & 0xC0000000U))) {
            /* Xtensa ABI sets the 2 MSBs of the PC according to the windowed call size
             * In case the PC is invalid, GDB will fail to translate addresses to function names
             * Hence replacing the PC with a placeholder address in case of an invalid PC
             */
            panic_set_address(frame, (uint32_t)&_invalid_pc_placeholder);
        }
#endif
        if (panic_get_cause(frame) == PANIC_RSN_INTWDT_CPU0
#if !CONFIG_ESP_SYSTEM_SINGLE_CORE_MODE
                || panic_get_cause(frame) == PANIC_RSN_INTWDT_CPU1
#endif
           ) {
            wdt_hal_write_protect_disable(&wdt0_context);
            wdt_hal_handle_intr(&wdt0_context);
            wdt_hal_write_protect_enable(&wdt0_context);
        }
    }

    // Convert architecture exception frame into abstracted panic info
    frame_to_panic_info(frame, &info, pseudo_excause);

    // Call the system panic handler
    esp_panic_handler(&info);
}

void panicHandler(void *frame)
{
    // This panic handler gets called when the double exception vector or the
    // kernel exception vector is used, as well as for interrupt-based faults
    // (cache error, WDT expiry). The EXCAUSE register gets written with
    // one of the PANIC_RSN_* values.
panic_handler(frame, true); } void xt_unhandled_exception(void *frame) { panic_handler(frame, false); } void __attribute__((noreturn)) panic_restart(void) { bool digital_reset_needed = false; #ifdef CONFIG_IDF_TARGET_ESP32 // On the ESP32, cache error status can only be cleared by system reset if (esp_cache_err_get_cpuid() != -1) { digital_reset_needed = true; } #endif #if CONFIG_ESP_SYSTEM_MEMPROT_FEATURE if (esp_memprot_is_intr_ena_any() || esp_memprot_is_locked_any()) { digital_reset_needed = true; } #endif if (digital_reset_needed) { esp_restart_noos_dig(); } esp_restart_noos(); }
/**
 * Called from the onCreate method of the Application.
 * @param application the host Application instance
 */
public void onAppCreate(Application application){
    this.application = application;
    for(IApplicationListener lis : applicationListeners){
        lis.onProxyCreate();
    }
}
import numpy as np

# A: hour-hand length, B: minute-hand length, H:M is the time
A, B, H, M = list(map(int, input().split()))

# Angles of the two hands from 12 o'clock, in radians
alpha = 2 * np.pi * (H / 12 + M / (60 * 12))  # the hour hand also advances with the minutes
beta = 2 * np.pi * M / 60                     # the minute hand

# Distance between the two hand tips, from their coordinates on the clock face
print(((A * np.cos(alpha) - B * np.cos(beta)) ** 2
       + (A * np.sin(alpha) - B * np.sin(beta)) ** 2) ** 0.5)
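Equivalently, with the included angle between the hands, the law of cosines gives the same distance the code computes coordinate-wise; expanding the right-hand side confirms the identity:

\[
d = \sqrt{A^2 + B^2 - 2AB\cos(\alpha - \beta)}
  = \sqrt{(A\cos\alpha - B\cos\beta)^2 + (A\sin\alpha - B\sin\beta)^2}.
\]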
// ds.h // // Generic data structures. // // Author: <NAME> <<EMAIL>> #ifndef _DS_H_ #define _DS_H_ #include <stddef.h> #include <stdint.h> typedef struct list_node_s { void *value; struct list_node_s *next; struct list_node_s *prev; } list_node_t; typedef struct list_s { list_node_t *head; list_node_t *tail; size_t size; } list_t; typedef struct tree_node_s { void *value; list_t *children; struct tree_node_s *parent; } tree_node_t; void list_destroy(list_t *); void list_push_back(list_t *, void *); void list_pop_back(list_t *); void list_push_front(list_t *, void *); void list_pop_front(list_t *); void list_insert_after(list_t *, list_node_t *, void *); void list_insert_before(list_t *, list_node_t *, void *); void list_remove(list_t *, list_node_t *, uint8_t); tree_node_t *tree_init(void *); void tree_insert(tree_node_t *, tree_node_t *); void tree_destroy(tree_node_t *); #define list_foreach(i, l) \ for (list_node_t *i = (l)->head; i != NULL; i = i->next) #endif /* _DS_H_ */
/** * Truncate the data of entry to the given size * @param e Pointer to an entry * @param size new size in bytes */ static void ent_trunc(lv_mem_ent_t * e, size_t size) { #ifdef LV_ARCH_64 if(size & 0x7) { size = size & (~0x7); size += 8; } #else if(size & 0x3) { size = size & (~0x3); size += 4; } #endif if(e->header.s.d_size == size + sizeof(lv_mem_header_t)) { size = e->header.s.d_size; } if(e->header.s.d_size != size) { uint8_t * e_data = &e->first_data; lv_mem_ent_t * after_new_e = (lv_mem_ent_t *)&e_data[size]; after_new_e->header.s.used = 0; after_new_e->header.s.d_size = (uint32_t)e->header.s.d_size - size - sizeof(lv_mem_header_t); } e->header.s.d_size = (uint32_t)size; }
/** * @author Marcelo Guimaraes */ public class ElementCopyTest { private TestObject testObject; private Function<Properties, String> property(String name) { return props -> props.getProperty(name); } @Before public void init() { testObject = new TestObject("Marcelo", "Guimaraes"); testObject.age = 23; testObject.setHeight(1.9); testObject.setWeight(80.2); testObject.setNickName(null); } private Function<TestObject, ?> age() { return (obj) -> obj.getAge(); } private Function<TestObject, ?> nickName() { return (obj) -> obj.getNickName(); } private Function<TestObject, ?> name() { return (obj) -> obj.getName(); } private Function<TestObject, ?> lastName() { return (obj) -> obj.getLastName(); } private Function<TestObject, ?> weight() { return (obj) -> obj.getWeight(); } private Function<TestObject, ?> height() { return (obj) -> obj.getHeight(); } private Consumer<TestObject> ageIsSetTo(int value) { return (obj) -> obj.setAge(value); } private Consumer<TestObject> weightIsSetTo(double value) { return (obj) -> obj.setWeight(value); } private Consumer<TestObject> nickNameIsSetTo(String nick) { return (obj) -> obj.setNickName(nick); } private Consumer<TestObject> elementsAreCopiedFrom(TestObject o) { return (obj) -> copy().from(o).notNull().to(obj); } private Consumer<TestObject> elementsButAgeAreCopiedFrom(TestObject o) { return (obj) -> copy().from(o).notNull() .filter(copy -> !copy.dest().name().equals("age")) .to(obj); } @Test public void testCopyToSame() { Spec.given(new TestObject("Marcelo", "Guimaraes")) .when(weightIsSetTo(30.4) .andThen(nickNameIsSetTo("Nick")) .andThen(elementsAreCopiedFrom(testObject))) .expect(age(), to().be(testObject.getAge())) .expect(age(), to().be(23)) .expect(height(), to().be(testObject.getHeight())) .expect(height(), to().be(1.9)) .expect(weight(), to().be(testObject.getWeight())) .expect(weight(), to().be(80.2)) .expect(nickName(), to().not().be(testObject.getNickName())) .expect(nickName(), to().be("Nick")); } @Test public void testFilterCopy() { Spec.given(new TestObject("Marcelo", "Guimaraes")) .when(weightIsSetTo(30.4) .andThen(nickNameIsSetTo("Nick")) .andThen(ageIsSetTo(25)) .andThen(elementsButAgeAreCopiedFrom(testObject))) .expect(age(), to().not().be(testObject.getAge())) .expect(age(), to().be(25)) .expect(height(), to().be(testObject.getHeight())) .expect(height(), to().be(1.9)) .expect(weight(), to().be(testObject.getWeight())) .expect(weight(), to().be(80.2)) .expect(nickName(), to().not().be(testObject.getNickName())) .expect(nickName(), to().be("Nick")); } private static class ToStringTransformer implements Function<ElementCopy, String> { public String apply(ElementCopy object) { return String.valueOf((Object) object.value()); } } @Test public void testCopyToDifferentTypes() { Consumer<OtherTestObject> nickNameIsChanged = obj -> obj.setNickName("Nick"); Consumer<OtherTestObject> weightIsChanged = obj -> obj.setWeight(30.4); Consumer<OtherTestObject> elementsAreCopiedFromTestObject = obj -> copy().from(testObject).to(obj); Function<OtherTestObject, Object> weight = OtherTestObject::getWeight; Function<OtherTestObject, Object> nickName = OtherTestObject::getNickName; Spec.given(new OtherTestObject()) .when(nickNameIsChanged .andThen(weightIsChanged) .andThen(elementsAreCopiedFromTestObject)) .expect(weight, to().be(testObject.getWeight())) .expect(nickName, to().be(testObject.getNickName())); Spec.given(new Properties()) .when(props -> copy() .from(testObject) .notNull() .map(new ToStringTransformer()) .to(props)) .expect(property("age"), 
to().be("23")) .expect(property("nickName"), to().beNull()) .expect(property("name"), to().be("Marcelo")) .expect(property("lastName"), to().be("Guimaraes")) .expect(property("height"), to().be("1.9")) .expect(property("weight"), to().be("80.2")); } @Test public void testCopyWithSelector() { Spec.given(new TestObject("John", "Smith")) .when(o -> copy(elements().filter(e -> false)).from(testObject).to(o)) .expect(name(), to().be("John")) .expect(lastName(), to().be("Smith")) .expect(nickName(), to().beNull()) .expect(age(), to().be(0)) .expect(height(), to().be(0.0)) .expect(weight(), to().be(0.0)); } @Test public void testFilter() { copy(elements().filter(ofName("name"))) .from(testObject) .filter(copy -> assertFilterForDifferentType(copy)) .to(new Properties()); copy(elements().filter(ofName("name"))) .from(testObject) .filter(copy -> assertFilterForSameType(copy)) .to(new TestObject(null, null)); } private void assertFilter(ElementCopy copy) { assertEquals("name", copy.src().name()); assertEquals("name", copy.dest().name()); assertEquals("Marcelo", copy.src().getValue()); assertEquals(String.class, copy.src().type()); assertEquals(String.class, copy.dest().type()); } private boolean assertFilterForDifferentType(ElementCopy copy) { assertFilter(copy); assertNotEquals(copy.src().declaringClass(), copy.dest().declaringClass()); return true; } private boolean assertFilterForSameType(ElementCopy copy) { assertFilter(copy); assertEquals(copy.src().declaringClass(), copy.dest().declaringClass()); return true; } }
/** * Copyright © 2019 IBM Corporation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "json_parser.hpp" #include "anyof.hpp" #include "fallback.hpp" #include "gpio.hpp" #include "json_config.hpp" #include "sdbusplus.hpp" #include "tach.hpp" #include <nlohmann/json.hpp> #include <phosphor-logging/log.hpp> #include <sdbusplus/bus.hpp> #include <xyz/openbmc_project/Logging/Create/server.hpp> #include <xyz/openbmc_project/Logging/Entry/server.hpp> #include <filesystem> #include <fstream> #include <string> namespace phosphor { namespace fan { namespace presence { using json = nlohmann::json; namespace fs = std::filesystem; using namespace phosphor::logging; policies JsonConfig::_policies; const std::map<std::string, methodHandler> JsonConfig::_methods = { {"tach", method::getTach}, {"gpio", method::getGpio}}; const std::map<std::string, rpolicyHandler> JsonConfig::_rpolicies = { {"anyof", rpolicy::getAnyof}, {"fallback", rpolicy::getFallback}}; const auto loggingPath = "/xyz/openbmc_project/logging"; const auto loggingCreateIface = "xyz.openbmc_project.Logging.Create"; JsonConfig::JsonConfig(sdbusplus::bus::bus& bus) : _bus(bus) {} void JsonConfig::start() { using config = fan::JsonConfig; process(config::load(config::getConfFile(_bus, confAppName, confFileName))); for (auto& p : _policies) { p->monitor(); } } const policies& JsonConfig::get() { return _policies; } void JsonConfig::sighupHandler(sdeventplus::source::Signal& sigSrc, const struct signalfd_siginfo* sigInfo) { try { using config = fan::JsonConfig; _reporter.reset(); // Load and process the json configuration process( config::load(config::getConfFile(_bus, confAppName, confFileName))); for (auto& p : _policies) { p->monitor(); } log<level::INFO>("Configuration loaded successfully"); } catch (const std::runtime_error& re) { log<level::ERR>("Error loading config, no config changes made", entry("LOAD_ERROR=%s", re.what())); } } void JsonConfig::process(const json& jsonConf) { policies policies; std::vector<fanPolicy> fans; // Set the expected number of fan entries // to be size of the list of fan json config entries // (Must be done to eliminate vector reallocation of fan references) fans.reserve(jsonConf.size()); for (auto& member : jsonConf) { if (!member.contains("name") || !member.contains("path") || !member.contains("methods") || !member.contains("rpolicy")) { log<level::ERR>("Missing required fan presence properties", entry("REQUIRED_PROPERTIES=%s", "{name, path, methods, rpolicy}")); throw std::runtime_error( "Missing required fan presence properties"); } // Loop thru the configured methods of presence detection std::vector<std::unique_ptr<PresenceSensor>> sensors; for (auto& method : member["methods"].items()) { if (!method.value().contains("type")) { log<level::ERR>( "Missing required fan presence method type", entry("FAN_NAME=%s", member["name"].get<std::string>().c_str())); throw std::runtime_error( "Missing required fan presence method type"); } // The method type of fan presence detection // (Must have a supported function 
within the method namespace) auto type = method.value()["type"].get<std::string>(); std::transform(type.begin(), type.end(), type.begin(), tolower); auto func = _methods.find(type); if (func != _methods.end()) { // Call function for method type auto sensor = func->second(fans.size(), method.value()); if (sensor) { sensors.emplace_back(std::move(sensor)); } } else { log<level::ERR>( "Invalid fan presence method type", entry("FAN_NAME=%s", member["name"].get<std::string>().c_str()), entry("METHOD_TYPE=%s", type.c_str())); throw std::runtime_error("Invalid fan presence method type"); } } // Get the amount of time a fan must be not present before // creating an error. std::optional<size_t> timeUntilError; if (member.contains("fan_missing_error_time")) { timeUntilError = member["fan_missing_error_time"].get<size_t>(); } auto fan = std::make_tuple(member["name"], member["path"], timeUntilError); // Create a fan object fans.emplace_back(std::make_tuple(fan, std::move(sensors))); // Add fan presence policy auto policy = getPolicy(member["rpolicy"], fans.back()); if (policy) { policies.emplace_back(std::move(policy)); } } // Success, refresh fans and policies lists _fans.clear(); _fans.swap(fans); _policies.clear(); _policies.swap(policies); // Create the error reporter class if necessary if (std::any_of(_fans.begin(), _fans.end(), [](const auto& fan) { return std::get<std::optional<size_t>>(std::get<Fan>(fan)) != std::nullopt; })) { _reporter = std::make_unique<ErrorReporter>(_bus, _fans); } } std::unique_ptr<RedundancyPolicy> JsonConfig::getPolicy(const json& rpolicy, const fanPolicy& fpolicy) { if (!rpolicy.contains("type")) { log<level::ERR>( "Missing required fan presence policy type", entry("FAN_NAME=%s", std::get<fanPolicyFanPos>(std::get<Fan>(fpolicy)).c_str()), entry("REQUIRED_PROPERTIES=%s", "{type}")); throw std::runtime_error("Missing required fan presence policy type"); } // The redundancy policy type for fan presence detection // (Must have a supported function within the rpolicy namespace) auto type = rpolicy["type"].get<std::string>(); std::transform(type.begin(), type.end(), type.begin(), tolower); auto func = _rpolicies.find(type); if (func != _rpolicies.end()) { // Call function for redundancy policy type and return the policy return func->second(fpolicy); } else { log<level::ERR>( "Invalid fan presence policy type", entry("FAN_NAME=%s", std::get<fanPolicyFanPos>(std::get<Fan>(fpolicy)).c_str()), entry("RPOLICY_TYPE=%s", type.c_str())); throw std::runtime_error("Invalid fan presence methods policy type"); } } /** * Methods of fan presence detection function definitions */ namespace method { // Get a constructed presence sensor for fan presence detection by tach std::unique_ptr<PresenceSensor> getTach(size_t fanIndex, const json& method) { if (!method.contains("sensors") || method["sensors"].size() == 0) { log<level::ERR>("Missing required tach method properties", entry("FAN_ENTRY=%d", fanIndex), entry("REQUIRED_PROPERTIES=%s", "{sensors}")); throw std::runtime_error("Missing required tach method properties"); } std::vector<std::string> sensors; for (auto& sensor : method["sensors"]) { sensors.emplace_back(sensor.get<std::string>()); } return std::make_unique<PolicyAccess<Tach, JsonConfig>>(fanIndex, std::move(sensors)); } // Get a constructed presence sensor for fan presence detection by gpio std::unique_ptr<PresenceSensor> getGpio(size_t fanIndex, const json& method) { if (!method.contains("physpath") || !method.contains("devpath") || !method.contains("key")) { log<level::ERR>( 
"Missing required gpio method properties", entry("FAN_ENTRY=%d", fanIndex), entry("REQUIRED_PROPERTIES=%s", "{physpath, devpath, key}")); throw std::runtime_error("Missing required gpio method properties"); } auto physpath = method["physpath"].get<std::string>(); auto devpath = method["devpath"].get<std::string>(); auto key = method["key"].get<unsigned int>(); try { return std::make_unique<PolicyAccess<Gpio, JsonConfig>>( fanIndex, physpath, devpath, key); } catch (const sdbusplus::exception_t& e) { namespace sdlogging = sdbusplus::xyz::openbmc_project::Logging::server; log<level::ERR>( fmt::format( "Error creating Gpio device bridge, hardware not detected: {}", e.what()) .c_str()); auto severity = sdlogging::convertForMessage(sdlogging::Entry::Level::Error); std::map<std::string, std::string> additionalData{ {"PHYSPATH", physpath}, {"DEVPATH", devpath}, {"FANINDEX", std::to_string(fanIndex)}}; try { util::SDBusPlus::lookupAndCallMethod( loggingPath, loggingCreateIface, "Create", "xyz.openbmc_project.Fan.Presence.Error.GPIODeviceUnavailable", severity, additionalData); } catch (const util::DBusError& e) { log<level::ERR>(fmt::format("Call to create an error log for " "presence-sensor failure failed: {}", e.what()) .c_str()); } return std::make_unique<PolicyAccess<NullGpio, JsonConfig>>(); } } } // namespace method /** * Redundancy policies for fan presence detection function definitions */ namespace rpolicy { // Get an `Anyof` redundancy policy for the fan std::unique_ptr<RedundancyPolicy> getAnyof(const fanPolicy& fan) { std::vector<std::reference_wrapper<PresenceSensor>> pSensors; for (auto& fanSensor : std::get<fanPolicySensorListPos>(fan)) { pSensors.emplace_back(*fanSensor); } return std::make_unique<AnyOf>(std::get<fanPolicyFanPos>(fan), pSensors); } // Get a `Fallback` redundancy policy for the fan std::unique_ptr<RedundancyPolicy> getFallback(const fanPolicy& fan) { std::vector<std::reference_wrapper<PresenceSensor>> pSensors; for (auto& fanSensor : std::get<fanPolicySensorListPos>(fan)) { // Place in the order given to fallback correctly pSensors.emplace_back(*fanSensor); } return std::make_unique<Fallback>(std::get<fanPolicyFanPos>(fan), pSensors); } } // namespace rpolicy } // namespace presence } // namespace fan } // namespace phosphor
# -*- coding: utf-8 -*- # Copyright 2015 <NAME>. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). You # may not use this file except in compliance with the License. A copy of # the License is located at # # http://www.apache.org/licenses/LICENSE-2.0 # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF # ANY KIND, either express or implied. See the License for the specific # language governing permissions and limitations under the License. from __future__ import unicode_literals from __future__ import print_function import mock from compat import unittest from gitsome.github import GitHub from tests.mock_feed_parser import MockFeedParser from tests.mock_github_api import MockGitHubApi from tests.mock_pretty_date_time import pretty_date_time from tests.data.email import formatted_emails from tests.data.emoji import formatted_emojis from tests.data.events import formatted_events from tests.data.user import formatted_org, formatted_user, formatted_users from tests.data.gitignores import formatted_gitignores, formatted_gitignores_tip from tests.data.issue import formatted_issues, formatted_pull_requests from tests.data.license import formatted_licenses, formatted_licenses_tip from tests.data.thread import formatted_threads from tests.data.trends import formatted_trends from tests.data.user_feed import formatted_user_feed class GitHubTest(unittest.TestCase): def setUp(self): self.github = GitHub() self.github.config.api = MockGitHubApi() self.github.formatter.pretty_dt = pretty_date_time self.github.trend_parser = MockFeedParser() def test_avatar_no_pil(self): avatar_text = self.github.avatar( 'https://avatars.githubusercontent.com/u/583231?v=3', False) assert avatar_text == 'PIL not found.\n' @mock.patch('gitsome.github.click.secho') def test_create_comment(self, mock_click_secho): self.github.create_comment('user1/repo1/1', 'text') mock_click_secho.assert_called_with( 'Created comment: text', fg=self.github.config.clr_message) @mock.patch('gitsome.github.click.secho') def test_create_comment_invalid_args(self, mock_click_secho): self.github.create_comment('invalid/repo1/1', 'text') mock_click_secho.assert_called_with( 'Error creating comment', fg=self.github.config.clr_error) self.github.create_comment('user1/repo1/foo', 'text') mock_click_secho.assert_called_with( 'Expected argument: user/repo/# and option -t "comment".', fg=self.github.config.clr_error) @mock.patch('gitsome.github.click.secho') def test_create_issue(self, mock_click_secho): self.github.create_issue('user1/repo1', 'title', 'desc') mock_click_secho.assert_called_with( 'Created issue: title\ndesc', fg=self.github.config.clr_message) @mock.patch('gitsome.github.click.secho') def test_create_issue_no_desc(self, mock_click_secho): self.github.create_issue('user1/repo1', 'title', issue_desc=None) mock_click_secho.assert_called_with( 'Created issue: title\n', fg=self.github.config.clr_message) @mock.patch('gitsome.github.click.secho') def test_create_issue_invalid_args(self, mock_click_secho): self.github.create_issue('invalid/repo1', 'title', 'desc') mock_click_secho.assert_called_with( 'Error creating issue.', fg=self.github.config.clr_error) self.github.create_issue('u', 'title', 'desc') mock_click_secho.assert_called_with( 'Expected argument: user/repo and option -t "title".', fg=self.github.config.clr_error) @mock.patch('gitsome.github.click.secho') def test_create_repo(self, mock_click_secho): 
        self.github.create_repo('name', 'desc', True)
        mock_click_secho.assert_called_with(
            'Created repo: name\ndesc',
            fg=self.github.config.clr_message)

    @mock.patch('gitsome.github.click.secho')
    def test_create_repo_no_desc(self, mock_click_secho):
        self.github.create_repo('name', repo_desc=None)
        mock_click_secho.assert_called_with(
            'Created repo: name\n',
            fg=self.github.config.clr_message)

    @mock.patch('gitsome.github.click.secho')
    def test_create_repo_invalid_args(self, mock_click_secho):
        self.github.create_repo('repo1', 'desc', True)
        mock_click_secho.assert_called_with(
            'Error creating repo: foobar',
            fg=self.github.config.clr_error)

    @mock.patch('gitsome.github.click.secho')
    def test_emails(self, mock_click_secho):
        self.github.emails()
        mock_click_secho.assert_called_with(formatted_emails)

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.config.Config.prompt_news_feed')
    def test_feed_config(self, mock_config_prompt_news_feed,
                         mock_click_secho):
        self.github.feed()
        mock_config_prompt_news_feed.assert_called_with()

    @mock.patch('gitsome.github.click.secho')
    def test_feed(self, mock_click_secho):
        self.github.config.user_feed = 'user_feed'
        self.github.feed()
        mock_click_secho.assert_called_with(formatted_user_feed)

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.config.Config')
    def test_feed_user(self, mock_config, mock_click_secho):
        self.github.feed('user1')
        mock_click_secho.assert_called_with(formatted_events)

    @mock.patch('gitsome.github.click.secho')
    def test_emojis(self, mock_click_secho):
        self.github.emojis()
        mock_click_secho.assert_called_with(formatted_emojis)

    @mock.patch('gitsome.github.click.secho')
    def test_followers(self, mock_click_secho):
        self.github.followers('foo')
        mock_click_secho.assert_called_with(formatted_users)

    @mock.patch('gitsome.github.click.secho')
    def test_following(self, mock_click_secho):
        self.github.following('foo')
        mock_click_secho.assert_called_with(formatted_users)

    @mock.patch('gitsome.github.click.secho')
    def test_gitignore_template(self, mock_click_secho):
        self.github.gitignore_template('valid_language')
        mock_click_secho.assert_called_with(
            'template',
            fg=self.github.config.clr_message)

    @mock.patch('gitsome.github.click.secho')
    def test_gitignore_template_invalid(self, mock_click_secho):
        self.github.gitignore_template('invalid_language')
        mock_click_secho.assert_called_with(
            ('Invalid case-sensitive template requested, run the '
             'following command to see available templates:\n'
             ' gh gitignore-templates'),
            fg=self.github.config.clr_error)

    @mock.patch('gitsome.github.click.secho')
    def test_gitignore_templates(self, mock_click_secho):
        self.github.gitignore_templates()
        mock_click_secho.assert_any_call(formatted_gitignores)
        mock_click_secho.assert_any_call(
            formatted_gitignores_tip,
            fg=self.github.config.clr_message)

    @mock.patch('gitsome.web_viewer.WebViewer.view_url')
    def test_issue(self, mock_view_url):
        self.github.issue('user1/repo1/1')
        mock_view_url.assert_called_with(
            'https://github.com/user1/repo1/issues/1')

    @mock.patch('gitsome.github.click.secho')
    def test_issue_invalid_args(self, mock_click_secho):
        self.github.issue('user1/repo1/foo')
        mock_click_secho.assert_called_with(
            'Expected argument: user/repo/#.',
            fg=self.github.config.clr_error)

    @mock.patch('gitsome.github.click.secho')
    def test_issues_setup(self, mock_click_secho):
        self.github.issues_setup()
        mock_click_secho.assert_called_with(formatted_issues)

    @mock.patch('gitsome.github.click.secho')
    def test_license(self, mock_click_secho):
        self.github.license('valid_license')
        mock_click_secho.assert_called_with(
            'template',
            fg=self.github.config.clr_message)

    @mock.patch('gitsome.github.click.secho')
    def test_license_invalid(self, mock_click_secho):
        self.github.license('invalid_license')
        mock_click_secho.assert_called_with(
            (' Invalid case-sensitive license requested, run the '
             'following command to see available licenses:\n'
             ' gh licenses'),
            fg=self.github.config.clr_error)

    @mock.patch('gitsome.github.click.secho')
    def test_licenses(self, mock_click_secho):
        self.github.licenses()
        mock_click_secho.assert_any_call(formatted_licenses)
        mock_click_secho.assert_any_call(
            formatted_licenses_tip,
            fg=self.github.config.clr_message)

    @mock.patch('gitsome.github.click.secho')
    def test_notifications(self, mock_click_secho):
        self.github.notifications()
        mock_click_secho.assert_called_with(formatted_threads)

    @mock.patch('gitsome.github.click.secho')
    def test_octocat(self, mock_click_secho):
        self.github.octocat('foo\\nbar')
        mock_click_secho.assert_called_with(
            'foo\nbar',
            fg=self.github.config.clr_message)

    @mock.patch('gitsome.github.click.secho')
    def test_pull_requests(self, mock_click_secho):
        self.github.pull_requests()
        mock_click_secho.assert_called_with(formatted_pull_requests)

    @mock.patch('gitsome.github.click.secho')
    def test_rate_limit(self, mock_click_secho):
        self.github.rate_limit()
        mock_click_secho.assert_called_with(
            'Rate limit: 5000',
            fg=self.github.config.clr_message)

    @mock.patch('gitsome.web_viewer.WebViewer.view_url')
    def test_repository(self, mock_view_url):
        self.github.repository('user1/repo1')
        mock_view_url.assert_called_with(
            'https://github.com/user1/repo1')

    @mock.patch('gitsome.github.click.secho')
    def test_repository_invalid(self, mock_click_secho):
        self.github.repository('user1/repo1/1')
        mock_click_secho.assert_called_with(
            'Expected argument: user/repo.',
            fg=self.github.config.clr_error)

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.github.GitHub.issues')
    def test_search_issues(self, mock_github_issues, mock_click_secho):
        self.github.search_issues('foo')
        mock_github_issues.assert_called_with(
            ['foobar', 'foobar', 'foobar'], 1000, False, sort=False)

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.github.GitHub.repositories')
    def test_search_repos(self, mock_github_repositories, mock_click_secho):
        self.github.search_repositories('foo', 'stars')
        mock_github_repositories.assert_called_with(
            ['foobar'], 1000, False, sort=False)

    @mock.patch('gitsome.github.click.secho')
    def test_trending(self, mock_click_secho):
        self.github.trending('Python', False, False, False)
        mock_click_secho.assert_called_with(formatted_trends)

    @mock.patch('gitsome.github.click.secho')
    def test_user(self, mock_click_secho):
        self.github.user('user1')
        mock_click_secho.assert_called_with(formatted_user)
        self.github.user('user2')
        mock_click_secho.assert_called_with(formatted_org)

    @mock.patch('gitsome.github.click.secho')
    def test_user_invalid(self, mock_click_secho):
        self.github.user('invalid_user')
        mock_click_secho.assert_called_with(
            'Invalid user.',
            fg=self.github.config.clr_error)

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.github.webbrowser.open')
    def test_user_browser(self, mock_webbrowser_open, mock_click_secho):
        self.github.user('invalid_user', browser=True)
        mock_webbrowser_open.assert_called_with(
            'https://github.com/invalid_user')

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.github.webbrowser.open')
    def test_view_browser(self, mock_webbrowser_open, mock_click_secho):
        self.github.config.load_urls = lambda x: ['user1/foo']
        self.github.view(1, view_in_browser=True)
        mock_webbrowser_open.assert_called_with(
            'https://github.com/user1/foo')

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.github.GitHub.issue')
    def test_view_issue(self, mock_github_issue, mock_click_secho):
        self.github.config.load_urls = lambda x: ['user1/foo/issues/1']
        self.github.view(0)
        mock_github_issue.assert_called_with('user1/foo/1')

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.github.GitHub.repository')
    def test_view_repo(self, mock_github_repository, mock_click_secho):
        self.github.config.load_urls = lambda x: ['user1/foo']
        self.github.view(0)
        mock_github_repository.assert_called_with('user1/foo')

    @mock.patch('gitsome.github.click.secho')
    @mock.patch('gitsome.web_viewer.WebViewer.view_url')
    def test_view_user(self, mock_view_url, mock_click_secho):
        self.github.config.load_urls = lambda x: ['user1']
        self.github.view(0)
        mock_view_url.assert_called_with('https://github.com/user1')

    def test_base_url(self):
        self.github.config.enterprise_url = 'https://github.intra.example.com'
        assert self.github.base_url == 'https://github.intra.example.com'
        self.github.config.enterprise_url = None
        assert self.github.base_url == self.github._base_url

    def test_add_base_url(self):
        expected = self.github.base_url + 'foo.html'
        assert self.github.add_base_url('foo.html') == expected
        assert self.github.add_base_url(expected) == expected
#include <stdio.h>

/* Read four integers, find their maximum, and print the non-zero
   amount each of the other values must gain to reach that maximum. */
int main() {
    long long int q, w, e, r, k;
    scanf("%lld %lld %lld %lld", &q, &w, &e, &r);
    k = q;
    if (k < w) k = w;
    if (k < e) k = e;
    if (k < r) k = r;
    if (k - q != 0) printf("%lld ", k - q);
    if (k - w != 0) printf("%lld ", k - w);
    if (k - e != 0) printf("%lld ", k - e);
    if (k - r != 0) printf("%lld ", k - r);
    return 0;
}
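/* Worked example (hypothetical input): for "3 7 5 7" the maximum is 7,
   so the program prints the non-zero gaps "4 2 " (7-3 and 7-5); the two
   values already equal to the maximum produce no output. */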
/*
743. Network Delay Time

Time                    Space           Difficulty
O(|E| + |V|log|V|)      O(|E| + |V|)    Medium

There are N network nodes, labelled 1 to N.

Given times, a list of travel times as directed edges times[i] = (u, v, w),
where u is the source node, v is the target node, and w is the time it takes
for a signal to travel from source to target.

Now, we send a signal from a certain node K. How long will it take for all
nodes to receive the signal? If it is impossible, return -1.

Note:
N will be in the range [1, 100].
K will be in the range [1, N].
The length of times will be in the range [1, 6000].
All edges times[i] = (u, v, w) will have 1 <= u, v <= N and 1 <= w <= 100.
*/

// Dijkstra with a min-heap (custom comparator).
class Solution {
public:
    int networkDelayTime(vector<vector<int>>& times, int N, int K) {
        vector<vector<pair<int,int>>> graph(N+1, vector<pair<int,int>>());
        for (auto t : times)
            graph[t[0]].push_back({t[1], t[2]});
        // initialize distances; key is node, value is distance from source
        unordered_map<int,int> dist;
        for (int i = 1; i <= N; i++)
            dist[i] = INT_MAX;
        // distance from source to itself
        dist[K] = 0;
        auto compareFunc = [](pair<int,int> a, pair<int,int> b){ return a.first > b.first; };
        priority_queue<pair<int,int>, vector<pair<int,int>>, decltype(compareFunc)> pq(compareFunc);
        pq.push({0, K});
        while (pq.size()) {
            int start = pq.top().first;   // distance of current point from source
            int point = pq.top().second;  // current point index
            pq.pop();
            // p.first is a neighbour reachable from point; p.second is the edge length
            for (auto p : graph[point]) {
                if (dist[p.first] > start + p.second) {
                    dist[p.first] = start + p.second;
                    pq.push({dist[p.first], p.first});
                }
            }
        }
        int maxdist = 0;
        for (auto i : dist)
            maxdist = max(i.second, maxdist);
        return maxdist == INT_MAX ? -1 : maxdist;
    }
};

// Same idea with the default (max-heap) priority_queue; the relaxation check
// keeps the result correct at the cost of extra pops. Renamed so the file
// compiles alongside the version above.
class Solution2 {
public:
    int networkDelayTime(vector<vector<int>>& times, int N, int K) {
        vector<vector<pair<int,int>>> graph(N+1, vector<pair<int,int>>());
        for (auto t : times)
            graph[t[0]].push_back({t[1], t[2]});
        unordered_map<int,int> dist;
        for (int i = 1; i <= N; i++)
            dist[i] = INT_MAX;
        dist[K] = 0;
        priority_queue<pair<int,int>> pq;
        pq.push({0, K});
        while (pq.size()) {
            int start = pq.top().first;   // distance of current point from source
            int point = pq.top().second;  // current point index
            pq.pop();
            // p.first is a neighbour reachable from point; p.second is the edge length
            for (auto p : graph[point]) {
                if (dist[p.first] > start + p.second) {
                    dist[p.first] = start + p.second;
                    pq.push({dist[p.first], p.first});
                }
            }
        }
        int maxdist = 0;
        for (auto i : dist)
            maxdist = max(i.second, maxdist);
        return maxdist == INT_MAX ? -1 : maxdist;
    }
};

// Selection-based Dijkstra without a heap: repeatedly pick the closest
// unvisited node by linear scan. O(|V|^2 + |E|).
class Solution3 {
public:
    int networkDelayTime(vector<vector<int>>& times, int N, int K) {
        vector<vector<pair<int,int>>> graph(N+1, vector<pair<int,int>>());
        for (auto t : times)
            graph[t[0]].push_back({t[1], t[2]});
        unordered_map<int,int> dist;
        for (int i = 1; i <= N; i++)
            dist[i] = INT_MAX;
        dist.erase(K);
        for (auto p : graph[K])
            dist[p.first] = p.second;
        int maxdist = 0;
        int point, start;
        while (dist.size()) {
            point = -1, start = INT_MAX;
            for (auto i : dist) {
                if (i.second < start) {
                    start = i.second;
                    point = i.first;
                }
            }
            if (point == -1)
                return -1;
            dist.erase(point);
            maxdist = max(maxdist, start);
            for (auto p : graph[point])
                if (dist.find(p.first) != dist.end() && dist[p.first] > start + p.second)
                    dist[p.first] = start + p.second;
        }
        return maxdist;
    }
};
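// A minimal usage sketch (assumes the same LeetCode-style environment as the
// classes above, i.e. <vector> etc. already available; not part of the
// original solutions). The sample graph is the one from the problem
// statement, so the expected result is 2.
int networkDelayTimeDemo() {
    vector<vector<int>> times = {{2, 1, 1}, {2, 3, 1}, {3, 4, 1}};
    return Solution().networkDelayTime(times, 4, 2);  // expected: 2
}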
import sys

import numpy as np
from scipy import spatial
from bintrees import FastAVLTree


def find_idx_match(simple_vertices, complex_vertices):
    '''
    Thanks to juhuntenburg. Functions taken from
    https://github.com/juhuntenburg/brainsurfacescripts

    Finds those points on the complex mesh that correspond best to the
    simple mesh while forcing a one-to-one mapping.
    '''
    # make array for writing in final voronoi seed indices
    voronoi_seed_idx = np.zeros((simple_vertices.shape[0],), dtype='int64') - 1
    missing = np.where(voronoi_seed_idx == -1)[0].shape[0]
    mapping_single = np.zeros_like(voronoi_seed_idx)

    neighbours = 0
    col = 0

    while missing != 0:
        neighbours += 100
        # find nearest neighbours
        inaccuracy, mapping = spatial.KDTree(
            complex_vertices).query(simple_vertices, k=neighbours)
        # go through columns of nearest neighbours until unique mapping is
        # achieved, if not before end of neighbours, extend number of
        # neighbours
        while col < neighbours:
            # find all missing voronoi seed indices
            missing_idx = np.where(voronoi_seed_idx == -1)[0]
            missing = missing_idx.shape[0]
            if missing == 0:
                break
            else:
                # for missing entries fill in next neighbour
                mapping_single[missing_idx] = np.copy(
                    mapping[missing_idx, col])
                # find unique values in mapping_single
                unique, double_idx = np.unique(
                    mapping_single, return_inverse=True)
                # empty voronoi seed index
                voronoi_seed_idx = np.zeros(
                    (simple_vertices.shape[0],), dtype='int64') - 1
                # fill voronoi seed idx with unique values
                for u in range(unique.shape[0]):
                    # find the indices of this value in mapping
                    entries = np.where(double_idx == u)[0]
                    # set the first entry to the value
                    voronoi_seed_idx[entries[0]] = unique[u]
                # go to next column
                col += 1

    return voronoi_seed_idx, inaccuracy


def competetive_fast_marching(vertices, graph, seeds):
    '''
    Label all vertices on the highres mesh with the closest seed vertex
    using a balanced binary search tree.
    '''
    # make a labelling container to be filled with the search tree
    # first column are the vertex indices of the complex mesh
    # second column are the labels from the simple mesh
    # (-1 for all but the corresponding points for now)
    labels = np.zeros((vertices.shape[0], 2), dtype='int64') - 1
    labels[:, 0] = range(vertices.shape[0])
    for i in range(seeds.shape[0]):
        labels[seeds[i]][1] = i
    # initiate AVLTree for binary search
    tree = FastAVLTree()
    # organisation of the tree will be
    # key: edge length; value: tuple of vertices (source, target)
    # add all neighbours of the voronoi seeds
    # (add_neighbours is defined alongside these functions in the
    # brainsurfacescripts source)
    for v in seeds:
        add_neighbours(v, 0, graph, labels, tree)
    # competitive fast marching starting from the voronoi seeds
    while tree.count > 0:
        # pop the item with minimum edge length
        min_item = tree.pop_min()
        length = min_item[0]
        source = min_item[1][0]
        target = min_item[1][1]
        # if the target has no label yet (but the source does!),
        # assign the label of the source
        if labels[target][1] == -1:
            if labels[source][1] == -1:
                sys.exit('Source has no label, something went wrong!')
            else:
                # assign label of source to target
                labels[target][1] = labels[source][1]
        # test if labelling is complete
        if any(labels[:, 1] == -1):
            # if not, add neighbours of target to tree
            add_neighbours(target, length, graph, labels, tree)
        else:
            break

    return labels


def sample_simple(highres_data, labels):
    '''
    Computes the mean of data from the highres mesh that is assigned to the
    same label (typically the simple mesh vertices).
    '''
    # create new empty lowres data array
    lowres_data = np.empty((int(labels.max() + 1), highres_data.shape[1]))
    # for each label, average the data of all highres vertices assigned to it
    # (note: the range must run to labels.max() + 1, otherwise the last
    # label's row would never be filled)
    for l in range(int(labels.max() + 1)):
        patch = np.where(labels == l)[0]
        patch_data = highres_data[patch]
        patch_mean = np.mean(patch_data, axis=0)
        lowres_data[l] = patch_mean
    return lowres_data
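# A minimal usage sketch for sample_simple (hypothetical toy data, not part
# of the original module): three highres vertices carrying one data column,
# assigned to two labels; each lowres row is the mean over its patch.
#
#   highres_data = np.array([[1.0], [3.0], [10.0]])
#   labels = np.array([0, 0, 1])
#   sample_simple(highres_data, labels)  # -> array([[ 2.], [10.]])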
from typing import Dict, Iterable, List


def turn_end_all(skills: Iterable[Skill], unit: ActiveUnit,
                 arena: ActiveArena) -> List[Dict]:
    # apply each skill's turn-end effect and keep only the results that exist
    return list(filter(_exists,
                       (turn_end[s.turn_end_effect](arena, unit)
                        for s in skills)))
{-# LANGUAGE OverloadedStrings #-}
module Client where

import qualified Data.ByteString.Char8 as S8
import qualified Data.ByteString.Lazy.Char8 as L8
import Network.HTTP.Simple
import Solver
import Entities
import System.Console.ANSI

attack :: String -> String -> IO ()
attack gameName playerName = do
    putStrLn "~~~~~~~~~~~~~~~~~ GAME begin: ATTACK ~~~~~~~~~~~~~~~~~"
    run "attack" gameName playerName
    putStrLn "~~~~~~~~~~~~~~~~~ GAME over : ATTACK ~~~~~~~~~~~~~~~~~"

defend :: String -> String -> IO ()
defend gameName playerName = do
    putStrLn "~~~~~~~~~~~~~~~~~ GAME begin: DEFEND ~~~~~~~~~~~~~~~~~"
    run "defend" gameName playerName
    putStrLn "~~~~~~~~~~~~~~~~~ GAME over : DEFEND ~~~~~~~~~~~~~~~~~"

run :: String -> String -> String -> IO ()
run mode gameName playerName = do
    origMsg <- case mode of
        "attack" -> return "{}"
        "defend" -> makeGetRequest gameName playerName
    case Solver.solve origMsg of
        Left "ERROR: Game already over." -> do
            setSGR [SetColor Foreground Vivid Green]
            putStrLn "Congratulations, Kazimieras robot won!"
            setSGR [Reset]
        Left errorMsg -> do
            setSGR [SetColor Foreground Vivid Red]
            putStrLn errorMsg
            setSGR [Reset]
        Right (Just (x, y, sym)) -> do
            -- bind the new message with let so x, y and sym stay in scope
            let newMsg = concatToMoveString origMsg (x, y, sym) playerName
            makePostRequest newMsg gameName playerName
            run "defend" gameName playerName

concatToMoveString :: [Char] -> (Int, Int, Char) -> String -> [Char]
concatToMoveString oldMsg (x, y, sym) playerName = newMsg
    where newMsg = "{\"c\": {\"0\": " ++ show x ++ ", \"1\": " ++ show y
                ++ "}, \"v\": \"" ++ [sym] ++ "\", \"id\": \"" ++ playerName ++ "\""
                ++ case oldMsg of
                    "{}" -> "}"
                    _    -> ", \"prev\": " ++ oldMsg ++ "}"

makePostRequest :: String -> String -> String -> IO ()
makePostRequest msg gameName playerName = do
    putStrLn "------------- POST begin -------------"
    let request = setRequestMethod "POST"
                $ setRequestPath (S8.pack ("/game/" ++ gameName ++ "/player/" ++ playerName))
                $ setRequestHost "tictactoe.haskell.lt"
                $ setRequestHeader "Content-Type" ["application/json+map"]
                $ setRequestBodyLBS (L8.pack msg)
                $ defaultRequest
    response <- httpLBS request
    case getResponseStatusCode response of
        200 -> do
            setSGR [SetColor Foreground Vivid Green]
            putStrLn ("POST: status: " ++ show (getResponseStatusCode response))
            setSGR [Reset]
        _ -> do
            setSGR [SetColor Foreground Vivid Red]
            putStrLn ("POST: status: " ++ show (getResponseStatusCode response))
            setSGR [Reset]
    print $ getResponseHeader "Content-Type" response
    L8.putStrLn (getResponseBody response)
    putStrLn ("Message: " ++ msg)
    putStrLn "------------- POST end -------------\n"

makeGetRequest :: String -> String -> IO String
makeGetRequest gameName playerName = do
    putStrLn "------------- GET begin -------------"
    let request = setRequestPath (S8.pack ("/game/" ++ gameName ++ "/player/" ++ playerName))
                $ setRequestHost "tictactoe.haskell.lt"
                $ setRequestHeader "Accept" ["application/json+map"]
                $ defaultRequest
    response <- httpLBS request
    case getResponseStatusCode response of
        200 -> do
            setSGR [SetColor Foreground Vivid Green]
            putStrLn ("GET: status: " ++ show (getResponseStatusCode response))
            setSGR [Reset]
        _ -> do
            setSGR [SetColor Foreground Vivid Red]
            putStrLn ("GET: status: " ++ show (getResponseStatusCode response))
            setSGR [Reset]
    print $ getResponseHeader "Content-Type" response
    let line = L8.unpack (getResponseBody response)
    putStrLn ("Message: " ++ line)
    putStrLn "------------- GET end -------------\n"
    return line
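-- A worked example of the move encoding above (hypothetical values):
--   concatToMoveString "{}" (1, 2, 'x') "kazbot"
-- yields: {"c": {"0": 1, "1": 2}, "v": "x", "id": "kazbot"}
-- A later move passes the previous message as oldMsg, which is then nested
-- under the "prev" key instead of closing the JSON object immediately.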
CLIMATE POLICIES: CHALLENGES, OBSTACLES AND TOOLS

A four-pronged approach to climate policy is presented, consisting of carbon pricing, subsidies for renewable energies, transformative green investments, and climate finance that engenders flywheel effects. The paper then discusses the societal and political challenges and obstacles such a climate policy faces, and what can be done to overcome them. These range from stranded assets, the very long time scales needed to adapt to and deal with global warming, intergenerational conflict, international free-rider problems, carbon leakage, green paradoxes, policy failure and capture, adverse income-distributional effects and spatial scarcity, to the problem of climate deniers and sceptics. The paper also discusses the tools needed to analyse both ideal and workable climate policies, and the need to collaborate with complexity scholars, political scientists, sociologists and psychologists.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author: mcxiaoke
# @Date: 2015-08-09 16:04:45

# Python 3 equivalents of the dialog imports below:
# from tkinter.filedialog import askopenfilename
# from tkinter.colorchooser import askcolor
# from tkinter.messagebox import askquestion, showerror
# from tkinter.simpledialog import askfloat
from tkFileDialog import askopenfilename
from tkColorChooser import askcolor
from tkMessageBox import askquestion, showerror
from tkSimpleDialog import askfloat

# https://docs.python.org/2/library/tkinter.html
demos = {
    'Open': askopenfilename,
    'Color': askcolor,
    'Query': lambda: askquestion('Warning', 'You typed "rm *"\nConfirm?'),
    'Error': lambda: showerror('Error!', "He's dead, Jim"),
    'Input': lambda: askfloat('Entry', 'Enter credit card number'),
}
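# A minimal sketch (not part of the original file) showing one way to drive
# the `demos` mapping from a simple Tk window; everything below is an
# assumption, not the author's code.
if __name__ == '__main__':
    import Tkinter as tk  # Python 2 name, matching the imports above
    root = tk.Tk()
    root.title('dialog demos')
    for (label, action) in sorted(demos.items()):
        # each button fires the corresponding standard dialog
        tk.Button(root, text=label, command=action).pack(fill='x')
    root.mainloop()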
import { Column, Entity, OneToMany } from 'typeorm';
import { CascadeDeleteEntity } from '../base.entity';
import { SubsectionEntity } from '../sub-sections/sub-section.entity';

@Entity('section')
export class SectionEntity extends CascadeDeleteEntity {
  @Column()
  public name: string;

  @Column({ nullable: true })
  public label: string;

  @Column({ nullable: true })
  public orderPriority: number;

  @OneToMany(() => SubsectionEntity, (subQuestion) => subQuestion.section)
  public subsections: SubsectionEntity[];
}
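// A minimal usage sketch (assumption: an already-initialized TypeORM
// DataSource is available; nothing below is part of the original file).
import { DataSource } from 'typeorm';

export async function loadSectionsWithSubsections(dataSource: DataSource) {
  return dataSource.getRepository(SectionEntity).find({
    relations: { subsections: true }, // pull in the OneToMany side
    order: { orderPriority: 'ASC' },
  });
}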
package org.continuity.api.entities.links;

import com.fasterxml.jackson.annotation.JsonBackReference;
import com.fasterxml.jackson.annotation.JsonIgnore;

public abstract class AbstractLinks<T extends AbstractLinks<T>> {

	@JsonBackReference
	private final LinkExchangeModel parent;

	public AbstractLinks(LinkExchangeModel parent) {
		this.parent = parent;
	}

	public LinkExchangeModel parent() {
		return parent;
	}

	@JsonIgnore
	public abstract boolean isEmpty();

	public abstract void merge(T other) throws IllegalArgumentException, IllegalAccessException;

	public static class ValueFilter {

		// used with Jackson value filtering: a links object "equals" the
		// filter exactly when it is empty and should therefore be omitted
		@Override
		public boolean equals(Object obj) {
			if ((obj == null) || !(obj instanceof AbstractLinks)) {
				return false;
			}

			AbstractLinks<?> links = (AbstractLinks<?>) obj;

			return links.isEmpty();
		}

	}

}
// NewMessageValue creates a new MessageValue carrying the given payload.
func NewMessageValue(protocol SubProtocol, msgID, originalID MsgID, timestamp int64, payload []byte) *MessageValue {
	msg := NewLiteMessageValue(protocol, msgID, originalID, timestamp)
	msg.SetPayload(payload)
	return msg
}
#include <bits/stdc++.h>
#ifdef LOCAL
FILE* FP = freopen("text.in", "r", stdin);
#endif
using namespace std;
/*
Exploits the fact that the scanned string is compared against the maintained
prefix itself: if the current character is smaller than the corresponding
prefix character, the prefix is extended to include it; if it is larger, we
stop; if they are equal, the comparison carries over seamlessly to the next
character. Very simple to implement.
*/
int n, k, p = 1;  // p: valid range [0, p) of the maintained prefix
char s[500005];
signed main() {
    scanf("%d%d%s", &n, &k, s);
    for (int i = 0; i < n; i++) {
        if (s[i] > s[i % p]) break;
        if (s[i] < s[i % p]) p = i + 1;
    }
    for (int i = 0; i < k; i++) {
        putchar(s[i % p]);
    }
}
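/*
Worked trace (hypothetical input): n=4, k=5, s="abab".
  i=0: s[0]=='a' equals s[0%1]   -> p stays 1
  i=1: s[1]=='b' >  s[1%1]=='a'  -> break
The maintained prefix is "a", so the output repeats it k times: "aaaaa".
*/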
Most comprehensive stretching manual

I haven’t encountered any source on this subject as broad, accessible, and easily applied as Bob Anderson’s classic Stretching, a patient and friendly stand-in for my eighth-grade P.E. teacher. The 30th anniversary edition of this guidebook came out recently, with even more stretches and illustrations, and it’s easily the most comprehensive work on the subject. I love the activity-specific sections: cyclists, for instance, are shown stretches that not only address the muscle groups made tight and tense by our specific sport, but even include a bicycle used as a support. Activities from weightlifting to computer use get their own sections, too. Organizationally, Stretching shines. Tight neck? Rigid shoulders? Thumb through to your prescribed routine and get to work. With minimal flexibility but a willingness to make an effort, almost anyone can use this book to become more limber and healthier. -- Elon Schoenholz
#ifndef _INCLUDED_ASW_WEAPON_AMMO_SATCHEL_H
#define _INCLUDED_ASW_WEAPON_AMMO_SATCHEL_H
#pragma once

#ifdef CLIENT_DLL
#include "c_asw_weapon.h"
#define CASW_Weapon_Ammo_Satchel C_ASW_Weapon_Ammo_Satchel
#define CASW_Weapon C_ASW_Weapon
#define CASW_Marine C_ASW_Marine
#else
#include "asw_weapon.h"
#include "npc_combine.h"
#endif

#include "asw_shareddefs.h"

#define AMMO_SATCHEL_DEFAULT_DROP_COUNT 3

class CASW_Weapon_Ammo_Satchel : public CASW_Weapon
{
public:
	DECLARE_CLASS( CASW_Weapon_Ammo_Satchel, CASW_Weapon );
	DECLARE_NETWORKCLASS();
	DECLARE_PREDICTABLE();

	CASW_Weapon_Ammo_Satchel();
	virtual ~CASW_Weapon_Ammo_Satchel();
	void Precache();

	Activity GetPrimaryAttackActivity( void );

	virtual void PrimaryAttack();
	virtual bool OffhandActivate();
	void DeployAmmoDrop();
	virtual bool UsesClipsForAmmo1( void ) const { return false; }

#ifndef CLIENT_DLL
	DECLARE_DATADESC();

	int CapabilitiesGet( void ) { return 0; }
	virtual const char* GetPickupClass() { return "asw_pickup_ammo_satchel"; }
	void SetAmmoDrops( int nAmmoDrops ) { m_iClip1 = nAmmoDrops; }
#endif

	//int m_nAmmoDrops;
	float m_fLastAmmoDropTime;

	virtual bool IsOffensiveWeapon() { return false; }
	virtual Class_T Classify( void ) { return (Class_T)CLASS_ASW_AMMO_SATCHEL; }
};

#endif /* _INCLUDED_ASW_WEAPON_AMMO_SATCHEL_H */
mod actor_type;
mod instance;
mod item;
mod message_queue;
mod registered_actors;
mod status;

use std::{pin::Pin, sync::Arc};

pub use actor_type::ActorType;
use async_trait::async_trait;
use futures::Future;
pub use instance::ActorInstance;
pub use item::ActorItem;
pub use message_queue::MessageQueue;
pub use status::{ActorStatus, LockedActorStatus};
use tokio::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

use super::{context::Ctx, messages::BoxedMessage, prelude::LockedReceiver};
use crate::prelude::*;

pub type BoxedEventfulActor = Box<dyn EventfulActor + Send + Sync>;
pub type BoxedContinousActor = Box<dyn ContinousActor + Send + Sync>;
pub type BoxedResultFuture = Pin<Box<dyn Future<Output = Result<()>> + Send + Sync>>;
pub type ActorFactory = &'static (dyn Fn() -> Actor + Send + Sync);

#[async_trait]
pub trait EventfulActor {
    async fn start(&mut self, ctx: Ctx);
    async fn stop(&mut self);
    fn handle_message<'a>(
        &'a mut self,
        ctx: Ctx,
        msg: BoxedMessage,
    ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + Sync + 'a>>;
}

#[async_trait]
pub trait ContinousActor {
    async fn start(&mut self, ctx: Ctx);
    fn run(&mut self, ctx: Ctx, events_rx: LockedReceiver) -> BoxedResultFuture;
    async fn stop(&mut self);
}

pub enum Actor {
    Eventful(BoxedEventfulActor),
    Continous(BoxedContinousActor),
}

impl Actor {
    pub fn actor_type(&self) -> ActorType {
        match self {
            Self::Eventful(_) => ActorType::Eventful,
            Self::Continous(_) => ActorType::Continous,
        }
    }

    pub async fn start(&mut self, ctx: Ctx) {
        match self {
            Self::Eventful(actor) => actor.start(ctx).await,
            Self::Continous(actor) => actor.start(ctx).await,
        }
    }

    pub async fn stop(&mut self) {
        match self {
            Self::Eventful(actor) => actor.stop().await,
            Self::Continous(actor) => actor.stop().await,
        }
    }

    pub fn as_continous(&mut self) -> Option<&mut BoxedContinousActor> {
        match self {
            Self::Continous(a) => Some(a),
            Self::Eventful(_) => None,
        }
    }

    #[allow(dead_code)]
    pub fn as_eventful(&mut self) -> Option<&mut BoxedEventfulActor> {
        match self {
            Self::Eventful(a) => Some(a),
            Self::Continous(_) => None,
        }
    }
}

#[derive(Clone)]
pub struct LockedActor(Arc<RwLock<Actor>>);

impl LockedActor {
    pub fn new(actor: Actor) -> Self {
        Self(Arc::new(RwLock::new(actor)))
    }

    #[allow(dead_code)]
    pub async fn read(&self) -> RwLockReadGuard<'_, Actor> {
        self.0.read().await
    }

    pub async fn write(&self) -> RwLockWriteGuard<'_, Actor> {
        self.0.write().await
    }
}