def execute(self, operation, parameters=None):
    sql = _bind_parameters(operation, parameters) if parameters else operation
    self.job_id = self.run_query(sql)
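Since `_bind_parameters` is not shown above, here is a minimal, hypothetical sketch of what such a binding step could look like, plus a usage example. The placeholder style, escaping rules, and names are assumptions for illustration, not the actual driver's behaviour.

# Hypothetical sketch of the parameter-binding step assumed by execute();
# a real driver would do proper, dialect-aware escaping.
def _bind_parameters(operation, parameters):
    def render(value):
        if value is None:
            return "NULL"
        if isinstance(value, str):
            return "'" + value.replace("'", "''") + "'"
        return str(value)

    # pyformat-style placeholders such as %(user_id)s are resolved here
    return operation % {name: render(value) for name, value in parameters.items()}

sql = _bind_parameters(
    "SELECT * FROM events WHERE user_id = %(user_id)s AND day = %(day)s",
    {"user_id": 42, "day": "2021-05-01"},
)
print(sql)  # SELECT * FROM events WHERE user_id = 42 AND day = '2021-05-01'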
ITHACA, N.Y. — An estimated 0.5 to 1 million gallons of sewage spilled into the Cayuga Lake inlet over the last five weeks as a result of a malfunctioning underwater pipe that went unfixed for weeks. A crew of recreational boaters discovered the overflow at the inlet earlier this week, triggering renewed inspection efforts by city and state officials.

The problem was fixed within hours after it was reported. But officials believe the sewage had been streaming into the lake for about five weeks before it was found, according to Scott Gibson, environmental engineer for the city of Ithaca. The 0.5 to 1 million figure is a guess based on an estimate that about 14,000 gallons of sewage were flowing into the lake every day from Sept. 19 until Tuesday, Gibson said. The leak was caused by "blockage" in the pipes beneath the Siphon Station on the city's west end.

5 takeaways from sewage problem

Here are five things we learned about the discovered sewage on the inlet:

1 — How serious is this? | Gibson characterized the amount of spillage as serious but not unprecedented or a cause for alarm. "There's always a concern for the safety of the lake," Gibson said, "but it's not an imminent public health threat. The sewage that's there is typically degraded, environmentally; it's not the nicest thing to look at or to see. But there's not a whole lot you can do after the fact." While city officials are monitoring the situation, Gibson said that this kind of sewage overflow is not particularly out of the ordinary. He noted that a similar overflow occurred in the same area in 2009. "Though the severity seems pretty intense, it's pretty typical of this type of flow," he said. Gibson stressed that the numbers he gave were estimates and not confirmed. "We have no idea how many gallons went out," Gibson said.

2 — Should the overflow have been spotted earlier? | Gibson said city officials responded as quickly as they could and are tasked with overseeing an 85-mile-long pipeline that is very difficult to monitor constantly. "The public sits there and says, 'How come you didn't know about that?' But it's an 85-mile collection system: It's impossible to monitor all of it at once," he said. "Once it's reported, we react quickly and that's all we can do." Given the length of the pipe, Gibson said, it is not surprising that a malfunction — especially one in a part of the city that doesn't see much boat traffic or any pedestrian traffic — could take as long as five weeks to be noticed. "Picture what 85 miles of roadway look like," he said. "And we only have a handful of guys maintaining the system."

3 — 'Odor complaint' triggers city response | The local boating team called the city with an "odor complaint" at 2 p.m. on Tuesday. City crews cleared the blockage in the pipe and had it working normally again by 3:30 p.m. The sewage was found near the City of Ithaca's Siphon Station, which is located near Floral Avenue on the west side of the city. More specifically, the problem stemmed from blockage of a "large diameter pipe" that takes sewage east across the inlet to the Wastewater Treatment Facility.

4 — Increased inspections | Before this incident, city crews only checked the Siphon Station for pipe blockage on a "bi-monthly" cycle, according to Gibson. Now they will be conducting inspections every week. "(Crews) will be monitoring the chamber and the pipe and make sure the wastewater is flowing adequately," Gibson said. "If they see there's a restriction, they'll open it up."
5 — Working with the state | New York state officials are involved in ensuring that the city properly documents the sewage discharge and responds appropriately. Gibson is currently in the process of filing a report for a state engineer outlining the city's reaction to the overflow problem.
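As a quick sanity check of the figures quoted above (the per-day rate and the dates are from the article; the exact day count is an assumption), a roughly five-week window at about 14,000 gallons per day lands near the low end of the quoted 0.5-to-1-million-gallon range:

gallons_per_day = 14_000       # rate cited by the city engineer
days = 35                      # assumed length of the roughly five-week window
print(gallons_per_day * days)  # 490000, i.e. about half a million gallons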
/**
 * Teardown the test by deleting the test data.
 *
 * @throws Exception
 */
@After
public void after() throws Exception {
    jdbcTemplate.execute(DELETE_TABLE_SQL);
    jdbcTemplate.execute(SHUTDOWN_HSQLDB);
}
U.S. Homeland Security Secretary John Kelly came to Ottawa last week to talk national security. The issue on everyone's mind: border security. Canada has seen an influx of asylum-seekers illegally coming in from the U.S. over recent months, and it seems no one knows quite what to do. CBC commenters, however, have a few suggestions.

Get on it, America
Hopefully, Public Safety Minister Ralph Goodale will tell Kelly to start doing his job. U.S. border controls include both entry and exit control. Given the influx of refugees escaping the U.S., I'd say Donald Trump has undermined their exit controls. - Jim Graham

No clear path
It is all very well to dislike a situation, but what's the solution? A border wall? That is a rather long border. Surely, we cannot consider stationing thousands of police or military along the border and shooting at people as they try to cross. Shooting into the U.S. would actually be an act of war. So if arresting is not enough — what is? - Sean Fordyce

Problem for years
Migrants were coming across the border before Trump came to power. Indeed, for years, several hundred crossed. Canada is a weak country when it comes to upholding the law. Why bother having a border at all? Yet Canadian residents going into the U.S. have been rejected because they did not seek a pardon for a previous offence. We cannot continue to welcome any and everyone who wants a new life. The pressure on the taxpayer is hitting the point of protest. Something has to give. Stop the madness, Mr. Trudeau. - Shirley Witt

Cut off the leaders
Economic migration has been an issue for some time now. Migration due to failed governance in other lands has also been an issue. Time to start holding the failed nation states to account, and force the change in the homeland of the migrants. Cut off the travel of the leaders of the failed states. Close down their offshore bank accounts. - Kevin Delaney

A border fence
Build a fence. We do this in our own yards to keep people from cutting through. The border crossings are only to control the honest people. All those miles with no control are just a problem waiting to happen. - George Hansen

Payable in USD
I believe we should build a fence and the U.S. should pay for it. - David MacKinnon

Calm down
To all those fear-driven posters who think refugees coming to Canada makes this country unsafe: how? Is the country less safe right now because of the last few refugees who crossed the border? Is the pregnant refugee who recently came to Canada a terrorist? Stop listening to the American fear and rhetoric. And yes, I would allow refugees into my home, out from the cold. They are humans and have the same needs as you. - Thomas Magnum

Stop paying taxes
I am so annoyed by the government's laissez-faire attitude in regard to people illegally crossing the border that I am considering refusing to pay my income tax. When I am told that seniors' subsidy for rent in my province cannot be raised to realistic levels because there is "no money," I realize that the wellbeing of our own citizens is secondary. I guess the CRA can raid my bank account, if need be. - Susan Smith

Turn them back
Close the loophole: make the Safe Third Country Agreement effective across the whole border between Canada and the USA, instead of just at the guarded check points. Anyone who is caught crossing over will be turned back to the U.S. - Jamie Smart

Comments have been edited for length and clarity.
/* * @(#)BrowseFileDirectoryAction.java * Copyright ยฉ 2021 The authors and contributors of JHotDraw. MIT License. */ package org.jhotdraw8.app.action.file; import javafx.event.ActionEvent; import org.jhotdraw8.annotation.NonNull; import org.jhotdraw8.annotation.Nullable; import org.jhotdraw8.app.FileBasedActivity; import org.jhotdraw8.app.action.AbstractActivityAction; import java.awt.Desktop; import java.io.File; import java.lang.reflect.InvocationTargetException; import java.net.URI; import java.nio.file.FileSystemNotFoundException; import java.nio.file.Path; import java.nio.file.Paths; import java.util.logging.Logger; public class BrowseFileDirectoryAction extends AbstractActivityAction<FileBasedActivity> { public static final String ID = "file.browseFileDirectory"; /** * Creates a new instance. * * @param activity the view */ public BrowseFileDirectoryAction(@NonNull FileBasedActivity activity) { super(activity); activity.getApplication().getResources().configureAction(this, ID); } @Override protected void onActionPerformed(ActionEvent event, @NonNull FileBasedActivity activity) { if (isDisabled()) { return; } final URI uri = activity.getURI(); doIt(uri); } private void doIt(@Nullable URI uri) { browseFileDirectory(uri); } public static void browseFileDirectory(@Nullable URI uri) { if (uri == null) { return; } try { Path path = Paths.get(uri); if (path != null) { //Desktop.getDesktop().browseFileDirectory(path.toFile()); try { try { Desktop.class.getMethod("browseFileDirectory", File.class).invoke(Desktop.getDesktop(), path.toFile()); } catch (IllegalAccessException | InvocationTargetException e) { e.printStackTrace(); } } catch (NoSuchMethodException e) { e.printStackTrace(); } } } catch (FileSystemNotFoundException e) { Logger.getLogger(BrowseFileDirectoryAction.class.getName()).warning(e.getMessage()); } } }
import {MongoClient} from "mongodb"; import {TsMongodbOrmError} from "./errors/TsMongodbOrmError"; import {LockManager} from "./locks/LockManager"; import {RankManager} from "./ranks/RankManager"; import {Repository} from "./Repository"; import {TransactionManager} from "./transactions/TransactionManager"; import {tsMongodbOrm} from "./tsMongodbOrm"; import { IConnectionOptions, IDocumentClass, IGetLockManagerOptions, IGetRankManagerOptions, IGetRepositoryOptions, IGetTransactionManagerOptions, } from "./types"; import {updateStack} from "./utils"; export class Connection { public readonly mongoClient: MongoClient; public readonly dbName: string; constructor(options: IConnectionOptions) { this.mongoClient = options.mongoClient; this.dbName = options.dbName; } public async close() { const friendlyErrorStack = tsMongodbOrm.getFriendlyErrorStack(); try { await this.mongoClient.close(); } catch (err) { throw Object.assign(err, friendlyErrorStack && {stack: updateStack(friendlyErrorStack, err)}); } } public getRepository<T extends IDocumentClass>(classObject: T, options: IGetRepositoryOptions = {}): Repository<T> { const documentMeta = tsMongodbOrm.getDocumentMeta(classObject); return new Repository<T>({ mongoClient: this.mongoClient, classObject, dbName: options.dbName || this.dbName, collectionName: options.collectionName || documentMeta.collectionName, }); } public getTransactionManager<T extends any>(options: Partial<IGetTransactionManagerOptions> = {}) { return new TransactionManager(Object.assign({ mongoClient: this.mongoClient, transactionOptions: {}, maxRetry: options.maxRetry || -1, }, options)); } public getLockManager<T extends any>(options: Partial<IGetLockManagerOptions> = {}) { if (options.expiresIn && options.expiresIn > 60 * 1000) { throw new TsMongodbOrmError(`You cannot expiresIn more than 60000.`); } return new LockManager(Object.assign({ mongoClient: this.mongoClient, dbName: this.dbName, collectionName: "Lock", expiresIn: 1000, maxRetry: 0, retryDelay: 0, }, options)); } public getRankManager<T extends any>(options: IGetRankManagerOptions) { return new RankManager(Object.assign({ mongoClient: this.mongoClient, skipTransaction: !!options.skipTransaction, transaction: { maxRetry: options.transaction?.maxRetry || -1, transactionOptions: options.transaction?.transactionOptions || {}, }, dbName: this.dbName, collectionName: "Rank", }, options)); } }
The Irish question is back in British politics. Next month the European Union must decide whether sufficient progress has been made in the Brexit negotiations to move on to the next phase of talks. Should Ireland veto an unsatisfactory UK offer on Northern Ireland's future relations with the EU, or would that prejudice an acceptable deal at the end of the talks? It is as fine a call as any in the history of Irish foreign policy.

The decision requires an assessment of the balance of bargaining power between the EU and the UK, of Ireland's comparative leverage in this phase of the negotiations, and of the likely effect of an Irish veto now on future Irish-British relations.

Irish diplomacy did exceptionally well to have the Northern Ireland issue inserted as one of three issues on which progress would have to be made in this first phase of the Brexit talks. Along with the divorce bill and citizenship rights, the EU seeks clarity on the UK's plans to keep the Irish Border open for people and goods even though the British government has unilaterally ruled out staying in the single market and the customs union. Its offers so far have relied on unconvincing assurances predicated on an unrealistic final goal of a tailor-made outcome.

The EU's demand that there should be no regulatory divergence between future EU and UK policies which would disrupt the all-island economy or undermine the peace process in Ireland is based on extensive research showing how deeply embedded these are in EU agreements and funding. Some 142 sectors are affected, ranging from health to agriculture, food, energy, tourism and sport.

Bargaining card

The UK side is slow to respond because it is ill-prepared, divided on objectives and seeking to use Ireland as a bargaining card in the wider negotiations on trade and market access to come in the next phase of the negotiations.

The balance of bargaining power is tilted against the UK because the supply of solutions on the EU side outweighs the demands made by the UK for a bespoke outcome, according to the Swiss political scientist Frank Schimmelfennig. He argued at a seminar in Norway this week that whereas the UK could more easily negotiate internal differentiation or opt-outs from EU policies while it remained a member state, since others had an incentive to keep it in, it becomes much more difficult to do so once it has decided to leave, raising the spectre of disintegration for those who stay.

Those most affected, like Ireland, now have more bargaining power. They also have more leverage at this stage of the negotiations than they will in the next, wider stage, when many more national interests will shake out. The British anticipate this will give them opportunities to divide the EU27. Their strategy raises the historical spectre of perfidious Albion dividing Europe in pursuit of a balance of power against continental hegemony. Their approach is more visible now to a united EU side.

One must ask whether, in their present state of political disorganisation and diplomatic isolation, the British government is capable of perfidy. The fear that they might be capable of it animates the current debate on how to get them to agree written guarantees about an open border in next month's EU summit conclusions. These decisions are made by consensus, so Ireland has a continuing veto on the outcome in the next phase, but less leverage.
Goodwill scarce

Ireland also has a deep interest in securing a deal at the end which will preserve trade both ways with the UK, and cannot expect to conclude it now. It may also be necessary to work bilaterally with London on the North to bring jointly agreed solutions to Brussels, which requires goodwill and trust. Both are now in short supply.

The UK has reached a key moment of clarification about the kind of flexible and imaginative outcome it wants from the Brexit talks. Many possible solutions have been put forward for the North's unique problems, ranging from full or partial inclusion in the single market and the customs union and where to place Border controls, to distinct treatment like other small territories.

Trust and goodwill can be restored by a British willingness to give guarantees in writing to conclude this stage of the negotiations. Unless they do so, the Irish Government should insist that sufficient progress has not been made, the effect of which would be to prevent the talks moving on until next year.

[email protected]
def coordinates_distance(t1: tuple, t2: tuple, /, type: NeighboursType = NeighboursType.CROSS) -> int | float:
    if type == Map.NeighboursType.CROSS:
        # 4-connected (cross) neighbours: Manhattan distance
        return sum(abs(x1 - x2) for x1, x2 in zip(t1, t2))
    elif type == Map.NeighboursType.ALL:
        # 8-connected neighbours: Euclidean distance
        return sum((x1 - x2) ** 2 for x1, x2 in zip(t1, t2)) ** 0.5
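For illustration, here is a small self-contained usage sketch of the same distance logic. The enum below is a stand-in for the Map.NeighboursType referenced above, which is not shown in this snippet.

from enum import Enum, auto

# Stand-in for the Map.NeighboursType enum (assumed shape)
class NeighboursType(Enum):
    CROSS = auto()  # 4-connected grid -> Manhattan distance
    ALL = auto()    # 8-connected grid -> Euclidean distance

def coordinates_distance(t1, t2, kind=NeighboursType.CROSS):
    if kind == NeighboursType.CROSS:
        return sum(abs(x1 - x2) for x1, x2 in zip(t1, t2))
    return sum((x1 - x2) ** 2 for x1, x2 in zip(t1, t2)) ** 0.5

print(coordinates_distance((0, 0), (3, 4)))                      # 7 (Manhattan)
print(coordinates_distance((0, 0), (3, 4), NeighboursType.ALL))  # 5.0 (Euclidean)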
def _find_params_value_of_url(self, services, url):
    # Split the URL path and keep only the non-empty segments that are not
    # known service names; those segments are treated as parameter values.
    url_split = url.split('/')
    return [item for item in url_split if item not in services and item != '']
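A standalone usage sketch of the same extraction idea; the service names and URL below are made up for illustration.

def find_params_value_of_url(services, url):
    parts = url.split('/')
    return [item for item in parts if item not in services and item != '']

# Segments that are not known service names are treated as parameter values.
services = {'api', 'users', 'orders'}
print(find_params_value_of_url(services, '/api/users/42/orders/2021-05-01'))
# ['42', '2021-05-01']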
<filename>frontend/packages/ceph-storage-plugin/src/components/create-storage-system/create-storage-system-steps/security-and-network-step/security-and-network-step.tsx import * as React from 'react'; import { Form } from '@patternfly/react-core'; import { useFlag, getNamespace, getName } from '@console/shared'; import { K8sResourceCommon } from '@console/internal/module/k8s'; import { Encryption } from './encryption'; import { NetworkType, NADSelectorType } from '../../../../types'; import { GUARDED_FEATURES } from '../../../../features'; import { WizardDispatch, WizardState } from '../../reducer'; import { NetworkFormGroup } from '../../../ocs-install/install-wizard/configure'; export const SecurityAndNetwork: React.FC<SecurityAndNetworkProps> = ({ state, dispatch }) => { const isMultusSupported = useFlag(GUARDED_FEATURES.OCS_MULTUS); const { networkType: nwType, clusterNetwork, publicNetwork } = state; const setNetworkType = (networkType: NetworkType) => { dispatch({ type: 'securityAndNetwork/setNetworkType', payload: networkType }); if (networkType === NetworkType.DEFAULT) { dispatch({ type: 'securityAndNetwork/setClusterNetwork', payload: '' }); dispatch({ type: 'securityAndNetwork/setPublicNetwork', payload: '' }); } }; const setNetwork = (network: NADSelectorType, resource: K8sResourceCommon) => dispatch({ type: network === NADSelectorType.CLUSTER ? 'securityAndNetwork/setClusterNetwork' : 'securityAndNetwork/setPublicNetwork', payload: `${getNamespace(resource)}/${getName(resource)}`, }); return ( <Form noValidate={false}> <Encryption state={state} dispatch={dispatch} /> {isMultusSupported && ( <NetworkFormGroup networkType={nwType} setNetworkType={setNetworkType} setNetwork={setNetwork} publicNetwork={publicNetwork} clusterNetwork={clusterNetwork} /> )} </Form> ); }; type SecurityAndNetworkProps = { state: WizardState['securityAndNetwork']; dispatch: WizardDispatch; };
k = int(input())
n = []
x = []
t = []
for i in range(k):
    inp = input().split(' ')
    n.append(int(inp[0]))
    x.append(int(inp[1]))
    t.append(int(inp[2]))

for i in range(k):
    temp = t[i] // x[i]
    c1 = (temp * n[i]) - ((temp * (temp + 1)) // 2)
    c2 = (n[i] * (n[i] - 1)) // 2
    if n[i] * x[i] > t[i]:
        print(int(c1))
    else:
        print(int(c2))
K, N = map(int, input().split())
A = list(map(int, input().split()))
A.insert(0, 0)
A.append(K)
# print(A)

start = A[1] - A[0]
goal = A[-1] - A[-2]
mixi = 0
sum_len = 0
# SorG = max(start, goal)
SorG = start + goal

for i in range(len(A) - 1):
    if A[i + 1] - A[i] > mixi:
        mixi = A[i + 1] - A[i]
    sum_len += A[i + 1] - A[i]

if max(SorG, mixi) == SorG or SorG == mixi:
    result = sum_len - (start + goal)
elif max(SorG, mixi) == mixi:
    result = sum_len - mixi

print(result)
Star Wars: The Force Awakens products are set to be unveiled in the world's first ever global live toy unboxing event. Unfolding over 18 hours in 15 cities and 12 countries, the event will see highlights from the range of epic merchandise revealed in a rolling New Year's Eve style celebration featuring top digital stars from the Maker Studios network.

"Star Wars toys have always played an important role in how our fans interact with the saga," says Lucasfilm President Kathleen Kennedy. "They've inspired multiple generations to relive the experience of the movies and to create new adventures all their own. These spectacular Star Wars: The Force Awakens products will continue that tradition."

Kicking off in Sydney, Australia on the morning of September 3 and continuing through Asia, Europe, Canada, and North and South America, selections from the new toy line will debut to global fanfare leading up to retailers around the globe opening their doors at midnight.

"Over the course of 18 hours, some of Maker Studios' biggest stars will each be unboxing a new toy from the Star Wars: The Force Awakens product line in a different city around the world and sharing the experience live on YouTube," said Leslie Ferraro, President, Disney Consumer Products. "We've seen tremendous excitement for these new products and can't wait to see the global reaction from the Star Wars fan community."

The Star Wars YouTube channel will host the live stream, which kicks off with the first unboxing in Sydney, Australia at 7:45 a.m. local time on Thursday, Sept. 3 (5:45 p.m. EDT on Wednesday, Sept. 2), with the grand finale at Lucasfilm in San Francisco at 8 a.m. PDT (11 a.m. EDT) on Thursday, Sept. 3.

Each of the 15 locations will reveal a new product inspired by Star Wars: The Force Awakens. These "unboxing" videos will feature online personalities sharing the excitement of opening the new Star Wars: The Force Awakens toys. Unboxing videos have captivated millions of Internet viewers and continue to grow in popularity — 18 of the top 100 most viewed YouTube channels worldwide are dedicated to toys and toy unboxings. These 18 channels accounted for 8.1 billion views in Q1 2015.

"We're excited to be part of this first-of-its-kind initiative," said Chris M. Williams, Chief Audience Officer, Maker Studios. "The unboxing world continues to expand rapidly as audiences around the world connect with digital creators who share their passions. This shared fandom helps attract billions of monthly views and consistently puts the genre at the top of the YouTube charts."

The global event will incorporate traditional toy unboxers such as the popular EvanTubeHD, channels featuring families such as Bratayley, lifehack specialists such as ExpCaseros, gamers such as AlexBy11, and Star Wars fans from around the world like Chris Pirillo, who bring with them a broad audience and appeal.
The global lineup includes:

COUNTRY | CITY | MAKER TALENT | LOCAL TIME (9/3) | EDT
Australia | Sydney | Bratayley | 07:45 | 17:45 (9/2)
Japan | Tokyo | Einshine | 11:00 | 22:00 (9/2)
Korea | Seoul | Dollastic | 12:00 | 23:00 (9/2)
Hong Kong | Hong Kong | Dante Basco | 12:00 | 00:00 (9/3)
France | Paris | AyPierre | 09:00 | 03:00 (9/3)
Spain | Madrid | AlexBy11 | 11:00 | 05:00 (9/3)
Germany | Berlin | Reyst | 12:00 | 06:00 (9/3)
England | London | GamingBeaver | 11:30 | 06:30 (9/3)
Brazil | Rio de Janeiro | Malena010102 | 08:30 | 07:30 (9/3)
USA | New York | EvanTubeHD | 08:30 | 08:30 (9/3)
Canada | Toronto | Quill18 | 09:00 | 09:00 (9/3)
USA | Chicago | HobbyKidsTV | 08:30 | 09:30 (9/3)
Mexico | Mexico City | ExpCaseros | 09:30 | 10:30 (9/3)
USA | San Francisco | Chris Pirillo | 08:00 | 11:00 (9/3)

Several countries will join the live stream from YouTube Spaces (Berlin, Tokyo, London), with Star Wars commentators Anthony Carboni and Andi Gutierrez hosting the event from the YouTube Space, Los Angeles. Fans can tune in to watch live unboxings at home or on mobile devices, and also enjoy the latest trailers for the upcoming movie, commentary from special guests, fun product videos and demos, and footage from recent Star Wars events.

The global celebration culminates at midnight local time, when many retailers around the world will open their doors to fans looking to purchase Star Wars: The Force Awakens toys, collectibles, books, apparel, and more. Additionally, select Disney Store locations around the world will take part. More details are available at the Disney Store website.

Fans are encouraged to document their experiences around the event on social media using the hashtag #ForceFriday.

StarWars.com. All Star Wars, all the time.
declare module pip.split {

    export interface ISplitService {
        forwardTransition(toState: any, fromState: any): boolean;
        goToDesktopState(fromState: string): string;
    }

    export interface ISplitProvider {
        addTransitionSequence(sequence: string[], mobileStates?: MobileState[]): void;
    }

    export class MobileState {
        name: string;
        toStateName: string;
    }
}
package boj_match_april; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.util.Arrays; import java.util.PriorityQueue; import java.util.StringTokenizer; /** * @author <EMAIL> * @time 2020. 4. 10. ์˜คํ›„ 4:13:30 * @category * @problem_description ์†Œ์ง€๊ธˆ์„ ์ตœ์†Œ๋กœ ์žƒ์œผ๋ฉด์„œ [n-1][n-1]๊นŒ์ง€ ์ด๋™ํ• ๋•Œ ์žƒ์€ ์ตœ์†Œ ๊ธˆ์•ก์„ ๊ตฌํ•˜๋ผ * ์ด๊ฑด ๋งˆ์น˜ 0,0๋ถ€ํ„ฐ n-1,n-1๊นŒ์ง€ ๊ฐ€๋Š” ๊ฒฝ๋กœ์ค‘ ์ตœ์†Œ ๊ฐ€์ค‘์น˜์˜ํ•ฉ์„ ๊ตฌํ•˜๋Š” ๋ฌธ์ œ์™€ ๊ฐ™๋‹ค. (๋‹ค์ต์ŠคํŠธ๋ผ) * @solving_description * ๊ทธ๋ž˜ํ”„๋กœ ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ ์ƒํ•˜์ขŒ์šฐ๊ฐ€ ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋Š” ๊ฒƒ์ด๋ฏ€๋กœ 4๋ฐฉํ–ฅ์„ * ๊ฒ€์‚ฌํ•ด์„œ ๊ธฐ์กด ์ €์žฅ๋œ ๊ฒฝ๋กœ ๊ฐ’๋ณด๋‹ค ์ž‘์œผ๋ฉด ๊ฐฑ์‹ ์‹œ์ผœ์ค˜์•ผ ํ•œ๋‹ค. */ public class BOJ_4485_Dijkstra_PQ { private static int[][] map; static int[] dy = { -1, 1, 0, 0 }; static int[] dx = { 0, 0, -1, 1 }; public static void main(String[] args) throws NumberFormatException, IOException { BufferedReader bufferedReader =new BufferedReader(new InputStreamReader(System.in)); int tc =1; while(true) { int n = Integer.parseInt(bufferedReader.readLine()); if(n==0) break; map = new int[n][n]; Edge[][] D = new Edge[n][n]; PriorityQueue<Edge> pq = new PriorityQueue<>(); for(int i=0;i<n;i++) { StringTokenizer stringTokenizer =new StringTokenizer(bufferedReader.readLine()); for(int j=0;j<n;j++) { map[i][j] = Integer.parseInt(stringTokenizer.nextToken()); //๊ฐ€์ค‘์น˜ ์ €์žฅ if(i==0&&j==0) { D[i][j] = new Edge(i, j, map[i][j]); } else { D[i][j] = new Edge(i, j, Integer.MAX_VALUE); } pq.add(D[i][j]); } } //์ƒํ•˜์ขŒ์šฐ๋งŒ ๋‹ค ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋‹ค. ๊ทธ๋ž˜์„œ ๊ฐฑ์‹ ํ•  ๋•Œ ์ƒํ•˜์ขŒ์šฐ๋งŒ ๊ฐฑ์‹ ํ•˜๋ฉด ๋ ๋“ฏ! //00์—์„œ ๋‹ค์ต์ŠคํŠธ๋ผ๋ฅผ ์‹œ์ž‘ํ•œ๋‹ค. ์ด๊ฑฐ๋Š” ์ธ์ ‘ํ–‰๋ ฌ์ด๋‹ค. //๋…ธ๋“œ์˜ ์ธ๋ฑ์Šค๋ฅผ ์ €์žฅํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹Œ ํ•ด๋‹น ์ขŒํ‘œ y,x๋ฅผ ์ €์žฅํ•ด์•ผํ•œ๋‹ค. //i๋ถ€ํ„ฐ n๊นŒ์ง€๊ฐ€๋Š” ์ตœ์†Œ ๊ฒฝ๋กœ๋ฅผ ์ €์žฅํ•˜๋Š” ๋ฐฐ์—ด D boolean[][] check = new boolean[n][n]; while(!pq.isEmpty()) { Edge edge = pq.poll(); int y = edge.y; int x =edge.x; if(edge.weight==Integer.MAX_VALUE) continue; //MAX๋ฉด ์—ฐ๊ฒฐ์ด ์•ˆ๋˜์–ด ์žˆ์Œ์„ ์˜๋ฏธ for(int k=0;k<4;k++) { int ny = y+dy[k]; int nx = x+dx[k]; //๋ฒ”์œ„ if(ny<0||ny>=n||nx<0||nx>=n) continue; //๋ฐฉ๋ฌธํ•˜์ง€ ์•Š์•˜๊ณ  ํ˜„์žฌ๊ฐ’๋ณด๋‹ค ์ž‘์œผ๋ฉด ๊ฐฑ์‹  if(!check[ny][nx]&&D[y][x].weight+map[ny][nx]<D[ny][nx].weight) { D[ny][nx].weight=D[y][x].weight+map[ny][nx]; pq.remove(D[ny][nx]); pq.add(D[ny][nx]); } } check[y][x] = true; }//while System.out.println("Problem "+tc+": "+D[n-1][n-1].weight); tc++; } }//main static class Edge implements Comparable<Edge>{ int y,x, weight; public Edge(int y, int x, int weight) { super(); this.y = y; this.x = x; this.weight = weight; } @Override public int compareTo(Edge o) { // TODO Auto-generated method stub return Integer.compare(this.weight, o.weight); } } }
Iterative Demodulation and Decoding of Uplink Multiuser M-ary FSK Using OFDMA Mapping In this paper, we study iterative demodulation and decoding (IDD) for a multiuser system where users employ M-ary frequency shift keying to transmit their signals to a base station (BS). Since joint multiuser detection requires a prohibitively high computational complexity, we propose a suboptimal low-complexity detection method that can take into account extrinsic bit information, which is well-suited to IDD. Under a certain condition, its complexity grows linearly with the number of users.
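The mapping described above can be pictured with a toy sketch: each user's M-ary FSK symbol selects one of M subcarriers assigned to that user, and a simple non-coherent receiver picks the subcarrier with the largest energy in that user's block. This is only an illustrative model of the uplink mapping (no coding, no fading channel, and no iterative exchange of extrinsic information), not the paper's detection algorithm.

import numpy as np

rng = np.random.default_rng(0)
M = 4            # FSK alphabet size
n_users = 3      # each user owns a disjoint block of M subcarriers
n_symbols = 5

# Transmit: user u puts energy on subcarrier u*M + symbol in each OFDMA slot.
symbols = rng.integers(0, M, size=(n_users, n_symbols))
grid = np.zeros((n_users * M, n_symbols), dtype=complex)
for u in range(n_users):
    grid[u * M + symbols[u], np.arange(n_symbols)] = 1.0

# Add noise and detect non-coherently: strongest tone within each user's block.
noise = 0.1 * (rng.standard_normal(grid.shape) + 1j * rng.standard_normal(grid.shape))
noisy = grid + noise
detected = np.array([np.abs(noisy[u * M:(u + 1) * M]).argmax(axis=0) for u in range(n_users)])
print((detected == symbols).mean())  # per-symbol accuracy, close to 1.0 at this SNR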
/*
 * Creates an instance of the AnimationManager class.
 * pwmPin : the PWM pin to which the fairy light LED string is connected (through a MOSFET).
 */
AnimationManager::AnimationManager(uint8_t pwmPin) {
    this->offBlinkAnimation = new OffBlinkAnimation(pwmPin, 3, 300);
    this->smoothTransistionAnimation = new SmoothBrightnessTransistion(pwmPin);

    this->offBlinkAnimation->setAnimationListener(this);
    this->smoothTransistionAnimation->setAnimationListener(this);
}
<filename>cloud/cloud/doctype/cloud_company_requisition/cloud_company_requisition.py # -*- coding: utf-8 -*- # Copyright (c) 2019, <NAME> and contributors # For license information, please see license.txt from __future__ import unicode_literals import frappe from frappe import throw from frappe.model.document import Document class CloudCompanyRequisition(Document): def validate(self): if self.docstatus == 0: if frappe.db.exists("Cloud Company", {"comp_name": self.comp_name, "name": ('!=', self.name)}): throw("company_duplicated_comp_name") if frappe.db.exists("Cloud Company", {"full_name": self.full_name, "name": ('!=', self.name)}): throw("company_duplicated_full_name") if frappe.db.exists("Cloud Company", {"domain": self.domain, "name": ('!=', self.name)}): throw("company_duplicated_domain") ''' if self.domain != self.admin.split("@")[1]: throw("company_domain_must_be_same_as_admin_email_domain") ''' if frappe.db.exists("Cloud Company Requisition", {"comp_name": self.comp_name, "docstatus": ('!=', '1'), "name": ('!=', self.name)}): throw('company_requisition_duplicated_comp_name') if frappe.db.exists("Cloud Company Requisition", {"full_name": self.full_name, "docstatus": ('!=', '1'), "name": ('!=', self.name)}): throw('company_requisition_duplicated_full_name') if frappe.db.exists("Cloud Company Requisition", {"domain": self.domain, "docstatus": ('!=', '1'), "name": ('!=', self.name)}): throw('company_requisition_duplicated_domain') if frappe.db.exists("Cloud Company Requisition", {"credit_code": self.credit_code, "docstatus": ('!=', '1'), "name": ('!=', self.name)}): throw('company_requisition_duplicated_credit_code') if frappe.db.exists("Cloud Company Requisition", {"telephone": self.telephone, "docstatus": ('!=', '1'), "name": ('!=', self.name)}): throw('company_requisition_duplicated_telephone') def on_submit(self): data = { "doctype": "Cloud Company", "comp_name": self.comp_name, "full_name": self.full_name, "domain": self.domain, "admin": self.admin, "address": self.address, "contact": self.contact, "enabled": 1, "wechat": 0 } doc = frappe.get_doc(data).insert() group_data = { "doctype": "Cloud Company Group", "company": doc.name, "group_name": "root" } frappe.get_doc(group_data).insert()
/* This method initiates the variables needed for step B.
 * It is run inside synchronization_1, before counting. */
private void initiate_step_B() {
    bit_index = 0;
    sum = 0;
    count_length = (1 << bit[bit_index]);
    allCount = new int[k][count_length];
    sumCount = new int[count_length];
}
export interface Reaction {
    reaction: ReactionType;
    count?: number;
    postId: number;
}

export type ReactionType = 'LIKE' | 'SMILE' | 'LOVE';
use crate::transport::{JetFuture, JetSinkImpl, JetSinkType, JetStreamImpl, JetStreamType, Transport}; use crate::utils::{danger_transport, resolve_url_to_socket_arr}; use futures::{ready, Sink, Stream}; use hyper::upgrade::Upgraded; use spsc_bip_buffer::{BipBufferReader, BipBufferWriter}; use std::io::Cursor; use std::net::SocketAddr; use std::pin::Pin; use std::sync::atomic::AtomicU64; use std::sync::Arc; use std::task::{Context, Poll}; use tokio::io::{self, AsyncRead, AsyncWrite, ReadBuf}; use tokio::net::TcpStream; use tokio_rustls::{rustls, TlsConnector, TlsStream}; use tokio_tungstenite::tungstenite::handshake::client::Request; use tokio_tungstenite::tungstenite::protocol::Role; use tokio_tungstenite::{tungstenite, WebSocketStream}; use url::Url; enum WsStreamSendState { Idle, SendInProgress, } pub struct WsStream { inner: WsStreamWrapper, previous_message: Option<Cursor<Vec<u8>>>, previous_send_state: WsStreamSendState, } impl WsStream { fn peer_addr(&self) -> Option<SocketAddr> { self.inner.peer_addr() } pub async fn shutdown(&mut self) -> Result<(), std::io::Error> { self.inner.shutdown().await } } impl From<WsStreamWrapper> for WsStream { fn from(wrapper: WsStreamWrapper) -> Self { WsStream { inner: wrapper, previous_message: None, previous_send_state: WsStreamSendState::Idle, } } } impl AsyncRead for WsStream { fn poll_read( mut self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &mut ReadBuf<'_>, ) -> Poll<Result<(), std::io::Error>> { match self.previous_message.take() { Some(mut message) => { ready!(Pin::new(&mut message).poll_read(cx, buf))?; slog_scope::trace!( "Received next part of WebSockets message ({} of {} bytes read)", message.position(), message.get_ref().len() ); if message.position() == message.get_ref().len() as u64 { slog_scope::trace!("Segmented message was completely read"); self.previous_message = None; } else { self.previous_message = Some(message); } Poll::Ready(Ok(())) } None => { let message_result = match self.inner { WsStreamWrapper::Http((ref mut stream, _)) => Pin::new(stream).poll_next(cx), WsStreamWrapper::Tcp((ref mut stream, _)) => Pin::new(stream).poll_next(cx), WsStreamWrapper::Tls((ref mut stream, _)) => Pin::new(stream).poll_next(cx), }; let message = ready!(message_result) .map(|e| e.map_err(tungstenite_err_to_io_err)) .unwrap_or_else(|| Err(io::Error::new(io::ErrorKind::Other, "Connection closed".to_string())))?; slog_scope::trace!( "New {} message received (length: {} bytes)", tungstenite_message_type_to_string(&message), message.len() ); if (message.is_binary() || message.is_text()) && !message.is_empty() { let mut message = Cursor::new(message.into_data()); match Pin::new(&mut message).poll_read(cx, buf) { Poll::Ready(Ok(_)) => { if message.position() < message.get_ref().len() as u64 { // Current WS message is not yet read completely, provided input buffer // has been overflowed slog_scope::trace!( "Received first part of WebSockets message ({} of {} bytes read)", message.position(), message.get_ref().len() ); self.previous_message = Some(message); } Poll::Ready(Ok(())) } Poll::Ready(Err(e)) => Poll::Ready(Err(e)), Poll::Pending => { // Generally, with Cursor's poll_read this should not be triggered, // but we will keep that here as a safe measure if something will // change in the Cursor in the future self.previous_message = Some(message); Poll::Pending } } } else { // Skip non-text / non-binary messages and wait for more data cx.waker().clone().wake(); Poll::Pending } } } } } impl AsyncWrite for WsStream { fn poll_write(mut self: Pin<&mut 
Self>, cx: &mut Context<'_>, buf: &[u8]) -> Poll<Result<usize, std::io::Error>> { match self.previous_send_state { WsStreamSendState::Idle => { let message = tungstenite::Message::Binary(buf.to_vec()); let result = match self.inner { WsStreamWrapper::Http((ref mut stream, _)) => { let mut pinned = Pin::new(stream); ready!(pinned.as_mut().poll_ready(cx)).map_err(tungstenite_err_to_io_err)?; pinned.as_mut().start_send(message).map_err(tungstenite_err_to_io_err) } WsStreamWrapper::Tcp((ref mut stream, ref mut _addr)) => { let mut pinned = Pin::new(stream); ready!(pinned.as_mut().poll_ready(cx)).map_err(tungstenite_err_to_io_err)?; pinned.as_mut().start_send(message).map_err(tungstenite_err_to_io_err) } WsStreamWrapper::Tls((ref mut stream, ref mut _addr)) => { let mut pinned = Pin::new(stream); ready!(pinned.as_mut().poll_ready(cx)).map_err(tungstenite_err_to_io_err)?; pinned.as_mut().start_send(message).map_err(tungstenite_err_to_io_err) } }; match result { Ok(()) => Poll::Ready(Ok(buf.len())), Err(e) if e.kind() == io::ErrorKind::WouldBlock => { self.previous_send_state = WsStreamSendState::SendInProgress; Poll::Pending } Err(e) => Poll::Ready(Err(e)), } } WsStreamSendState::SendInProgress => { let result = match self.inner { WsStreamWrapper::Http((ref mut stream, _)) => Pin::new(stream).poll_flush(cx), WsStreamWrapper::Tcp((ref mut stream, ref mut _addr)) => Pin::new(stream).poll_flush(cx), WsStreamWrapper::Tls((ref mut stream, ref mut _addr)) => Pin::new(stream).poll_flush(cx), }; result .map_ok(|_| { self.previous_send_state = WsStreamSendState::Idle; buf.len() }) .map_err(tungstenite_err_to_io_err) } } } fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), std::io::Error>> { let result = match self.inner { WsStreamWrapper::Http((ref mut stream, _)) => Pin::new(stream).poll_flush(cx), WsStreamWrapper::Tcp((ref mut stream, _)) => Pin::new(stream).poll_flush(cx), WsStreamWrapper::Tls((ref mut stream, _)) => Pin::new(stream).poll_flush(cx), }; result.map_err(tungstenite_err_to_io_err) } fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), std::io::Error>> { let result = match self.inner { WsStreamWrapper::Http((ref mut stream, _)) => Pin::new(stream).poll_close(cx), WsStreamWrapper::Tcp((ref mut stream, _)) => Pin::new(stream).poll_close(cx), WsStreamWrapper::Tls((ref mut stream, _)) => Pin::new(stream).poll_close(cx), }; result.map_err(tungstenite_err_to_io_err) } } #[allow(clippy::large_enum_variant)] pub enum WsStreamWrapper { Http((WebSocketStream<Upgraded>, Option<SocketAddr>)), Tcp((WebSocketStream<TcpStream>, Option<SocketAddr>)), Tls((WebSocketStream<TlsStream<TcpStream>>, Option<SocketAddr>)), } impl WsStreamWrapper { fn peer_addr(&self) -> Option<SocketAddr> { match self { WsStreamWrapper::Http((_stream, addr)) => *addr, WsStreamWrapper::Tcp((_stream, addr)) => *addr, WsStreamWrapper::Tls((_stream, addr)) => *addr, } } pub async fn shutdown(&mut self) -> Result<(), std::io::Error> { match self { WsStreamWrapper::Http((stream, _)) => stream .close(None) .await .map_err(|e| io::Error::new(io::ErrorKind::Other, e)), WsStreamWrapper::Tcp((stream, _)) => stream .close(None) .await .map_err(|e| io::Error::new(io::ErrorKind::Other, e)), WsStreamWrapper::Tls((stream, _)) => stream .close(None) .await .map_err(|e| io::Error::new(io::ErrorKind::Other, e)), } } } pub struct WsTransport { stream: WsStream, nb_bytes_read: Arc<AtomicU64>, nb_bytes_written: Arc<AtomicU64>, } impl WsTransport { pub async fn new_http(upgraded: Upgraded, 
addr: Option<SocketAddr>) -> Self { WsTransport { stream: WsStreamWrapper::Http(( WebSocketStream::from_raw_socket(upgraded, Role::Server, None).await, addr, )) .into(), nb_bytes_read: Arc::new(AtomicU64::new(0)), nb_bytes_written: Arc::new(AtomicU64::new(0)), } } pub fn new_tcp(stream: WebSocketStream<TcpStream>, addr: Option<SocketAddr>) -> Self { WsTransport { stream: WsStreamWrapper::Tcp((stream, addr)).into(), nb_bytes_read: Arc::new(AtomicU64::new(0)), nb_bytes_written: Arc::new(AtomicU64::new(0)), } } pub fn new_tls(stream: WebSocketStream<TlsStream<TcpStream>>, addr: Option<SocketAddr>) -> Self { WsTransport { stream: WsStreamWrapper::Tls((stream, addr)).into(), nb_bytes_read: Arc::new(AtomicU64::new(0)), nb_bytes_written: Arc::new(AtomicU64::new(0)), } } pub fn clone_nb_bytes_read(&self) -> Arc<AtomicU64> { self.nb_bytes_read.clone() } pub fn clone_nb_bytes_written(&self) -> Arc<AtomicU64> { self.nb_bytes_written.clone() } async fn async_connect(url: Url) -> Result<Self, std::io::Error> { let socket_addr = if let Some(addr) = resolve_url_to_socket_arr(&url).await { addr } else { return Err(io::Error::new( io::ErrorKind::ConnectionRefused, format!("couldn't resolve {}", url), )); }; let request = match Request::builder().uri(url.as_str()).body(()) { Ok(req) => req, Err(e) => return Err(io::Error::new(io::ErrorKind::Other, e)), }; match url.scheme() { "ws" => { let stream = TcpStream::connect(&socket_addr) .await .map_err(|e| io::Error::new(io::ErrorKind::Other, e))?; let peer_addr = stream.peer_addr().ok(); match tokio_tungstenite::client_async(request, stream).await { Ok((stream, _)) => Ok(WsTransport::new_tcp(stream, peer_addr)), Err(e) => Err(io::Error::new(io::ErrorKind::Other, e)), } } "wss" => { let tcp_stream = TcpStream::connect(&socket_addr) .await .map_err(|e| io::Error::new(io::ErrorKind::Other, e))?; let mut client_config = rustls::ClientConfig::default(); client_config .dangerous() .set_certificate_verifier(Arc::new(danger_transport::NoCertificateVerification)); let config_ref = Arc::new(client_config); let cx = TlsConnector::from(config_ref); let dns_name = webpki::DNSNameRef::try_from_ascii_str("stub_string").unwrap(); let tls_stream = cx.connect(dns_name, tcp_stream).await?; let peer_addr = tls_stream.get_ref().0.peer_addr().ok(); match tokio_tungstenite::client_async(request, TlsStream::Client(tls_stream)).await { Ok((stream, _)) => Ok(WsTransport::new_tls(stream, peer_addr)), Err(e) => Err(io::Error::new(io::ErrorKind::Other, e)), } } scheme => { panic!("Unsupported scheme: {}", scheme); } } } } impl AsyncRead for WsTransport { fn poll_read(mut self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &mut ReadBuf<'_>) -> Poll<Result<(), io::Error>> { Pin::new(&mut self.stream).poll_read(cx, buf) } } impl AsyncWrite for WsTransport { fn poll_write(mut self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &[u8]) -> Poll<Result<usize, io::Error>> { Pin::new(&mut self.stream).poll_write(cx, buf) } fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> { Pin::new(&mut self.stream).poll_flush(cx) } fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> { Pin::new(&mut self.stream).poll_shutdown(cx) } } impl Transport for WsTransport { fn connect(url: &Url) -> JetFuture<Self> where Self: Sized, { Box::pin(Self::async_connect(url.clone())) } fn peer_addr(&self) -> Option<SocketAddr> { self.stream.peer_addr() } fn split_transport( self, buffer_writer: BipBufferWriter, buffer_reader: BipBufferReader, ) -> 
(JetStreamType<usize>, JetSinkType<usize>) { let peer_addr = self.peer_addr(); let (reader, writer) = tokio::io::split(self.stream); let stream = Box::pin(JetStreamImpl::new(reader, self.nb_bytes_read, peer_addr, buffer_writer)); let sink = Box::pin(JetSinkImpl::new( writer, self.nb_bytes_written, peer_addr, buffer_reader, )); (stream, sink) } } fn tungstenite_err_to_io_err(err: tungstenite::Error) -> io::Error { match err { tungstenite::Error::Io(e) => e, other => io::Error::new(io::ErrorKind::Other, other), } } fn tungstenite_message_type_to_string(msg: &tungstenite::Message) -> &str { match msg { tungstenite::Message::Text(_) => "Text", tungstenite::Message::Binary(_) => "Binary", tungstenite::Message::Ping(_) => "Ping", tungstenite::Message::Pong(_) => "Pong", tungstenite::Message::Close(_) => "Close", } }
NEW YORK – A former JPMorgan Chase investment adviser was arrested Thursday on charges he stole $20 million from customers and spent the funds on unprofitable trading and other personal expenses.

Michael Oppenheim, 48, took money from at least seven bank clients in a fraud scheme he operated from March 2011 to March 2015, federal prosecutors alleged. He was arrested Thursday at his Livingston, N.J., home and brought to Manhattan federal court for an initial hearing, authorities said.

Oppenheim worked as a JPMorgan investment adviser from February 2002 through last month, the Securities and Exchange Commission said in a civil complaint. He advised approximately 500 clients who collectively kept roughly $89 million in assets under his management, according to a criminal complaint filed by Manhattan federal prosecutors.

FBI Assistant Director-in-Charge Diego Rodriguez said Oppenheim allegedly concealed clients' funds "in a game of hide-and-seek and personally benefited from illegitimately obtained profits." Robert Gamburg, an attorney representing Oppenheim, did not immediately respond to a message seeking comment.

JPMorgan Chase spokesman Michael Fusco said the bank terminated Oppenheim last month, alerted federal authorities and began working with affected customers. "We are sorry and angry that this happened," Fusco said in a statement issued by the nation's largest bank Thursday. "We always stand by our customers and will ensure no customer who had their money stolen will lose any funds related to this."

According to the criminal complaint, Oppenheim persuaded clients to allow transfers as large as millions of dollars from their accounts with promises he would invest the funds in low-risk municipal bonds held in a JPMorgan account. In other instances, he allegedly withdrew hundreds of thousands of dollars from client accounts without notification or authorization.

Prosecutors charged that Oppenheim used the funds to obtain cashier's checks he deposited in at least three online brokerage accounts he or his wife, Alexandra Oppenheim, held at other financial institutions. The investment adviser "lost the bulk of the stolen funds in highly unprofitable options trading," the SEC civil complaint charged. Oppenheim also used some of the money to pay personal expenses, such as a home loan and other bills, prosecutors alleged.

He tried to hide the scheme by giving some clients fraudulent account statements that reflected bonds held by other JPMorgan customers, federal prosecutors charged. On several occasions, he allegedly withdrew funds from one client's account and shifted it to another in an effort to prevent the scheme's discovery.

Criminal charges against Oppenheim include wire fraud, embezzlement, securities fraud and investment adviser fraud. He could face a maximum 30-year prison term if convicted on the embezzlement count.

The SEC court complaint seeks an order requiring Oppenheim to disgorge all ill-gotten gains. The financial regulator also named Alexandra Oppenheim as a relief defendant in the case because some of the money went to a broker account in her name, while other funds went to joint bank accounts she held with her husband.
def compute_transition_matrix(self, *args, **kwargs) -> "SimpleNaryExpression":
    if isinstance(self, KernelSimpleAdd):
        self._maybe_recalculate_constants(Constant)
    elif isinstance(self, KernelAdaptiveAdd):
        self._maybe_recalculate_constants(ConstantMatrix)

    for kexpr in self:
        if kexpr.transition_matrix is None:
            if isinstance(kexpr, Kernel):
                raise RuntimeError(
                    f"Kernel `{kexpr}` is uninitialized. "
                    f"Compute its transition matrix as `.compute_transition_matrix()`."
                )
            kexpr.compute_transition_matrix()
        elif isinstance(kexpr, Kernel):
            logg.debug(_LOG_USING_CACHE)

    self.transition_matrix = csr_matrix(
        self._fn([kexpr.transition_matrix for kexpr in self])
    )

    if self._parent is None:
        self._maybe_compute_cond_num()

    return self
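A minimal sketch of the combination step above, assuming `_fn` amounts to a weighted sum of the child kernels' transition matrices; the actual combination rule depends on the kernel expression and is not shown in this snippet.

import numpy as np
from scipy.sparse import csr_matrix

# Two toy row-stochastic transition matrices from child kernels
t1 = csr_matrix(np.array([[0.9, 0.1], [0.2, 0.8]]))
t2 = csr_matrix(np.array([[0.5, 0.5], [0.5, 0.5]]))

# Convex combination keeps rows summing to 1 because the weights sum to 1
weights = [0.7, 0.3]
combined = csr_matrix(weights[0] * t1 + weights[1] * t2)
print(combined.toarray())  # [[0.78, 0.22], [0.29, 0.71]]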
package com.fei.table;

import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduException;
import org.junit.After;
import org.junit.Before;

/**
 * @description: Tests creating and deleting Kudu tables.
 */
public class KuduTable {

    private KuduClient kuduClient = null;

    @Before
    public void init() {
        // Build the KuduClient instance
        kuduClient = new KuduClient.KuduClientBuilder("node2.itcast.cn:7051")
                // Set the socket read timeout (default is 10s)
                .defaultSocketReadTimeoutMs(6000)
                // Build the instance using the builder pattern
                .build();
    }

    @After
    public void clean() throws KuduException {
        // Close the KuduClient and release its resources
        if (null != kuduClient) kuduClient.close();
    }
}
// Run the Delegator, performing all necessary internal operations. Return err
// on failure and nil on success.
func (d *Delegator) Run(m MetaContext) (err error) {
    var jw *jsonw.Wrapper

    defer m.Trace("Delegator#Run", func() error { return err })()

    if err = d.CheckArgs(m); err != nil {
        return
    }

    d.MerkleRoot = m.G().MerkleClient.LastRoot()
    d.Ctime = m.G().Clock().Now().Unix()

    if d.DelegationType == DelegationTypeSibkey {
        if jw, err = KeyProof(m, *d); err != nil {
            m.Debug("| Failure in intermediate KeyProof()")
            return err
        }
        if d.RevSig, _, _, err = SignJSON(jw, d.NewKey); err != nil {
            m.Debug("| Failure in intermediate SignJson()")
            return err
        }
    }

    if m.G().LocalDb == nil {
        panic("should have a local DB")
    }

    if jw, err = KeyProof(m, *d); err != nil {
        m.Debug("| Failure in KeyProof()")
        return
    }

    return d.SignAndPost(m, jw)
}
package model import ( "fmt" sup "ml_console/support_functions" ) var ( // String for cmd_handler to know to pass to this module Module_init_command = "model" // This is passed to cmd_handler to generate the Main Menu Module_about = "Manage Models and DDP" // evenly space command descriptions in menus tab_over = " " //cmd list l = "list" u = "up" d = "down" de = "del" c = "check" t = "train" tt = "test" r = "results" // describe cmds for putting in menu l_d = "List Datasets Ready for Use" u_d = "Upload a dataset" d_d = "Downlaod a dataset" de_d = "Delete a Dataset From 'Ready to Use'" c_d = "See if a dataset was copied correctly" t_d = "train" tt_d = "test" r_d = "results" ) func Module_Menu() { // see Make_Menu in support functions var menu_name = "Model Module Menu" var menu_options = []string{ sup.Help, l, u, d, de, c, t, tt, r} var menu_options_desc = []string{ sup.Help_about, l_d, u_d, d_d, de_d, c_d, t_d, tt_d, r_d} sup.Make_Menu(menu_name, menu_options, menu_options_desc, sup.Magenta, sup.Blue, tab_over) } func Module_Menu_Logic(cmd string) { // cut out Module initialization string and first space cmd = cmd[len(Module_init_command)+1:] if cmd == l { fmt.Println(sup.Yellow + "List submodule in progress...") } else if cmd == u { fmt.Println(sup.Yellow + "Upload submodule in progress...") } else if cmd == d { fmt.Println(sup.Yellow + "Download submodule in progress...") } else if cmd == de { fmt.Println(sup.Yellow + "Delete submodule in progress...") } else if cmd == c { fmt.Println(sup.Yellow + "Check submodule in progress...") } else if cmd == t { fmt.Println(sup.Yellow + "Training submodule in progress...") } else if cmd == tt { fmt.Println(sup.Yellow + "Testing submodule in progress...") } else if cmd == r { fmt.Println(sup.Yellow + "Results submodule in progress...") } else if cmd == sup.Help { Module_Menu() } else { fmt.Println(sup.Err1) } }
package com.dotcms.rest.exception;

import com.dotcms.repackage.javax.ws.rs.core.Response;

public class InvalidRuleParameterException extends HttpStatusCodeException {

    private static final long serialVersionUID = 1L;
    private static final String ERROR_KEY = "dotcms.api.error.invalid_parameter";

    public InvalidRuleParameterException(String message, String... messageArgs) {
        super(Response.Status.BAD_REQUEST, ERROR_KEY, message, messageArgs);
    }
}
package events import ( "errors" "io/ioutil" "net/http" ) var ( // TriggerTypeMQTT - Event trigger of type MQTT - pub/sub TriggerTypeMQTT TriggerType = "mqtt" // TriggerTypeHTTP - Event trigger of type HTTP TriggerTypeHTTP TriggerType = "http" // ValidTriggerTypes - List of supported trigger types ValidTriggerTypes = []TriggerType{TriggerTypeMQTT} // ErrorNotSupportedTrigger - Error when event is assigned to not supported trigger types ErrorNotSupportedTrigger = errors.New("Trigger Type is not supported by Scaleway Functions Runtime") ) // TriggerType - Enumeration of valid trigger types supported by runtime type TriggerType string // GetTriggerType - check that a given trigger type is supported by runtime func GetTriggerType(triggerType string) (trigger TriggerType, err error) { if triggerType == "" { return TriggerTypeHTTP, nil } for _, validType := range ValidTriggerTypes { if string(validType) == triggerType { return validType, nil } } return "", ErrorNotSupportedTrigger } // FormatEvent - Format event according to given trigger type, if trigger type if not HTTP, then we assume that event // has already been formatted by event-source func FormatEvent(req *http.Request, triggerType TriggerType) (interface{}, error) { if triggerType == TriggerTypeHTTP { return formatEventHTTP(req), nil } // request body is the event reqBody, err := ioutil.ReadAll(req.Body) if err != nil { return nil, errors.New("Unable to read request body") } return string(reqBody), nil }
package imgui // #cgo linux LDFLAGS: -L./cimgui -l:cimgui.a -lstdc++ -lm // #include "native.h" import "C" import ( "fmt" "unsafe" "github.com/FooSoft/lazarus/graphics" "github.com/FooSoft/lazarus/math" ) func Begin(label string) bool { labelC := C.CString(label) defer C.free(unsafe.Pointer(labelC)) return bool(C.igBegin(labelC, nil, 0)) } func End() { C.igEnd() } func Button(label string) bool { labelC := C.CString(label) defer C.free(unsafe.Pointer(labelC)) return bool(C.igButton(labelC, C.ImVec2{})) } func Image(texture graphics.Texture) { ImageSized(texture, texture.Size()) } func ImageSized(texture graphics.Texture, size math.Vec2i) { C.igImage( C.nativeHandleCast(C.uintptr_t(texture.Id())), C.ImVec2{x: C.float(size.X), y: C.float(size.Y)}, C.ImVec2{0, 0}, C.ImVec2{1, 1}, C.ImVec4{1, 1, 1, 1}, C.ImVec4{0, 0, 0, 0}, ) } func SameLine() { C.igSameLine(0, -1) } func SliderInt(label string, value *int, min, max int) bool { labelC := C.CString(label) defer C.free(unsafe.Pointer(labelC)) valueC := C.int(*value) result := bool(C.igSliderInt(labelC, &valueC, (C.int)(min), (C.int)(max), nil)) *value = int(valueC) return result } func Text(format string, args ...interface{}) { label := fmt.Sprintf(format, args...) labelStartC := C.CString(label) labelEndC := (*C.char)(unsafe.Pointer(uintptr(unsafe.Pointer(labelStartC)) + uintptr(len(label)))) defer C.free(unsafe.Pointer(labelStartC)) C.igTextUnformatted(labelStartC, labelEndC) } func Columns(count int) { C.igColumns(C.int(count), nil, true) } func NextColumn() { C.igNextColumn() } func ShowDemoWindow() { C.igShowDemoWindow(nil) } func SetNextWindowPos(pos math.Vec2i) { C.igSetNextWindowPos(C.ImVec2{x: C.float(pos.X), y: C.float(pos.Y)}, C.ImGuiCond_FirstUseEver, C.ImVec2{}) } func SetNextWindowSize(size math.Vec2i) { C.igSetNextWindowSize(C.ImVec2{x: C.float(size.X), y: C.float(size.Y)}, C.ImGuiCond_FirstUseEver) }
/** * TODO: Move Interestial to their own executor and move all tests! */ @ExtendWith(MockitoExtension.class) public class AdMobTest { @Mock Context mockedContext; @Mock AppCompatActivity mockedActivity; @Mock PluginCall pluginCallMock; @Mock MockedConstruction<BannerExecutor> bannerExecutorMockedConstruction; AdMob sut; @BeforeEach public void beforeEach() { reset(pluginCallMock, mockedContext); sut = new AdMob() { @Override public Context getContext() { return mockedContext; } @Override public AppCompatActivity getActivity() { return mockedActivity; } @Override public String getLogTag() { return "LogTag"; } }; } @AfterEach public void afterEach() { bannerExecutorMockedConstruction.close(); } @Nested @DisplayName("Initialize()") class Initialize { MockedStatic<MobileAds> mobileAdsMockedStatic; JSArray testingDevices; ArgumentCaptor<RequestConfiguration> argumentCaptor; @BeforeEach void beforeEachInitializeTest() { mobileAdsMockedStatic = Mockito.mockStatic(MobileAds.class); argumentCaptor = ArgumentCaptor.forClass(RequestConfiguration.class); } @AfterEach void afterEachInitializeTest() { mobileAdsMockedStatic.close(); } @Test @DisplayName("If we initialize in not testing mode, then set the testing devices to an empty list") public void emptyTestingDevices() { when(pluginCallMock.getBoolean("initializeForTesting", false)).thenReturn(false); assertEquals(argumentCaptor.getAllValues().size(), 0); // Correct env sut.initialize(pluginCallMock); mobileAdsMockedStatic.verify(times(1), () -> MobileAds.setRequestConfiguration(argumentCaptor.capture())); assertEquals(0, argumentCaptor.getValue().getTestDeviceIds().size()); } @Test @DisplayName("Register Testing Devices if in testing Mode") public void registerTestingDevices() { when(pluginCallMock.getBoolean("initializeForTesting", false)).thenReturn(true); testingDevices = new JSArray(); testingDevices.put("One"); testingDevices.put("Two"); when(pluginCallMock.getArray("testingDevices", AdMob.EMPTY_TESTING_DEVICES)).thenReturn(testingDevices); assertEquals(argumentCaptor.getAllValues().size(), 0); // Correct env sut.initialize(pluginCallMock); mobileAdsMockedStatic.verify(times(1), () -> MobileAds.setRequestConfiguration(argumentCaptor.capture())); try { assertEquals(testingDevices.toList(), argumentCaptor.getValue().getTestDeviceIds()); } catch (JSONException e) { throw new RuntimeException(e); } } @Test @DisplayName("Initializes the banner executor") public void bannerExecutorInitialize() { when(pluginCallMock.getBoolean("initializeForTesting", false)).thenReturn(false); sut.initialize(pluginCallMock); BannerExecutor bannerExecutor = bannerExecutorMockedConstruction.constructed().get(0); verify(bannerExecutor).initialize(); } } @Nested @DisplayName("Ads Creation") class AdsCreation { AdOptions adOptionsWithNpaTrue = new AdOptions.TesterAdOptionsBuilder().setNpa(true).build(); MockedStatic<MobileAds> mobileAdsMockedStatic; MockedConstruction<AdView> adViewMockedConstruction; MockedStatic<AdOptions> adOptionsStaticMocked; MockedStatic<AdViewIdHelper> adViewIdHelperMockedStatic; MockedConstruction<CoordinatorLayout.LayoutParams> layoutParamsMockedConstruction; MockedStatic<RequestHelper> requestHelperMockedStatic; @Mock AdOptions.AdOptionsFactory adOptionsFactoryMock; ArgumentCaptor<Runnable> runnableArgumentCaptor; @BeforeEach void beforeEachAdCreation() { reset(pluginCallMock, adOptionsFactoryMock); runnableArgumentCaptor = ArgumentCaptor.forClass(Runnable.class); mobileAdsMockedStatic = Mockito.mockStatic(MobileAds.class); 
adViewMockedConstruction = Mockito.mockConstruction(AdView.class); layoutParamsMockedConstruction = Mockito.mockConstruction(CoordinatorLayout.LayoutParams.class); adViewIdHelperMockedStatic = Mockito.mockStatic(AdViewIdHelper.class); adOptionsStaticMocked = Mockito.mockStatic(AdOptions.class); adOptionsStaticMocked.when(AdOptions::getFactory).thenReturn(adOptionsFactoryMock); requestHelperMockedStatic = Mockito.mockStatic(RequestHelper.class); } @AfterEach void afterEachAdCreation() { mobileAdsMockedStatic.close(); adViewMockedConstruction.close(); adOptionsStaticMocked.close(); layoutParamsMockedConstruction.close(); adViewIdHelperMockedStatic.close(); requestHelperMockedStatic.close(); } @Nested @DisplayName("Build request in the same way for all ad types") class RequestBuilding { @BeforeEach public void beforeEach() { sut.initialize(pluginCallMock); lenient().when(pluginCallMock.getArray("testingDevices", AdMob.EMPTY_TESTING_DEVICES)).thenReturn(new JSArray()); } @Test @DisplayName("Interstitial constructs the request using the RequestHelper") void prepareInterstitial() { try (MockedConstruction<InterstitialAd> interstitialAdMockedConstruction = Mockito.mockConstruction(InterstitialAd.class)) { when(adOptionsFactoryMock.createInterstitialOptions(any())).thenReturn(adOptionsWithNpaTrue); sut.prepareInterstitial(pluginCallMock); verify(mockedActivity).runOnUiThread(runnableArgumentCaptor.capture()); Runnable uiThreadRunnable = runnableArgumentCaptor.getValue(); uiThreadRunnable.run(); requestHelperMockedStatic.verify(() -> RequestHelper.createRequest(adOptionsWithNpaTrue)); } } @Test @DisplayName("Interstitial does not initialize the same add two times") void prepareInterstitialJustOneTime() { try (MockedConstruction<InterstitialAd> interstitialAdMockedConstruction = Mockito.mockConstruction(InterstitialAd.class)) { when(adOptionsFactoryMock.createInterstitialOptions(any())).thenReturn(adOptionsWithNpaTrue); sut.prepareInterstitial(pluginCallMock); sut.prepareInterstitial(pluginCallMock); sut.prepareInterstitial(pluginCallMock); sut.prepareInterstitial(pluginCallMock); verify(mockedActivity, atMostOnce()).runOnUiThread(runnableArgumentCaptor.capture()); } } @Test @DisplayName("Rewarded Video Ad constructs the request using the RequestHelper") void prepareRewardVideo() { mobileAdsMockedStatic.when(() -> MobileAds.getRewardedVideoAdInstance(any())).thenReturn(mock(RewardedVideoAd.class)); when(adOptionsFactoryMock.createRewardVideoOptions(any())).thenReturn(adOptionsWithNpaTrue); sut.prepareRewardVideoAd(pluginCallMock); verify(mockedActivity).runOnUiThread(runnableArgumentCaptor.capture()); Runnable uiThreadRunnable = runnableArgumentCaptor.getValue(); uiThreadRunnable.run(); requestHelperMockedStatic.verify(() -> RequestHelper.createRequest(adOptionsWithNpaTrue)); } } } }
/** * internal: parent image read wrapper for compacting. */ static int vdParentRead(void *pvUser, uint64_t uOffset, void *pvBuf, size_t cbRead) { PVDPARENTSTATEDESC pParentState = (PVDPARENTSTATEDESC)pvUser; bool fLocked = ASMAtomicXchgBool(&pParentState->pDisk->fLocked, true); AssertMsgReturn(!fLocked, ("Calling synchronous parent read while another thread holds the disk lock\n"), VERR_VD_INVALID_STATE); RTSGSEG Segment; RTSGBUF SgBuf; VDIOCTX IoCtx; Segment.pvSeg = pvBuf; Segment.cbSeg = cbRead; RTSgBufInit(&SgBuf, &Segment, 1); vdIoCtxInit(&IoCtx, pParentState->pDisk, VDIOCTXTXDIR_READ, uOffset, cbRead, pParentState->pImage, &SgBuf, NULL, NULL, VDIOCTX_FLAGS_SYNC | VDIOCTX_FLAGS_ZERO_FREE_BLOCKS); int rc = vdReadHelperAsync(&IoCtx); ASMAtomicXchgBool(&pParentState->pDisk->fLocked, false); return rc; }
def test_can_import_pandapower_and_pandas(): import pandas import pandapower.timeseries print(f'Pandapower version: {pandapower.__version__}') print(f'Pandas version: {pandas.__version__}')
Getty Images It wasn't supposed to be this way for Arkansas head coach Bret Bielema. Bielema made the jump from Wisconsin of the Big Ten to the SEC's Arkansas Razorbacks prior to the 2013 season, with three straight Big Ten titles and three straight Rose Bowl appearances under his belt. The "three" theme continued early in 2013, as the Hogs won three straight to start the season on the heels of three straight 100-yard rushing performances from then-true freshman running back Alex Collinsโ€”who just so happens to wear No. 3. Then the wheels came off. The Hogs lost nine straight games to close the season, and they finished 0-8 in the SEC for the first time in program history. Quite a change from Bielema's dominance of the Big Ten. Butch Dill/Associated Press "The part that jumps out to me is the week-to-week grind," Bielema said. "Certain coaches were hacked off about the SEC only having eight conference games. Well I'd love to see them come try those eight. There's just nothing like it in the world of college football." The big change Bielema noticed in his inaugural campaign in the SEC was up front on defense, where teams rotated members of the front four often to keep bodies fresh to combat his power rushing attack with Collins and rising junior Jonathan Williams. "Specifically, the power, the speed and the depth in the defensive linemen was very impressive," Bielema said. Bielema found that out the hard way last year. His quarterback Brandon Allen hurt his shoulder diving into the end zone for a touchdown against Southern Miss in the third game of the season. Allen sat out the next gameโ€”a loss at Rutgersโ€”and struggled to stay healthy because of the constant barrage of big men. "Obviously I can't go into great detail, but there were about four or five straight weeks where he wan't able to practice and not really doing anything except walkthroughs and play on Saturdays," Bielema said. "That's a true testament to his character and what he's all about." Bill Haber/Associated Press The ability to keep those big, athletic bodies fresh is obviously a huge advantage for SEC teams, especially during the season and when that grind starts to take a toll. But it's not just about the defense. Last year was a banner year for the SEC in the quarterback department, with Heisman Trophy winner Johnny Manziel taking snaps at Texas A&M, highly decorated senior AJ McCarron at Alabama and record-setting signal-caller Aaron Murray at Georgia, among others. That, coupled with his quarterback's struggle to stay healthy, created a perfect storm that contributed to the rough road in Year 1. Sam Greenwood/Getty Images "Every league is quarterback-driven," Bielema said. "In this league in particular, if you have a guy who knows the league, knows how to manage the game, get you out of some difficult situations and not put you in bad ones, it will work very well." Off the field, one change was welcomed by Bielema with open arms. The ability to hire and retain a staff at Arkansas was a big selling point for the former Wisconsin head coach, and even though he lost some assistants between his first and second campaign in Fayetteville, the possibility to hire top-notch assistants separates the SEC. "It was the No. 1 reason for leaving Wisconsin," Bielema said. "I just didn't have the support financially to get it done. They've changed a bit now, but it's just the world of college football. 
The SEC, in general, sort of sets the standard for what goes on around the world of college football and it's fun to be a part of it." Bret Bielema's 1,000-Yard Rushers Year Player Team Rush Yds. 2006 P.J. Hill Wisconsin 1,569 2007 P.J. HIll Wisconsin 1,212 2008 P.J. Hill Wisconsin 1,161 2009 John Clay Wisconsin 1,517 2010 James White Wisconsin 1,052 2010 John Clay Wisconsin 1,012 2011 Montee Ball Wisconsin 1,923 2012 Montee Ball Wisconsin 1,830 2013 Alex Collins Arkansas 1,026 ESPN.com/Wisconsin Media Guide As for this year, Bielema has some pieces in place to make a surprise turnaround if the Hogs stay healthy. Collins and Williams are back at running back, and the emergence of sophomore speedster Korliss Marshall as a home run threat will give the coaching staff the ability to produce a multi-dimensional rushing attack even if the passing game struggles in 2014. The deep stable of running backs presents a "rich man's problem" for Bielema. Luckily for him, balancing three running backs is something he experienced quite a bit at Wisconsin, including the 2010 season when James White and John Clay broke the 1,000-yard mark and Montee Ball added 996 of his own. The ability to manage carries and, perhaps more importantly, egos, will be a huge benefit to this Hogs team. Chris Graythen/Getty Images "It's not a 'me, me, me' game, it's a 'we, we, we' game," Bielema said. "Those guys know that, when they tapped their helmets to come out, the next play could go the distance and they want to make sure the fresh guy is in there. The more I can help build a selfless attitude and help the guys understand that it's a team trying to win a game play by play and person by person." Defensively, the Hogs lost defensive end Chris Smith and defensive coordinator Chris Ash left his post to take a job at Ohio State. In Ash's place is Robb Smith, who will have the luxury of having some quality pieces along the defensive line, including Trey Flowers and Darius Philon. The overwhelming theme for this year's Hogs defense is simplifying the defense and building a unit that generates pressure with four linemen and allows the secondaryโ€”which is long on experience but short on productionโ€”to take advantage. "Robb Smith brings a simplicity," Bielema said. "He has a background in both the NFL and college football. Our defense is going to play a lot more aggressively at the line of scrimmage and get after the quarterback. I'm very, very excited about him." Year 1 didn't go according to Bielema's plan, but now he knows what to expect in the SEC and is working to implement changes that could get the Hogs program back on the right track. That needs to happen in a hurry, because Year 3 is looming in 2015, and it could be an important one for Bielema in Fayetteville. After all, it is the magic number. * Barrett Sallee is the lead SEC college football writer for Bleacher Report. All quotes were obtained firsthand, all stats are courtesy of CFBStats.com and all recruiting information is courtesy of 247Sports.com. Follow @BarrettSallee
Mixed noise reduction method based on fuzzy morphological filtering To remove mixed noise from images, we studied the noise reduction performance of the fuzzy morphological filter and introduce a new noise reduction method. First, the method uses an S-function to fuzzify the image, and then performs a fuzzy morphological opening-closing operation on the fuzzified image. To improve the noise reduction performance, we adopt double structural elements, combining fuzzy square structural elements with fuzzy linear structural elements. Experiments show that the proposed method effectively reduces impulse noise, Gaussian noise, and mixed noise, and it has a clear advantage on images corrupted by severe impulse noise.
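As a rough, self-contained illustration of the two stages the abstract describes, the Go sketch below fuzzifies a small grayscale patch with an S-function and then applies a flat (min/max) opening-closing with a 3×3 square structural element. The breakpoints, the patch values, and every name in it are assumptions made for illustration; the paper's fuzzy linear structural elements and its exact membership parameters are not reproduced here.

package main

import "fmt"

// sFunction maps a gray level x in [0, 255] to a membership value in [0, 1]
// using the standard S-shaped curve with breakpoints a < c (crossover at the midpoint).
func sFunction(x, a, c float64) float64 {
	switch {
	case x <= a:
		return 0
	case x <= (a+c)/2:
		d := (x - a) / (c - a)
		return 2 * d * d
	case x <= c:
		d := (x - c) / (c - a)
		return 1 - 2*d*d
	default:
		return 1
	}
}

// morph applies flat grayscale (fuzzy min/max) erosion or dilation with a
// 3x3 square structural element; out-of-range neighbours are simply skipped.
func morph(img [][]float64, useMax bool) [][]float64 {
	h, w := len(img), len(img[0])
	out := make([][]float64, h)
	for y := range out {
		out[y] = make([]float64, w)
		for x := range out[y] {
			v := img[y][x]
			for dy := -1; dy <= 1; dy++ {
				for dx := -1; dx <= 1; dx++ {
					ny, nx := y+dy, x+dx
					if ny < 0 || ny >= h || nx < 0 || nx >= w {
						continue
					}
					if useMax && img[ny][nx] > v {
						v = img[ny][nx]
					}
					if !useMax && img[ny][nx] < v {
						v = img[ny][nx]
					}
				}
			}
			out[y][x] = v
		}
	}
	return out
}

// opening = dilation(erosion(f)); closing = erosion(dilation(f)).
func opening(img [][]float64) [][]float64 { return morph(morph(img, false), true) }
func closing(img [][]float64) [][]float64 { return morph(morph(img, true), false) }

func main() {
	// A tiny grayscale patch with one bright and one dark impulse.
	raw := [][]float64{
		{10, 12, 250, 11, 9},
		{11, 13, 12, 10, 12},
		{12, 0, 11, 13, 11},
		{10, 12, 13, 12, 10},
	}
	// Fuzzify, then apply opening followed by closing.
	fuzzy := make([][]float64, len(raw))
	for y, row := range raw {
		fuzzy[y] = make([]float64, len(row))
		for x, v := range row {
			fuzzy[y][x] = sFunction(v, 0, 255)
		}
	}
	filtered := closing(opening(fuzzy))
	fmt.Println(filtered)
}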
<filename>cpp/tests/interop/dlpack_test.cpp<gh_stars>0 /* * Copyright (c) 2019-2022, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <cudf/interop.hpp> #include <cudf_test/base_fixture.hpp> #include <cudf_test/column_utilities.hpp> #include <cudf_test/column_wrapper.hpp> #include <cudf_test/table_utilities.hpp> #include <cudf_test/type_lists.hpp> #include <dlpack/dlpack.h> #include <thrust/host_vector.h> using namespace cudf::test; struct dlpack_deleter { void operator()(DLManagedTensor* tensor) { tensor->deleter(tensor); } }; using unique_managed_tensor = std::unique_ptr<DLManagedTensor, dlpack_deleter>; template <typename T> DLDataType get_dtype() { uint8_t const bits{sizeof(T) * 8}; uint16_t const lanes{1}; if (std::is_floating_point_v<T>) { return DLDataType{kDLFloat, bits, lanes}; } else if (std::is_signed_v<T>) { return DLDataType{kDLInt, bits, lanes}; } else if (std::is_unsigned_v<T>) { return DLDataType{kDLUInt, bits, lanes}; } else { static_assert(true, "unsupported type"); } } template <typename T> void validate_dtype(DLDataType const& dtype) { switch (dtype.code) { case kDLInt: EXPECT_TRUE(std::is_integral_v<T> && std::is_signed_v<T>); break; case kDLUInt: EXPECT_TRUE(std::is_integral_v<T> && std::is_unsigned_v<T>); break; case kDLFloat: EXPECT_TRUE(std::is_floating_point_v<T>); break; default: FAIL(); } EXPECT_EQ(1, dtype.lanes); EXPECT_EQ(sizeof(T) * 8, dtype.bits); } class DLPackUntypedTests : public BaseFixture { }; TEST_F(DLPackUntypedTests, EmptyTableToDlpack) { cudf::table_view empty(std::vector<cudf::column_view>{}); EXPECT_EQ(nullptr, cudf::to_dlpack(empty)); } TEST_F(DLPackUntypedTests, EmptyColsToDlpack) { fixed_width_column_wrapper<int32_t> col1({}); fixed_width_column_wrapper<int32_t> col2({}); cudf::table_view input({col1, col2}); EXPECT_EQ(nullptr, cudf::to_dlpack(input)); } TEST_F(DLPackUntypedTests, NullTensorFromDlpack) { EXPECT_THROW(cudf::from_dlpack(nullptr), cudf::logic_error); } TEST_F(DLPackUntypedTests, MultipleTypesToDlpack) { fixed_width_column_wrapper<int16_t> col1({1, 2, 3, 4}); fixed_width_column_wrapper<int32_t> col2({1, 2, 3, 4}); cudf::table_view input({col1, col2}); EXPECT_THROW(cudf::to_dlpack(input), cudf::logic_error); } TEST_F(DLPackUntypedTests, InvalidNullsToDlpack) { fixed_width_column_wrapper<int32_t> col1({1, 2, 3, 4}); fixed_width_column_wrapper<int32_t> col2({1, 2, 3, 4}, {1, 0, 1, 1}); cudf::table_view input({col1, col2}); EXPECT_THROW(cudf::to_dlpack(input), cudf::logic_error); } TEST_F(DLPackUntypedTests, StringTypeToDlpack) { strings_column_wrapper col({"foo", "bar", "baz"}); cudf::table_view input({col}); EXPECT_THROW(cudf::to_dlpack(input), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedDeviceTypeFromDlpack) { fixed_width_column_wrapper<int32_t> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof an unsupported device type tensor->dl_tensor.device.device_type = kDLOpenCL; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } 
TEST_F(DLPackUntypedTests, InvalidDeviceIdFromDlpack) { fixed_width_column_wrapper<int32_t> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof the wrong device ID tensor->dl_tensor.device.device_id += 1; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedDimsFromDlpack) { fixed_width_column_wrapper<int32_t> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof an unsupported number of dims tensor->dl_tensor.ndim = 3; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } TEST_F(DLPackUntypedTests, TooManyRowsFromDlpack) { fixed_width_column_wrapper<int32_t> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof too many rows constexpr int64_t max_size_type{std::numeric_limits<int32_t>::max()}; tensor->dl_tensor.shape[0] = max_size_type + 1; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } TEST_F(DLPackUntypedTests, TooManyColsFromDlpack) { fixed_width_column_wrapper<int32_t> col1({1, 2, 3, 4}); fixed_width_column_wrapper<int32_t> col2({5, 6, 7, 8}); cudf::table_view input({col1, col2}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof too many cols constexpr int64_t max_size_type{std::numeric_limits<int32_t>::max()}; tensor->dl_tensor.shape[1] = max_size_type + 1; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } TEST_F(DLPackUntypedTests, InvalidTypeFromDlpack) { fixed_width_column_wrapper<int32_t> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof an invalid data type tensor->dl_tensor.dtype.code = 3; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedIntBitsizeFromDlpack) { fixed_width_column_wrapper<int32_t> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof an unsupported bitsize tensor->dl_tensor.dtype.bits = 7; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedFloatBitsizeFromDlpack) { fixed_width_column_wrapper<float> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof an unsupported bitsize tensor->dl_tensor.dtype.bits = 7; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedLanesFromDlpack) { fixed_width_column_wrapper<int32_t> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Spoof an unsupported number of lanes tensor->dl_tensor.dtype.lanes = 2; EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedBroadcast1DTensorFromDlpack) { using T = float; constexpr int ndim = 1; // Broadcasted (stride-0) 1D tensor auto const data = cudf::test::make_type_param_vector<T>({1}); int64_t shape[ndim] = {5}; int64_t strides[ndim] = {0}; DLManagedTensor tensor{}; tensor.dl_tensor.device.device_type = kDLCPU; tensor.dl_tensor.dtype = get_dtype<T>(); tensor.dl_tensor.ndim = ndim; tensor.dl_tensor.byte_offset = 0; tensor.dl_tensor.shape = shape; tensor.dl_tensor.strides = strides; thrust::host_vector<T> host_vector(data.begin(), data.end()); tensor.dl_tensor.data = host_vector.data(); EXPECT_THROW(cudf::from_dlpack(&tensor), cudf::logic_error); } 
TEST_F(DLPackUntypedTests, UnsupportedStrided1DTensorFromDlpack) { using T = float; constexpr int ndim = 1; // Strided 1D tensor auto const data = cudf::test::make_type_param_vector<T>({1, 2, 3, 4}); int64_t shape[ndim] = {2}; int64_t strides[ndim] = {2}; DLManagedTensor tensor{}; tensor.dl_tensor.device.device_type = kDLCPU; tensor.dl_tensor.dtype = get_dtype<T>(); tensor.dl_tensor.ndim = ndim; tensor.dl_tensor.byte_offset = 0; tensor.dl_tensor.shape = shape; tensor.dl_tensor.strides = strides; thrust::host_vector<T> host_vector(data.begin(), data.end()); tensor.dl_tensor.data = host_vector.data(); EXPECT_THROW(cudf::from_dlpack(&tensor), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedImplicitRowMajor2DTensorFromDlpack) { using T = float; constexpr int ndim = 2; // Row major 2D tensor auto const data = cudf::test::make_type_param_vector<T>({1, 2, 3, 4}); int64_t shape[ndim] = {2, 2}; DLManagedTensor tensor{}; tensor.dl_tensor.device.device_type = kDLCPU; tensor.dl_tensor.dtype = get_dtype<T>(); tensor.dl_tensor.ndim = ndim; tensor.dl_tensor.byte_offset = 0; tensor.dl_tensor.shape = shape; tensor.dl_tensor.strides = nullptr; thrust::host_vector<T> host_vector(data.begin(), data.end()); tensor.dl_tensor.data = host_vector.data(); EXPECT_THROW(cudf::from_dlpack(&tensor), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedExplicitRowMajor2DTensorFromDlpack) { using T = float; constexpr int ndim = 2; // Row major 2D tensor with explicit strides auto const data = cudf::test::make_type_param_vector<T>({1, 2, 3, 4}); int64_t shape[ndim] = {2, 2}; int64_t strides[ndim] = {2, 1}; DLManagedTensor tensor{}; tensor.dl_tensor.device.device_type = kDLCPU; tensor.dl_tensor.dtype = get_dtype<T>(); tensor.dl_tensor.ndim = ndim; tensor.dl_tensor.byte_offset = 0; tensor.dl_tensor.shape = shape; tensor.dl_tensor.strides = strides; thrust::host_vector<T> host_vector(data.begin(), data.end()); tensor.dl_tensor.data = host_vector.data(); EXPECT_THROW(cudf::from_dlpack(&tensor), cudf::logic_error); } TEST_F(DLPackUntypedTests, UnsupportedStridedColMajor2DTensorFromDlpack) { using T = float; constexpr int ndim = 2; // Column major, but strided in fastest dimension auto const data = cudf::test::make_type_param_vector<T>({1, 2, 3, 4, 5, 6, 7, 8}); int64_t shape[ndim] = {2, 2}; int64_t strides[ndim] = {2, 4}; DLManagedTensor tensor{}; tensor.dl_tensor.device.device_type = kDLCPU; tensor.dl_tensor.dtype = get_dtype<T>(); tensor.dl_tensor.ndim = ndim; tensor.dl_tensor.byte_offset = 0; tensor.dl_tensor.shape = shape; tensor.dl_tensor.strides = strides; thrust::host_vector<T> host_vector(data.begin(), data.end()); tensor.dl_tensor.data = host_vector.data(); EXPECT_THROW(cudf::from_dlpack(&tensor), cudf::logic_error); } template <typename T> class DLPackTimestampTests : public BaseFixture { }; TYPED_TEST_SUITE(DLPackTimestampTests, ChronoTypes); TYPED_TEST(DLPackTimestampTests, ChronoTypesToDlpack) { fixed_width_column_wrapper<TypeParam, int32_t> col({1, 2, 3, 4}); cudf::table_view input({col}); EXPECT_THROW(cudf::to_dlpack(input), cudf::logic_error); } template <typename T> class DLPackNumericTests : public BaseFixture { }; // The list of supported types comes from DLDataType_to_data_type() in cpp/src/dlpack/dlpack.cpp // TODO: Replace with `NumericTypes` when unsigned support is added. 
Issue #5353 using SupportedTypes = cudf::test::RemoveIf<cudf::test::ContainedIn<cudf::test::Types<bool>>, cudf::test::NumericTypes>; TYPED_TEST_SUITE(DLPackNumericTests, SupportedTypes); TYPED_TEST(DLPackNumericTests, ToDlpack1D) { // Test nullable column with no nulls fixed_width_column_wrapper<TypeParam> col({1, 2, 3, 4}, {1, 1, 1, 1}); auto const col_view = static_cast<cudf::column_view>(col); EXPECT_FALSE(col_view.has_nulls()); EXPECT_TRUE(col_view.nullable()); cudf::table_view input({col}); unique_managed_tensor result(cudf::to_dlpack(input)); auto const& tensor = result->dl_tensor; validate_dtype<TypeParam>(tensor.dtype); EXPECT_EQ(kDLCUDA, tensor.device.device_type); EXPECT_EQ(1, tensor.ndim); EXPECT_EQ(uint64_t{0}, tensor.byte_offset); EXPECT_EQ(nullptr, tensor.strides); EXPECT_NE(nullptr, tensor.data); EXPECT_NE(nullptr, tensor.shape); // Verify that data matches input column constexpr cudf::data_type type{cudf::type_to_id<TypeParam>()}; cudf::column_view const result_view(type, tensor.shape[0], tensor.data, col_view.null_mask()); CUDF_TEST_EXPECT_COLUMNS_EQUAL(col_view, result_view); } TYPED_TEST(DLPackNumericTests, ToDlpack2D) { using T = TypeParam; auto const col1_tmp = cudf::test::make_type_param_vector<T>({1, 2, 3, 4}); auto const col2_tmp = cudf::test::make_type_param_vector<T>({4, 5, 6, 7}); std::vector<fixed_width_column_wrapper<TypeParam>> cols; cols.push_back(fixed_width_column_wrapper<TypeParam>(col1_tmp.cbegin(), col1_tmp.cend())); cols.push_back(fixed_width_column_wrapper<TypeParam>(col2_tmp.cbegin(), col2_tmp.cend())); std::vector<cudf::column_view> col_views; std::transform(cols.begin(), cols.end(), std::back_inserter(col_views), [](auto const& col) { return static_cast<cudf::column_view>(col); }); cudf::table_view input(col_views); unique_managed_tensor result(cudf::to_dlpack(input)); auto const& tensor = result->dl_tensor; validate_dtype<TypeParam>(tensor.dtype); EXPECT_EQ(kDLCUDA, tensor.device.device_type); EXPECT_EQ(2, tensor.ndim); EXPECT_EQ(uint64_t{0}, tensor.byte_offset); EXPECT_NE(nullptr, tensor.data); EXPECT_NE(nullptr, tensor.shape); EXPECT_NE(nullptr, tensor.strides); EXPECT_EQ(1, tensor.strides[0]); EXPECT_EQ(tensor.shape[0], tensor.strides[1]); // Verify that data matches input columns cudf::size_type offset{0}; for (auto const& col : input) { constexpr cudf::data_type type{cudf::type_to_id<TypeParam>()}; cudf::column_view const result_view(type, tensor.shape[0], tensor.data, nullptr, 0, offset); CUDF_TEST_EXPECT_COLUMNS_EQUAL(col, result_view); offset += tensor.strides[1]; } } TYPED_TEST(DLPackNumericTests, FromDlpack1D) { // Use to_dlpack to generate an input tensor fixed_width_column_wrapper<TypeParam> col({1, 2, 3, 4}); cudf::table_view input({col}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Verify that from_dlpack(to_dlpack(input)) == input auto result = cudf::from_dlpack(tensor.get()); CUDF_TEST_EXPECT_TABLES_EQUAL(input, result->view()); } TYPED_TEST(DLPackNumericTests, FromDlpack2D) { // Use to_dlpack to generate an input tensor using T = TypeParam; auto const col1 = cudf::test::make_type_param_vector<T>({1, 2, 3, 4}); auto const col2 = cudf::test::make_type_param_vector<T>({4, 5, 6, 7}); std::vector<fixed_width_column_wrapper<TypeParam>> cols; cols.push_back(fixed_width_column_wrapper<T>(col1.cbegin(), col1.cend())); cols.push_back(fixed_width_column_wrapper<T>(col2.cbegin(), col2.cend())); std::vector<cudf::column_view> col_views; std::transform(cols.begin(), cols.end(), std::back_inserter(col_views), [](auto const& col) { 
return static_cast<cudf::column_view>(col); }); cudf::table_view input(col_views); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Verify that from_dlpack(to_dlpack(input)) == input auto result = cudf::from_dlpack(tensor.get()); CUDF_TEST_EXPECT_TABLES_EQUAL(input, result->view()); } TYPED_TEST(DLPackNumericTests, FromDlpackCpu) { // Host buffer with stride > rows and byte_offset > 0 using T = TypeParam; auto const data = cudf::test::make_type_param_vector<T>({0, 1, 2, 3, 4, 0, 5, 6, 7, 8, 0}); uint64_t const offset{sizeof(T)}; int64_t shape[2] = {4, 2}; int64_t strides[2] = {1, 5}; DLManagedTensor tensor{}; tensor.dl_tensor.device.device_type = kDLCPU; tensor.dl_tensor.dtype = get_dtype<T>(); tensor.dl_tensor.ndim = 2; tensor.dl_tensor.byte_offset = offset; tensor.dl_tensor.shape = shape; tensor.dl_tensor.strides = strides; thrust::host_vector<T> host_vector(data.begin(), data.end()); tensor.dl_tensor.data = host_vector.data(); fixed_width_column_wrapper<TypeParam> col1({1, 2, 3, 4}); fixed_width_column_wrapper<TypeParam> col2({5, 6, 7, 8}); cudf::table_view expected({col1, col2}); auto result = cudf::from_dlpack(&tensor); CUDF_TEST_EXPECT_TABLES_EQUAL(expected, result->view()); } TYPED_TEST(DLPackNumericTests, FromDlpackEmpty1D) { // Use to_dlpack to generate an input tensor cudf::table_view input(std::vector<cudf::column_view>{}); unique_managed_tensor tensor(cudf::to_dlpack(input)); // Verify that from_dlpack(to_dlpack(input)) == input EXPECT_THROW(cudf::from_dlpack(tensor.get()), cudf::logic_error); }
/** * Look for a reference to a package, class, field, method or markup document, * in the context of a markup document path * * @param name reference name. * @param sourcePosition current position in the source containing the reference. * @param markupDocContainer markup doc container to search for relative references. * @return a DocReferenceable, null if not found. */ public DocReferenceable find(String name, SourcePosition sourcePosition, String markupDocContainer) { if (name.contains("/") || markupDocContainer != null) { MarkupDoc doc = findMarkupDoc(name, markupDocContainer); if (doc != null) { return getMarkupDocRef(doc); } ResourceDoc resourceDoc = findResourceFile(name, markupDocContainer); if (resourceDoc != null) { return getResourceFileRef(resourceDoc); } } List<DocReferenceable> matches = findAll(name); if (matches.size() == 1) { return matches.get(0); } else if (matches.size() > 0) { StringBuilder error = new StringBuilder(); error.append("Ambiguous reference ").append(name).append(":\n"); for (DocReferenceable match : matches) { error.append("\t"); error.append(match.getQualifiedName()); error.append("\n"); } DocdownDoclet.getErrorReporter().printWarning(sourcePosition, error.toString()); } else if (matches.size() == 0) { DocdownDoclet.getErrorReporter().printWarning(sourcePosition, "Can't find reference " + name); } return null; }
// ID returns the ID of the node func (dp Dispatcher) ID() string { if !reflect.ValueOf(dp.p2pnet).IsNil() { return dp.p2pnet.ID() } if !reflect.ValueOf(dp.p2plnet).IsNil() { return dp.p2plnet.ID() } return "" }
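A side note on the reflect-based checks above: a plain comparison against nil does not catch an interface that holds a typed nil pointer, which is why reflect.ValueOf(...).IsNil() is used. The standalone Go sketch below demonstrates the pitfall with hypothetical types; it is illustrative only and is not the dispatcher's actual code.

package main

import (
	"fmt"
	"reflect"
)

// Network is a stand-in for the p2p interfaces held by the dispatcher.
type Network interface{ ID() string }

// tcpNet is a concrete implementation; a nil *tcpNet is still a valid dynamic value.
type tcpNet struct{ id string }

func (t *tcpNet) ID() string { return t.id }

func main() {
	var concrete *tcpNet // typed nil pointer
	var net Network = concrete

	// The interface itself is non-nil because it carries type information,
	// even though the pointer it wraps is nil.
	fmt.Println(net != nil)                   // true
	fmt.Println(reflect.ValueOf(net).IsNil()) // true: this is what the dispatcher checks
}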
// NewField returns a Field struct with information distilled from an
// *ast.Field. If the provided *ast.Field does not match the conventions of
// code generated by protoc-gen-go, an error will be returned.
func NewField(f *ast.Field) (*Field, error) {
	// | Type Genres | Repeated               | Naked         |
	// |-------------|------------------------|---------------|
	// | Enum        | Array -> Ident         | Ident         |
	// | Message     | Array -> Star -> Ident | Star -> Ident |
	// | BaseType    | Array -> Ident         | Ident         |
	//
	// Map types will always have a KeyType which is an Ident, and a value that
	// is one of the Type Genres specified in the table above.
	rv := &Field{
		Name: f.Names[0].Name,
		Type: &FieldType{},
	}

	// typeFollower 'follows' the type of the provided ast.Field, determining
	// the name of this field's type and whether it is a StarExpr, an ArrayType,
	// or both, modifying the return value accordingly.
	var typeFollower func(ast.Expr) error
	typeFollower = func(e ast.Expr) error {
		if f.Tag != nil {
			pbTag := reflect.StructTag(f.Tag.Value[1 : len(f.Tag.Value)-1]).Get("protobuf")
			subFields := strings.Split(pbTag, ",")
			if len(subFields) >= 4 {
				if idx := strings.Index(subFields[3], "="); idx != -1 {
					rv.PBFieldName = subFields[3][idx+1:]
				} else if len(subFields) >= 5 {
					// Guard the index so a four-element tag cannot panic.
					if idx := strings.Index(subFields[4], "="); idx != -1 {
						rv.PBFieldName = subFields[4][idx+1:]
					}
				}
			}
		}
		switch ex := e.(type) {
		case *ast.Ident:
			// Function or variable name.
			rv.Type.Name += ex.Name
			if oneof, ok := oneofs[ex.Name]; ok {
				rv.Type.Oneof = oneof
			}
		case *ast.StarExpr:
			// Pointer expression.
			rv.Type.StarExpr = true
			typeFollower(ex.X)
		case *ast.ArrayType:
			// Array/slice type. Handle multi-nested slices, such as repeated
			// bytes, which maps to [][]byte.
			if rv.Type.ArrayType {
				rv.Type.Name = "[]" + rv.Type.Name
			}
			rv.Type.ArrayType = true
			typeFollower(ex.Elt)
		case *ast.MapType:
			// Map (dictionary) type.
			mp, err := NewMap(ex)
			if err != nil {
				return errors.Wrapf(err, "failed to create map for field %q", rv.Name)
			}
			rv.Type.Map = mp
		case *ast.SelectorExpr:
			// Selector expression, i.e. a structure of the form a.b.
			var tname string
			if xnode, ok := ex.X.(*ast.Ident); ok {
				tname += xnode.Name + "."
			}
			tname += ex.Sel.Name
			rv.Type.Name += tname
		}
		return nil
	}
	err := typeFollower(f.Type)
	if err != nil {
		return nil, err
	}
	// isBaseType: gRPC scalar type. This cannot be a strict check at this point
	// because Type.Message has not been assigned yet.
	isBaseType := rv.Type.Message == nil && rv.Type.Enum == nil && rv.Type.Map == nil
	log.Tracef("[svcdef/svcdef.go][NewField] new field[%v].Type=%+v,IsBaseType=%v,StarExpr=%v,Repeated=%v\n",
		rv.Name, rv.Type.Name, isBaseType, rv.Type.StarExpr, rv.Type.ArrayType)
	log.Tracef("[svcdef/svcdef.go][NewField] new field[%v].Message=%v,Enum=%v,Map=%v,Repeated=%v\n",
		rv.Name, rv.Type.Message, rv.Type.Enum, rv.Type.Map, rv.Type.ArrayType)
	return rv, nil
}
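For readers less familiar with go/ast, the self-contained sketch below parses a tiny struct and classifies each field's type expression using the same case shapes (Ident, StarExpr, ArrayType, MapType, SelectorExpr) that typeFollower handles. The struct, names, and output format are assumptions for illustration and are not part of this package.

package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

const src = `package demo

type Msg struct {
	Name    string
	Count   *int32
	Tags    []string
	Attrs   map[string]string
	Created time.Time
}`

// describe walks an ast.Expr the way a typeFollower-style function would,
// reporting whether it is a pointer, slice, map, selector, or plain identifier.
func describe(e ast.Expr) string {
	switch ex := e.(type) {
	case *ast.Ident:
		return ex.Name
	case *ast.StarExpr:
		return "*" + describe(ex.X)
	case *ast.ArrayType:
		return "[]" + describe(ex.Elt)
	case *ast.MapType:
		return "map[" + describe(ex.Key) + "]" + describe(ex.Value)
	case *ast.SelectorExpr:
		return describe(ex.X) + "." + ex.Sel.Name
	default:
		return fmt.Sprintf("%T", e)
	}
}

func main() {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		panic(err)
	}
	// The first declaration is the Msg struct; print each field's type shape.
	st := file.Decls[0].(*ast.GenDecl).Specs[0].(*ast.TypeSpec).Type.(*ast.StructType)
	for _, f := range st.Fields.List {
		fmt.Printf("%s: %s\n", f.Names[0].Name, describe(f.Type))
	}
}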
Lingling Ou,1,2 Shaoqiang Lin,2 Bin Song,1 Jia Liu,1 Renfa Lai,2 Longquan Shao1 1Department of Stomatology, Nanfang Hospital, Southern Medical University, Guangzhou, Peopleโ€™s Republic of China; 2Department of Stomatology, the First Affiliated Hospital of Jinan University, Guangzhou, Peopleโ€™s Republic of China Abstract: Graphene-based materials (GBMs) are widely used in many fields, including biomedicine. To date, much attention had been paid to the potential unexpected toxic effects of GBMs. Here, we review the recent literature regarding the impact of GBMs on programmed cell death (PCD). Apoptosis, autophagy, and programmed necrosis are three major PCDs. Mechanistic studies demonstrated that the mitochondrial pathways and MAPKs (JNK, ERK, and p38)- and TGF-ฮฒ-related signaling pathways are implicated in GBMs-induced apoptosis. Autophagy, unlike apoptosis and necroptosis which are already clear cell death types, plays a vital pro-survival role in cell homeostasis, so its role in cell death should be carefully considered. However, GBMs always induce unrestrained autophagy accelerating cell death. GBMs trigger autophagy through inducing autophagosome accumulation and lysosome impairment. Mitochondrial dysfunction, ER stress, TLRs signaling pathways, and p38 MAPK and NF-ฮบB pathways participate in GBMs-induced autophagy. Programmed necrosis can be activated by RIP kinases, PARP, and TLR-4 signaling in macrophages after GBMs exposure. Though apoptosis, autophagy, and necroptosis are distinguished by some characteristics, their numerous signaling pathways comprise an interconnected network and correlate with each other, such as the TLRs, p53 signaling pathways, and the Beclin-1 and Bcl-2 interaction. A better understanding of the mechanisms of PCD induced by GBMs may allow for a thorough study of the toxicology of GBMs and a more precise determination of the consequences of human exposure to GBMs. These determinations will also benefit safety assessments of the biomedical and therapeutic applications of GBMs. Keywords: graphene based materials, cell toxicity, programmed cell death, mechanisms
Ayurvedic management in cervical spondylotic myelopathy The age related spondylotic changes may result in direct compressive and ischemic dysfunction of the spinal cord known as cervical spondylotic myelopathy (CSM). Symptoms often develop insidiously and are characterized by neck stiffness, unilateral or bilateral deep aching neck, arm and shoulder pain, and possibly stiffness or clumsiness while walking. The management available in current mainstream medicine is not satisfactory. Various Ayurvedic treatments have been in use for these manifestations. We present a case of CSM, which was treated with a combination of Panchakarma procedures and Ayurvedic oral drugs. The patient was considered suffering from Greevastambha (neck stiffness) and was treated with Shalishastika pinda svedana (sudation with medicated cooked bolus of rice) for one month and Mustadi yapana basti (enema with medicated milk) for 16 days along with oral Ayurvedic drugs such as Brihatavata chintamani rasa 50 mg, Ekangaveer ras-250 mg, Ardhangavatari rasa-125 mg Amrita satva (dry extract of Tinospora cordifolia Willd)-500 mg, Muktasukti pisti-500 mg, Ashwagandha churna (powder of Withania somnifera Dunal)-500 mg Dashmool kvatha ghana (solid extract of Dashmool kvatha)-500 mg, Trayodashanga guggulu-575 mg, twice a day with honey and Eranda paka-10 g twice a day with milk. Patient's condition which was assessed for symptoms of CSM and Chile's modified Japanese Orthopaedic Association (mJOA) score for cervical spondylotic myelopathy showed substantial improvement. This study shows that the cases of CSM may be successfully managed with Ayurvedic treatment. Introduction A degenerative cascade due to age-related changes in the spinal column is known as spondylosis. These spondylotic changes may result in direct compressive and ischemic dysfunction of the spinal cord known as cervical spondylotic myelopathy (CSM) . Symptoms often develop insidiously and are characterized by neck stiffness, unilateral or bilateral deep, aching neck, arm and shoulder pain; and possibly stiffness or clumsiness while walking. The hallmark symptom of CSM is weakness or stiffness in the arms. Clumsiness or weakness of the hands in conjunction with the legs is also characteristic of CSM. The incidence of CSM-caused hospitalization in eastern Asia is 4.04 per 100,000 person-years, with higher incidences observed in older and male patients . The incidence of Ossification of the Posterior Longitudinal Ligament , a common cause of cervical spondylotic myelopathy is 2.4% in the Asian population, and 0.16% in the non-Asian population . The overall prevalence in Indian population is unknown. The pathophysiology of CSM is thought to be multifactorial. Both static factors causing stenosis and dynamic factors resulting in repetitive injury to the spinal cord and spinal cord ischemia are involved in pathophysiology. Only limited conservative and surgical procedures are available in modern medicine for disease but there is much limitation to use these procedures. The standard treatment for moderate to severe CSM is operative procedures which are least preferred by the elderly patients. Hence there is a need to search for effective treatment in alternative medicine. No study is published in PubMed for Ayurvedic approach on CSM till date. Here we represent a case of CSM which was successfully treated with Ayurvedic management with Greevastambha (neck stiffness) as the Ayurvedic diagnosis . 
Case report A 62 years old male patient was consulted in Out-Patient Department of National Institute of Ayurveda, Jaipur for complaint of gradually progressive weakness of both upper and lower limbs. Patient also had the complaint of giddiness, neck stiffness and pain around the neck region. Patient had suffered from these problems since 4 years. Symptoms were aggravated by prolonged sitting and standing and minimally eased with gentle movement. The patient also reported intermittent low back pain to varying degrees over the past 2 years which radiated to bilateral lower limbs and intermittent numbness and tingling in the posterior calf region. The patient had undergone neurologic and orthopedic consultations in a tertiary care hospital of Jaipur a year before and conservative and surgical management was recommended. He didn't have complaints of any bowel or bladder changes. The medical history was unremarkable, and his general health was good. He was not taking any medications at the time of consultation. Clinical findings The case was subsequently admitted to the male Panchakarma ward of National Institute of Ayurveda, Jaipur on March-10, 2016 for the administration of therapeutic procedures. On physical examination, patient was anxious, appetite was apparently normal and tongue was uncoated. Micturition and bowel movement were normal. Patient had Vatapitta prakriti with Madhyam samhanana (medium body built), Madhyam sara (medium purest body tissue), Sama pramana (symmetrical body proportion), Madhyam satmya (medium homologation), Madhayam satva (medium mental strength), Madhyam vyayamshakti (medium capability of physical activities), Madhyam Aharshakti and Jaranshakti (medium food intake and digestive power). The patient demonstrated normal gait. The active movements of lumbar spine were within functional limits with reported pain at the end of forward flexion. Straight leg raise (S.L.R.) was negative bilaterally. Tenderness was noted over the spinous processes of L4 and L5. The range of motion for the bilateral knee and ankle joints was normal and the strength of the hamstrings and quadriceps musculature was also normal. On neurological examination, higher mental function and speech were normal. All cranial nerves were normal. On motor examination, bulk, tone, power and coordination of arms and legs were normal bilaterally. Power in both upper limbs was grade 4 on medical research council score. Power in left leg was grade 4รพ and in right leg was grade 5. Hyperreflexia was found in upper extremities bilaterally. Hoffman reflex and Babinski reflex were positive bilaterally. A multidermatomal decrease of sensation in bilateral upper extremities during pinprick testing was revealed during examination. Lhermitte's sign was positive. Deep tendon reflex examination revealed a diminished left Achilles tendon reflex. Joint position sense and vibration sensation was normal bilaterally. All laboratory and biochemical investigations were normal. Magnetic resonance imaging (MRI) of cervical spine that was done on March 2, 2016 revealed diffuse desiccated disc bulging at C3-4, C4-5, C5-6 and C6-7 level causing indentation over ventral thecal sac with associated ligamentum flavum hypertrophy causing spinal canal narrowing and spinal cord compression at multiple levels most notably at C-3-4 level with thinning of spinal cord at this level with T2 and STIR hyper intensity cord edema-suggestive of compressive myelopathy. 
Diagnostic focus and assessment The patient was a known case of cervical spondylotic myelopathy. It was confirmed by previously done MRI. The condition was also associated with lumbar spondylosis. Greevastambha was considered as the Ayurvedic diagnosis, which is included in Nanatamaj Vatavyadhi (~neurological, rheumatic and musculoskeletal diseases). Amyotrophic lateral sclerosis (ALS), primary spinal cord tumors, syringomyelia, extramedullary conditions (e.g., metastatic tumors), subacute combined degeneration of the spinal cord (vitamin B12 deficiency), hereditary spastic paraplegia, normal pressure hydrocephalus and spinal cord infarction were the differential diagnoses for the case. The presence of extremity sensory abnormalities and the absence of fasciculation on examination in this case excluded the diagnosis of ALS. Other conditions were excluded on the basis of characteristic MRI findings. In cervical spondylotic myelopathy, MRI shows narrowing of the spinal canal caused by osteophytes, herniated discs and ligamentum flavum hypertrophy. Treatment plan As no specific line of treatment is described for Greevastambha in Ayurvedic texts, the general line of management of Vatavyadhi, such as Abhyanga (massage), Svedana (sudation), Mridu virechana (mild purgation) and Basti procedures, was adopted for the patient. Considering the patient's Vatapitta prakriti and physical constitution, mild massage and mild sudation in the form of Shalishastika pinda svedana and Mridu basti (a milder form of Basti) in the form of Mustadi yapana basti were given to the patient. Intervention Various Panchakarma interventions were adopted to treat this patient. Mridu virechana with castor oil in a dose of 20 ml with lukewarm milk was given at night prior to the beginning of the medical intervention. From the next day, Shalishastika pinda svedana for 30 days along with Mustadi yapana basti for 16 days were adopted. Along with these Panchakarma interventions, selected Ayurvedic oral medicines were given: Brihatavata chintamani rasa 50 mg, Ekangaveera rasa 250 mg, Ardhangavatari rasa 125 mg, Amrita satva (starch of Tinospora cordifolia Willd) 500 mg, Muktasukti pisti 500 mg, Aswagandha churna (powder of Withania somnifera Dunal) 500 mg, Dashmool kvatha ghana (solid extract of Dashmool kvatha) 500 mg and Trayodashanga guggulu 575 mg (the said combination prescribed in a single dose of 3 g under the proprietary name Aghat™), administered with honey twice a day, and Eranda paka 10 g twice a day with milk. These oral medicines were continued for the next 2 months. Outcome measures and follow up After completion of the Panchakarma procedures, the patient's condition was assessed for pain, giddiness, neck stiffness, neck motion, and power and reflexes of the upper and lower limbs. Pain had subsided. The patient had no giddiness. Neck stiffness had substantially reduced. Range of motion of the neck was normal. Power of both upper and lower limbs was 5/5 on the medical research council scale. Reflexes of both upper and lower limbs were 2+. Bilateral straight leg raising test had increased to 90° of hip flexion. Bilateral Hoffman reflex, bilateral Babinski reflex and Lhermitte's sign were negative at this time. The mJOA score for cervical spondylotic myelopathy was 8 before treatment and improved to 14 after one month of treatment. The patient was discharged on April 12, 2016 with instructions to continue the oral medicines. The patient's condition was stable after one month of treatment, but the patient felt some stiffness in the lumbar region.
MRI done on May 31, 2016 revealed concentric desiccated diffuse disc bulge at the C3-4 to C6-7 levels with postero-lateral disc protrusion causing central canal and bilateral neural foraminal narrowing, resulting in mild compression over the bilateral exiting nerve roots (Table 3). There was a remarkable improvement on MRI, as the ligamentum flavum hypertrophy causing spinal canal narrowing and spinal cord compression at multiple levels, most notably at the C3-4 level, with thinning of the spinal cord and cord edema at that level, was no longer notable in this MRI as compared to the previous MRI of March 2, 2016, in which all these findings were present. Serum glutamic oxaloacetic transaminase (SGOT), serum glutamic pyruvic transaminase (SGPT), bilirubin (direct and indirect) and serum creatinine, tested on June 11, 2016 to assess the safety profile of the treatment, were also within limits. Discussion The three main pathophysiologic factors in the development of CSM are static mechanical compression, dynamic mechanical compression and spinal cord ischemia. Static mechanical factors result in the reduction of spinal canal diameter and spinal cord compression. With aging, the intervertebral discs dry out, resulting in the loss of disc height, which puts greater stress on the articular cartilage of the vertebrae and their respective end plates. Osteophytic spurs that develop at the margins of these end plates stabilize adjacent vertebrae whose hypermobility is caused by the degeneration of the disc. The calcified disc further stabilizes the vertebrae. The ligamentum flavum may also stiffen and buckle into the spinal cord dorsally. These changes cause direct compression of the spinal cord, resulting in myelopathy. The normal motion of the cervical spine may aggravate spinal cord damage precipitated by this direct mechanical and static mechanical compression. The spinal cord lengthens during flexion, thus stretching over ventral osteophytic ridges. The ligamentum flavum may buckle into the spinal cord during extension, causing a reduction of available space for the spinal cord.
Table 1 Panchakarma procedures for the case of cervical spondylotic myelopathy (procedure, method of preparation, method of application, days of treatment):
Shalishastika Pinda Svedana: 300 g of Shashtika shali is cooked with 1.5 L of milk and decoction of Bala moola (root of Sida retusa L.). This mixture was kept in four pieces of cloth to make 4 boluses. Another portion of milk and decoction of the same quantity was mixed and heated at low temperature to dip the above boluses for warming the Pottali. Application: massage with Asvagandha oil was done on the whole body for 15 min, followed by whole-body massage for 45 min with the help of a cotton bag filled with the bolus of processed rice. Duration: 30 days.
Mustadi Yapana Basti: Saindhava salt 5 g, honey 25 g, Ashwagandha oil 50 ml, Panchatikta Ghrita 25 ml and milk processed with Mustadi yapana basti kwatha drugs 300 ml. Powdered rock salt was added to honey and stirred. Then oil and ghrita were added to this mixture and stirred again. Then paste of Satahva (Anethum sowa Kurz) followed by the decoction was added and mixed properly. 50 ml soup of goat femur bone marrow was added to this emulsion and mixed properly to make a homogeneous emulsion. This emulsion was heated gently in a water bath. Application: given before meals with a basti yantra; a total of 16 basti were given, one daily. No separate Anuvasana basti was given, as none is needed for Yapana basti. Duration: 16 days.
Table 3 Timeline.
Ayurveda diagnosis of these problems can be correlated with Greevastambha, Bhrama (vertigo) and Bahushosha (weakness and emaciation of upper limbs). All these symptoms are considered in Nanatamaj Vatavyadhi (disorders only due to Vata dosha). Vata is vitiated due to several etiological factors, Margavarana (obstruction in natural course of Vata such as normal distribution, synthesis of tissues elements etc.) and Dhatukshaya (~depletion of body tissue). This vitiated Vata leads to Margavarana and Dhatukshaya in vicious cycle and may lead to manifestation of CSM . There is depletion of Sthanik Kapha (localized Kapha dosha at cervical region) due to vitiated Vata dosha. Vitiated Pitta and Vata doshas lead to Bhrama. Vitiated Vata and depleted Kapha dosha may lead to Bahushosha. All the pathology of CSM is included in these major groups of Ayurvedic Samprapti (pathology). Brihmana (~nourishment) is the treatment for Dhatukshaya. Snigdha (unctuous), Srotosodhaka (biopurification of micro-channels) Vatanulomaka (~correction of function of Vata dosha) treatment and treatment which is compatible to Kapha and Pitta doshas should be adopted for any Avarana or Margavarodha. Yapana basti, Guggulu; Shilajeeta (black bitumen) and Rasayana (immunomodulator) are also indicated for Nanatamaj vata, Avrita vata and chronic Vata vyadhi . Panchakarma procedures and selected Ayurvedic oral drugs were employed according to all above said facts to manage this case of CSM. In Ayurveda, brain and spinal cord is considered to be form of Majjadhara kala (~membrane surrounding the bone marrow) Bhrama, Tamahapravesha (temporary vision loss) are also the symptoms of Majja-pradoshaj vikaras. Various non-surgical strategies have been in use such as cervical traction, cervical immobilization (collar or neck brace), skull traction and physical therapy. A study demonstrates the benefits of cervical immobilization, while other study shows that immobilization does not improve the patient's condition . In the case of myelopathy, surgical intervention is necessary. The cervical laminectomy is not appropriate for all patients. It may lead to neurologic deterioration and attributed to a development of latent instability of the spine with development of kyphotic spinal deformities . SGOT, SGPT and serum creatinine that was investigated after treatment were within normal limit. This demonstrates the safety profile of multi-ingredient formulation and Panchakarma procedures. Hence this case study is important one as this shows the clinical and radiological improvement in cervical compressive myelopathy with Panchakarma and Ayurvedic medicinal interventions. There was no need to use any surgical intervention for this case. Conclusion The case report demonstrates clinical and radiological improvement in a cervical spondylotic myelopathy with Panchakarma and Ayurvedic medicinal interventions. Patient consent Written permission for publication of this case study had been obtained from the patient.
/** * Retrieve a summary of today's activations at a set of locations. * * @param locationIds * @return a map of maps: locationId -> activation status -> count */ public Map<String,Map<Activation.Status, Integer>> summarizeByLocation(Set<Integer> locationIds) { if (CollectionUtils.isEmpty(locationIds)) { return new HashMap<>(0); } List<SummaryResultHolder> rows = (List<SummaryResultHolder>) execute(manager -> { return manager.createNamedQuery("activation.summarizeByLocation") .setParameter("now", new Date()) .setParameter("locationIds", locationIds) .getResultList(); }); Map<String, Map<Activation.Status, Integer>> statusCountsByLocation = new HashMap<>(); rows.forEach(row -> { Map<Activation.Status, Integer> countByStatus = statusCountsByLocation.computeIfAbsent(row.getLocationId(), status -> new HashMap<>()); int count = countByStatus.getOrDefault(row.getStatus(), 0); countByStatus.put(row.getStatus(), count+row.getCount()); }); return statusCountsByLocation; }
/*
 * Solution 2: Pre-compute the increase (marginal gain) for each class and keep
 * the classes in a max-heap ordered by that gain.
 */
class Solution {
  // Marginal gain in pass ratio from adding one passing student to this class.
  double GetIncrease(int dividend, int divisor) {
    double original_value = dividend * 1.0 / divisor;
    double new_value = (dividend + 1) * 1.0 / (divisor + 1);
    return new_value - original_value;
  }

 public:
  double maxAverageRatio(vector<vector<int>>& classes, int extraStudents) {
    // Max-heap keyed on the pre-computed increase.
    auto cmp = [](const pair<pair<int, int>, double>& a, const pair<pair<int, int>, double>& b) {
      return a.second < b.second;
    };
    priority_queue<pair<pair<int, int>, double>, vector<pair<pair<int, int>, double>>, decltype(cmp)> pq(cmp);
    for (vector<int>& myclass : classes) {
      pq.push({{myclass[0], myclass[1]}, GetIncrease(myclass[0], myclass[1])});
    }
    // Greedily assign each extra student to the class whose ratio improves the most,
    // then reinsert that class with its updated gain.
    for (int i = 0; i < extraStudents; i++) {
      pair<pair<int, int>, double> cur = pq.top();
      pq.pop();
      cur.first.first++;
      cur.first.second++;
      cur.second = GetIncrease(cur.first.first, cur.first.second);
      pq.push(cur);
    }
    double sum = 0.0;
    while (!pq.empty()) {
      pair<pair<int, int>, double> cur = pq.top();
      pq.pop();
      sum += cur.first.first * 1.0 / cur.first.second;
    }
    return sum / classes.size();
  }
};
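A supplementary note on why the greedy heap approach is correct (not part of the original solution): the gain pre-computed by GetIncrease for a class with p passing students out of t total is

    (p + 1)/(t + 1) - p/t = (t - p) / (t * (t + 1))

which is always non-negative and never increases as the same class receives more extra students. Because every class's sequence of marginal gains is non-increasing, repeatedly popping the class with the largest current gain from the max-heap and reinserting it with its updated gain selects the best possible set of assignments and therefore maximizes the final average pass ratio.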
A postdoctoral researcher in the immunology division at the Walter and Eliza Hall Institute, and a practising gastroenterologist, Dr Tye-Din believes he and former colleague Dr Bob Anderson may have found a means of eliminating coeliac disease. If clinical trials of the treatment are successful, the approach could also be applied to tackling other autoimmune diseases, such as type 1 diabetes, rheumatoid arthritis and multiple sclerosis. Almost as important, though, the discovery could improve the diagnosis of coeliac disease for the 80 per cent of Australians unaware they have it. He says undiagnosed coeliac disease is worrying because its effect on the small intestine means the body is less able to absorb nutrients, leading to loss of weight, fatigue or lack of energy and, in children, stunted growth. Coeliac disease is also associated with a range of even more serious problems such as liver disease, infertility, osteoporosis, other autoimmune disease and cancers such as lymphoma. People with undiagnosed coeliac disease have a two to fourfold higher rate of premature death. "The only treatment available at present is a gluten-free diet and that dates back to the 1950s when gluten was first identified as the cause of the disease. But it is not straightforward: the diet is tricky, you have to be ever vigilant, you have to pay more, put up with food that doesn't taste as good and often people don't always fully heal," Dr Tye-Din says. "Yet healing is critical because persistent damage in the gut is linked to long-term complications such as thinning of the bones and some forms of cancer. I've seen patients, even in their 20s, with bones like an 80-year-old, so people with the disease should have a bone density scan because of the risk of premature osteoporosis. Unfortunately, medical awareness and management of coeliac disease is far from optimal โ€” meaning this doesn't always happen." Gluten is a complex protein that enhances food texture. It allows bread to rise and imparts fluffiness, while gluten-free breads are heavy, crumbly, and far less tasty. Dr Tye-Din says the average Australian consumes 20-30 grams of gluten a day and the problem for coeliac sufferers is trying to ensure the foods they eat don't contain gluten when so many do โ€” even Vegemite and liquorice. Yet consumption of tiny amounts of gluten as low as 50 milligrams, or a few crumbs from a slice of bread, can damage the small intestine. This fact, plus the complexity, cost and lifestyle restrictions of the gluten-free diet, spurred Dr Tye-Din and his colleagues to look for more effective treatments. "Coeliac disease has evolved from being a simple gut disorder that causes damage to the bowel and poor absorption of nutrients to being recognised as a primary immune condition with a multitude of manifestations โ€” not just in the gut but also other organs. The knowledge certain genes are involved in the way the immune system reacts to gluten is also shaping our understanding. Such advances allow new approaches to be designed so as to improve on the gluten-free diet." Over a decade of tests on more than 300 patients with coeliac disease, the institute researchers discovered that key fragments of gluten โ€” three "toxic peptides" โ€” caused the abnormal immune response in people carrying the common coeliac-associated gene. These three peptides out of more than 18,000 in gluten are the ones, Dr Tye-Din says, "that tell the immune system to react badly to gluten". 
"Our approach was revolutionary in that it involved feeding people with gluten-containing food for three days and then taking samples of their blood on day six in hospital. That was when we found a lot of T cells โ€” white blood cells โ€” in the bloodstream reacting specifically to gluten. This showed us exactly which fragments were responsible for the immune response and provided a "road map" of what was toxic in coeliac disease." The world's first therapy enabling coeliac disease patients to return to a normal diet would involve injecting them regularly with tiny quantities of the three peptides. The researchers believe the injections would induce an immune tolerance, allowing patients to again eat food made from wheat, barley or rye. "We showed that just three peptides were responsible for most of the immune response to gluten from all of the toxic cereals. Interestingly, the type of cereal consumed determined which gluten peptide was the most immunogenic," Dr Tye-Din says. "This allowed us to develop and test a peptide-based therapy, Nexvax2, which comprises these three peptides. Further development of the drug is now being led by a Boston-based company called ImmusanT, where Dr Bob Anderson has taken up the post as chief scientist." Discovery of the key peptides also means that improved diagnostic tests can be developed โ€” a significant need given the poor rate of diagnosis in the community, he says. At present, people on a gluten-free diet have to go back to eating foods with gluten for up to eight weeks and then be tested to confirm they do have coeliac disease. In contrast, a new diagnostic test based on Nexvax2 would only need three days. "The first trial delivering injections of Nexvax2 to coeliac disease volunteers was completed in 2010 and showed it was safe and capable of inducing the predicted responses in the immune system. The critical next step will be to test whether Nexvax2 can prevent the adverse effects of dietary gluten and, depending on successful progress, it could be five or more years before a drug to counter coeliac disease is available," Dr Tye-Din says. "The approach of 'retraining' the immune system works in mouse models that have other human diseases, but for people with coeliac disease it will be a world first. If such an approach is effective this will have huge implications for the millions of sufferers globally." He says similar immune-therapies could also be developed for other autoimmune diseases if the relevant disease causing triggers, or "antigens", could be as comprehensively defined as gluten has been โ€” an ongoing challenge for researchers. *The Walter and Eliza Hall Institute and the national organisation Coeliac Australia have formed a three-year, $570,000 partnership to support research into new treatments and diagnostic tests for coeliac disease. The partnership aims to develop better treatments for children with the disease; effective responses to overcome symptoms after accidental gluten consumption; and a diagnostic test for coeliac disease in people with gluten intolerance who are following a gluten-free diet. Loading More information about the disease is available on the Coeliac Australia website at coeliac.org.au. The latest issue of The Australian Coeliac Magazine contains a detailed account by Dr Tye-Din of coeliac disease, its effects and the reasons why some people are susceptible. Information on the clinical trial can be found at immusant.com Read Geoff Maslen's blog atgeoffmaslen.edublog-s.org/
/**
 * Called by NodeJoinedCallback when a node joins the cluster. Queues a
 * NodeJoinedEvent for the connected client.
 *
 * @param clusterAddress cluster-side address of the node that joined
 * @param serverAddress  server-side address of the node that joined
 * @param reconnect      whether the event represents a reconnection
 * @param eventContext   context associated with the event
 */
private void NodeJoined(Object clusterAddress, Object serverAddress, boolean reconnect, EventContext eventContext) {
    synchronized (ConnectionManager.getCallbackQueue()) {
        if (_client == null) {
            return;
        }
        // Enqueue the event for the client and wake any thread waiting on the callback queue.
        ConnectionManager.getCallbackQueue().offer(new NodeJoinedEvent(_cacheId, (com.alachisoft.tayzgrid.common.net.Address) ((clusterAddress instanceof com.alachisoft.tayzgrid.common.net.Address) ? clusterAddress : null), (com.alachisoft.tayzgrid.common.net.Address) ((serverAddress instanceof com.alachisoft.tayzgrid.common.net.Address) ? serverAddress : null), _client.getClientID(), reconnect));
        Monitor.pulse(ConnectionManager.getCallbackQueue());
        // Keep the event-queue length counter in sync when server counters are enabled.
        if (SocketServer.getIsServerCounterEnabled()) {
            SocketServer.getPerfStatsColl().setEventQueueCount(ConnectionManager.getCallbackQueue().size());
        }
    }
}
Hey New York, just in case you forgot that CMJ is next week, um, well, CMJ is next week. This means it's time to start preparing yourself for staying up till 5AM in order to catch the last set of that one band you wanted to see but missed earlier and who cares that tomorrow is a workday because this is totally worth it because you get to sing along to your current favorite song. In particular, get ready to do this on Wednesday, October 16th at Santos Party House for Noisey's Rap Party, featuring Vic Mensa, Mr. MFN Exquire, World's Fair, Deniro Farrar, and oh my god so many more it's too much to list (so just look at the giant flyer above). All you have to do is RSVP, and it's free. Yes, that's right. Free. Like, you don't have to pay anything. RSVP HERE
"""Generated message classes for ml version v1beta1. An API to enable creating and using machine learning models. """ # NOTE: This file is autogenerated and should not be edited by hand. from apitools.base.protorpclite import messages as _messages from apitools.base.py import encoding from apitools.base.py import extra_types package = 'ml' class GoogleApiHttpBody(_messages.Message): """Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also want access to the raw HTTP body. Example: message GetResourceRequest { // A unique request id. string request_id = 1; // The raw HTTP body is bound to this field. google.api.HttpBody http_body = 2; } service ResourceService { rpc GetResource(GetResourceRequest) returns (google.api.HttpBody); rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty); } Example with streaming methods: service CaldavService { rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody); } Use of this type only changes how the request and response bodies are handled, all other features will continue to work unchanged. Fields: contentType: The HTTP Content-Type string representing the content type of the body. data: HTTP body binary data. """ contentType = _messages.StringField(1) data = _messages.BytesField(2) class GoogleCloudMlV1beta1CancelJobRequest(_messages.Message): """Request message for the CancelJob method.""" class GoogleCloudMlV1beta1GetConfigResponse(_messages.Message): """Returns service account information associated with a project. Fields: serviceAccount: The service account Cloud ML uses to access resources in the project. serviceAccountProject: The project number for `service_account`. """ serviceAccount = _messages.StringField(1) serviceAccountProject = _messages.IntegerField(2) class GoogleCloudMlV1beta1HyperparameterOutput(_messages.Message): """Represents the result of a single hyperparameter tuning trial from a training job. The TrainingOutput object that is returned on successful completion of a training job with hyperparameter tuning includes a list of HyperparameterOutput objects, one for each successful trial. Messages: HyperparametersValue: The hyperparameters given to this trial. Fields: allMetrics: All recorded object metrics for this trial. finalMetric: The final objective metric seen for this trial. hyperparameters: The hyperparameters given to this trial. trialId: The trial id for these results. """ @encoding.MapUnrecognizedFields('additionalProperties') class HyperparametersValue(_messages.Message): """The hyperparameters given to this trial. Messages: AdditionalProperty: An additional property for a HyperparametersValue object. Fields: additionalProperties: Additional properties of type HyperparametersValue """ class AdditionalProperty(_messages.Message): """An additional property for a HyperparametersValue object. Fields: key: Name of the additional property. value: A string attribute. 
""" key = _messages.StringField(1) value = _messages.StringField(2) additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True) allMetrics = _messages.MessageField('GoogleCloudMlV1beta1HyperparameterOutputHyperparameterMetric', 1, repeated=True) finalMetric = _messages.MessageField('GoogleCloudMlV1beta1HyperparameterOutputHyperparameterMetric', 2) hyperparameters = _messages.MessageField('HyperparametersValue', 3) trialId = _messages.StringField(4) class GoogleCloudMlV1beta1HyperparameterOutputHyperparameterMetric(_messages.Message): """An observed value of a metric. Fields: objectiveValue: The objective value at this training step. trainingStep: The global training step for this metric. """ objectiveValue = _messages.FloatField(1) trainingStep = _messages.IntegerField(2) class GoogleCloudMlV1beta1HyperparameterSpec(_messages.Message): """Represents a set of hyperparameters to optimize. Enums: GoalValueValuesEnum: Required. The type of goal to use for tuning. Available types are `MAXIMIZE` and `MINIMIZE`. Defaults to `MAXIMIZE`. Fields: goal: Required. The type of goal to use for tuning. Available types are `MAXIMIZE` and `MINIMIZE`. Defaults to `MAXIMIZE`. hyperparameterMetricTag: Optional. The Tensorflow summary tag name to use for optimizing trials. For current versions of Tensorflow, this tag name should exactly match what is shown in Tensorboard, including all scopes. For versions of Tensorflow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used. maxParallelTrials: Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trail only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one. maxTrials: Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one. params: Required. The set of parameters to tune. """ class GoalValueValuesEnum(_messages.Enum): """Required. The type of goal to use for tuning. Available types are `MAXIMIZE` and `MINIMIZE`. Defaults to `MAXIMIZE`. Values: GOAL_TYPE_UNSPECIFIED: Goal Type will default to maximize. MAXIMIZE: Maximize the goal metric. MINIMIZE: Minimize the goal metric. """ GOAL_TYPE_UNSPECIFIED = 0 MAXIMIZE = 1 MINIMIZE = 2 goal = _messages.EnumField('GoalValueValuesEnum', 1) hyperparameterMetricTag = _messages.StringField(2) maxParallelTrials = _messages.IntegerField(3, variant=_messages.Variant.INT32) maxTrials = _messages.IntegerField(4, variant=_messages.Variant.INT32) params = _messages.MessageField('GoogleCloudMlV1beta1ParameterSpec', 5, repeated=True) class GoogleCloudMlV1beta1Job(_messages.Message): """Represents a training or prediction job. Enums: StateValueValuesEnum: Output only. The detailed state of a job. Fields: createTime: Output only. When the job was created. endTime: Output only. When the job processing was completed. errorMessage: Output only. The details of a failure or a cancellation. jobId: Required. The user-specified id of the job. predictionInput: Input parameters to create a prediction job. predictionOutput: The current prediction job result. startTime: Output only. When the job processing was started. state: Output only. 
The detailed state of a job. trainingInput: Input parameters to create a training job. trainingOutput: The current training job result. """ class StateValueValuesEnum(_messages.Enum): """Output only. The detailed state of a job. Values: STATE_UNSPECIFIED: The job state is unspecified. QUEUED: The job has been just created and processing has not yet begun. PREPARING: The service is preparing to run the job. RUNNING: The job is in progress. SUCCEEDED: The job completed successfully. FAILED: The job failed. `error_message` should contain the details of the failure. CANCELLING: The job is being cancelled. `error_message` should describe the reason for the cancellation. CANCELLED: The job has been cancelled. `error_message` should describe the reason for the cancellation. """ STATE_UNSPECIFIED = 0 QUEUED = 1 PREPARING = 2 RUNNING = 3 SUCCEEDED = 4 FAILED = 5 CANCELLING = 6 CANCELLED = 7 createTime = _messages.StringField(1) endTime = _messages.StringField(2) errorMessage = _messages.StringField(3) jobId = _messages.StringField(4) predictionInput = _messages.MessageField('GoogleCloudMlV1beta1PredictionInput', 5) predictionOutput = _messages.MessageField('GoogleCloudMlV1beta1PredictionOutput', 6) startTime = _messages.StringField(7) state = _messages.EnumField('StateValueValuesEnum', 8) trainingInput = _messages.MessageField('GoogleCloudMlV1beta1TrainingInput', 9) trainingOutput = _messages.MessageField('GoogleCloudMlV1beta1TrainingOutput', 10) class GoogleCloudMlV1beta1ListJobsResponse(_messages.Message): """Response message for the ListJobs method. Fields: jobs: The list of jobs. nextPageToken: Optional. Pass this token as the `page_token` field of the request for a subsequent call. """ jobs = _messages.MessageField('GoogleCloudMlV1beta1Job', 1, repeated=True) nextPageToken = _messages.StringField(2) class GoogleCloudMlV1beta1ListModelsResponse(_messages.Message): """Response message for the ListModels method. Fields: models: The list of models. nextPageToken: Optional. Pass this token as the `page_token` field of the request for a subsequent call. """ models = _messages.MessageField('GoogleCloudMlV1beta1Model', 1, repeated=True) nextPageToken = _messages.StringField(2) class GoogleCloudMlV1beta1ListVersionsResponse(_messages.Message): """Response message for the ListVersions method. Fields: nextPageToken: Optional. Pass this token as the `page_token` field of the request for a subsequent call. versions: The list of versions. """ nextPageToken = _messages.StringField(1) versions = _messages.MessageField('GoogleCloudMlV1beta1Version', 2, repeated=True) class GoogleCloudMlV1beta1Model(_messages.Message): """Represents a machine learning solution. A model can have multiple versions, each of which is a deployed, trained model ready to receive prediction requests. The model itself is just a container. Fields: defaultVersion: Output only. The default version of the model. This version will be used to handle prediction requests that do not specify a version. You can change the default version by calling [projects.method s.versions.setDefault](/ml/reference/rest/v1beta1/projects.models.versio ns/setDefault). description: Optional. The description specified for the model when it was created. name: Required. The name specified for the model when it was created. The model name must be unique within the project it is created in. onlinePredictionLogging: Optional. If true, enables StackDriver Logging for online prediction. Default is false. regions: Optional. 
The list of regions where the model is going to be deployed. Currently only one region per model is supported. Defaults to 'us-central1' if nothing is set. """ defaultVersion = _messages.MessageField('GoogleCloudMlV1beta1Version', 1) description = _messages.StringField(2) name = _messages.StringField(3) onlinePredictionLogging = _messages.BooleanField(4) regions = _messages.StringField(5, repeated=True) class GoogleCloudMlV1beta1OperationMetadata(_messages.Message): """Represents the metadata of the long-running operation. Enums: OperationTypeValueValuesEnum: The operation type. Fields: createTime: The time the operation was submitted. endTime: The time operation processing completed. isCancellationRequested: Indicates whether a request to cancel this operation has been made. modelName: Contains the name of the model associated with the operation. operationType: The operation type. startTime: The time operation processing started. version: Contains the version associated with the operation. """ class OperationTypeValueValuesEnum(_messages.Enum): """The operation type. Values: OPERATION_TYPE_UNSPECIFIED: Unspecified operation type. CREATE_VERSION: An operation to create a new version. DELETE_VERSION: An operation to delete an existing version. DELETE_MODEL: An operation to delete an existing model. """ OPERATION_TYPE_UNSPECIFIED = 0 CREATE_VERSION = 1 DELETE_VERSION = 2 DELETE_MODEL = 3 createTime = _messages.StringField(1) endTime = _messages.StringField(2) isCancellationRequested = _messages.BooleanField(3) modelName = _messages.StringField(4) operationType = _messages.EnumField('OperationTypeValueValuesEnum', 5) startTime = _messages.StringField(6) version = _messages.MessageField('GoogleCloudMlV1beta1Version', 7) class GoogleCloudMlV1beta1ParameterSpec(_messages.Message): """Represents a single hyperparameter to optimize. Enums: ScaleTypeValueValuesEnum: Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., `UNIT_LINEAR_SCALE`). TypeValueValuesEnum: Required. The type of the parameter. Fields: categoricalValues: Required if type is `CATEGORICAL`. The list of possible categories. discreteValues: Required if type is `DISCRETE`. A list of feasible points. The list should be in strictly increasing order. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values. maxValue: Required if typeis `DOUBLE` or `INTEGER`. This field should be unset if type is `CATEGORICAL`. This value should be integers if type is `INTEGER`. minValue: Required if type is `DOUBLE` or `INTEGER`. This field should be unset if type is `CATEGORICAL`. This value should be integers if type is INTEGER. parameterName: Required. The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate". scaleType: Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., `UNIT_LINEAR_SCALE`). type: Required. The type of the parameter. """ class ScaleTypeValueValuesEnum(_messages.Enum): """Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., `UNIT_LINEAR_SCALE`). Values: NONE: By default, no scaling is applied. 
UNIT_LINEAR_SCALE: Scales the feasible space to (0, 1) linearly. UNIT_LOG_SCALE: Scales the feasible space logarithmically to (0, 1). The entire feasible space must be strictly positive. UNIT_REVERSE_LOG_SCALE: Scales the feasible space "reverse" logarithmically to (0, 1). The result is that values close to the top of the feasible space are spread out more than points near the bottom. The entire feasible space must be strictly positive. """ NONE = 0 UNIT_LINEAR_SCALE = 1 UNIT_LOG_SCALE = 2 UNIT_REVERSE_LOG_SCALE = 3 class TypeValueValuesEnum(_messages.Enum): """Required. The type of the parameter. Values: PARAMETER_TYPE_UNSPECIFIED: You must specify a valid type. Using this unspecified type will result in an error. DOUBLE: Type for real-valued parameters. INTEGER: Type for integral parameters. CATEGORICAL: The parameter is categorical, with a value chosen from the categories field. DISCRETE: The parameter is real valued, with a fixed set of feasible points. If `type==DISCRETE`, feasible_points must be provided, and {`min_value`, `max_value`} will be ignored. """ PARAMETER_TYPE_UNSPECIFIED = 0 DOUBLE = 1 INTEGER = 2 CATEGORICAL = 3 DISCRETE = 4 categoricalValues = _messages.StringField(1, repeated=True) discreteValues = _messages.FloatField(2, repeated=True) maxValue = _messages.FloatField(3) minValue = _messages.FloatField(4) parameterName = _messages.StringField(5) scaleType = _messages.EnumField('ScaleTypeValueValuesEnum', 6) type = _messages.EnumField('TypeValueValuesEnum', 7) class GoogleCloudMlV1beta1PredictRequest(_messages.Message): """Request for predictions to be issued against a trained model. The body of the request is a single JSON object with a single top-level field: <dl> <dt>instances</dt> <dd>A JSON array containing values representing the instances to use for prediction.</dd> </dl> The structure of each element of the instances list is determined by your model's input definition. Instances can include named inputs or can contain only unlabeled values. Not all data includes named inputs. Some instances will be simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists. Here are some examples of request bodies: CSV data with each row encoded as a string value: <pre> {"instances": ["1.0,true,\\"x\\"", "-2.0,false,\\"y\\""]} </pre> Plain text: <pre> {"instances": ["the quick brown fox", "la bruja le dio"]} </pre> Sentences encoded as lists of words (vectors of strings): <pre> { "instances": [ ["the","quick","brown"], ["la","bruja","le"], ... ] } </pre> Floating point scalar values: <pre> {"instances": [0.0, 1.1, 2.2]} </pre> Vectors of integers: <pre> { "instances": [ [0, 1, 2], [3, 4, 5], ... ] } </pre> Tensors (in this case, two-dimensional tensors): <pre> { "instances": [ [ [0, 1, 2], [3, 4, 5] ], ... ] } </pre> Images can be represented different ways. In this encoding scheme the first two dimensions represent the rows and columns of the image, and the third contains lists (vectors) of the R, G, and B values for each pixel. <pre> { "instances": [ [ [ [138, 30, 66], [130, 20, 56], ... ], [ [126, 38, 61], [122, 24, 57], ... ], ... ], ... ] } </pre> JSON strings must be encoded as UTF-8. To send binary data, you must base64-encode the data and mark it as binary. 
To mark a JSON string as binary, replace it with a JSON object with a single attribute named `b64`: <pre>{"b64": "..."} </pre> For example: Two Serialized tf.Examples (fake data, for illustrative purposes only): <pre> {"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]} </pre> Two JPEG image byte strings (fake data, for illustrative purposes only): <pre> {"instances": [{"b64": "ASa8asdf"}, {"b64": "JLK7ljk3"}]} </pre> If your data includes named references, format each instance as a JSON object with the named references as the keys: JSON input data to be preprocessed: <pre> { "instances": [ { "a": 1.0, "b": true, "c": "x" }, { "a": -2.0, "b": false, "c": "y" } ] } </pre> Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, you should use the names of JSON name/value pairs to identify the input tensors, as shown in the following exmaples: For a graph with input tensor aliases "tag" (string) and "image" (base64-encoded string): <pre> { "instances": [ { "tag": "beach", "image": {"b64": "ASa8asdf"} }, { "tag": "car", "image": {"b64": "JLK7ljk3"} } ] } </pre> For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints): <pre> { "instances": [ { "tag": "beach", "image": [ [ [138, 30, 66], [130, 20, 56], ... ], [ [126, 38, 61], [122, 24, 57], ... ], ... ] }, { "tag": "car", "image": [ [ [255, 0, 102], [255, 0, 97], ... ], [ [254, 1, 101], [254, 2, 93], ... ], ... ] }, ... ] } </pre> If the call is successful, the response body will contain one prediction entry per instance in the request body. If prediction fails for any instance, the response body will contain no predictions and will contian a single error entry instead. Fields: httpBody: Required. The prediction request body. """ httpBody = _messages.MessageField('GoogleApiHttpBody', 1) class GoogleCloudMlV1beta1PredictionInput(_messages.Message): """Represents input parameters for a prediction job. Enums: DataFormatValueValuesEnum: Required. The format of the input data files. Fields: dataFormat: Required. The format of the input data files. inputPaths: Required. The Google Cloud Storage location of the input data files. May contain wildcards. maxWorkerCount: Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified. modelName: Use this field if you want to use the default version for the specified model. The string must use the following format: `"projects/<var>[YOUR_PROJECT]</var>/models/<var>[YOUR_MODEL]</var>"` outputPath: Required. The output Google Cloud Storage location. region: Required. The Google Compute Engine region to run the prediction job in. runtimeVersion: Optional. The Google Cloud ML runtime version to use for this batch prediction. If not set, Google Cloud ML will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available such as when the model is specified by uri. uri: Use this field if you want to specify a GCS path to the model to use. versionName: Use this field if you want to specify a version of the model to use. The string is formatted the same way as `model_version`, with the addition of the version information: `"projects/<var>[YOUR_PROJECT] </var>/models/<var>YOUR_MODEL/versions/<var>[YOUR_VERSION]</var>"` """ class DataFormatValueValuesEnum(_messages.Enum): """Required. The format of the input data files. Values: DATA_FORMAT_UNSPECIFIED: Unspecified format. 
TEXT: The source file is a text file with instances separated by the new-line character. TF_RECORD: The source file is a TFRecord file. TF_RECORD_GZIP: The source file is a GZIP-compressed TFRecord file. """ DATA_FORMAT_UNSPECIFIED = 0 TEXT = 1 TF_RECORD = 2 TF_RECORD_GZIP = 3 dataFormat = _messages.EnumField('DataFormatValueValuesEnum', 1) inputPaths = _messages.StringField(2, repeated=True) maxWorkerCount = _messages.IntegerField(3) modelName = _messages.StringField(4) outputPath = _messages.StringField(5) region = _messages.StringField(6) runtimeVersion = _messages.StringField(7) uri = _messages.StringField(8) versionName = _messages.StringField(9) class GoogleCloudMlV1beta1PredictionOutput(_messages.Message): """Represents results of a prediction job. Fields: errorCount: The number of data instances which resulted in errors. nodeHours: Node hours used by the batch prediction job. outputPath: The output Google Cloud Storage location provided at the job creation time. predictionCount: The number of generated predictions. """ errorCount = _messages.IntegerField(1) nodeHours = _messages.FloatField(2) outputPath = _messages.StringField(3) predictionCount = _messages.IntegerField(4) class GoogleCloudMlV1beta1SetDefaultVersionRequest(_messages.Message): """Request message for the SetDefaultVersion request.""" class GoogleCloudMlV1beta1TrainingInput(_messages.Message): """Represents input parameters for a training job. Enums: ScaleTierValueValuesEnum: Required. Specifies the machine types, the number of replicas for workers and parameter servers. Fields: args: Optional. Command line arguments to pass to the program. hyperparameters: Optional. The set of Hyperparameters to tune. jobDir: Optional. A GCS path in which to store training outputs and other data needed for training. This path will be passed to your TensorFlow program as the 'job_dir' command-line arg. The benefit of specifying this field is that Cloud ML will validate the path for use in training. mainType: Optional. Specifies the type of virtual machine to use for your training job's main worker. The following types are supported: <dl> <dt>standard</dt> <dd> A basic machine configuration suitable for training simple models with small to moderate datasets. </dd> <dt>large_model</dt> <dd> A machine with a lot of memory, specially suited for parameter servers when your model is large (having many hidden layers or layers with very large numbers of nodes). </dd> <dt>complex_model_s</dt> <dd> A machine suitable for the main and workers of the cluster when your model requires more computation than the standard machine can handle satisfactorily. </dd> <dt>complex_model_m</dt> <dd> A machine with roughly twice the number of cores and roughly double the memory of <code suppresswarning="true">complex_model_s</code>. </dd> <dt>complex_model_l</dt> <dd> A machine with roughly twice the number of cores and roughly double the memory of <code suppresswarning="true">complex_model_m</code>. </dd> </dl> You must set this value when `scaleTier` is set to `CUSTOM`. packageUris: Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. parameterServerCount: Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in `parameter_server_type`. This value can only be used when `scale_tier` is set to `CUSTOM`.If you set this value, you must also set `parameter_server_type`. parameterServerType: Optional. 
Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for `main_type`. This value must be present when `scaleTier` is set to `CUSTOM` and `parameter_server_count` is greater than zero. pythonModule: Required. The Python module name to run after installing the packages. region: Required. The Google Compute Engine region to run the training job in. runtimeVersion: Optional. The Google Cloud ML runtime version to use for training. If not set, Google Cloud ML will choose the latest stable version. scaleTier: Required. Specifies the machine types, the number of replicas for workers and parameter servers. workerCount: Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in `worker_type`. This value can only be used when `scale_tier` is set to `CUSTOM`. If you set this value, you must also set `worker_type`. workerType: Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for `mainType`. This value must be present when `scaleTier` is set to `CUSTOM` and `workerCount` is greater than zero. """ class ScaleTierValueValuesEnum(_messages.Enum): """Required. Specifies the machine types, the number of replicas for workers and parameter servers. Values: BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets. STANDARD_1: Many workers and a few parameter servers. PREMIUM_1: A large number of workers with many parameter servers. BASIC_GPU: A single worker instance with a GPU. CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You _must_ set `TrainingInput.mainType` to specify the type of machine to use for your main node. This is the only required setting. * You _may_ set `TrainingInput.workerCount` to specify the number of workers to use. If you specify one or more workers, you _must_ also set `TrainingInput.workerType` to specify the type of machine to use for your worker nodes. * You _may_ set `TrainingInput.parameterServerCount` to specify the number of parameter servers to use. If you specify one or more parameter servers, you _must_ also set `TrainingInput.parameterServerType` to specify the type of machine to use for your parameter servers. Note that all of your workers must use the same machine type, which can be different from your parameter server type and main type. Your parameter servers must likewise use the same machine type, which can be different from your worker type and main type. 
""" BASIC = 0 STANDARD_1 = 1 PREMIUM_1 = 2 BASIC_GPU = 3 CUSTOM = 4 args = _messages.StringField(1, repeated=True) hyperparameters = _messages.MessageField('GoogleCloudMlV1beta1HyperparameterSpec', 2) jobDir = _messages.StringField(3) mainType = _messages.StringField(4) packageUris = _messages.StringField(5, repeated=True) parameterServerCount = _messages.IntegerField(6) parameterServerType = _messages.StringField(7) pythonModule = _messages.StringField(8) region = _messages.StringField(9) runtimeVersion = _messages.StringField(10) scaleTier = _messages.EnumField('ScaleTierValueValuesEnum', 11) workerCount = _messages.IntegerField(12) workerType = _messages.StringField(13) class GoogleCloudMlV1beta1TrainingOutput(_messages.Message): """Represents results of a training job. Output only. Fields: completedTrialCount: The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs. consumedMLUnits: The amount of ML units consumed by the job. isHyperparameterTuningJob: Whether this job is a hyperparameter tuning job. trials: Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs. """ completedTrialCount = _messages.IntegerField(1) consumedMLUnits = _messages.FloatField(2) isHyperparameterTuningJob = _messages.BooleanField(3) trials = _messages.MessageField('GoogleCloudMlV1beta1HyperparameterOutput', 4, repeated=True) class GoogleCloudMlV1beta1Version(_messages.Message): """Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling [projects.models.versions.list](/ml/reference/rest/v1 beta1/projects.models.versions/list). Fields: createTime: Output only. The time the version was created. deploymentUri: Required. The Google Cloud Storage location of the trained model used to create the version. See the [overview of model deployment](/ml/docs/concepts/deployment-overview) for more informaiton. When passing Version to [projects.models.versions.create](/ml/reference/ rest/v1beta1/projects.models.versions/create) the model service uses the specified location as the source of the model. Once deployed, the model version is hosted by the prediction service, so this location is useful only as a historical record. description: Optional. The description specified for the version when it was created. isDefault: Output only. If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling [projects.methods.versions.setDefault](/ml/re ference/rest/v1beta1/projects.models.versions/setDefault). lastUseTime: Output only. The time the version was last used for prediction. name: Required.The name specified for the version when it was created. The version name must be unique within the model it is created in. runtimeVersion: Optional. The Google Cloud ML runtime version to use for this deployment. If not set, Google Cloud ML will choose a version. """ createTime = _messages.StringField(1) deploymentUri = _messages.StringField(2) description = _messages.StringField(3) isDefault = _messages.BooleanField(4) lastUseTime = _messages.StringField(5) name = _messages.StringField(6) runtimeVersion = _messages.StringField(7) class GoogleLongrunningListOperationsResponse(_messages.Message): """The response message for Operations.ListOperations. 
Fields: nextPageToken: The standard List next-page token. operations: A list of operations that matches the specified filter in the request. """ nextPageToken = _messages.StringField(1) operations = _messages.MessageField('GoogleLongrunningOperation', 2, repeated=True) class GoogleLongrunningOperation(_messages.Message): """This resource represents a long-running operation that is the result of a network API call. Messages: MetadataValue: Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. ResponseValue: The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`. Fields: done: If the value is `false`, it means the operation is still in progress. If true, the operation is completed, and either `error` or `response` is available. error: The error result of the operation in case of failure or cancellation. metadata: Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. name: The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should have the format of `operations/some/unique/name`. response: The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`. """ @encoding.MapUnrecognizedFields('additionalProperties') class MetadataValue(_messages.Message): """Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. Messages: AdditionalProperty: An additional property for a MetadataValue object. Fields: additionalProperties: Properties of the object. Contains field @type with type URL. """ class AdditionalProperty(_messages.Message): """An additional property for a MetadataValue object. Fields: key: Name of the additional property. value: A extra_types.JsonValue attribute. """ key = _messages.StringField(1) value = _messages.MessageField('extra_types.JsonValue', 2) additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True) @encoding.MapUnrecognizedFields('additionalProperties') class ResponseValue(_messages.Message): """The normal response of the operation in case of success. 
If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`. Messages: AdditionalProperty: An additional property for a ResponseValue object. Fields: additionalProperties: Properties of the object. Contains field @type with type URL. """ class AdditionalProperty(_messages.Message): """An additional property for a ResponseValue object. Fields: key: Name of the additional property. value: A extra_types.JsonValue attribute. """ key = _messages.StringField(1) value = _messages.MessageField('extra_types.JsonValue', 2) additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True) done = _messages.BooleanField(1) error = _messages.MessageField('GoogleRpcStatus', 2) metadata = _messages.MessageField('MetadataValue', 3) name = _messages.StringField(4) response = _messages.MessageField('ResponseValue', 5) class GoogleProtobufEmpty(_messages.Message): """A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for `Empty` is empty JSON object `{}`. """ class GoogleRpcStatus(_messages.Message): """The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). The error model is designed to be: - Simple to use and understand for most users - Flexible enough to meet unexpected needs # Overview The `Status` message contains three pieces of data: error code, error message, and error details. The error code should be an enum value of google.rpc.Code, but it may accept additional error codes if needed. The error message should be a developer-facing English message that helps developers *understand* and *resolve* the error. If a localized user-facing error message is needed, put the localized message in the error details or localize it in the client. The optional error details may contain arbitrary information about the error. There is a predefined set of error detail types in the package `google.rpc` which can be used for common error conditions. # Language mapping The `Status` message is the logical representation of the error model, but it is not necessarily the actual wire format. When the `Status` message is exposed in different client libraries and different wire protocols, it can be mapped differently. For example, it will likely be mapped to some exceptions in Java, but more likely mapped to some error codes in C. # Other uses The error model and the `Status` message can be used in a variety of environments, either with or without APIs, to provide a consistent developer experience across different environments. Example uses of this error model include: - Partial errors. If a service needs to return partial errors to the client, it may embed the `Status` in the normal response to indicate the partial errors. - Workflow errors. A typical workflow has multiple steps. Each step may have a `Status` message for error reporting purpose. - Batch operations. 
If a client uses batch request and batch response, the `Status` message should be used directly inside batch response, one for each error sub- response. - Asynchronous operations. If an API call embeds asynchronous operation results in its response, the status of those operations should be represented directly using the `Status` message. - Logging. If some API errors are stored in logs, the message `Status` could be used directly after any stripping needed for security/privacy reasons. Messages: DetailsValueListEntry: A DetailsValueListEntry object. Fields: code: The status code, which should be an enum value of google.rpc.Code. details: A list of messages that carry the error details. There will be a common set of message types for APIs to use. message: A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. """ @encoding.MapUnrecognizedFields('additionalProperties') class DetailsValueListEntry(_messages.Message): """A DetailsValueListEntry object. Messages: AdditionalProperty: An additional property for a DetailsValueListEntry object. Fields: additionalProperties: Properties of the object. Contains field @type with type URL. """ class AdditionalProperty(_messages.Message): """An additional property for a DetailsValueListEntry object. Fields: key: Name of the additional property. value: A extra_types.JsonValue attribute. """ key = _messages.StringField(1) value = _messages.MessageField('extra_types.JsonValue', 2) additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True) code = _messages.IntegerField(1, variant=_messages.Variant.INT32) details = _messages.MessageField('DetailsValueListEntry', 2, repeated=True) message = _messages.StringField(3) class MlProjectsGetConfigRequest(_messages.Message): """A MlProjectsGetConfigRequest object. Fields: name: Required. The project name. Authorization: requires `Viewer` role on the specified project. """ name = _messages.StringField(1, required=True) class MlProjectsJobsCancelRequest(_messages.Message): """A MlProjectsJobsCancelRequest object. Fields: googleCloudMlV1beta1CancelJobRequest: A GoogleCloudMlV1beta1CancelJobRequest resource to be passed as the request body. name: Required. The name of the job to cancel. Authorization: requires `Editor` role on the parent project. """ googleCloudMlV1beta1CancelJobRequest = _messages.MessageField('GoogleCloudMlV1beta1CancelJobRequest', 1) name = _messages.StringField(2, required=True) class MlProjectsJobsCreateRequest(_messages.Message): """A MlProjectsJobsCreateRequest object. Fields: googleCloudMlV1beta1Job: A GoogleCloudMlV1beta1Job resource to be passed as the request body. parent: Required. The project name. Authorization: requires `Editor` role on the specified project. """ googleCloudMlV1beta1Job = _messages.MessageField('GoogleCloudMlV1beta1Job', 1) parent = _messages.StringField(2, required=True) class MlProjectsJobsGetRequest(_messages.Message): """A MlProjectsJobsGetRequest object. Fields: name: Required. The name of the job to get the description of. Authorization: requires `Viewer` role on the parent project. """ name = _messages.StringField(1, required=True) class MlProjectsJobsListRequest(_messages.Message): """A MlProjectsJobsListRequest object. Fields: filter: Optional. Specifies the subset of jobs to retrieve. pageSize: Optional. The number of jobs to retrieve per "page" of results. 
If there are more remaining results than this number, the response message will contain a valid value in the `next_page_token` field. The default value is 20, and the maximum page size is 100. pageToken: Optional. A page token to request the next page of results. You get the token from the `next_page_token` field of the response from the previous call. parent: Required. The name of the project for which to list jobs. Authorization: requires `Viewer` role on the specified project. """ filter = _messages.StringField(1) pageSize = _messages.IntegerField(2, variant=_messages.Variant.INT32) pageToken = _messages.StringField(3) parent = _messages.StringField(4, required=True) class MlProjectsModelsCreateRequest(_messages.Message): """A MlProjectsModelsCreateRequest object. Fields: googleCloudMlV1beta1Model: A GoogleCloudMlV1beta1Model resource to be passed as the request body. parent: Required. The project name. Authorization: requires `Editor` role on the specified project. """ googleCloudMlV1beta1Model = _messages.MessageField('GoogleCloudMlV1beta1Model', 1) parent = _messages.StringField(2, required=True) class MlProjectsModelsDeleteRequest(_messages.Message): """A MlProjectsModelsDeleteRequest object. Fields: name: Required. The name of the model. Authorization: requires `Editor` role on the parent project. """ name = _messages.StringField(1, required=True) class MlProjectsModelsGetRequest(_messages.Message): """A MlProjectsModelsGetRequest object. Fields: name: Required. The name of the model. Authorization: requires `Viewer` role on the parent project. """ name = _messages.StringField(1, required=True) class MlProjectsModelsListRequest(_messages.Message): """A MlProjectsModelsListRequest object. Fields: pageSize: Optional. The number of models to retrieve per "page" of results. If there are more remaining results than this number, the response message will contain a valid value in the `next_page_token` field. The default value is 20, and the maximum page size is 100. pageToken: Optional. A page token to request the next page of results. You get the token from the `next_page_token` field of the response from the previous call. parent: Required. The name of the project whose models are to be listed. Authorization: requires `Viewer` role on the specified project. """ pageSize = _messages.IntegerField(1, variant=_messages.Variant.INT32) pageToken = _messages.StringField(2) parent = _messages.StringField(3, required=True) class MlProjectsModelsVersionsCreateRequest(_messages.Message): """A MlProjectsModelsVersionsCreateRequest object. Fields: googleCloudMlV1beta1Version: A GoogleCloudMlV1beta1Version resource to be passed as the request body. parent: Required. The name of the model. Authorization: requires `Editor` role on the parent project. """ googleCloudMlV1beta1Version = _messages.MessageField('GoogleCloudMlV1beta1Version', 1) parent = _messages.StringField(2, required=True) class MlProjectsModelsVersionsDeleteRequest(_messages.Message): """A MlProjectsModelsVersionsDeleteRequest object. Fields: name: Required. The name of the version. You can get the names of all the versions of a model by calling [projects.models.versions.list](/ml/refer ence/rest/v1beta1/projects.models.versions/list). Authorization: requires `Editor` role on the parent project. """ name = _messages.StringField(1, required=True) class MlProjectsModelsVersionsGetRequest(_messages.Message): """A MlProjectsModelsVersionsGetRequest object. Fields: name: Required. The name of the version. 
Authorization: requires `Viewer` role on the parent project. """ name = _messages.StringField(1, required=True) class MlProjectsModelsVersionsListRequest(_messages.Message): """A MlProjectsModelsVersionsListRequest object. Fields: pageSize: Optional. The number of versions to retrieve per "page" of results. If there are more remaining results than this number, the response message will contain a valid value in the `next_page_token` field. The default value is 20, and the maximum page size is 100. pageToken: Optional. A page token to request the next page of results. You get the token from the `next_page_token` field of the response from the previous call. parent: Required. The name of the model for which to list the version. Authorization: requires `Viewer` role on the parent project. """ pageSize = _messages.IntegerField(1, variant=_messages.Variant.INT32) pageToken = _messages.StringField(2) parent = _messages.StringField(3, required=True) class MlProjectsModelsVersionsSetDefaultRequest(_messages.Message): """A MlProjectsModelsVersionsSetDefaultRequest object. Fields: googleCloudMlV1beta1SetDefaultVersionRequest: A GoogleCloudMlV1beta1SetDefaultVersionRequest resource to be passed as the request body. name: Required. The name of the version to make the default for the model. You can get the names of all the versions of a model by calling [project s.models.versions.list](/ml/reference/rest/v1beta1/projects.models.versi ons/list). Authorization: requires `Editor` role on the parent project. """ googleCloudMlV1beta1SetDefaultVersionRequest = _messages.MessageField('GoogleCloudMlV1beta1SetDefaultVersionRequest', 1) name = _messages.StringField(2, required=True) class MlProjectsOperationsCancelRequest(_messages.Message): """A MlProjectsOperationsCancelRequest object. Fields: name: The name of the operation resource to be cancelled. """ name = _messages.StringField(1, required=True) class MlProjectsOperationsDeleteRequest(_messages.Message): """A MlProjectsOperationsDeleteRequest object. Fields: name: The name of the operation resource to be deleted. """ name = _messages.StringField(1, required=True) class MlProjectsOperationsGetRequest(_messages.Message): """A MlProjectsOperationsGetRequest object. Fields: name: The name of the operation resource. """ name = _messages.StringField(1, required=True) class MlProjectsOperationsListRequest(_messages.Message): """A MlProjectsOperationsListRequest object. Fields: filter: The standard list filter. name: The name of the operation collection. pageSize: The standard list page size. pageToken: The standard list page token. """ filter = _messages.StringField(1) name = _messages.StringField(2, required=True) pageSize = _messages.IntegerField(3, variant=_messages.Variant.INT32) pageToken = _messages.StringField(4) class MlProjectsPredictRequest(_messages.Message): """A MlProjectsPredictRequest object. Fields: googleCloudMlV1beta1PredictRequest: A GoogleCloudMlV1beta1PredictRequest resource to be passed as the request body. name: Required. The resource name of a model or a version. Authorization: requires `Viewer` role on the parent project. """ googleCloudMlV1beta1PredictRequest = _messages.MessageField('GoogleCloudMlV1beta1PredictRequest', 1) name = _messages.StringField(2, required=True) class StandardQueryParameters(_messages.Message): """Query parameters accepted by all methods. Enums: FXgafvValueValuesEnum: V1 error format. AltValueValuesEnum: Data format for response. Fields: f__xgafv: V1 error format. access_token: OAuth access token. 
alt: Data format for response. bearer_token: OAuth bearer token. callback: JSONP fields: Selector specifying which fields to include in a partial response. key: API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token. oauth_token: OAuth 2.0 token for the current user. pp: Pretty-print response. prettyPrint: Returns response with indentations and line breaks. quotaUser: Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. trace: A tracing token of the form "token:<tokenid>" to include in api requests. uploadType: Legacy upload protocol for media (e.g. "media", "multipart"). upload_protocol: Upload protocol for media (e.g. "raw", "multipart"). """ class AltValueValuesEnum(_messages.Enum): """Data format for response. Values: json: Responses with Content-Type of application/json media: Media download with context-dependent Content-Type proto: Responses with Content-Type of application/x-protobuf """ json = 0 media = 1 proto = 2 class FXgafvValueValuesEnum(_messages.Enum): """V1 error format. Values: _1: v1 error format _2: v2 error format """ _1 = 0 _2 = 1 f__xgafv = _messages.EnumField('FXgafvValueValuesEnum', 1) access_token = _messages.StringField(2) alt = _messages.EnumField('AltValueValuesEnum', 3, default=u'json') bearer_token = _messages.StringField(4) callback = _messages.StringField(5) fields = _messages.StringField(6) key = _messages.StringField(7) oauth_token = _messages.StringField(8) pp = _messages.BooleanField(9, default=True) prettyPrint = _messages.BooleanField(10, default=True) quotaUser = _messages.StringField(11) trace = _messages.StringField(12) uploadType = _messages.StringField(13) upload_protocol = _messages.StringField(14) encoding.AddCustomJsonFieldMapping( StandardQueryParameters, 'f__xgafv', '$.xgafv', package=u'ml') encoding.AddCustomJsonEnumMapping( StandardQueryParameters.FXgafvValueValuesEnum, '_1', '1', package=u'ml') encoding.AddCustomJsonEnumMapping( StandardQueryParameters.FXgafvValueValuesEnum, '_2', '2', package=u'ml')
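# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the autogenerated module above).
# A minimal example, assuming only the message classes defined in this file:
# protorpclite messages accept their fields as keyword arguments, so a request
# body such as a training Job can be assembled directly and serialized with
# apitools' encoding helpers. The job id, package path and region below are
# hypothetical placeholders.


def _example_build_training_job():
  """Build a hypothetical training Job message and serialize it to JSON."""
  training_input = GoogleCloudMlV1beta1TrainingInput(
      pythonModule='trainer.task',  # hypothetical trainer module
      packageUris=['gs://example-bucket/trainer-0.1.tar.gz'],  # hypothetical package
      region='us-central1',
      scaleTier=GoogleCloudMlV1beta1TrainingInput.ScaleTierValueValuesEnum.BASIC)
  job = GoogleCloudMlV1beta1Job(
      jobId='example_training_job',  # hypothetical job id
      trainingInput=training_input)
  # encoding.MessageToJson renders the message as the JSON body that a
  # projects.jobs.create request would carry.
  return encoding.MessageToJson(job)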
# Copyright (c) 2019 Cognizant Digital Business. # Issued under this Academic Public License: github.com/leaf-ai/muir/pytorch/muir/LICENSE. # # Pytorch dataset for crispr genomic data # import pickle import os import time import numpy as np import random import torch from torch.utils.data import Dataset, DataLoader dna_to_integer_map = {'A': 1, 'C': 2, 'G': 3, 'T': 4, 'N': 0} def dna_to_integer(dna): integer_dna = np.zeros(len(dna), dtype=int) for i, nucleotide in enumerate(dna): integer_dna[i] = dna_to_integer_map[nucleotide] return integer_dna class CrisprDataset(Dataset): def __init__(self, read_data, dna_data, split_sequences, batch_size, context_size, steps=None): self.read_data = read_data self.dna_data = dna_data self.split_sequences = split_sequences self.batch_size = batch_size self.context_size = context_size self.steps = steps self.samples = self.enumerate_samples() print("Num samples",len(self.samples)) random.shuffle(self.samples) def enumerate_samples(self): samples = [] for record, position in self.split_sequences: position_data = self.read_data[record][position] position_length = position_data['length'] for element in range(position_length): sample = (record, position, element) samples.append(sample) return samples def on_epoch_end(self): random.shuffle(self.samples) def get_dna(self, record_idx, position_data): record_dna_data = self.dna_data[record_idx] index_position = position_data['index_position'] start_offset = position_data['start_offset'] end_offset = position_data['end_offset'] return record_dna_data[index_position][start_offset:end_offset] def slice_dna(self, position_dna, element): start_idx = element end_idx = start_idx + 2 * self.context_size + 1 element_dna = position_dna[start_idx:end_idx] return element_dna def __len__(self): if self.steps is None: return len(self.samples) else: return self.steps def __getitem__(self, idx): record_idx, position, element = self.samples[idx] position_data = self.read_data[record_idx][position] position_dna = self.get_dna(record_idx, position_data) element_dna = self.slice_dna(position_dna, element) x = dna_to_integer(element_dna) y = position_data['aba'] x = torch.from_numpy(x) y = torch.Tensor([y]) return x, y def load_data(data_directory): print("Loading data.") with open(data_directory + '/read_data.pkl', 'rb') as f: read_data = pickle.load(f, encoding='latin1') with open(data_directory + '/dna_data.pkl', 'rb') as f: dna_data = pickle.load(f, encoding='latin1') with open(data_directory + '/splits.pkl', 'rb') as f: splits = pickle.load(f, encoding='latin1') return read_data, dna_data, splits def _init_fn(worker_id): np.random.seed(301 + worker_id) def load_crispr_genomic(dataset_folder, batch_size=512, context_size=100, steps_per_epoch=1000000, validation_steps=100000, dataset_percentage=1.0, num_workers=4): assert(dataset_percentage == 1.0) print("Creating generators.") read_data, dna_data, splits = load_data(dataset_folder) training_sequences = splits['training'] trainset = CrisprDataset(read_data, dna_data, training_sequences, batch_size, context_size, steps=steps_per_epoch) trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=True) validation_sequences = splits['validation'] valset = CrisprDataset(read_data, dna_data, validation_sequences, batch_size, context_size, steps=validation_steps) valloader = DataLoader(valset, batch_size=batch_size, shuffle=False, num_workers=num_workers, worker_init_fn=_init_fn, pin_memory=True) print(splits.keys()) testing_sequences = 
splits['test'] testset = CrisprDataset(read_data, dna_data, testing_sequences, batch_size, context_size, steps=validation_steps) testloader = DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=num_workers, worker_init_fn=_init_fn, pin_memory=True) classes = None return trainloader, valloader, testloader, classes if __name__ == '__main__': import sys trainloader, valloader, testloader, classes = load_crispr_genomic(dataset_folder='./data', batch_size=2) i = 0 for inputs, targets in trainloader: print(inputs, targets) i += 1 if i == 10: sys.exit(0)
<filename>mmseg/datasets/pipelines/loading_hsi.py # Copyright (c) OpenMMLab. All rights reserved. import os.path as osp import torch import numpy as np import cv2 from ..builder import PIPELINES def load_ENVI_hyperspectral_image_from_file(filename): ENVI_data_type = [None, np.uint8, # 1 np.int16, # 2 np.int32, # 3 np.float32, # 4 np.float64, # 5 None, None, None, None, None, None, None, np.uint16, # 12 np.uint32,] # 13 hdr = dict() with open(filename) as f: for line in f.readlines(): if '=' not in line: continue else: key, value = line.split('=') key = key.strip() value = value.strip() hdr[key] = value # assert hdr['file type'] == 'ENVI Standard', \ # 'Require ENVI data: file type = ENVI Standard' # assert hdr['byte order'] == '0', \ # 'Require ENVI data: byte order = 0' # assert hdr['x start'] == '0', \ # 'Require ENVI data: x start = 0' # assert hdr['y start'] == '0', \ # 'Require ENVI data: y start = 0' assert int(hdr['data type']) <= len(ENVI_data_type) and ENVI_data_type[int(hdr['data type'])] != None, \ 'Unrecognized data type' data_type = int(hdr['data type']) header_offset = int(hdr['header offset']) height = int(hdr['lines']) width = int(hdr['samples']) bands = int(hdr['bands']) img_bytes = np.fromfile(filename.replace('.hdr', '.raw'), dtype=ENVI_data_type[data_type], offset=header_offset) if hdr['interleave'].lower() == 'bsq': img_bytes = img_bytes.reshape((bands, height, width)) img_bytes = np.transpose(img_bytes, (1, 2, 0)) elif hdr['interleave'].lower() == 'bip': img_bytes = img_bytes.reshape((height, width, bands)) elif hdr['interleave'].lower() == 'bil': img_bytes = img_bytes.reshape((height, bands, width)) img_bytes = np.transpose(img_bytes, (0, 2, 1)) else: raise ValueError('Unrecognized interleave, for more information please email:<EMAIL>') return img_bytes @PIPELINES.register_module() class LoadENVIHyperSpectralImageFromFile(object): """Load an ENVI Hyper Spectral Image from file. TODO: rewrite the helping document Required keys are "img_prefix" and "img_info" (a dict that must contain the key "filename"). Added or updated keys are "filename", "img", "img_shape", "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). Args: to_float32 (bool): Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is an uint8 array. Defaults to False. color_type (str): The flag argument for :func:`mmcv.imfrombytes`. Defaults to 'color'. file_client_args (dict): Arguments to instantiate a FileClient. See :class:`mmcv.fileio.FileClient` for details. Defaults to ``dict(backend='disk')``. imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: 'cv2' """ def __init__(self, channel_select, to_float32=True, normalization=True, channel_to_show=(10, 20, 30), median_blur=True, npy_transpose=False): self.to_float32 = to_float32 self.normalization = normalization self.channel_select = channel_select self.channel_to_show = channel_to_show self.median_blur = median_blur self.npy_transpose = npy_transpose def __call__(self, results): """Call functions to load image and get image meta information. Args: results (dict): Result dict from :obj:`mmseg.CustomDataset`. Returns: dict: The dict contains loaded image and meta information. 
""" if results.get('img_prefix') is not None: filename = osp.join(results['img_prefix'], results['img_info']['filename']) else: filename = results['img_info']['filename'] if filename.endswith('hdr'): img_bytes = load_ENVI_hyperspectral_image_from_file(filename) elif filename.endswith('npy'): img_bytes = np.load(filename) if img_bytes.shape[0] == 1: img_bytes = img_bytes[0] if self.npy_transpose: img_bytes = np.transpose(img_bytes, (1, 2, 0)) img_bytes = img_bytes[:, :, self.channel_select] if self.to_float32: img_bytes = img_bytes.astype(np.float32) if self.normalization: # img_bytes -= np.mean(img_bytes,axis=(0,1),keepdims=True) # img_bytes /= np.clip(np.std(img_bytes,axis=(0,1),keepdims=True), 1e-6, 1e6) img_bytes -= np.min(img_bytes) img_bytes /= np.max(img_bytes) if self.median_blur: for band in range(img_bytes.shape[2]): img_bytes[:, :, band] = cv2.medianBlur(img_bytes[:, :, band], ksize=3) results['filename'] = filename.replace('.hdr', '.png') results['ori_filename'] = results['img_info']['filename'].replace('.hdr', '.png') results['img'] = img_bytes results['img_shape'] = img_bytes.shape results['ori_shape'] = img_bytes.shape # Set initial values for default meta_keys results['pad_shape'] = img_bytes.shape results['scale_factor'] = 1.0 results['channel_select'] = self.channel_select results['channel_to_show'] = self.channel_to_show num_channels = 1 if len(img_bytes.shape) < 3 else img_bytes.shape[2] mean = np.ones(num_channels, dtype=np.float32)*128 std = np.ones(num_channels, dtype=np.float32)*16 results['img_norm_cfg'] = dict( mean=mean, std=std, to_rgb=False) return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(channel_select={self.channel_select},' repr_str += f'to_float32={self.to_float32},' repr_str += f'normalization={self.normalization},' repr_str += f'channel_to_show={self.channel_to_show},' repr_str += f'median_blur={self.median_blur})' return repr_str @PIPELINES.register_module() class LoadENVIHyperSpectralImageFromFileAndPCA(object): """Load an ENVI Hyper Spectral Image from file. TODO: rewrite the helping document Required keys are "img_prefix" and "img_info" (a dict that must contain the key "filename"). Added or updated keys are "filename", "img", "img_shape", "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). Args: to_float32 (bool): Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is an uint8 array. Defaults to False. color_type (str): The flag argument for :func:`mmcv.imfrombytes`. Defaults to 'color'. file_client_args (dict): Arguments to instantiate a FileClient. See :class:`mmcv.fileio.FileClient` for details. Defaults to ``dict(backend='disk')``. imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: 'cv2' """ def __init__(self, channel_keep, to_float32=True, normalization=True, channel_to_show=(10, 20, 30), median_blur=True): self.to_float32 = to_float32 self.normalization = normalization self.channel_keep = channel_keep self.channel_to_show = channel_to_show self.median_blur = median_blur self.mean_vector = torch.tensor(np.load('./data/HSI/mean_vector.npy'), dtype=torch.float32) self.std_vector = torch.tensor(np.load('./data/HSI/std_vector.npy'), dtype=torch.float32) self.pca_vector = torch.tensor(np.load('./data/HSI/pca_vector.npy')[:, :channel_keep], dtype=torch.float32).permute(1, 0) def __call__(self, results): """Call functions to load image and get image meta information. 
Args: results (dict): Result dict from :obj:`mmseg.CustomDataset`. Returns: dict: The dict contains loaded image and meta information. """ if results.get('img_prefix') is not None: filename = osp.join(results['img_prefix'], results['img_info']['filename']) else: filename = results['img_info']['filename'] img_bytes = load_ENVI_hyperspectral_image_from_file(filename) height, width, bands = img_bytes.shape with torch.no_grad(): img_bytes = torch.tensor(img_bytes, dtype=torch.float32) img_bytes -= self.mean_vector.view(1, 1, bands) img_bytes /= self.std_vector.view(1, 1, bands) img_bytes = torch.nn.functional.linear(img_bytes, self.pca_vector) img_bytes = img_bytes.numpy() if self.to_float32: img_bytes = img_bytes.astype(np.float32) if self.normalization: img_bytes -= np.mean(img_bytes,axis=(0,1),keepdims=True) img_bytes /= np.clip(np.std(img_bytes,axis=(0,1),keepdims=True), 1e-6, 1e6) # img_bytes -= np.min(img_bytes) # img_bytes /= np.max(img_bytes) if self.median_blur: for band in range(img_bytes.shape[2]): img_bytes[:, :, band] = cv2.medianBlur(img_bytes[:, :, band], ksize=3) results['filename'] = filename.replace('.hdr', '.png') results['ori_filename'] = results['img_info']['filename'].replace('.hdr', '.png') results['img'] = img_bytes results['img_shape'] = img_bytes.shape results['ori_shape'] = img_bytes.shape # Set initial values for default meta_keys results['pad_shape'] = img_bytes.shape results['scale_factor'] = 1.0 results['channel_select'] = self.channel_keep results['channel_to_show'] = self.channel_to_show num_channels = 1 if len(img_bytes.shape) < 3 else img_bytes.shape[2] mean = np.ones(num_channels, dtype=np.float32)*128 std = np.ones(num_channels, dtype=np.float32)*16 results['img_norm_cfg'] = dict( mean=mean, std=std, to_rgb=False) return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(channel_keep={self.channel_keep},' repr_str += f'to_float32={self.to_float32},' repr_str += f'normalization={self.normalization},' repr_str += f'channel_to_show={self.channel_to_show},' repr_str += f'median_blur={self.median_blur})' return repr_str @PIPELINES.register_module() class LoadENVIHyperSpectralImageFromFileWithExtra(object): """Load an ENVI Hyper Spectral Image from file. TODO: rewrite the helping document Required keys are "img_prefix" and "img_info" (a dict that must contain the key "filename"). Added or updated keys are "filename", "img", "img_shape", "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). Args: to_float32 (bool): Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is an uint8 array. Defaults to False. color_type (str): The flag argument for :func:`mmcv.imfrombytes`. Defaults to 'color'. file_client_args (dict): Arguments to instantiate a FileClient. See :class:`mmcv.fileio.FileClient` for details. Defaults to ``dict(backend='disk')``. imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: 'cv2' """ def __init__(self, channel_select, to_float32=True, normalization=True, channel_to_show=(10, 20, 30), median_blur=True, label_smoothing=5): self.to_float32 = to_float32 self.normalization = normalization self.channel_select = channel_select self.channel_to_show = channel_to_show self.median_blur = median_blur self.label_smoothing = (label_smoothing, label_smoothing) def __call__(self, results): """Call functions to load image and get image meta information. 
Args: results (dict): Result dict from :obj:`mmseg.CustomDataset`. Returns: dict: The dict contains loaded image and meta information. """ if 'filename' in results['img_info'].keys(): if results.get('img_prefix') is not None: filename = osp.join(results['img_prefix'], results['img_info']['filename']) else: filename = results['img_info']['filename'] img_bytes = load_ENVI_hyperspectral_image_from_file(filename) else: if results.get('full_positive_prefix') is not None: positive = osp.join(results['full_positive_prefix'], results['img_info']['positive']['filename']) else: positive = results['img_info']['positive'] img_bytes_positive = load_ENVI_hyperspectral_image_from_file(positive) if self.to_float32: img_bytes_positive = img_bytes_positive.astype(np.float32) if self.normalization: img_bytes_positive -= np.mean(img_bytes_positive,axis=(0,1),keepdims=True) img_bytes_positive /= np.clip(np.std(img_bytes_positive,axis=(0,1),keepdims=True), 1e-6, 1e6) if results.get('full_negative_prefix') is not None: negative = osp.join(results['full_negative_prefix'], results['img_info']['negative']['filename']) else: negative = results['img_info']['negative'] img_bytes_negative = load_ENVI_hyperspectral_image_from_file(negative) if self.to_float32: img_bytes_negative = img_bytes_negative.astype(np.float32) if self.normalization: img_bytes_negative -= np.mean(img_bytes_negative,axis=(0,1),keepdims=True) img_bytes_negative /= np.clip(np.std(img_bytes_negative,axis=(0,1),keepdims=True), 1e-6, 1e6) if results.get('seg_prefix', None) is not None: ann = osp.join(results['seg_prefix'], results['ann_info']['seg_map']) else: ann = results['ann_info']['seg_map'] ann = cv2.imread(ann, flags=cv2.IMREAD_UNCHANGED) * 255 ann = cv2.blur(ann, ksize=self.label_smoothing).astype(np.float32) / 255 ann = np.expand_dims(ann, -1) img_bytes = ann * img_bytes_positive + (1 - ann) * img_bytes_negative __breakpoint = 0 img_bytes = img_bytes[:, :, self.channel_select] if self.to_float32: img_bytes = img_bytes.astype(np.float32) if self.normalization: img_bytes -= np.mean(img_bytes,axis=(0,1),keepdims=True) img_bytes /= np.clip(np.std(img_bytes,axis=(0,1),keepdims=True), 1e-6, 1e6) # img_bytes -= np.min(img_bytes) # img_bytes /= np.max(img_bytes) if self.median_blur: for band in range(img_bytes.shape[2]): img_bytes[:, :, band] = cv2.medianBlur(img_bytes[:, :, band], ksize=3) if 'filename' in results['img_info'].keys(): results['filename'] = filename.replace('.hdr', '.png') results['ori_filename'] = results['img_info']['filename'].replace('.hdr', '.png') else: results['filename'] = results['ann_info']['seg_map'].replace('.hdr', '.png') results['ori_filename'] = results['ann_info']['seg_map'].replace('.hdr', '.png') results['img'] = img_bytes results['img_shape'] = img_bytes.shape results['ori_shape'] = img_bytes.shape # Set initial values for default meta_keys results['pad_shape'] = img_bytes.shape results['scale_factor'] = 1.0 results['channel_select'] = self.channel_select results['channel_to_show'] = self.channel_to_show num_channels = 1 if len(img_bytes.shape) < 3 else img_bytes.shape[2] mean = np.ones(num_channels, dtype=np.float32)*128 std = np.ones(num_channels, dtype=np.float32)*16 results['img_norm_cfg'] = dict( mean=mean, std=std, to_rgb=False) return results def __repr__(self): repr_str = self.__class__.__name__ repr_str += f'(channel_select={self.channel_select},' repr_str += f'to_float32={self.to_float32},' repr_str += f'normalization={self.normalization},' repr_str += f'channel_to_show={self.channel_to_show},' repr_str += 
f'median_blur={self.median_blur},' repr_str += f'label_smoothing={self.label_smoothing})' return repr_str ''' from PIL import Image prefix='sample1_' cv2.imwrite(prefix+'ann.png', (ann[::4,::4]*255).astype(np.uint8)) _scale=16 _bias=128 image_mixed=Image.fromarray((img_bytes[::4,::4,0]*_scale+_bias).astype(np.uint8)) image_positive=Image.fromarray((img_bytes_positive[::4,::4,0]*_scale+_bias).astype(np.uint8)) image_negative=Image.fromarray((img_bytes_negative[::4,::4,0]*_scale+_bias).astype(np.uint8)) images_positive=[] images_negative=[] images_mixed=[] for i in range(1, 60): images_mixed.append(Image.fromarray((img_bytes[::4,::4,i]*_scale+_bias).astype(np.uint8))) images_positive.append(Image.fromarray((img_bytes_positive[::4,::4,i]*_scale+_bias).astype(np.uint8))) images_negative.append(Image.fromarray((img_bytes_negative[::4,::4,i]*_scale+_bias).astype(np.uint8))) image_mixed.save(prefix+'images_mixed.gif', save_all=True, append_images=images_mixed,loop=10086,duration=1000) image_positive.save(prefix+'images_positive.gif', save_all=True, append_images=images_positive,loop=10086,duration=1000) image_negative.save(prefix+'images_negative.gif', save_all=True, append_images=images_negative,loop=10086,duration=1000) ''' ''' prefix='sample1_' _scale=16 _bias=128 _down=8 _show=(10, 20, 30) _line=4 _h, _w, _c = img_bytes.shape _h //= _down _w //= _down _place_holder = np.ones(((_h+_line)*len(_show), (_w+_line)*4), dtype=np.uint8)*255 for _row in range(len(_show)): _row_start = _row*(_h+_line) _col_start = 0*(_w+_line) _place_holder[_row_start : _row_start+_h, _col_start : _col_start+_w] = (ann[::_down,::_down, 0]*255).astype(np.uint8) _col_start = 1*(_w+_line) _place_holder[_row_start : _row_start+_h, _col_start : _col_start+_w] = (img_bytes[::_down,::_down,_show[_row]]*_scale+_bias).astype(np.uint8) _col_start = 2*(_w+_line) _place_holder[_row_start : _row_start+_h, _col_start : _col_start+_w] = (img_bytes_positive[::_down,::_down,_show[_row]]*_scale+_bias).astype(np.uint8) _col_start = 3*(_w+_line) _place_holder[_row_start : _row_start+_h, _col_start : _col_start+_w] = (img_bytes_negative[::_down,::_down,_show[_row]]*_scale+_bias).astype(np.uint8) cv2.imwrite(prefix+'images_matrix.png', _place_holder) ''' ''' prefix='sample_without_prenorm1_' _scale=16 _bias=128 _down=8 _show=(10, 20, 30) _line=4 _h, _w, _c = img_bytes.shape _h //= _down _w //= _down img_bytes_positive -= np.mean(img_bytes_positive,axis=(0,1),keepdims=True) img_bytes_positive /= np.clip(np.std(img_bytes_positive,axis=(0,1),keepdims=True), 1e-6, 1e6) img_bytes_negative -= np.mean(img_bytes_negative,axis=(0,1),keepdims=True) img_bytes_negative /= np.clip(np.std(img_bytes_negative,axis=(0,1),keepdims=True), 1e-6, 1e6) img_bytes -= np.mean(img_bytes,axis=(0,1),keepdims=True) img_bytes /= np.clip(np.std(img_bytes,axis=(0,1),keepdims=True), 1e-6, 1e6) _place_holder = np.ones(((_h+_line)*len(_show), (_w+_line)*4), dtype=np.uint8)*255 for _row in range(len(_show)): _row_start = _row*(_h+_line) _col_start = 0*(_w+_line) _place_holder[_row_start : _row_start+_h, _col_start : _col_start+_w] = (ann[::_down,::_down, 0]*255).astype(np.uint8) _col_start = 1*(_w+_line) _place_holder[_row_start : _row_start+_h, _col_start : _col_start+_w] = (img_bytes[::_down,::_down,_show[_row]]*_scale+_bias).astype(np.uint8) _col_start = 2*(_w+_line) _place_holder[_row_start : _row_start+_h, _col_start : _col_start+_w] = (img_bytes_positive[::_down,::_down,_show[_row]]*_scale+_bias).astype(np.uint8) _col_start = 3*(_w+_line) _place_holder[_row_start : 
_row_start+_h, _col_start : _col_start+_w] = (img_bytes_negative[::_down,::_down,_show[_row]]*_scale+_bias).astype(np.uint8) cv2.imwrite(prefix+'images_matrix.png', _place_holder) '''
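A minimal sketch of how the first loader registered above might appear in an mmsegmentation pipeline config; the band indices, the downstream transforms, and every value here are assumptions for illustration, not settings taken from this repository:

# Hypothetical training pipeline entry for the ENVI hyperspectral loader.
train_pipeline = [
    dict(
        type='LoadENVIHyperSpectralImageFromFile',
        channel_select=list(range(0, 60, 2)),   # assumed subset of bands
        to_float32=True,
        normalization=True,
        channel_to_show=(10, 20, 30),
        median_blur=True),
    dict(type='LoadAnnotations'),
    dict(type='RandomFlip', prob=0.5),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]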
<gh_stars>0 package main import "gopkg.in/go-playground/validator.v8" var ( users Users validate *validator.Validate ) func init() { u := Users{} u.Users = map[string]User{ "1": User{ ID: "1", Name: "<NAME>", }, } users = u } func main() { config := &validator.Config{TagName: "validate"} validate = validator.New(config) startServer() }
<reponame>ruig2/WIFIProbe package com.codingfairy.data.repo; import com.codingfairy.data.entity.ActivenessEntity; import org.springframework.data.jpa.repository.JpaRepository; import java.sql.Timestamp; /** * Created by cuihao on 2017-05-16. * Activeness repository */ public interface ActivenessRepository extends JpaRepository<ActivenessEntity,Integer> { ActivenessEntity findByHourAndWifiProb(Timestamp hour, String wifiProb); }
// LastInsertId implements the Result interface func (m MockResult) LastInsertId() (int64, error) { if m.LastInsertIdFn == nil { panic(fmt.Errorf("ksql.MockResult.LastInsertId() called but ksql.MockResult.LastInsertIdFn is not set")) } return m.LastInsertIdFn() }
import type { TenantFeaturesDto } from "@/application/contracts/core/tenants/TenantFeaturesDto"; import type { AppUsageSummaryDto } from "@/application/dtos/app/usage/AppUsageSummaryDto"; import { AppUsageType } from "@/application/enums/app/usages/AppUsageType"; import { ApplicationLayout } from "@/application/enums/shared/ApplicationLayout"; import { writable } from "svelte/store"; import type { AppState } from "../types"; const initialState: AppState = JSON.parse(localStorage.getItem("app") ?? "{}") ?? { usage: { type: 0, providers: 0, providersInCompliance: 0, clients: 0, contracts: 0, employees: 0, storage: 0, pendingInvitations: 0, }, features: { maxUsers: 1, maxWorkspaces: 1, maxLinks: 1, maxStorage: 1, monthlyContracts: 1, }, layout: ApplicationLayout.SIDEBAR, }; export const appState = writable(initialState); export const appStore = { resetAppState: () => appState.update((self) => { self.usage = { type: 0, providers: 0, providersInCompliance: 0, clients: 0, contracts: 0, employees: 0, storage: 0, pendingInvitations: 0, }; self.features = { maxUsers: 1, maxWorkspaces: 1, maxLinks: 1, maxStorage: 1, monthlyContracts: 1, }; self.layout = ApplicationLayout.SIDEBAR; return self; }), setLayout: (payload: ApplicationLayout) => appState.update((self) => { self.layout = payload; return self; }), setUsage: (payload: AppUsageSummaryDto) => appState.update((self) => { if (self.usage) { if (payload.type === AppUsageType.ALL) { self.usage = payload; } else if (payload.type === AppUsageType.PROVIDERS) { self.usage.providers = payload.providers; self.usage.providersInCompliance = payload.providersInCompliance; } else if (payload.type === AppUsageType.CLIENTS) { self.usage.clients = payload.clients; } else if (payload.type === AppUsageType.EMPLOYEES) { self.usage.employees = payload.employees; } else if (payload.type === AppUsageType.CONTRACTS) { self.usage.contracts = payload.contracts; } else if (payload.type === AppUsageType.STORAGE) { self.usage.storage = payload.storage; } else if (payload.type === AppUsageType.PENDING_INVITATIONS) { self.usage.pendingInvitations = payload.pendingInvitations; } } return self; }), setFeatures: (payload: TenantFeaturesDto) => appState.update((self) => { self.features = payload; return self; }), }; appState.subscribe((val) => { localStorage.setItem("app", JSON.stringify(val)); });
/** * Returns the value of one or more static fields of the * reference type. Each field must be member of the reference type * or one of its superclasses, superinterfaces, or implemented interfaces. * Access control is not enforced; for example, the values of private * fields can be obtained. */ static class GetValues { static final int COMMAND = 6; static class Field { /** * A field to get */ final long fieldID; Field(long fieldID) { this.fieldID = fieldID; } private void write(PacketStream ps) { if ((ps.vm.traceFlags & VirtualMachineImpl.TRACE_SENDS) != 0) { ps.vm.printTrace("Sending: fieldID(long): " + fieldID); } ps.writeFieldRef(fieldID); } } static GetValues process(VirtualMachineImpl vm, ReferenceTypeImpl refType, Field[] fields) throws JDWPException { PacketStream ps = enqueueCommand(vm, refType, fields); return waitForReply(vm, ps); } static PacketStream enqueueCommand(VirtualMachineImpl vm, ReferenceTypeImpl refType, Field[] fields) { PacketStream ps = new PacketStream(vm, COMMAND_SET, COMMAND); if ((vm.traceFlags & VirtualMachineImpl.TRACE_SENDS) != 0) { vm.printTrace("Sending Command(id=" + ps.pkt.id + ") JDWP.ReferenceType.GetValues"+(ps.pkt.flags!=0?", FLAGS=" + ps.pkt.flags:"")); } if ((ps.vm.traceFlags & VirtualMachineImpl.TRACE_SENDS) != 0) { ps.vm.printTrace("Sending: refType(ReferenceTypeImpl): " + (refType==null?"NULL":"ref="+refType.ref())); } ps.writeClassRef(refType.ref()); if ((ps.vm.traceFlags & VirtualMachineImpl.TRACE_SENDS) != 0) { ps.vm.printTrace("Sending: fields(Field[]): " + ""); } ps.writeInt(fields.length); for (int i = 0; i < fields.length; i++) {; if ((ps.vm.traceFlags & VirtualMachineImpl.TRACE_SENDS) != 0) { ps.vm.printTrace("Sending: fields[i](Field): " + ""); } fields[i].write(ps); } ps.send(); return ps; } static GetValues waitForReply(VirtualMachineImpl vm, PacketStream ps) throws JDWPException { ps.waitForReply(); return new GetValues(vm, ps); } /** * The number of values returned, always equal to fields, * the number of values to get. */ final ValueImpl[] values; private GetValues(VirtualMachineImpl vm, PacketStream ps) { if (vm.traceReceives) { vm.printTrace("Receiving Command(id=" + ps.pkt.id + ") JDWP.ReferenceType.GetValues"+(ps.pkt.flags!=0?", FLAGS=" + ps.pkt.flags:"")+(ps.pkt.errorCode!=0?", ERROR CODE=" + ps.pkt.errorCode:"")); } if (vm.traceReceives) { vm.printReceiveTrace(4, "values(ValueImpl[]): " + ""); } int valuesCount = ps.readInt(); values = new ValueImpl[valuesCount]; for (int i = 0; i < valuesCount; i++) {; values[i] = ps.readValue(); if (vm.traceReceives) { vm.printReceiveTrace(5, "values[i](ValueImpl): " + values[i]); } } } }
def add_error(error_code, url, traceback, status="open", credentials_file=settings.DEFAULT_CREDENTIALS_FILE): connect_db(credentials_file) err = Error(error_code=error_code, url=url, traceback=traceback, status=status) e = err.save() return str(e["id"])
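A small usage sketch for the helper above, assuming the surrounding module provides `connect_db`, the `Error` document class, and `settings` as implied by the signature; the URL and error code are made up for illustration:

import traceback

try:
    raise RuntimeError("simulated scrape failure")   # stand-in for the real work
except RuntimeError:
    error_id = add_error(
        error_code=500,
        url="https://example.com/feed",
        traceback=traceback.format_exc(),
    )
    print("stored error with id", error_id)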
<reponame>Fingolfin7/Time-Tracker from matplotlib import pyplot as plt def showBarGraphs(subj_names=[], subj_totals=[]): plt.bar(subj_names, subj_totals, label="Total hours") plt.title("Time Subject Tracker") plt.xlabel("Subjects") plt.ylabel("Time (in hours)") plt.legend() plt.show() read = open("Saves.txt", "r") lines = list(read.readlines()) read.close() subj = [] time_totals = [] for line in lines: partitions = str(line).partition(": ") print(partitions) if float(partitions[2]) > 0: subj.append(partitions[0]) time_totals.append(float(partitions[2]) / 60) showBarGraphs(subj, time_totals)
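The plotting script above assumes each line of Saves.txt has the form "Subject: minutes"; a small sketch that writes such a file for testing (the file name and numbers are assumptions, and subjects with zero minutes are skipped by the script):

# Hypothetical helper to generate a Saves.txt the script can parse.
def write_sample_saves(path="Saves.txt"):
    samples = {"Maths": 120.0, "Physics": 45.5, "History": 0.0}
    with open(path, "w") as f:
        for subject, minutes in samples.items():
            f.write(f"{subject}: {minutes}\n")

if __name__ == "__main__":
    write_sample_saves()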
#include "KCurrentLoopIntegrator.hh" #include "KEMConstants.hh" #include "KEllipticIntegrals.hh" #include <iomanip> namespace KEMField { KFieldVector KCurrentLoopIntegrator::VectorPotential(const KCurrentLoop& currentLoop, const KPosition& P) const { static KCompleteEllipticIntegral1stKind K_elliptic; static KEllipticEMinusKOverkSquared EK_elliptic; KPosition p = currentLoop.GetCoordinateSystem().ToLocal(P); double r = sqrt(p[0] * p[0] + p[1] * p[1]); double S = sqrt((currentLoop.GetP()[0] + r) * (currentLoop.GetP()[0] + r) + (p[2] - currentLoop.GetP()[2]) * (p[2] - currentLoop.GetP()[2])); double k = 2. * sqrt(currentLoop.GetP()[0] * r) / S; double k_Elliptic = K_elliptic(k); double ek_Elliptic = EK_elliptic(k); double A_theta = -KEMConstants::Mu0OverPi * currentLoop.GetCurrent() * currentLoop.GetP()[0] / S * (2. * ek_Elliptic + k_Elliptic); double sine = 0.; double cosine = 0.; if (r > 1.e-12) { cosine = p[0] / r; sine = p[1] / r; } return currentLoop.GetCoordinateSystem().ToGlobal(KFieldVector(-sine * A_theta, cosine * A_theta, 0.)); } KFieldVector KCurrentLoopIntegrator::MagneticField(const KCurrentLoop& currentLoop, const KPosition& P) const { static KCompleteEllipticIntegral1stKind K_elliptic; static KCompleteEllipticIntegral2ndKind E_elliptic; static KEllipticEMinusKOverkSquared EK_elliptic; KPosition p = currentLoop.GetCoordinateSystem().ToLocal(P); double r = sqrt(p[0] * p[0] + p[1] * p[1]); double S = sqrt((currentLoop.GetP()[0] + r) * (currentLoop.GetP()[0] + r) + (p[2] - currentLoop.GetP()[2]) * (p[2] - currentLoop.GetP()[2])); double D = sqrt((currentLoop.GetP()[0] - r) * (currentLoop.GetP()[0] - r) + (p[2] - currentLoop.GetP()[2]) * (p[2] - currentLoop.GetP()[2])); double k = 2. * sqrt(currentLoop.GetP()[0] * r) / S; double k_Elliptic = K_elliptic(k); double e_Elliptic = E_elliptic(k); double B_z = KEMConstants::Mu0OverPi * .5 * currentLoop.GetCurrent() / S * (k_Elliptic + e_Elliptic * (2. * currentLoop.GetP()[0] * (currentLoop.GetP()[0] - r) / (D * D) - 1.)); double B_r = 0; double cosine = 0; double sine = 0; if (r > 1.e-12) { double ek_Elliptic = EK_elliptic(k); B_r = KEMConstants::Mu0OverPi * currentLoop.GetCurrent() * (p[2] - currentLoop.GetP()[2]) * currentLoop.GetP()[0] / S * (2. / (S * S) * ek_Elliptic + e_Elliptic / (D * D)); cosine = p[0] / r; sine = p[1] / r; } return currentLoop.GetCoordinateSystem().ToGlobal(KFieldVector(cosine * B_r, sine * B_r, B_z)); } } // namespace KEMField
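For reference, the integrator above appears to implement the standard elliptic-integral expressions for a circular loop of radius a carrying current I, centred on the local z-axis at height z_0 and evaluated at cylindrical coordinates (r, z); this is a reading of the code rather than documentation shipped with the library. With the same S and D as in the code, and K(k), E(k) the complete elliptic integrals of the first and second kind:

S = \sqrt{(a+r)^2 + (z-z_0)^2}, \qquad
D = \sqrt{(a-r)^2 + (z-z_0)^2}, \qquad
k = \frac{2\sqrt{a r}}{S}

A_\varphi = \frac{\mu_0 I}{2\pi r S}\left[(S^2 - 2 a r)\,K(k) - S^2\,E(k)\right]

B_z = \frac{\mu_0 I}{2\pi S}\left[K(k) + E(k)\,\frac{a^2 - r^2 - (z-z_0)^2}{D^2}\right], \qquad
B_r = \frac{\mu_0 I\,(z-z_0)}{2\pi r S}\left[-K(k) + E(k)\,\frac{a^2 + r^2 + (z-z_0)^2}{D^2}\right]

The helper KEllipticEMinusKOverkSquared evaluates (E(k) - K(k))/k^2 directly, which avoids the 0/0 form that otherwise appears in A_\varphi and B_r as r \to 0.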
Trump's Deputy Press Secretary Sarah Huckabee Sanders refused to keep the President's promise that people won't lose their health care with the Republican Obamacare replacement during an interview on ABC's This Week. Trump Deputy Press Secretary Sarah Huckabee Sanders was asked repeatedly by ABC's This Week host George Stephanopoulos whether the White House would keep Trump's promise that everybody would have health care after Obamacare is repealed. Transcript via ABC's This Week: STEPHANOPOULOS: You say replace it with something better. So does that mean that no one who has coverage now will lose it? SANDERS: I know that the goal is that we make sure that people don't lose their coverage and that we have to put a high priority on people that need it most. We have to lower costs and we have to make sure that the people that need insurance the very most are covered. But at the same time, George, we cannot survive under the current system. We have to make a massive overhaul to the health care system in America, because it is simply just not sustainable, and everybody agrees with that. There is nobody that argues that we're on a track that we can maintain. So, we're looking at every possible way to do exactly that: repeal a terrible, failed system and replace it with something better. STEPHANOPOULOS: Again, so I'll have to ask one more time. You keep saying replace it with something better. So will the president guarantee that he won't sign a plan that will cause people that have coverage now to lose it? SANDERS: Look, I'm not going to speak specifically for the president on that topic, but what I can say is he's made it a high priority and a number one focus that we make sure that people that have insurance continue their insurance, particularly those in the highest need. The White House's refusal to say whether Trump will keep his promise comes on the heels of a Bloomberg report that the Republican replacement plan will cause 51% of people who purchase plans through the exchange to lose their coverage, and that 24% of the federal funding for Medicaid will be taken away. The numbers add up to roughly 20 million Americans losing their health care under the Republican replacement plan. They do not include the many more Americans who will move from fully insured to underinsured and see their out-of-pocket health care costs increase. The duck and dodge on the health care question by Trump's White House Deputy Press Secretary is proof that Republicans fully intend to take health care away from tens of millions of people.
#ifndef _ASM_POWERPC_FTRACE #define _ASM_POWERPC_FTRACE #include <asm/types.h> #ifdef CONFIG_FUNCTION_TRACER #define MCOUNT_ADDR ((unsigned long)(_mcount)) #define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */ #ifdef __ASSEMBLY__ /* Based off of objdump optput from glibc */ #define MCOUNT_SAVE_FRAME \ stwu r1,-48(r1); \ stw r3, 12(r1); \ stw r4, 16(r1); \ stw r5, 20(r1); \ stw r6, 24(r1); \ mflr r3; \ lwz r4, 52(r1); \ mfcr r5; \ stw r7, 28(r1); \ stw r8, 32(r1); \ stw r9, 36(r1); \ stw r10,40(r1); \ stw r3, 44(r1); \ stw r5, 8(r1) #define MCOUNT_RESTORE_FRAME \ lwz r6, 8(r1); \ lwz r0, 44(r1); \ lwz r3, 12(r1); \ mtctr r0; \ lwz r4, 16(r1); \ mtcr r6; \ lwz r5, 20(r1); \ lwz r6, 24(r1); \ lwz r0, 52(r1); \ lwz r7, 28(r1); \ lwz r8, 32(r1); \ mtlr r0; \ lwz r9, 36(r1); \ lwz r10,40(r1); \ addi r1, r1, 48 #else /* !__ASSEMBLY__ */ extern void _mcount(void); #ifdef CONFIG_DYNAMIC_FTRACE # define FTRACE_ADDR ((unsigned long)ftrace_caller) # define FTRACE_REGS_ADDR FTRACE_ADDR static inline unsigned long ftrace_call_adjust(unsigned long addr) { /* reloction of mcount call site is the same as the address */ return addr; } struct dyn_arch_ftrace { struct module *mod; }; #endif /* CONFIG_DYNAMIC_FTRACE */ #endif /* __ASSEMBLY__ */ #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS #define ARCH_SUPPORTS_FTRACE_OPS 1 #endif #endif #if defined(CONFIG_FTRACE_SYSCALLS) && !defined(__ASSEMBLY__) #ifdef PPC64_ELF_ABI_v1 #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME static inline bool arch_syscall_match_sym_name(const char *sym, const char *name) { /* * Compare the symbol name with the system call name. Skip the .sys or .SyS * prefix from the symbol name and the sys prefix from the system call name and * just match the rest. This is only needed on ppc64 since symbol names on * 32bit do not start with a period so the generic function will work. */ return !strcmp(sym + 4, name + 3); } #endif #endif /* CONFIG_FTRACE_SYSCALLS && !__ASSEMBLY__ */ #endif /* _ASM_POWERPC_FTRACE */
<reponame>Hgwxxdd/ladder-ui export * from './anchor' export * from './button' export * from './divider' export * from './grid' export * from './input' export * from './layout' // export * from './menu' export * from './space' export * from './tooltip'
/* Insertion test common to all items */ @Test void testInsert() { Item[] items = new Item[] {new Item(dex, 10, 20)}; GildedRose app = new GildedRose(items); assertThat(app.items[0].name, is(dex)); assertThat(app.items[0].sellIn, is(10)); assertThat(app.items[0].quality, is(20)); }
James O'Brien: Why The Irish Border Is One Of The Biggest Brexit Issues Why bother trying to understand complicated issues when you could spit bile instead? James O'Brien asked. John McDonnell has warned a hard border between Ireland and Northern Ireland after Brexit would undermine the peace process and "be a nightmare." Ireland's Taoiseach Leo Varadkar has written to the UK asking it to clarify its position on the border, calling for a legal guarantee that it will not harden. It's the Irish premier's red line. So a fairly big issue then, James O'Brien acknowledged, and a complex one at that. But why expend the effort trying to understand the complexities of the border when it's so easy to spit bile, as the Sun has? The LBC presenter said: "You try and unravel this horribly complicated situation that some people warned was going to happen, Some people insisted it wasn't and now incontrovertibly is. "Either you try and unravel it or you follow the line of the Sun newspaper and just start spitting bile at Ireland." James then read out a text message, riddled with poor spelling, received from someone "that thinks they have a better understanding of this issue." "There's the voice of the Sun reader," he joked. "But the most cursory understanding of the Good Friday Agreement would explain why it requires equality of patient rights, regardless of which side of the border you live on you have the same healthcare access and rights as each other. "So patient rights, that would require single standards for medical devices, the approval of medicines at EU level, mutual recognition of medical qualifications, mutual acceptance of cross-border ambulance activity. "It's mess upon mess upon mess, but because we've allowed it to become so ludicrously nativistic and one dimensional, people don't like acknowledging the mess. "What do you do if you don't want to acknowledge the mess that you've in large part help to make? Start shouting abuse at someone. "Keep your eyes shut, keep your fingers in your ears and start spitting bile at foreigners. In this case Irish people. "They've never been on our side? Happy days, they built this country mate
/** * @author Thorben Lindhauer * */ public class CreateMigrationPlanCmd implements Command<MigrationPlan> { public static final MigrationLogger LOG = EngineUtilLogger.MIGRATION_LOGGER; protected MigrationPlanBuilderImpl migrationBuilder; public CreateMigrationPlanCmd(MigrationPlanBuilderImpl migrationPlanBuilderImpl) { this.migrationBuilder = migrationPlanBuilderImpl; } @Override public MigrationPlan execute(CommandContext commandContext) { ProcessDefinitionEntity sourceProcessDefinition = getProcessDefinition(commandContext, migrationBuilder.getSourceProcessDefinitionId(), "Source"); ProcessDefinitionEntity targetProcessDefinition = getProcessDefinition(commandContext, migrationBuilder.getTargetProcessDefinitionId(), "Target"); checkAuthorization(commandContext, sourceProcessDefinition, targetProcessDefinition); MigrationPlanImpl migrationPlan = new MigrationPlanImpl(sourceProcessDefinition.getId(), targetProcessDefinition.getId()); List<MigrationInstruction> instructions = new ArrayList<MigrationInstruction>(); if (migrationBuilder.isMapEqualActivities()) { instructions.addAll(generateInstructions(commandContext, sourceProcessDefinition, targetProcessDefinition, migrationBuilder.isUpdateEventTriggersForGeneratedInstructions())); } instructions.addAll(migrationBuilder.getExplicitMigrationInstructions()); migrationPlan.setInstructions(instructions); validateMigrationPlan(commandContext, migrationPlan, sourceProcessDefinition, targetProcessDefinition); return migrationPlan; } protected ProcessDefinitionEntity getProcessDefinition(CommandContext commandContext, String id, String type) { EnsureUtil.ensureNotNull(BadUserRequestException.class, type + " process definition id", id); try { return commandContext.getProcessEngineConfiguration() .getDeploymentCache().findDeployedProcessDefinitionById(id); } catch (NullValueException e) { throw LOG.processDefinitionDoesNotExist(id, type); } } protected void checkAuthorization(CommandContext commandContext, ProcessDefinitionEntity sourceProcessDefinition, ProcessDefinitionEntity targetProcessDefinition) { for(CommandChecker checker : commandContext.getProcessEngineConfiguration().getCommandCheckers()) { checker.checkCreateMigrationPlan(sourceProcessDefinition, targetProcessDefinition); } } protected List<MigrationInstruction> generateInstructions(CommandContext commandContext, ProcessDefinitionImpl sourceProcessDefinition, ProcessDefinitionImpl targetProcessDefinition, boolean updateEventTriggers) { ProcessEngineConfigurationImpl processEngineConfiguration = commandContext.getProcessEngineConfiguration(); // generate instructions MigrationInstructionGenerator migrationInstructionGenerator = processEngineConfiguration.getMigrationInstructionGenerator(); ValidatingMigrationInstructions generatedInstructions = migrationInstructionGenerator.generate(sourceProcessDefinition, targetProcessDefinition, updateEventTriggers); // filter only valid instructions generatedInstructions.filterWith(processEngineConfiguration.getMigrationInstructionValidators()); return generatedInstructions.asMigrationInstructions(); } protected void validateMigrationPlan(CommandContext commandContext, MigrationPlanImpl migrationPlan, ProcessDefinitionImpl sourceProcessDefinition, ProcessDefinitionImpl targetProcessDefinition) { List<MigrationInstructionValidator> migrationInstructionValidators = commandContext.getProcessEngineConfiguration().getMigrationInstructionValidators(); MigrationPlanValidationReportImpl planReport = new MigrationPlanValidationReportImpl(migrationPlan); 
ValidatingMigrationInstructions validatingMigrationInstructions = wrapMigrationInstructions(migrationPlan, sourceProcessDefinition, targetProcessDefinition, planReport); for (ValidatingMigrationInstruction validatingMigrationInstruction : validatingMigrationInstructions.getInstructions()) { MigrationInstructionValidationReportImpl instructionReport = validateInstruction(validatingMigrationInstruction, validatingMigrationInstructions, migrationInstructionValidators); if (instructionReport.hasFailures()) { planReport.addInstructionReport(instructionReport); } } if (planReport.hasInstructionReports()) { throw LOG.failingMigrationPlanValidation(planReport); } } protected MigrationInstructionValidationReportImpl validateInstruction(ValidatingMigrationInstruction instruction, ValidatingMigrationInstructions instructions, List<MigrationInstructionValidator> migrationInstructionValidators) { MigrationInstructionValidationReportImpl validationReport = new MigrationInstructionValidationReportImpl(instruction.toMigrationInstruction()); for (MigrationInstructionValidator migrationInstructionValidator : migrationInstructionValidators) { migrationInstructionValidator.validate(instruction, instructions, validationReport); } return validationReport; } protected ValidatingMigrationInstructions wrapMigrationInstructions(MigrationPlan migrationPlan, ProcessDefinitionImpl sourceProcessDefinition, ProcessDefinitionImpl targetProcessDefinition, MigrationPlanValidationReportImpl planReport) { ValidatingMigrationInstructions validatingMigrationInstructions = new ValidatingMigrationInstructions(); for (MigrationInstruction migrationInstruction : migrationPlan.getInstructions()) { MigrationInstructionValidationReportImpl instructionReport = new MigrationInstructionValidationReportImpl(migrationInstruction); String sourceActivityId = migrationInstruction.getSourceActivityId(); String targetActivityId = migrationInstruction.getTargetActivityId(); if (sourceActivityId != null && targetActivityId != null) { ActivityImpl sourceActivity = sourceProcessDefinition.findActivity(sourceActivityId); ActivityImpl targetActivity = targetProcessDefinition.findActivity(migrationInstruction.getTargetActivityId()); if (sourceActivity != null && targetActivity != null) { validatingMigrationInstructions.addInstruction(new ValidatingMigrationInstructionImpl(sourceActivity, targetActivity, migrationInstruction.isUpdateEventTrigger())); } else { if (sourceActivity == null) { instructionReport.addFailure("Source activity '" + sourceActivityId + "' does not exist"); } if (targetActivity == null) { instructionReport.addFailure("Target activity '" + targetActivityId + "' does not exist"); } } } else { if (sourceActivityId == null) { instructionReport.addFailure("Source activity id is null"); } if (targetActivityId == null) { instructionReport.addFailure("Target activity id is null"); } } if (instructionReport.hasFailures()) { planReport.addInstructionReport(instructionReport); } } return validatingMigrationInstructions; } }
def EquipmentModels(self): return self._session.query(EquipmentModels).all()
# -*- coding: utf-8 -*- # @Time : 20-6-4 ไธ‹ๅˆ2:13 # @Author : zhuying # @Company : Minivision # @File : utility.py # @Software : PyCharm import pdb from datetime import datetime import os import torch import torch.nn as nn from torch.optim.lr_scheduler import MultiStepLR from collections import Counter def get_time(): return (str(datetime.now())[:-10]).replace(' ', '-').replace(':', '-') def get_kernel(height, width): kernel_size = ((height + 15) // 16, (width + 15) // 16) return kernel_size def get_width_height(patch_info): w_input = int(patch_info.split('-')[-1]) h_input = int(patch_info.split('-')[0].split('_')[-1]) return w_input,h_input def parse_model_name(model_name): info = model_name.split('_')[0:-1] h_input, w_input = info[-2].split('-') model_type = model_name.split('.pth')[0].split('_')[-1] if info[3] == "org": scale = None else: scale = float(info[3]) return int(h_input), int(w_input), model_type, scale def make_if_not_exist(folder_path): if not os.path.exists(folder_path): os.makedirs(folder_path) class EstimatorCV(): def __init__(self, feature_num, class_num): super(EstimatorCV, self).__init__() self.class_num = class_num self.CoVariance = torch.zeros(class_num, feature_num, feature_num).cuda() self.Ave = torch.zeros(class_num, feature_num).cuda() self.Amount = torch.zeros(class_num).cuda() def update_CV(self, features, labels): N = features.size(0) C = self.class_num A = features.size(1) NxCxFeatures = features.view( N, 1, A ).expand( N, C, A ) onehot = torch.zeros(N, C).cuda() onehot.scatter_(1, labels.view(-1, 1), 1) NxCxA_onehot = onehot.view(N, C, 1).expand(N, C, A) features_by_sort = NxCxFeatures.mul(NxCxA_onehot) Amount_CxA = NxCxA_onehot.sum(0) Amount_CxA[Amount_CxA == 0] = 1 ave_CxA = features_by_sort.sum(0) / Amount_CxA var_temp = features_by_sort - \ ave_CxA.expand(N, C, A).mul(NxCxA_onehot) var_temp = torch.bmm( var_temp.permute(1, 2, 0), var_temp.permute(1, 0, 2) ).div(Amount_CxA.view(C, A, 1).expand(C, A, A)) sum_weight_CV = onehot.sum(0).view(C, 1, 1).expand(C, A, A) sum_weight_AV = onehot.sum(0).view(C, 1).expand(C, A) weight_CV = sum_weight_CV.div( sum_weight_CV + self.Amount.view(C, 1, 1).expand(C, A, A) ) weight_CV[weight_CV != weight_CV] = 0 weight_AV = sum_weight_AV.div( sum_weight_AV + self.Amount.view(C, 1).expand(C, A) ) weight_AV[weight_AV != weight_AV] = 0 additional_CV = weight_CV.mul(1 - weight_CV).mul( torch.bmm( (self.Ave - ave_CxA).view(C, A, 1), (self.Ave - ave_CxA).view(C, 1, A) ) ) self.CoVariance = (self.CoVariance.mul(1 - weight_CV) + var_temp .mul(weight_CV)).detach() + additional_CV.detach() self.Ave = (self.Ave.mul(1 - weight_AV) + ave_CxA.mul(weight_AV)).detach() self.Amount += onehot.sum(0) class ISDALoss(nn.Module): def __init__(self, feature_num, class_num): super(ISDALoss, self).__init__() self.estimator = EstimatorCV(feature_num, class_num) self.class_num = class_num self.cross_entropy = nn.CrossEntropyLoss() def isda_aug(self, fc, features, y, labels, cv_matrix, ratio): N = features.size(0) C = self.class_num A = features.size(1) weight_m = list(fc.parameters())[0] NxW_ij = weight_m.expand(N, C, A) NxW_kj = torch.gather(NxW_ij, 1, labels.view(N, 1, 1) .expand(N, C, A)) CV_temp = cv_matrix[labels] # sigma2 = ratio * \ # torch.bmm(torch.bmm(NxW_ij - NxW_kj, # CV_temp).view(N * C, 1, A), # (NxW_ij - NxW_kj).view(N * C, A, 1)).view(N, C) sigma2 = ratio * \ torch.bmm(torch.bmm(NxW_ij - NxW_kj, CV_temp), (NxW_ij - NxW_kj).permute(0, 2, 1)) sigma2 = sigma2.mul(torch.eye(C).cuda() .expand(N, C, C)).sum(2).view(N, C) aug_result = y + 
0.5 * sigma2 return aug_result def forward(self, y, features, fc, target_x, ratio): self.estimator.update_CV(features.detach(), target_x) isda_aug_y = self.isda_aug(fc, features, y, target_x, self.estimator.CoVariance.detach(), ratio) return isda_aug_y class WarmUpMultiStepLR(MultiStepLR): def __init__(self, optimizer, milestones, gamma=0.1, last_epoch=-1, warm_up=None): self.milestones = Counter(milestones) self.gamma = gamma self.iter = 0 self.flag = True if isinstance(warm_up, dict): assert "warmup_iter" in warm_up assert "warmup_ratio" in warm_up self.warmup_ratio = warm_up['warmup_ratio'] self.warmup_iter = warm_up['warmup_iter'] super(WarmUpMultiStepLR, self).__init__(optimizer, milestones, gamma=0.1, last_epoch=-1) def get_lr(self, stride=1): if self.last_epoch in self.milestones and self.flag==True: self.flag = False res = [group['lr'] * self.gamma ** self.milestones[self.last_epoch] for group in self.optimizer.param_groups] return res else: return [group['lr'] for group in self.optimizer.param_groups] def get_warup_lr(self, cur_iter): return [base_lr * self.warmup_ratio + base_lr * \ (cur_iter / self.warmup_iter) * (1 - self.warmup_ratio) for base_lr in self.base_lrs] def step_iter(self, cur_iter): if cur_iter <= self.warmup_iter: values = self.get_warup_lr(cur_iter) else: # pdb.set_trace() values = self.get_lr() for param_group, lr in zip(self.optimizer.param_groups, values): param_group['lr'] = lr self._last_lr = [group['lr'] for group in self.optimizer.param_groups] def step_epoch(self): self.flag = True self.last_epoch += 1
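A minimal sketch of how the ISDALoss and WarmUpMultiStepLR utilities above might be wired into a training step; the tiny backbone, the linear ratio schedule, and every hyper-parameter value here are assumptions for illustration only (a CUDA device is assumed, since EstimatorCV allocates its buffers on the GPU):

import torch
import torch.nn as nn

feature_dim, num_classes = 128, 2
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feature_dim), nn.ReLU()).cuda()
fc = nn.Linear(feature_dim, num_classes).cuda()   # final classifier inspected by ISDA

criterion = ISDALoss(feature_num=feature_dim, class_num=num_classes)
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(fc.parameters()), lr=0.1, momentum=0.9)
scheduler = WarmUpMultiStepLR(
    optimizer, milestones=[30, 60], gamma=0.1,
    warm_up={"warmup_iter": 500, "warmup_ratio": 0.1})

def train_step(images, labels, epoch, cur_iter, total_epochs=100, lambda_0=0.5):
    features = backbone(images.cuda())
    logits = fc(features)
    ratio = lambda_0 * epoch / total_epochs        # assumed linear ISDA ratio schedule
    aug_logits = criterion(logits, features, fc, labels.cuda(), ratio)
    loss = criterion.cross_entropy(aug_logits, labels.cuda())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step_iter(cur_iter)                  # per-iteration step handles the warm-up
    return loss.item()

# scheduler.step_epoch() would then be called once at the end of each epoch.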
#pragma once #include <torch/nn.h> #include "../macros.h" namespace vision { namespace models { // Densenet-BC model class, based on // "Densely Connected Convolutional Networks" // <https://arxiv.org/pdf/1608.06993.pdf> // Args: // num_classes (int) - number of classification classes // growth_rate (int) - how many filters to add each layer (`k` in paper) // block_config (list of 4 ints) - how many layers in each pooling block // num_init_features (int) - the number of filters to learn in the first // convolution layer // bn_size (int) - multiplicative factor for number of bottle neck layers // (i.e. bn_size * k features in the bottleneck layer) // drop_rate (float) - dropout rate after each dense layer struct VISION_API DenseNetImpl : torch::nn::Module { torch::nn::Sequential features{nullptr}; torch::nn::Linear classifier{nullptr}; explicit DenseNetImpl( int64_t num_classes = 1000, int64_t growth_rate = 32, const std::vector<int64_t>& block_config = {6, 12, 24, 16}, int64_t num_init_features = 64, int64_t bn_size = 4, double drop_rate = 0); torch::Tensor forward(torch::Tensor x); }; struct VISION_API DenseNet121Impl : DenseNetImpl { explicit DenseNet121Impl( int64_t num_classes = 1000, int64_t growth_rate = 32, const std::vector<int64_t>& block_config = {6, 12, 24, 16}, int64_t num_init_features = 64, int64_t bn_size = 4, double drop_rate = 0); }; struct VISION_API DenseNet169Impl : DenseNetImpl { explicit DenseNet169Impl( int64_t num_classes = 1000, int64_t growth_rate = 32, const std::vector<int64_t>& block_config = {6, 12, 32, 32}, int64_t num_init_features = 64, int64_t bn_size = 4, double drop_rate = 0); }; struct VISION_API DenseNet201Impl : DenseNetImpl { explicit DenseNet201Impl( int64_t num_classes = 1000, int64_t growth_rate = 32, const std::vector<int64_t>& block_config = {6, 12, 48, 32}, int64_t num_init_features = 64, int64_t bn_size = 4, double drop_rate = 0); }; struct VISION_API DenseNet161Impl : DenseNetImpl { explicit DenseNet161Impl( int64_t num_classes = 1000, int64_t growth_rate = 48, const std::vector<int64_t>& block_config = {6, 12, 36, 24}, int64_t num_init_features = 96, int64_t bn_size = 4, double drop_rate = 0); }; TORCH_MODULE(DenseNet); TORCH_MODULE(DenseNet121); TORCH_MODULE(DenseNet169); TORCH_MODULE(DenseNet201); TORCH_MODULE(DenseNet161); } // namespace models } // namespace vision
package fractal.visaapp; import android.content.Intent; import android.support.v7.app.AlertDialog; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.view.View; import android.widget.Button; import android.widget.EditText; import android.widget.RadioButton; import android.widget.RadioGroup; import android.widget.Toast; import com.android.volley.RequestQueue; import com.android.volley.Response; import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; public class VisaForm extends AppCompatActivity { Intent intent; String empCode; RequestQueue queue; RadioGroup rg; int r; RadioButton rb; String visa_type; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_visa_form); intent = getIntent(); Button bvs=(Button)findViewById(R.id.submit); empCode = intent.getStringExtra("empCode"); rg =(RadioGroup) findViewById(R.id.type); } public void onClickVisaForm(View v) { try { r = rg.getCheckedRadioButtonId(); rb = (RadioButton) findViewById(r); visa_type = rb.getText().toString(); final EditText pno = (EditText) findViewById(R.id.epn); final EditText pi = (EditText) findViewById(R.id.editText); final EditText pe = (EditText) findViewById(R.id.editText2); final EditText coun = (EditText) findViewById(R.id.editText3); final String passport_no = pno.getText().toString(); final String passport_issue = pi.getText().toString(); final String passport_expiry = pe.getText().toString(); final String country = coun.getText().toString(); Response.Listener<String> responseListener = new Response.Listener<String>() { @Override public void onResponse(String response) { try { JSONObject jsonResponse = new JSONObject(response); int success = jsonResponse.getInt("success"); if (success == 1) { Intent intent = new Intent(VisaForm.this, EmployeeHome.class); Toast.makeText(VisaForm.this, "Successfully uploaded visa form.", Toast.LENGTH_LONG).show(); VisaForm.this.startActivity(intent); } else if (success == 2) { AlertDialog.Builder builder = new AlertDialog.Builder(VisaForm.this); builder.setTitle("Error").setMessage("You have already submitted your visa application.") .setNegativeButton("Confirm.", null) .create() .show(); } else { AlertDialog.Builder builder = new AlertDialog.Builder(VisaForm.this); builder.setTitle("Error").setMessage("Upload failed.") .setNegativeButton("Retry.", null) .create() .show(); } } catch (JSONException e) { e.printStackTrace(); } } }; VisaRequest visaReq = new VisaRequest(empCode, country, visa_type, passport_no, passport_issue, passport_expiry, responseListener); queue = RequestSingleton.getInstance(VisaForm.this).getRequestQueue(); RequestSingleton.getInstance(VisaForm.this).addToRequestQueue(visaReq); }catch (NullPointerException e){ AlertDialog.Builder builder = new AlertDialog.Builder(VisaForm.this); builder.setMessage("Fill all fields!!") .setNegativeButton("Retry", null) .create() .show(); } catch (Exception e) { AlertDialog.Builder builder = new AlertDialog.Builder(VisaForm.this); builder.setMessage("OOPS! Something went wrong!") .setNegativeButton("Retry", null) .create() .show(); } } }
Copyright by KRON - All rights reserved (KRON) -- A plan to put $750 million in public funds toward an NFL stadium that could house the Raiders in Las Vegas has cleared a second major vote in the Nevada Legislature, despite opposition to a project partly funded by billionaire Sheldon Adelson and a last-minute revelation about associated infrastructure costs. The Nevada Assembly voted 28-13 to pass a bill that would raise hotel taxes by up to 1.4 percentage points in the Las Vegas area to fund a convention center expansion and build a 65,000-seat domed stadium. The measure needed 28 votes to pass, and Republican leaders who were trying to round up sufficient votes called for a vote Friday morning before lawmakers could have any protracted discussion about the bill. The measure still needs final approval from the Senate because it has minor amendments from the Assembly, but that's expected to be an easy hurdle. Senators already voted 16-5 on Tuesday to pass the original bill. "It's exciting," said Andy Abboud, chief lobbyist for the casino mogul Adelson's Las Vegas Sands, after the surprise vote. "But this is really about jobs, and I think at the end of the day people saw this as a fantastic economic stimulus package." Nine Democrats and four Republicans opposed the bill, which made unlikely allies out of people on the far left and far right of the political spectrum. "I would like to thank Governor Sandoval, the Southern Nevada Tourism Infrastructure Committee, and the members of the Nevada Legislature on this historic day," said Raiders owner Mark Davis. "All parties have worked extremely hard to develop and approve this tremendous stadium project that will serve as a proud new home for the entire Raider Nation." The project was nearly derailed by a state report published late Thursday, which said the Nevada Department of Transportation wants to accelerate nearly $900 million in planned road work to accommodate stadium-related traffic. Lawmakers, who hadn't been warned about the estimate during routine discussions on the project, said they felt blindsided. Transportation officials clarified that the projects were already planned and wouldn't require raising additional revenue. Critics also decried the rushed deal, which is happening in an abbreviated special session rather than the four-month regular session next spring, and complained that the Legislature was applying new tax revenue to a stadium instead of reserving it to alleviate an anticipated state budget shortfall. The public contribution will be larger in raw dollars than for any other NFL stadium, although the public's share of the costs - 39 percent - is smaller than for stadiums in cities of a similar size, such as Indianapolis, Cleveland and Cincinnati. Critics pointed out that some outside economists, including Stanford professor and sports economist Roger Noll, have panned the deal as a boondoggle based on outlandish financial expectations. Defenders of the stadium say Las Vegas' outsized tourism economy, with 150,000 hotel rooms and 42 million visitors each year, is different than other markets that are more dependent on locals. "If we take the visitor component out of our economic impact model, it is negative," said economist Jeremy Aguero, who helped develop the deal. "I do not disagree with the analyses that have been done ... It's inappropriately applied here." Proponents project 451,000 new visitors will come to Las Vegas as a result of the stadium, ushering in $620 million in economic impact. 
That's based on the stadium hosting 46 events, including 10 NFL games, 6 UNLV football games and a variety of concerts, sports, and other events. Laborers and veterans testified that they needed the estimated 25,000 construction jobs the project will bring after the industry was devastated in the recession. The stadium is expected to bring 14,000 permanent jobs to the Las Vegas area. The total deal also sends $420 million for convention center improvements aimed at keeping Las Vegas' lucrative convention industry competitive. The hotel bill for an average night at a Las Vegas Strip hotel would go up about $1.50 as a result. NFL owners would still need to vote by a three-fourths majority to allow the Raiders to move from Oakland to Las Vegas. Oakland Mayor Libby Schaaf stresses she will continue the fight to keep the Raiders in Oakland, but only within reasonable means, explaining that plans must be made responsibly so as not to risk city funds. "And it does not distract our focus on doing our job which is to develop a responsible, viable option for the Raiders to also consider staying in Oakland," Schaaf said. "At the end of the day, this decision belongs to the NFL owners."
import asyncLoader from '@app/util/asyncLoader'; import Docker from 'dockerode'; import colors from 'colors'; export abstract class BaseDockerAction { abstract cmd: string; // abstract handle(): Promise<any>; docker!: Docker; asyncLoader = asyncLoader; colors = colors; constructor() {} init(docker: Docker) { this.docker = docker; } }
// NewListFindingsByIssuePayload builds a issues service List findings by issue // endpoint payload. func NewListFindingsByIssuePayload(id string, status *string, sortBy *string, page *int, size *int, identifiers *string, labels *string) *issues.ListFindingsByIssuePayload { v := &issues.ListFindingsByIssuePayload{} v.ID = &id v.Status = status v.SortBy = sortBy v.Page = page v.Size = size v.Identifiers = identifiers v.Labels = labels return v }
import * as React from 'react'; import FxInput from '../../FxInput'; export class FxInputPlayground extends React.Component<{}, { value: string; auto: boolean }> { constructor(props: {}) { super(props); this.state = { auto: true, value: 'auto', }; } public render(): JSX.Element { return ( <FxInput auto={this.state.auto} type={'text'} value={this.state.value} onRestore={this.handleRestore} onChange={this.handleChange} width={150} /> ); } private handleChange = (_: any, value: string) => { this.setState({ value, auto: false }); }; private handleRestore = () => { this.setState({ value: 'auto', auto: true, }); }; }
Fear & anxiety in the time of COVID-19: How they influence behavior The emotional factors that influence adherence to public health guidelines for containing the spread of COVID-19 are poorly understood and are limiting policymakers' ability to elicit compliance. In this article, we report the results of a nationwide survey conducted in April 2020 to gain insight into the relation between emotional stress and adherence to the public health guidelines of the U.S. Centers for Disease Control and Prevention (CDC). We found that levels of anxiety and perceived risk from COVID-19 correlated with adherence to the CDC's recommended cleanliness behaviors, such as handwashing. High anxiety increased individuals' adherence in part by increasing the perceived seriousness of the risk COVID-19 posed to them. Anxiety and perceived risk were not, however, associated with adherence to social distancing guidelines. Our findings highlight a need for more research into the emotional factors that predict public compliance with the CDC's recommendations. The results also indicate that policymakers may need to deliver different messages to promote different COVID-limiting behaviors, such as handwashing and social distancing. © 2020, Brookings Institution Press. All rights reserved.
/** Get a range of auto increment values. Can only be used if the auto increment field is the first field in an index. This method is called by update_auto_increment which in turn is called by the individual handlers as part of write_row. We use the part_share->next_auto_inc_val, or search all partitions for the highest auto_increment_value if not initialized or if auto_increment field is a secondary part of a key, we must search every partition when holding a mutex to be sure of correctness. @param[in] increment Increment value. @param[in] nb_desired_values Number of desired values. @param[out] first_value First auto inc value reserved or MAX if failure. @param[out] nb_reserved_values Number of values reserved. */ void Partition_helper ::get_auto_increment_first_field(ulonglong increment, ulonglong nb_desired_values, ulonglong *first_value, ulonglong *nb_reserved_values) { THD *thd= get_thd(); DBUG_ENTER("Partition_helper::get_auto_increment_first_field"); DBUG_PRINT("info", ("inc: %lu desired_values: %lu first_value: %lu", (ulong) increment, (ulong) nb_desired_values, (ulong) *first_value)); DBUG_ASSERT(increment && nb_desired_values); DBUG_ASSERT(m_table->s->next_number_keypart == 0); *first_value= 0; lock_auto_increment(); if (!m_part_share->auto_inc_initialized) { initialize_auto_increment(false); } int binlog_format= thd_binlog_format(thd); if (!m_auto_increment_safe_stmt_log_lock && thd->lex->sql_command != SQLCOM_INSERT && binlog_format != BINLOG_FORMAT_UNSPEC && binlog_format != BINLOG_FORMAT_ROW) { DBUG_PRINT("info", ("locking auto_increment_safe_stmt_log_lock")); m_auto_increment_safe_stmt_log_lock= true; } *first_value= m_part_share->next_auto_inc_val; m_part_share->next_auto_inc_val+= nb_desired_values * increment; if (m_part_share->next_auto_inc_val < *first_value) { m_part_share->next_auto_inc_val= ULLONG_MAX; } unlock_auto_increment(); DBUG_PRINT("info", ("*first_value: %lu", (ulong) *first_value)); *nb_reserved_values= nb_desired_values; DBUG_VOID_RETURN; }
Last Thursday there was a rally outside the U.S. Capitol to protest pending health care legislation, featuring the kinds of things we've grown accustomed to, including large signs showing piles of bodies at Dachau with the caption "National Socialist Healthcare." It was grotesque — and it was also ominous. For what we may be seeing is America starting to be Californiafied. The key thing to understand about that rally is that it wasn't a fringe event. It was sponsored by the House Republican leadership — in fact, it was officially billed as a G.O.P. press conference. Senior lawmakers were in attendance, and apparently had no problem with the tone of the proceedings. True, Eric Cantor, the second-ranking House Republican, offered some mild criticism after the fact. But the operative word is "mild." The signs were "inappropriate," said his spokesman, and the use of Hitler comparisons by such people as Rush Limbaugh, said Mr. Cantor, "conjures up images that frankly are not, I think, very helpful." What all this shows is that the G.O.P. has been taken over by the people it used to exploit. The state of mind visible at recent right-wing demonstrations is nothing new. Back in 1964 the historian Richard Hofstadter published an essay titled "The Paranoid Style in American Politics," which reads as if it were based on today's headlines: Americans on the far right, he wrote, feel that "America has been largely taken away from them and their kind, though they are determined to try to repossess it and to prevent the final destructive act of subversion." Sound familiar? But while the paranoid style isn't new, its role within the G.O.P. is.
/// This function replaces all the matches in the provided text. fn replace_match(&self, text: &mut String, matching_mode: &MatchingMode) { match matching_mode { MatchingMode::Regex(regex) => { if regex.is_match(text) { *text = regex.replace_all(text, &*self.replace_text).to_string(); } } MatchingMode::Pattern => { let mut index = 0; while let Some(start) = text.find(&self.pattern) { // Advance the index so we don't get trapped in an infinite loop... again. if start >= index { let end = start + self.pattern.len(); text.replace_range(start..end, &self.replace_text); index = end; } else { break; } } } } }
/** * An example POJO that represents cloud event data * * @author Oleg Zhurakousky * */ public class SpringReleaseEvent { @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "dd-MM-yyyy") private Date releaseDate; private String releaseName; private String version; public Date getReleaseDate() { return releaseDate; } public void setReleaseDate(Date releaseDate) { this.releaseDate = releaseDate; } public void setReleaseDateAsString(String releaseDate) { try { this.releaseDate = new SimpleDateFormat("dd-MM-yyyy").parse(releaseDate); } catch (ParseException e) { throw new IllegalArgumentException(e); } } public String getReleaseName() { return releaseName; } public void setReleaseName(String releaseName) { this.releaseName = releaseName; } public String getVersion() { return version; } public void setVersion(String version) { this.version = version; } @Override public String toString() { return "releaseDate:" + new SimpleDateFormat("dd-MM-yyyy").format(releaseDate) + "; releaseName:" + releaseName + "; version:" + version; } }
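The @JsonFormat annotation on releaseDate implies Jackson-based JSON binding. Below is a minimal, hedged sketch of serializing and deserializing this POJO with Jackson's ObjectMapper; the sample values and the wrapper class name are made up for illustration, and the imports the POJO itself needs (java.util.Date, java.text.SimpleDateFormat, java.text.ParseException, com.fasterxml.jackson.annotation.JsonFormat) are assumed to be present in the original file.

import com.fasterxml.jackson.databind.ObjectMapper;

public class SpringReleaseEventExample {
    public static void main(String[] args) throws Exception {
        SpringReleaseEvent event = new SpringReleaseEvent();
        event.setReleaseDateAsString("24-03-2004"); // dd-MM-yyyy, matching the POJO's pattern (sample value)
        event.setReleaseName("Spring Framework");
        event.setVersion("1.0");

        ObjectMapper mapper = new ObjectMapper();
        // @JsonFormat(pattern = "dd-MM-yyyy") controls how releaseDate is rendered as a string.
        String json = mapper.writeValueAsString(event);
        System.out.println(json);

        // The same pattern is applied when reading the value back.
        SpringReleaseEvent copy = mapper.readValue(json, SpringReleaseEvent.class);
        System.out.println(copy);
    }
}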
/** This is a utility class to fetch error messages encountered while parsing COBOL file. */ public class ErrorMessageHelper { private final MessageService messageService; static final String PERFORM_MISSING_END = "ErrorStrategy.performMissingEnd"; static final String REPORT_UNWANTED_TOKEN = "ErrorStrategy.reportUnwantedToken"; static final String END_OF_FILE_MESSAGE = "ErrorStrategy.endOfFile"; static final String REPORT_INPUT_MISMATCH = "ErrorStrategy.reportInputMismatch"; private static final String MSG_DELIMITER = ", "; private static final String MSG_PREFIX = "{"; private static final String MSG_SUFFIX = "}"; private static final Map<Class<? extends Parser>, Set<String>> IDENTIFIER_TOKENS = IdentifierReplacing.retrieveTokenToRemove(); ErrorMessageHelper(MessageService messageService) { this.messageService = messageService; } private static final Map<String, String> SPECIAL_TOKEN_MAPPING = SpecialTokenReplacing.loadSpecialTokenMapping(); /** * Returns an input mismatch error message for a {@link InputMismatchException} * * @param recognizer parser reference * @param e {@link InputMismatchException} * @param token token * @param offendingTokens offending token string * @return error message string */ public String getInputMismatchMessage( Parser recognizer, InputMismatchException e, Token token, String offendingTokens) { return token.getType() == EOF ? messageService.getMessage(END_OF_FILE_MESSAGE) : messageService.getMessage( REPORT_INPUT_MISMATCH, offendingTokens, getExpectedText(recognizer, e)); } /** * Returns a message in case unwanted token found while parsing. * * @param recognizer Parser reference * @param currentToken current token * @return error message string */ public String getUnwantedTokenMessage(Parser recognizer, Token currentToken) { return currentToken.getType() == EOF ? messageService.getMessage(END_OF_FILE_MESSAGE) : createMessage(recognizer, currentToken); } /** * Returns an expected text, in case {@link InputMismatchException} is encountered while parsing. * * @param recognizer Parser ref * @return an expected text */ public String getExpectedText(Parser recognizer) { return getExpectedText(recognizer, recognizer.getExpectedTokens()); } /** * Returns the last invocation rule while parsing. * * @param recognizer parser ref * @return last invocation rule */ public static String getRule(Parser recognizer) { return recognizer.getRuleInvocationStack().get(0); } /** * Returns input string which resulted in {@link NoViableAltException} * * @param recognizer parser ref * @param e {@link NoViableAltException} * @return input string */ public String retrieveInputForNoViableException(Parser recognizer, NoViableAltException e) { return Optional.ofNullable(recognizer.getInputStream()) .map(it -> it.getText(e.getStartToken(), e.getOffendingToken())) .orElse("<unknown input>"); } private String getExpectedText(Parser recognizer, InputMismatchException e) { return getExpectedText(recognizer, e.getExpectedTokens()); } private String getExpectedText(Parser recognizer, IntervalSet interval) { final String newMessage = buildErrorMessage( removeIdentifierTokens(recognizer, collectErrorTokens(recognizer, interval))); return interval.size() > 1 ? String.format("{%s}", newMessage) : newMessage; } private String createMessage(Parser recognizer, Token t) { String tokenName = SPECIAL_TOKEN_MAPPING.getOrDefault(t.getText(), t.getText()); return recognizer.getContext().getRuleIndex() == CobolParser.RULE_performInlineStatement ? 
messageService.getMessage(PERFORM_MISSING_END, tokenName) : messageService.getMessage(REPORT_UNWANTED_TOKEN, tokenName, getExpectedText(recognizer)); } private String buildErrorMessage(List<String> tokens) { return tokens.stream() .map(it -> SPECIAL_TOKEN_MAPPING.getOrDefault(it, it)) .filter(it -> !it.isEmpty()) .map(it -> it.replace("_", "-")) .collect(joining(MSG_DELIMITER)); } private List<String> removeIdentifierTokens(Parser recognizer, List<String> tokens) { final Set<String> identifierTokens = IDENTIFIER_TOKENS.getOrDefault(recognizer.getClass(), ImmutableSet.of()); if (tokens.containsAll(identifierTokens)) tokens.removeAll(identifierTokens); return tokens; } private List<String> collectErrorTokens(Parser recognizer, IntervalSet interval) { return Arrays.stream( interval .toString(recognizer.getVocabulary()) .replace(MSG_PREFIX, "") .replace(MSG_SUFFIX, "") .split(MSG_DELIMITER)) .collect(toList()); } }
/** * Ensures the reconnect dialog does not pop up some time from now. */ private void stopDialogTimer() { if (dialogShowTimer.isRunning()) { dialogShowTimer.cancel(); } }
/* Given an IRQ name, return its index in the irq table */ int dsp56k_get_irq_index_by_tag(const char* tag) { int i; for (i = 0; i < 32; i++) { if (strcmp(tag, dsp56k_interrupt_sources[i].irq_source) == 0) { return i; } } fatalerror("DSP56K ERROR : IRQ TAG specified incorrectly (dsp56k_get_irq_index_by_tag) : %s.\n", tag); return -1; }
all workers on campus be paid at least $15/hour. Wednesday was a momentous day for Seattle's workers as the first phase of the minimum wage increase was enacted. While workers city-wide celebrated the increase and Seattle again took the national spotlight, UAW 4121 continued our efforts to ensure that every worker city-wide experiences the increase. On campus we took action with other workers and students to demand UW fully comply with the law. We then – amidst cheers and shouts from the gathered crowd – marched into bargaining to present our demand that all workers on campus be paid at least $15/hour. The University stated in bargaining that it is currently willing to raise the minimum hourly wage for ASEs to $11.00; however, they did not commit to raise the minimum for all hourly student workers. We also proposed an increase to the graduate base rate to win the salary competition among the Global Challenge Institutions, as well as commensurate percentage increases for variable rates and hourly rates above the minimum. In addition, we proposed a full waiver of campus-wide fees for all ASEs. We emphasized that these proposals were designed to ease cost-of-living increases and ensure that UW would be better positioned to recruit the best and brightest to come here. Finally, we presented our demand to improve the health plan, including benefit improvements and an overhaul to the administration of the plan that makes it more transparent to members and ensures the Union can play a more central role as advocates. We impressed upon the University the urgent need for responses on all of our proposals as the contract expires in 30 days. We let the University know that the pressure will not let up until our contract is settled fairly.
# volcalenums/formazione.py (repo: volsocccal/Jerry)
from enum import Enum class FormazioneType(str, Enum): AUTISTA_ANPAS = 'AUTISTA ANPAS' ELI = 'ELI 10 ECG' SEDIA_C = 'SEDIA CINGOLATA' SEDIA_M = 'SEDIA MOTORIZZATA' TRASPORTO_SICUREZZA_PAZIENTE = 'TRASPORTO E SICUREZZA PAZIENTE' TRUCCABIMBI = 'TRUCCABIMBI' TRUCCATORE_BASE = 'TRUCCATORE BASE ANPAS' TRUCCATORE_AVANZATO = 'TRUCCATORE AVANZATO ANPAS'
use super::linked_list::*; fn get_new_linked_list_with_values<T: Copy>(vec: &Vec<T>) -> LinkedList<T> { let mut linked_list = LinkedList::<T>::new(); for element in vec.iter() { linked_list.append(*element); } return linked_list; } #[test] fn iter_mut_test() { let test_vector = vec![1, 2, 3, 4, 5]; let mut linked_list = get_new_linked_list_with_values(&test_vector); let mut iter_mut = linked_list.iter_mut(); for element in test_vector.iter() { let next = iter_mut.next().unwrap(); *next += 1; assert_eq!(*element + 1, *next); } } #[test] fn iter_test() { let test_vector = vec![1, 2, 3, 4, 5]; let linked_list = get_new_linked_list_with_values(&test_vector); let mut iter = linked_list.iter(); for element in test_vector.iter() { let next = iter.next().unwrap(); assert_eq!(*element, *next); } } #[test] fn into_iter_test() { let test_vector = vec![1, 2, 3, 4, 5]; let linked_list = get_new_linked_list_with_values(&test_vector); let mut into_iter = linked_list.into_iter(); for element in test_vector.iter() { let next = into_iter.next().unwrap(); assert_eq!(*element, next); } } #[test] fn fmt_display_test() { let test_vector = vec![1, 2, 3, 4, 5]; let linked_list = get_new_linked_list_with_values(&test_vector); assert_eq!( format!("{}", linked_list), "HEAD -> 1 -> 2 -> 3 -> 4 -> 5 -> None" ); } #[test] fn delete_at_posn_test() { let test_vector = vec![1, 2, 3, 4, 5]; let mut linked_list = get_new_linked_list_with_values(&test_vector); linked_list .delete_at_posn(2) .expect("Error while deleting from 2nd posn"); assert_eq!( format!("{}", linked_list), "HEAD -> 1 -> 2 -> 4 -> 5 -> None" ); linked_list .delete_at_posn(2) .expect("Error while deleting from 2nd posn"); assert_eq!(format!("{}", linked_list), "HEAD -> 1 -> 2 -> 5 -> None"); if let Ok(_) = linked_list.delete_at_posn(3) { panic!("Should have caused error as we are trying to delete at a posn greater than length"); } assert_eq!(linked_list.length, 3); } #[test] fn delete_where_test() { let test_vector = vec![1, 2, 3, 4, 5]; let mut linked_list = get_new_linked_list_with_values(&test_vector); let mut counter = 3; linked_list .delete_where(move |_element| { counter -= 1; counter > -1 }) .expect("Unexpected error"); assert_eq!(format!("{}", linked_list), "HEAD -> 4 -> 5 -> None"); } #[test] fn reverse_test() { let test_vector = vec![1, 2, 3, 4, 5]; let linked_list = get_new_linked_list_with_values(&test_vector); let reversed_linked_list = linked_list.reverse(); assert_eq!( format!("{}", reversed_linked_list), "HEAD -> 5 -> 4 -> 3 -> 2 -> 1 -> None" ); }
/** * A sum of two real numbers. * * <p />Instances of <code>Sum</code> must be obtained by using one of the * <code>createInstance()</code> factory methods. * * @version $Revision: 1.7 $ $Date: 2002/08/16 21:54:40 $ * @author Ernst de Haan (<a href="mailto:[email protected]">[email protected]</a>) */ public class Sum extends CompositeNumber { //------------------------------------------------------------------------- // Class functions //------------------------------------------------------------------------- /** * Returns a <code>Sum</code> with the specified operands. * * @param a * the first operand, not <code>null</code>. * * @param b * the second operand, not <code>null</code>. * * @return * the <code>Sum</code> instance, possibly newly constructed. * * @throws IllegalArgumentException * if <code>a == null || b == null</code>. * * @throws CanNotCompareException * if the sign of this sum cannot be determined because the 2 arguments * cannot be compared. */ public static Sum createInstance(RealNumber a, RealNumber b) throws IllegalArgumentException, CanNotCompareException { return new Sum(a, b); } /** * Computes the sign of a sum with the specified operands. * * @param a * the first operand, not <code>null</code>. * * @param b * the second operand, not <code>null</code>. * * @return * the sign for a sum with the specified operands, either -1 if the * number is smaller than zero, 0 if the number is zero, or 1 if the * number is greater than 0. * * @throws IllegalArgumentException * if <code>a == null || b == null</code>. * * @throws CanNotCompareException * if a comparison was necessary but failed. */ private static int determineSign(RealNumber a, RealNumber b) throws IllegalArgumentException, CanNotCompareException { // Check preconditions MandatoryArgumentChecker.check("a", a, "b", b); int signA = a.getSign(); int signB = b.getSign(); if (signA != signB) { if (signA==-1 && signB==1) { return a.compareTo(b.negate()); } else if (signA==1 && signB==-1) { return b.compareTo(a.negate()); } else if (signA==0) { return signB; } else { // if (signB==0) return signA; } } // implicit else (signA==signB) return signA; } /** * Creates a textual presentation of this number. This method is used by * the constructor. * * @param a * the first operand for this sum, not <code>null</code>. * * @param b * the second operand for this sum, not <code>null</code>. * * @return * a textual presentation for this sum, never <code>null</code>. * * @throws IllegalArgumentException * if <code>a == null || b == null</code>. */ private final static String createString(RealNumber a, RealNumber b) throws IllegalArgumentException { // Check preconditions MandatoryArgumentChecker.check("a", a, "b", b); StringBuffer buffer = new StringBuffer(512); if (a instanceof CompositeNumber) { buffer.append('('); buffer.append(a.toString()); buffer.append(")+"); } else { buffer.append(a.toString()); buffer.append('+'); } if (b instanceof CompositeNumber) { buffer.append('('); buffer.append(b.toString()); buffer.append(')'); } else { buffer.append(b.toString()); } return buffer.toString(); } //------------------------------------------------------------------------- // Constructor //------------------------------------------------------------------------- /** * Constructs a <code>Sum</code> based on the 2 specified operands. * * @param a * the first operand for the sum, not <code>null</code>. * * @param b * the second operand for the sum, not <code>null</code>. 
* * @throws IllegalArgumentException * if <code>a == null || b == null</code>. * * @throws CanNotCompareException * if the sign of this sum cannot be determined because the 2 arguments * cannot be compared. */ protected Sum(RealNumber a, RealNumber b) throws IllegalArgumentException, CanNotCompareException { // Call the CompositeNumber constructor super(determineSign(a, b), createString(a, b)); // Store the arguments _elements = new RealNumber[2]; _elements[0] = a; _elements[1] = b; } // XXX: add this method ? /* public static Sum createInstance(RealNumber[] operands) throws IllegalArgumentException { ExceptionSupport.checkOperandsNotNull(operands); ExceptionSupport.checkOperandsLengthAtLeast2(operands); if (operands.length == 2) { return createInstance(operands[0], operands[1]); } RealNumber[] lastOperands = new RealNumber[operands.length - 1]; System.arraycopy(lastOperands, 1, operands, 0, operands.length - 1); return createInstance(operands[0], createInstance(lastOperands)); } */ //--------------------------- //------------------------------------------------------------------------- // Fields //------------------------------------------------------------------------- /** * The operands for this sum. This field is never <code>null</code> and it * should never contain any <code>null</code> elements. It is initialized * by the constructor. After that the contents should never change anymore. */ private final RealNumber[] _elements; //------------------------------------------------------------------------- // Methods //------------------------------------------------------------------------- protected int compareToImpl(RealNumber n) throws IllegalArgumentException, CanNotCompareException { int thatSign = n.getSign(); int thisSign = getSign(); if (thisSign > thatSign) { return 1; } else if (thisSign < thatSign) { return -1; } else if ((thisSign == 0) && (thatSign == 0)) { return 0; } //------------------------------ // TODO: How do we do this ?! //------------------------------ throw new CanNotCompareException(n, this); } /** * Converts the value of this number to a <code>BigDecimal</code> with the * specified precision and rounding mode. * * @param precision the number of digits behind the decimal point. * @param roundingMode the rounding mode to use, one of the modes defined * in class <code>BigDecimal</code>. * @return a <code>BigDecimal</code> with the rounded value of this. * @throws IllegalArgumentException if <em>precision</em>&lt;0 or * the rounding mode is not one of the valid rounding modes defined in * class <code>BigDecimal</code>. */ public BigDecimal toBigDecimal(int precision, int roundingMode) throws IllegalArgumentException { BigDecimal a = _elements[0].toBigDecimal(precision+1, BigDecimal.ROUND_HALF_UP); BigDecimal b = _elements[1].toBigDecimal(precision+1, BigDecimal.ROUND_HALF_UP); // compute BigDecimal sum BigDecimal result = a.add(b); // return correct precision return result.setScale(precision, roundingMode); } /** * Rounds to an integer number towards 0. * * @return this number truncated to an integer. */ public IntegerNumber trunc() { // XXX: Look into this. BigDecimal bigDecimal = toBigDecimal(0, BigDecimal.ROUND_FLOOR); return NumberCentral.valueOf(bigDecimal).trunc(); } public RealNumber[] getElements() { return (RealNumber[]) _elements.clone(); } public int getElementCount() { return _elements.length; } public RealNumber getElement(int n) throws IndexOutOfBoundsException { return _elements[n]; } }
/** * El programa cliente que puede invocar de manera remota los servicios de los * servidores desde el Broker. * */ public class ClientC { private static String brokerHostName; public static void main(String[] args) { System.setProperty("java.security.policy", "java.policy"); if (System.getSecurityManager() == null) { System.setSecurityManager(new SecurityManager()); } brokerHostName = args[0]; try { Broker broker = (Broker) Naming.lookup("//" + brokerHostName + "/Broker"); Boolean fin=false; while(!fin){//En bucle, muestra los servicios por pantalla y le da a elegir 1 al usuario Servicios servicios = broker.lista_servicios(); ArrayList<String> lista_servicios=servicios.obtener_nombres_servicios(); System.out.println("Esribe el nรบmero del servicio que quieres ejecutar.\n"+ "Escribe \"fin\" para salir.\n"+ "Escribe \"r\" para actualizar el listado de servicios disponibles\n"); for (int i = 0; i < lista_servicios.size(); i++) { System.out.println(i+" "+lista_servicios.get(i)); } Scanner scanner = new Scanner(System.in); String input = scanner.nextLine(); if(input.equals("fin")){//Si escribe fin, acaba el bucle fin=true; }else if(input.equals("r")){ broker = (Broker) Naming.lookup("//" + brokerHostName + "/Broker"); servicios = broker.lista_servicios(); lista_servicios=servicios.obtener_nombres_servicios(); }else{ int seleccion = Integer.parseInt(input.trim()); if(seleccion>=lista_servicios.size()){ System.out.println("opciรณn no vรกlida"); }else{ Servicio servicio= servicios.obtener_servicio(lista_servicios.get(seleccion)); Class partypes[]=servicio.getPartypes(); Vector parametros=new Vector(); Boolean parametrosCorrectos=true; //Leemos los parรกmetros por pantalla for(int i = 0; i < partypes.length&&parametrosCorrectos; i++){ System.out.printf("Parametro "+i+" tipo "+partypes[i].getSimpleName()+":"); switch(partypes[i].getName()){ case "java.lang.String": parametros.add(scanner.nextLine()); break; case "java.lang.Integer": try{ parametros.add(scanner.nextInt()); }catch(Exception e){ System.out.println("Error al leer el parametro de teclado"); parametrosCorrectos=false; } break; case "java.lang.Boolean": parametros.add(Boolean.parseBoolean(scanner.nextLine())); break; default: System.out.println("Los parรกmetros del tipo "+partypes[i]+" no son admitidos en este cliente"); parametrosCorrectos=false; } } if(parametrosCorrectos){ try{ Respuesta respuesta=broker.ejecutar_servicio(servicio.getNombre(),parametros); if(servicio.getTipoRetorno()!=null){ System.out.println("\nRespuesta:\n"+respuesta+'\n'); }else{ System.out.println("Servicio realizado"); } }catch(Exception ex){ ex.printStackTrace(); } }else{ System.out.println("No se ha podido ejecutar el servicio"+servicio.getNombre()); } } } } /*Respuesta respuesta = broker.ejecutar_servicio(DAR_FECHA, new Vector()); System.out.println(respuesta); respuesta = broker.ejecutar_servicio(DAR_HORA, new Vector()); System.out.println(respuesta); Vector argmts = new Vector(); String newname = "New collection"; argmts.add(newname); respuesta = broker.ejecutar_servicio(SET_NAME_OF_COLLECTION, argmts); System.out.println(respuesta); respuesta = broker.ejecutar_servicio(GET_NAME_OF_COLLECTION, new Vector()); System.out.println(respuesta);*/ } catch (Exception ex) { System.out.println(ex); } } }
/** * Reads in a JSON object and tries to create a LatLng in one of the following formats. * * <pre>{ * "lat" : -33.8353684, * "lng" : 140.8527069 * } * * { * "latitude": -33.865257570508334, * "longitude": 151.19287000481452 * }</pre> */ @Override public LatLng read(JsonReader reader) throws IOException { if (reader.peek() == JsonToken.NULL) { reader.nextNull(); return null; } double lat = 0; double lng = 0; boolean hasLat = false; boolean hasLng = false; reader.beginObject(); while (reader.hasNext()) { String name = reader.nextName(); if ("lat".equals(name) || "latitude".equals(name)) { lat = reader.nextDouble(); hasLat = true; } else if ("lng".equals(name) || "longitude".equals(name)) { lng = reader.nextDouble(); hasLng = true; } else { /* Skip the value of any other key so the reader stays positioned correctly. */ reader.skipValue(); } } reader.endObject(); if (hasLat && hasLng) { return new LatLng(lat, lng); } else { return null; } }
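A short, hedged sketch of how this adapter might be wired into Gson. It assumes the enclosing adapter class is named LatLngAdapter (the snippet above only shows its read method) and that LatLng is a value type with a (lat, lng) constructor, such as the one in the Google Maps Services Java client; GsonBuilder.registerTypeAdapter is standard Gson API.

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class LatLngAdapterExample {
    public static void main(String[] args) {
        // LatLngAdapter is the assumed name of the class containing read() above.
        Gson gson = new GsonBuilder()
                .registerTypeAdapter(LatLng.class, new LatLngAdapter())
                .create();

        // Both field spellings are accepted by the adapter.
        LatLng a = gson.fromJson("{\"lat\": -33.8353684, \"lng\": 140.8527069}", LatLng.class);
        LatLng b = gson.fromJson("{\"latitude\": -33.8353684, \"longitude\": 140.8527069}", LatLng.class);
        System.out.println(a + " / " + b);
    }
}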
#include<iostream> using namespace std; int gcd( int a, int b ){ if( b == 0 ) return a; return gcd(b, a%b); } int main(){ int a, b,c,d; cin >> a >> b >> c >> d; int aux = gcd(a*d,c*b); if((c*b - a*d) >= 0) { cout << (c*b - a*d)/aux << "/" << (c*b)/aux; return 0; } cout << (a*d - c*b)/aux << "/" << (a*d)/aux; return 0 ; }
// pubg/PrivateHeaders/QQApiNewsObject.h // // Generated by class-dump 3.5 (64 bit). // // class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2013 by <NAME>. // #import "QQApiURLObject.h" @interface QQApiNewsObject : QQApiURLObject { } + (id)objectWithURL:(id)arg1 title:(id)arg2 description:(id)arg3 previewImageURL:(id)arg4; // IMP=0x0000000100b32a00 + (id)objectWithURL:(id)arg1 title:(id)arg2 description:(id)arg3 previewImageData:(id)arg4; // IMP=0x0000000100b329c4 @end
/** * Sprite sheet for rpg maker charsets for various versions. * Because rpg maker puts usually 8 character in one character sprite set * we cut the animations in the following way: <pre>[charIndex]_[down/left/right/up]</pre>. * @author Markus Schr&ouml;der */ public class RpgMakerCharSpriteSheet extends SpriteSheet { public enum Version { V2000, V2003, XP, VX, VXAce, MV //4x2 sprite wtih walk animation } //standard in rpg maker private static final List<String> walks = Arrays.asList( "down", "left", "right", "up" ); //TODO maybe change image if background is a color and not transparent public RpgMakerCharSpriteSheet(Version version, Image image) { super(image); cut(version); } private void cut(Version version) { if(version == Version.MV) { //subsheet size Dimension subsheet = getSubsheetSize(version); Dimension character = getCharacterSize(version); int charIndex = 0; int walkIndex = 0; //subsheet for(int i = 0; i < 2; i++) { for(int j = 0; j < 4; j++) { //walk for(int y = 0; y < 4; y++) { List<Rectangle> frames = animatedFrames.computeIfAbsent( charIndex + "_" + walks.get(walkIndex), s -> new ArrayList<>() ); for(int x = 0; x < 3; x++) { int xx = j*subsheet.width + x*character.width; int yy = i*subsheet.height + y*character.height; Rectangle rect = new Rectangle( xx, yy, character.width, character.height ); frames.add(rect); } walkIndex++; } charIndex++; walkIndex = 0; } } } else { throw new RuntimeException(version + " not supported yet"); } } /** * Based on rpg maker version the default image size. * @param version * @return */ public static Dimension getImageSize(Version version) { if(version == Version.MV) { return new Dimension(576, 384); } throw new RuntimeException(version + " not supported yet"); } /** * Based on rpg maker version the subimage size because they put usually * 8 characters in one image. * @param version * @return */ public static Dimension getSubsheetSize(Version version) { if(version == Version.MV) { Dimension imageSize = getImageSize(version); return new Dimension(imageSize.width / 4, imageSize.height / 2); } throw new RuntimeException(version + " not supported yet"); } /** * Based on rpg maker version the default character size. * @param version * @return */ public static Dimension getCharacterSize(Version version) { if(version == Version.MV) { Dimension imageSize = getImageSize(version); return new Dimension(imageSize.width / 12, imageSize.height / 8); } throw new RuntimeException(version + " not supported yet"); } }
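A small usage sketch, assuming the sheet is loaded with javax.imageio and that the SpriteSheet base class keeps the animatedFrames map populated by cut(); the file path and the frame-lookup accessor are hypothetical and only illustrate the "[charIndex]_[direction]" key convention described in the class comment.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class CharSpriteSheetExample {
    public static void main(String[] args) throws Exception {
        // A 576x384 RPG Maker MV charset image (path is made up for the example).
        BufferedImage sheet = ImageIO.read(new File("Actor1.png"));
        RpgMakerCharSpriteSheet sprites =
                new RpgMakerCharSpriteSheet(RpgMakerCharSpriteSheet.Version.MV, sheet);
        // Frames are keyed as "[charIndex]_[down/left/right/up]", e.g. the three walk
        // frames of the first character facing down would live under "0_down".
        // How they are read back depends on SpriteSheet's accessors (not shown), e.g.:
        // List<Rectangle> frames = sprites.getAnimatedFrames().get("0_down"); // hypothetical accessor
    }
}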
/// /// Conversion of raw data to digits. /// void AliEMCALRawUtils::Raw2Digits(AliRawReader* reader,TClonesArray *digitsArr, const AliCaloCalibPedestal* pedbadmap, TClonesArray *digitsTRG, TClonesArray *trgData) { if ( digitsArr) digitsArr->Clear("C"); if (!digitsArr) { Error("Raw2Digits", "no digits found !") ; return ; } if (!reader) { Error("Raw2Digits", "no raw reader found !"); return ; } AliEMCALTriggerSTURawStream inSTU(reader); AliCaloRawStreamV3 in(reader,"EMCAL",fMapping); reader->Select("EMCAL",0,AliDAQ::GetFirstSTUDDL()-1); fTriggerRawDigitMaker->Reset(); fTriggerRawDigitMaker->SetIO(reader, in, inSTU, digitsTRG, trgData); fRawAnalyzer->SetIsZeroSuppressed(true); Int_t lowGain = 0; Int_t caloFlag = 0; Float_t bcTimePhaseCorr = 0; Int_t bcMod4 = (reader->GetBCID() % 4); Int_t runNumber = reader->GetRunNumber(); if ((runNumber > 130850 && runNumber < 200000) && (bcMod4==0 || bcMod4==1)) bcTimePhaseCorr = -1e-7; while (in.NextDDL()) { while (in.NextChannel()) { caloFlag = in.GetCaloFlag(); if ( caloFlag > 2 ) continue; Int_t sm = in.GetModule() ; Int_t row = in.GetRow () ; Int_t column = in.GetColumn() ; /* Online mapping and numbering is the same for EMCal and DCal SMs but: - DCal odd SM (13,15,17) has online cols: 16-47; offline cols 0-31. - Even DCal SMs have the same numbering online and offline 0-31. - DCal 1/3 SM (18,19), online rows 16-23; offline rows 0-7. In the next lines shift the online cols or rows depending on the SM to match the offline mapping. */ fGeom->ShiftOnlineToOfflineCellIndexes(sm, row, column); /* --------------------------------------------------------------------- */ if ( caloFlag < 2 && fRemoveBadChannels && pedbadmap->IsBadChannel(sm, column, row) ) { continue; } vector<AliCaloBunchInfo> bunchlist; while (in.NextBunch()) { bunchlist.push_back( AliCaloBunchInfo(in.GetStartTimeBin(), in.GetBunchLength(), in.GetSignals() ) ); } if (bunchlist.size() == 0) continue; if ( caloFlag < 2 ) { /* ALTRO */ Int_t id = fGeom->GetAbsCellIdFromCellIndexes(sm, row, column) ; lowGain = in.IsLowGain(); if(fUseL1Phase) fRawAnalyzer->SetL1Phase( in.GetL1Phase() ); else fRawAnalyzer->SetL1Phase( 0 ); AliCaloFitResults res = fRawAnalyzer->Evaluate( bunchlist, in.GetAltroCFG1(), in.GetAltroCFG2()); if(res.GetAmp() >= fNoiseThreshold ) { AddDigit(digitsArr, id, lowGain, bunchlist, res.GetAmp(), res.GetTime()+bcTimePhaseCorr, res.GetChi2(), res.GetNdf() ); } } /* ALTRO */ else if ( fUseFALTRO ) { /* Fake ALTRO */ fTriggerRawDigitMaker->Add( bunchlist ); } /* Fake ALTRO */ } /* End while over channel */ } /* End while over DDL's of input stream */ fTriggerRawDigitMaker->PostProcess(); TrimDigits(digitsArr); }
/** * * @author Fzwael , Dorra */ public class RLE { public static void compress(File file) { // System.out.println("I will compress " + file.getName()); try{ File compressedFile = new File(file.getName() + "-RLE"); PrintWriter writer = new PrintWriter(compressedFile, "UTF-8"); BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(file),Charset.forName("UTF-8"))); int c; int occ = 1; char prevChar = (char) reader.read(); // System.out.println("INIT " + prevChar); while((c = reader.read()) != -1) { char character = (char) c; if(character != prevChar){ writer.print(occ+""+prevChar); occ = 1; prevChar = character; } else occ+=1; } writer.print(occ+""+prevChar); writer.close(); }catch(Exception e){ System.out.println(e.toString()); } // System.out.println(); System.out.println("ALL DONE RLE"); } }
from flask import Blueprint, render_template import app mod_viewer = Blueprint('viewer', __name__) @mod_viewer.route('/', methods=['GET']) def home(): return render_template('home.html', **app.app.config) @mod_viewer.route('/nocanvas', methods=['GET']) def home_nocanvas(): return render_template('home_nocanvas.html', **app.app.config)
// Copyright (C) 2000, International Business Machines // Corporation and others. All Rights Reserved. #include <cstdio> #include "CoinTime.hpp" #include "BCP_lp_functions.hpp" #include "BCP_enum.hpp" #include "BCP_lp_result.hpp" #include "BCP_lp_pool.hpp" #include "BCP_lp_user.hpp" #include "BCP_lp.hpp" #include "BCP_lp_node.hpp" int BCP_lp_generate_vars(BCP_lp_prob& p, bool cutset_changed, const bool from_repricing) { double time0 = CoinCpuTime(); BCP_lp_result& lpres = *p.lp_result; BCP_lp_var_pool& vp = *p.local_var_pool; int prev_size = vp.size(); if (prev_size > 0 && ! vp.cols_are_valid()){ // we must regenerate the cols from the variables // first delete the old cols then expand the vars again // these will hold vars and cols expanded from the vars BCP_vec<BCP_var*> vars; BCP_vec<BCP_col*> cols; vars.reserve(prev_size); int i; for (i = 0; i < prev_size; ++i) { vp[i]->delete_col(); vars.unchecked_push_back(vp[i]->var()); } // now expand cols.reserve(prev_size); p.user->vars_to_cols(p.node->cuts, vars, cols, lpres, BCP_Object_Leftover, false); for (i = 0; i < prev_size; ++i) { vp[i]->set_col(cols[i]); } cols.clear(); } vp.cols_are_valid(true); if (p.param(BCP_lp_par::LpVerb_ReportLocalVarPoolSize)) printf("LP: Number of leftover vars: %i\n", prev_size); // Generate vars within the LP process BCP_vec<BCP_var*> new_vars; BCP_vec<BCP_col*> new_cols; BCP_price_vars(p, false /* not from fathom */, new_vars, new_cols); if (new_vars.size() > 0) { const int new_size = new_vars.size(); vp.reserve(vp.size() + new_size); for (int i = 0; i < new_size; ++i) { new_vars[i]->set_bcpind(-BCP_lp_next_var_index(p)); vp.unchecked_push_back(new BCP_lp_waiting_col(new_vars[i], new_cols[i])); } new_cols.clear(); new_vars.clear(); if (p.param(BCP_lp_par::LpVerb_ReportLocalVarPoolSize)) printf("LP: Number of vars generated in the LP process: %i\n", new_size); prev_size = vp.size(); } // Compute the reduced cost for everything in the local var pool and throw // out the ones with positive reduced cost if (prev_size > 0) { vp.compute_red_costs(lpres, vp.begin(), vp.end()); // char dumpname[200]; //sprintf(dumpname, "reducedcosts-%i-%i", // p.node->index, p.node->iteration_count); //FILE* dumpfile = fopen(dumpname, "w"); //for (int i = 0; i < prev_size; ++i) { // fprintf(dumpfile, "%.6f\n", vp[i]->red_cost()); //} //fclose(dumpfile); double detol = 0.0; p.lp_solver->getDblParam(OsiDualTolerance, detol); const int cnt = vp.remove_positives(detol); if (p.param(BCP_lp_par::LpVerb_ReportLocalVarPoolSize)) printf("LP: Positive rc (hence removed): %i\n", cnt); prev_size = vp.size(); } if (p.param(BCP_lp_par::MessagePassingIsSerial)) { // If the message passing environment is not really parallel (i.e., while // the VG/VP are working the LP stops and also the LP must immediately // process any vars sent back then this is the place to send the lp // solution to the VG/VP. // send the current solution to VG, and also to VP if we are either // - at the beginning of a chain (but not in the root in the // first phase) // - or this is the var_pool_check_freq-th iteration. if (p.node->vg || p.node->vp) { const BCP_message_tag msgtag = BCP_lp_pack_for_vg(p); if (p.node->vg) { ++p.no_more_vars_cnt; p.msg_env->send(p.node->vg, msgtag, p.msg_buf); } if (p.node->vp) { if (! (p.node->iteration_count % p.param(BCP_lp_par::VarPoolCheckFrequency)) || cutset_changed) { ++p.no_more_vars_cnt; p.msg_env->send(p.node->vp, msgtag, p.msg_buf); } } } } if (p.no_more_vars_cnt > 0){ // Receive vars if we have sent out the lp solution somewhere. 
// set the timeout (all the times are in microseconds). double first_var_time_out = cutset_changed ? p.param(BCP_lp_par::FirstLP_FirstVarTimeout) : p.param(BCP_lp_par::LaterLP_FirstVarTimeout); double all_vars_time_out = cutset_changed ? p.param(BCP_lp_par::FirstLP_AllVarsTimeout) : p.param(BCP_lp_par::LaterLP_AllVarsTimeout); double tout = vp.size() == 0 ? first_var_time_out : all_vars_time_out; double tin = CoinCpuTime(); while(true){ p.msg_buf.clear(); p.msg_env->receive(BCP_AnyProcess, BCP_Msg_AnyMessage, p.msg_buf, tout); if (p.msg_buf.msgtag() == BCP_Msg_NoMessage){ // check that everyone is still alive if (! p.msg_env->alive(p.tree_manager)) throw BCP_fatal_error("LP: The TM has died -- LP exiting\n"); if (p.node->cg && ! p.msg_env->alive(p.node->cg)) throw BCP_fatal_error("LP: The CG has died -- LP exiting\n"); if (p.node->cp && ! p.msg_env->alive(p.node->cp)) throw BCP_fatal_error("LP: The CP has died -- LP exiting\n"); if (p.node->vg && ! p.msg_env->alive(p.node->vg)) throw BCP_fatal_error("LP: The VG has died -- LP exiting\n"); if (p.node->vp && ! p.msg_env->alive(p.node->vp)) throw BCP_fatal_error("LP: The VP has died -- LP exiting\n"); // now the message queue is empty and received_message has // returned, i.e., we have waited enough if (p.param(BCP_lp_par::LpVerb_ReportVarGenTimeout)) printf("LP: Receive vars timed out after %f secs\n", (prev_size != static_cast<int>(vp.size())? all_vars_time_out : first_var_time_out)); break; } p.process_message(); // break out if no more vars can come if (p.no_more_vars_cnt == 0) break; // reset the timeout tout = vp.size() == 0 ? first_var_time_out : all_vars_time_out; if (tout >= 0){ // with this tout we'll read out the rest of the message queue // even if var generation times out. tout = std::max(0.0, tout - (CoinCpuTime() - tin)); } } } // reset no_more_vars_cnt to 0 p.no_more_vars_cnt = 0; if (p.param(BCP_lp_par::LpVerb_ReportLocalVarPoolSize)) { printf("LP: Number of vars received from VG: %i\n", static_cast<int>(vp.size() - prev_size)); printf("LP: Total number of vars in local pool: %i\n", static_cast<int>(vp.size())); } if (vp.size() > 0) { const int oldsize = vp.size(); double detol = 0.0; p.lp_solver->getDblParam(OsiDualTolerance, detol); const int cnt = vp.remove_positives(detol); if (cnt > 0) { printf("\ LP: *WARNING*: There are vars with positive red cost in the local VP\n\ at the end of var generation.\n\ Discarding %i variables out of %i.\n", cnt, oldsize); } } p.stat.time_var_generation += CoinCpuTime() - time0; return vp.size(); }
/** * Delete recursively all files in a directory and the directory * * @param dir - the dir to delete * @return if we successfully deleted all files in that directory and the passed directory */ static boolean deleteAllInDir(File dir) { boolean deleted = true; if (dir.exists()) { if (dir.isDirectory()) { File[] files = dir.listFiles(); if (files != null) { for (File f : files) { deleted &= deleteAllInDir(f); } } } else { return deleteIt(dir); } deleted &= deleteIt(dir); } return deleted; }
/** * Reads next recognized token from the scanner. If scanner fails to recognize a token and * throws an exception it will be reported via Parser.scannerError(). * <p>It is expected that scanner is capable of returning at least an EOF token after the * exception.</p> * * @param src scanner * @return next recognized token * @throws IOException * as thrown by a scanner */ private Symbol readToken() throws IOException { while (true) { try { return scanner.nextToken(); } catch (Scanner.Exception e) { report.scannerError(e); } } }
/** * * Definition of the class "Footstep" defined in Footstep_.idl. * * This file was automatically generated from Footstep_.idl by us.ihmc.idl.generator.IDLGenerator. * Do not update this file directly, edit Footstep_.idl instead. * */ public class Footstep { private long unique_id_; private byte robot_side_; private us.ihmc.euclid.tuple3D.Point3D location_; private us.ihmc.euclid.tuple4D.Quaternion orientation_; private us.ihmc.idl.IDLSequence.Object<us.ihmc.euclid.tuple3D.Point3D> predicted_contact_points_2d_; private byte trajectory_type_; private double swing_height_; private us.ihmc.idl.IDLSequence.Object<us.ihmc.euclid.tuple3D.Point3D> position_waypoints_; private us.ihmc.idl.IDLSequence.Object<controller_msgs.msg.dds.TaskspaceTrajectoryStamped> swing_trajectory_; private double swing_trajectory_blend_duration_; private double swing_duration_; private double transfer_duration_; public Footstep() { location_ = new us.ihmc.euclid.tuple3D.Point3D(); orientation_ = new us.ihmc.euclid.tuple4D.Quaternion(); predicted_contact_points_2d_ = new us.ihmc.idl.IDLSequence.Object<us.ihmc.euclid.tuple3D.Point3D>(100, us.ihmc.euclid.tuple3D.Point3D.class, new geometry_msgs.msg.dds.PointPubSubType()); position_waypoints_ = new us.ihmc.idl.IDLSequence.Object<us.ihmc.euclid.tuple3D.Point3D>(100, us.ihmc.euclid.tuple3D.Point3D.class, new geometry_msgs.msg.dds.PointPubSubType()); swing_trajectory_ = new us.ihmc.idl.IDLSequence.Object<controller_msgs.msg.dds.TaskspaceTrajectoryStamped>(100, controller_msgs.msg.dds.TaskspaceTrajectoryStamped.class, new controller_msgs.msg.dds.TaskspaceTrajectoryStampedPubSubType()); } public void set(Footstep other) { unique_id_ = other.unique_id_; robot_side_ = other.robot_side_; geometry_msgs.msg.dds.PointPubSubType.staticCopy(other.location_, location_); geometry_msgs.msg.dds.QuaternionPubSubType.staticCopy(other.orientation_, orientation_); predicted_contact_points_2d_.set(other.predicted_contact_points_2d_); trajectory_type_ = other.trajectory_type_; swing_height_ = other.swing_height_; position_waypoints_.set(other.position_waypoints_); swing_trajectory_.set(other.swing_trajectory_); swing_trajectory_blend_duration_ = other.swing_trajectory_blend_duration_; swing_duration_ = other.swing_duration_; transfer_duration_ = other.transfer_duration_; } public long getUnique_id() { return unique_id_; } public void setUnique_id(long unique_id) { unique_id_ = unique_id; } public byte getRobot_side() { return robot_side_; } public void setRobot_side(byte robot_side) { robot_side_ = robot_side; } public us.ihmc.euclid.tuple3D.Point3D getLocation() { return location_; } public us.ihmc.euclid.tuple4D.Quaternion getOrientation() { return orientation_; } public us.ihmc.idl.IDLSequence.Object<us.ihmc.euclid.tuple3D.Point3D> getPredicted_contact_points_2d() { return predicted_contact_points_2d_; } public byte getTrajectory_type() { return trajectory_type_; } public void setTrajectory_type(byte trajectory_type) { trajectory_type_ = trajectory_type; } public double getSwing_height() { return swing_height_; } public void setSwing_height(double swing_height) { swing_height_ = swing_height; } public us.ihmc.idl.IDLSequence.Object<us.ihmc.euclid.tuple3D.Point3D> getPosition_waypoints() { return position_waypoints_; } public us.ihmc.idl.IDLSequence.Object<controller_msgs.msg.dds.TaskspaceTrajectoryStamped> getSwing_trajectory() { return swing_trajectory_; } public double getSwing_trajectory_blend_duration() { return swing_trajectory_blend_duration_; } public void 
setSwing_trajectory_blend_duration(double swing_trajectory_blend_duration) { swing_trajectory_blend_duration_ = swing_trajectory_blend_duration; } public double getSwing_duration() { return swing_duration_; } public void setSwing_duration(double swing_duration) { swing_duration_ = swing_duration; } public double getTransfer_duration() { return transfer_duration_; } public void setTransfer_duration(double transfer_duration) { transfer_duration_ = transfer_duration; } @Override public boolean equals(java.lang.Object other) { if (other == null) return false; if (other == this) return true; if (!(other instanceof Footstep)) return false; Footstep otherMyClass = (Footstep) other; boolean returnedValue = true; returnedValue &= this.unique_id_ == otherMyClass.unique_id_; returnedValue &= this.robot_side_ == otherMyClass.robot_side_; returnedValue &= this.location_.equals(otherMyClass.location_); returnedValue &= this.orientation_.equals(otherMyClass.orientation_); returnedValue &= this.predicted_contact_points_2d_.equals(otherMyClass.predicted_contact_points_2d_); returnedValue &= this.trajectory_type_ == otherMyClass.trajectory_type_; returnedValue &= this.swing_height_ == otherMyClass.swing_height_; returnedValue &= this.position_waypoints_.equals(otherMyClass.position_waypoints_); returnedValue &= this.swing_trajectory_.equals(otherMyClass.swing_trajectory_); returnedValue &= this.swing_trajectory_blend_duration_ == otherMyClass.swing_trajectory_blend_duration_; returnedValue &= this.swing_duration_ == otherMyClass.swing_duration_; returnedValue &= this.transfer_duration_ == otherMyClass.transfer_duration_; return returnedValue; } @Override public java.lang.String toString() { StringBuilder builder = new StringBuilder(); builder.append("Footstep {"); builder.append("unique_id="); builder.append(this.unique_id_); builder.append(", "); builder.append("robot_side="); builder.append(this.robot_side_); builder.append(", "); builder.append("location="); builder.append(this.location_); builder.append(", "); builder.append("orientation="); builder.append(this.orientation_); builder.append(", "); builder.append("predicted_contact_points_2d="); builder.append(this.predicted_contact_points_2d_); builder.append(", "); builder.append("trajectory_type="); builder.append(this.trajectory_type_); builder.append(", "); builder.append("swing_height="); builder.append(this.swing_height_); builder.append(", "); builder.append("position_waypoints="); builder.append(this.position_waypoints_); builder.append(", "); builder.append("swing_trajectory="); builder.append(this.swing_trajectory_); builder.append(", "); builder.append("swing_trajectory_blend_duration="); builder.append(this.swing_trajectory_blend_duration_); builder.append(", "); builder.append("swing_duration="); builder.append(this.swing_duration_); builder.append(", "); builder.append("transfer_duration="); builder.append(this.transfer_duration_); builder.append("}"); return builder.toString(); } }
/** * Gets the current node. * * <a href="https://jmespath.org/specification.html#current-node">current-node</a> */ public final class CurrentExpression extends JmespathExpression { public CurrentExpression() { this(1, 1); } public CurrentExpression(int line, int column) { super(line, column); } @Override public <T> T accept(ExpressionVisitor<T> visitor) { return visitor.visitCurrentNode(this); } @Override public int hashCode() { return 1; } @Override public boolean equals(Object other) { return other instanceof CurrentExpression; } @Override public String toString() { return "CurrentExpression{}"; } }
/** * Builds a QueryBatcher based on an array of document URIs. */ public class DocumentUrisQueryBatcherBuilder implements QueryBatcherBuilder { private String[] documentUris; public DocumentUrisQueryBatcherBuilder(String... documentUris) { this.documentUris = documentUris; } @Override public QueryBatcher buildQueryBatcher(DatabaseClient databaseClient, DataMovementManager dataMovementManager) { StructuredQueryDefinition query = databaseClient.newQueryManager().newStructuredQueryBuilder().document(documentUris); return dataMovementManager.newQueryBatcher(query); } }
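A hedged end-to-end sketch of driving this builder with the MarkLogic Data Movement SDK. The connection details are placeholders and the listener is only there to show where batches of URIs arrive; buildQueryBatcher is the method defined above.

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.datamovement.DataMovementManager;
import com.marklogic.client.datamovement.QueryBatcher;

public class DocumentUrisQueryBatcherExample {
    public static void main(String[] args) {
        // Placeholder connection details.
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("user", "password"));
        DataMovementManager dmm = client.newDataMovementManager();

        QueryBatcher batcher = new DocumentUrisQueryBatcherBuilder("/doc1.json", "/doc2.json")
                .buildQueryBatcher(client, dmm);
        batcher.onUrisReady(batch ->
                System.out.println("Batch with " + batch.getItems().length + " URIs"));

        dmm.startJob(batcher);
        batcher.awaitCompletion();
        dmm.stopJob(batcher);
        client.release();
    }
}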
// Tests that if a scroll-begin gesture is not handled, then subsequent scroll // events are not dispatched to any view. TEST_F(WidgetTest, GestureScrollEventDispatching) { EventCountView* noscroll_view = new EventCountView; EventCountView* scroll_view = new ScrollableEventCountView; noscroll_view->SetBounds(0, 0, 50, 40); scroll_view->SetBounds(60, 0, 40, 40); Widget* widget = CreateTopLevelPlatformWidget(); widget->GetRootView()->AddChildView(noscroll_view); widget->GetRootView()->AddChildView(scroll_view); { ui::GestureEvent begin( 5, 5, 0, base::TimeDelta(), ui::GestureEventDetails(ui::ET_GESTURE_SCROLL_BEGIN)); widget->OnGestureEvent(&begin); ui::GestureEvent update( 25, 15, 0, base::TimeDelta(), ui::GestureEventDetails(ui::ET_GESTURE_SCROLL_UPDATE, 20, 10)); widget->OnGestureEvent(&update); ui::GestureEvent end(25, 15, 0, base::TimeDelta(), ui::GestureEventDetails(ui::ET_GESTURE_SCROLL_END)); widget->OnGestureEvent(&end); EXPECT_EQ(1, noscroll_view->GetEventCount(ui::ET_GESTURE_SCROLL_BEGIN)); EXPECT_EQ(0, noscroll_view->GetEventCount(ui::ET_GESTURE_SCROLL_UPDATE)); EXPECT_EQ(0, noscroll_view->GetEventCount(ui::ET_GESTURE_SCROLL_END)); } { ui::GestureEvent begin( 65, 5, 0, base::TimeDelta(), ui::GestureEventDetails(ui::ET_GESTURE_SCROLL_BEGIN)); widget->OnGestureEvent(&begin); ui::GestureEvent update( 85, 15, 0, base::TimeDelta(), ui::GestureEventDetails(ui::ET_GESTURE_SCROLL_UPDATE, 20, 10)); widget->OnGestureEvent(&update); ui::GestureEvent end(85, 15, 0, base::TimeDelta(), ui::GestureEventDetails(ui::ET_GESTURE_SCROLL_END)); widget->OnGestureEvent(&end); EXPECT_EQ(1, scroll_view->GetEventCount(ui::ET_GESTURE_SCROLL_BEGIN)); EXPECT_EQ(1, scroll_view->GetEventCount(ui::ET_GESTURE_SCROLL_UPDATE)); EXPECT_EQ(1, scroll_view->GetEventCount(ui::ET_GESTURE_SCROLL_END)); } widget->CloseNow(); }
/** * Function that decides whether withholding tax ("retención en la fuente") must be applied to the remission or invoice being issued. */ public String evaluaRetefuente() { String rta = "S"; try { /* evaluation logic not implemented yet; the method currently always returns "S" */ } catch (Exception e) { } return rta; }
/** * Obtain min-max normalizers from training data and normalize the training data in place. * * @param data Training data from which the normalizers are obtained * @param numFeatures Number of features * @return One normalizer per feature; an entry is {@code null} when the corresponding feature is constant in the training data */ public static MinMaxNormalizer[] minmaxNormalizeTrainingData( SparseVector[] data, int numFeatures) { int D = data.length; MinMaxNormalizer[] norms = new MinMaxNormalizer[numFeatures]; for (int ii = 0; ii < numFeatures; ii++) { double[] featVals = new double[D]; for (int dd = 0; dd < D; dd++) { featVals[dd] = data[dd].get(ii); } if (min(featVals) == max(featVals)) { norms[ii] = null; continue; } norms[ii] = new MinMaxNormalizer(featVals, 0.0, 1.0); double[] normVal = norms[ii].normalize(featVals); for (int dd = 0; dd < D; dd++) { data[dd].set(ii, normVal[dd]); } } return norms; }
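The normalizers returned by the method above are intended to be reused on held-out data. The following companion sketch applies them to test vectors using only the SparseVector and MinMaxNormalizer operations already visible in this class (get, set, normalize); the method name is illustrative.

/**
 * Apply previously fitted min-max normalizers to test data, in place.
 * Features whose normalizer is null (constant in training) are left untouched.
 */
public static void minmaxNormalizeTestData(SparseVector[] testData,
        MinMaxNormalizer[] norms, int numFeatures) {
    int D = testData.length;
    for (int ii = 0; ii < numFeatures; ii++) {
        if (norms[ii] == null) {
            continue;
        }
        double[] featVals = new double[D];
        for (int dd = 0; dd < D; dd++) {
            featVals[dd] = testData[dd].get(ii);
        }
        double[] normVal = norms[ii].normalize(featVals);
        for (int dd = 0; dd < D; dd++) {
            testData[dd].set(ii, normVal[dd]);
        }
    }
}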
/** * A series of tests accessing Blobs in various ways. * <p> * These tests are intended to detect Blob performance regressions. Before * committing a patch that might change the Blob performance characteristics, * first run these tests on a clean build and then with the patch applied. The * results can only be compared when both runs are done on the same machine. * <p> * The results are the time taken to execute the test. Lower duration is better * (improvement). Currently the results are printed to standard out. There is * one exception, which is {@code testConcurrency}. For this test, the * throughput is printed and it will always run for a fixed amount of time. * <p> * The tests are written with two axes in mind: read-only vs update and small vs * large. These axes were chosen based on the Blob implementation at the time. * In the context of this test, small means the Blob is represented as a string * by the Derby store and large means the Blob is represented as a stream into * the Derby store. When a Blob is modified, an in-memory or on disk temporary * copy is created. The performance of these temporary representations is * tested with the tests that modify the Blob content. * <p> * System properties controlling test behavior: * <ul><li>derby.tests.disableSmallBlobs</li> * <li>derby.tests.disableLargeBlobs</li> * <li>derby.tests.disableConcurrencyTest</li> * <li>derby.tests.largeBlobSize (in MB, 15 is the default)</li> * </ul> * * <p> * <b>NOTE</b>: Currently there are no tests for the client driver (network) * or for encrypted Blobs. */ public class BlobAccessTest extends JDBCPerfTestCase { private static final boolean disableSmallBlobs = Boolean.getBoolean("derby.tests.disableSmallBlobs"); private static final boolean disableLargeBlobs = Boolean.getBoolean("derby.tests.disableLargeBlobs"); private static final boolean disableConcurrencyTest = Boolean.getBoolean("derby.tests.disableConcurrencyTest"); private static final int largeBlobSizeMB = Integer.getInteger("derby.tests.largeBlobSize", 15).intValue(); private static final int FETCH_GETBYTES = 0; private static final int FETCH_GETBINARYSTREAM = 1; /** * Instantiates a new test that will be run the specified number of * iterations and repeated as specified. * * @param name name of the test to instantiate * @param iterations number of iterations per repetition * @param repeats number of repetitions */ public BlobAccessTest(String name, int iterations, int repeats) { super(name, iterations, repeats); } /** * Set autocommit to false by default. */ public void initializeConnection(Connection conn) throws SQLException { conn.setAutoCommit(false); } /** * Generates a suite of tests. * <p> * The required test data will be generated. Note that a subset of the * tests can be disabled by using a system property. * * @return A suite of tests.
*/ public static Test suite() { BaseTestSuite mainSuite = new BaseTestSuite("BlobAccessTest suite"); if (!disableSmallBlobs) { int iters = 50; int reps = 3; println("Adding small Blob tests."); BaseTestSuite smallSuite = new BaseTestSuite("Small Blob suite"); smallSuite.addTest(new BlobAccessTest( "testFetchSmallBlobs", iters, reps)); smallSuite.addTest(new BlobAccessTest( "testFetchSmallBlobsInaccurateLength", iters, reps)); smallSuite.addTest(new BlobAccessTest( "testModifySmallBlobs", iters, reps)); mainSuite.addTest(smallSuite); } if (!disableLargeBlobs) { int iters = 5; int reps = 3; println("Adding large Blob tests."); BaseTestSuite largeSuite = new BaseTestSuite("Large Blob suite"); largeSuite.addTest(new BlobAccessTest( "testFetchLargeBlobs", iters, reps)); largeSuite.addTest(new BlobAccessTest( "testFetchLargeBlobOneByOneByteBaseline", iters, reps)); largeSuite.addTest(new BlobAccessTest( "testFetchLargeBlobOneByOneByteModified", iters, reps)); largeSuite.addTest(new BlobAccessTest( "testFetchLargeBlobOneByOneByte", iters, reps)); largeSuite.addTest(new BlobAccessTest( "testFetchLargeBlob", iters, reps)); largeSuite.addTest(new BlobAccessTest( "testFetchLargeBlobModified", iters, reps)); largeSuite.addTest(new BlobAccessTest( "testFetchLargeBlobPieceByPiece", iters, reps)); largeSuite.addTest(new BlobAccessTest( "testFetchLargeBlobPieceByPieceModified", iters, reps)); largeSuite.addTest(new BlobAccessTest( "testLargeBlobGetLength", iters, reps)); mainSuite.addTest(largeSuite); } if (!disableConcurrencyTest) { mainSuite.addTest(new BlobAccessTest("testConcurrency", 1, 1)); } return new CleanDatabaseTestSetup(mainSuite) { protected void decorateSQL(Statement stmt) throws SQLException { try { initializeBlobData(stmt); } catch (UnsupportedEncodingException uee) { // Compiled with JDK 1.4, can't use constructor. SQLException sqle = new SQLException(); sqle.initCause(uee); throw sqle; } } }; } /** * Fetches a number of small Blobs, getting the content using getBytes. * <p> * The exact length of the Blob is used when getting the bytes. */ public void testFetchSmallBlobs() throws SQLException { PreparedStatement ps = prepareStatement( "select dBlob, length from smallBlobs"); ResultSet rs = ps.executeQuery(); while (rs.next()) { Blob Blob = rs.getBlob(1); int blobLength = rs.getInt(2); byte[] content = Blob.getBytes(1, blobLength); } rs.close(); } /** * Fetches a number of small Blobs, getting the content using getBytes. * <p> * A too long length of the Blob is used when getting the bytes. */ public void testFetchSmallBlobsInaccurateLength() throws SQLException { PreparedStatement ps = prepareStatement( "select dBlob, length from smallBlobs"); ResultSet rs = ps.executeQuery(); while (rs.next()) { Blob Blob = rs.getBlob(1); int unusedLength = rs.getInt(2); byte[] content = Blob.getBytes(1, 100); } rs.close(); } /** * Test fetching the content after adding a single byte at the end. */ public void testModifySmallBlobs() throws SQLException, UnsupportedEncodingException { PreparedStatement ps = prepareStatement( "select dBlob, length from smallBlobs"); ResultSet rs = ps.executeQuery(); while (rs.next()) { Blob Blob = rs.getBlob(1); int length = rs.getInt(2); Blob.setBytes(length, "X".getBytes("US-ASCII")); byte[] content = Blob.getBytes(1, 100); } rs.close(); } /** * Fetches a number of Blobs using a rather large read buffer with * {@code getBinaryStream}. 
*/ public void testFetchLargeBlobs() throws IOException, SQLException { PreparedStatement ps = prepareStatement( "select dBlob, length from largeBlobs"); ResultSet rs = ps.executeQuery(); byte[] byteBuf = new byte[16*1024]; // 16 KB while (rs.next()) { Blob Blob = rs.getBlob(1); InputStream content = Blob.getBinaryStream(); long remaining = rs.getInt(2); while (remaining > 0) { remaining -= content.read(byteBuf); } content.close(); } rs.close(); } /** * Fetches a single Blob and reads it byte by byte, but utilizing a * buffered stream to get a lower time bound on the read operation. */ public void testFetchLargeBlobOneByOneByteBaseline() throws IOException, SQLException { // Select just one Blob. PreparedStatement ps = prepareStatement( "select dBlob, length from largeBlobs where id = 4"); ResultSet rs = ps.executeQuery(); while (rs.next()) { Blob Blob = rs.getBlob(1); InputStream content = Blob.getBinaryStream(); BufferedInputStream bufferedContent = new BufferedInputStream(content); long remaining = rs.getInt(2); while (bufferedContent.read() != -1) { remaining--; } content.close(); assertEquals(0, remaining); } rs.close(); } /** * Fetches a single Blob and reads it byte by byte. */ public void testFetchLargeBlobOneByOneByte() throws IOException, SQLException { // Select just one Blob. PreparedStatement ps = prepareStatement( "select dBlob, length from largeBlobs where id = 4"); ResultSet rs = ps.executeQuery(); while (rs.next()) { Blob Blob = rs.getBlob(1); InputStream content = Blob.getBinaryStream(); long remaining = rs.getInt(2); while (content.read() != -1) { remaining--; } content.close(); assertEquals(0, remaining); } rs.close(); } /** * Fetches a single Blob and reads it byte by byte after it has first been * modified. * <p> * The point of modifiying the Blob is to make Derby use the writable Blob * representation (different implementation). */ public void testFetchLargeBlobOneByOneByteModified() throws IOException, SQLException { // Select just one Blob. PreparedStatement ps = prepareStatement( "select dBlob, length from largeBlobs where id = 4"); ResultSet rs = ps.executeQuery(); while (rs.next()) { Blob Blob = rs.getBlob(1); long remaining = rs.getInt(2); Blob.setBytes(++remaining, "X".getBytes("US-ASCII")); InputStream content = Blob.getBinaryStream(); while (content.read() != -1) { remaining --; } content.close(); assertEquals(0, remaining); } rs.close(); } /** * Fetches a single Blob by reading it piece by piece with {@code getBytes}. */ public void testFetchLargeBlobPieceByPiece() throws IOException, SQLException { fetchBlobPieceByPiece(false, FETCH_GETBYTES); } /** * Fetches a single Blob by reading it piece by piece with {@code getBytes}. */ public void testFetchLargeBlobPieceByPieceModified() throws IOException, SQLException { fetchBlobPieceByPiece(true, FETCH_GETBYTES); } /** * Fetches a single Blob by reading it in chunks with * {@code getBinaryStream}. */ public void testFetchLargeBlob() throws IOException, SQLException { fetchBlobPieceByPiece(false, FETCH_GETBINARYSTREAM); } /** * Fetches a single Blob by reading it in chunks with * {@code getBinaryStream}. */ public void testFetchLargeBlobModified() throws IOException, SQLException { fetchBlobPieceByPiece(true, FETCH_GETBINARYSTREAM); } /** * Fetches a "large" Blob piece by piece using getBytes. 
     *
     * @param modifyBlob whether to modify the Blob before fetching it
     *      (determines the internal Derby Blob representation)
     * @param fetchMode how to fetch the Blob content, either
     *      {@code FETCH_GETBYTES} or {@code FETCH_GETBINARYSTREAM}
     */
    private void fetchBlobPieceByPiece(boolean modifyBlob, int fetchMode)
            throws IOException, SQLException {
        // Select just one Blob.
        PreparedStatement ps = prepareStatement(
                "select dBlob, length from largeBlobs where id = 4");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            Blob blob = rs.getBlob(1);
            long remaining = rs.getInt(2);
            if (modifyBlob) {
                // Modify the Blob to create a temporary copy in memory or
                // on disk (depends on the Blob size).
                long modifyStart = System.currentTimeMillis();
                blob.setBytes(++remaining, new byte[] {(byte)'X'});
                println("Blob modification duration: " +
                        (System.currentTimeMillis() - modifyStart) + " ms");
            }
            long pos = 1;
            // Maximum number of bytes fetched per getBytes/read call.
            int MAX_SIZE = 32676;
            switch (fetchMode) {
                case FETCH_GETBYTES:
                    while (remaining > 0) {
                        byte[] bytes = blob.getBytes(
                                pos, (int)Math.min(MAX_SIZE, remaining));
                        pos += bytes.length;
                        remaining -= bytes.length;
                    }
                    break;
                case FETCH_GETBINARYSTREAM: {
                    InputStream stream = blob.getBinaryStream();
                    byte[] buf = new byte[MAX_SIZE];
                    while (remaining > 0) {
                        int read = stream.read(buf);
                        pos += read;
                        remaining -= read;
                    }
                    stream.close();
                    break;
                }
                default:
                    fail("Unknown fetch mode: " + fetchMode);
            }
        }
        rs.close();
    }

    /**
     * Tests if the Blob length is cached.
     */
    public void testLargeBlobGetLength()
            throws SQLException {
        // Select just one Blob.
        PreparedStatement ps = prepareStatement(
                "select dBlob, length from largeBlobs where id = 7");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            Blob blob = rs.getBlob(1);
            long length = rs.getInt(2);
            // This should be cached. Have to skip lots of data otherwise.
            for (int i = 0; i < 50; i++) {
                assertEquals(length, blob.length());
            }
        }
        rs.close();
    }

    /**
     * Tests if the Blob length is cached after the Blob has been modified.
     */
    public void testLargeBlobGetLengthModified()
            throws SQLException {
        // Select just one Blob.
        PreparedStatement ps = prepareStatement(
                "select dBlob, length from largeBlobs where id = 7");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            Blob blob = rs.getBlob(1);
            blob.setBytes(1, new byte[] {(byte)'X'});
            long length = rs.getInt(2);
            // This should be cached. Have to skip lots of data otherwise.
            for (int i = 0; i < 50; i++) {
                assertEquals(length, blob.length());
            }
        }
        rs.close();
    }

    /**
     * Runs a test using multiple threads.
     * <p>
     * This test intends to detect problems with small Blobs and general
     * problems with concurrency.
     * <p>
     * <b>NOTE</b>: To produce more reliable numbers, please run the
     * performance client independently outside this JUnit test framework.
     * Performance also suffers greatly with SANE builds.
     */
    public void testConcurrency()
            throws InterruptedException, SQLException {
        final int records = 100000;
        final int tables = 1;
        final int threads = 16;

        DBFiller filler = new SingleRecordFiller(
                records, tables, java.sql.Types.BLOB, false, false);
        Connection conn = getConnection();
        println("initializing database...");
        filler.fill(conn);
        conn.close();

        Client[] clients = new Client[threads];
        for (int i = 0; i < clients.length; i++) {
            Connection c = openDefaultConnection();
            c.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            clients[i] = new SingleRecordSelectClient(
                    records, tables, java.sql.Types.BLOB, false, false);
            clients[i].init(c);
        }

        final int warmupSec = 30;
        final int steadySec = 60;

        LoadGenerator gen = new BackToBackLoadGenerator();
        gen.init(clients);
        println("starting warmup...");
        gen.startWarmup();
        Thread.sleep(1000L * warmupSec);
        println("entering steady state...");
        gen.startSteadyState();
        Thread.sleep(1000L * steadySec);
        println("stopping threads...");
        gen.stop();
        // Should get the printstream used by the test harness here.
        gen.printReport(System.out);
    }

    /**
     * Generates the test data: a table with small Blobs and a table with
     * large Blobs.
     */
    private static void initializeBlobData(Statement stmt)
            throws SQLException, UnsupportedEncodingException {
        Connection con = stmt.getConnection();
        con.setAutoCommit(false);

        if (!disableSmallBlobs) {
            println("Generating small Blobs test data.");
            // Insert small Blob data.
            try {
                stmt.executeUpdate("drop table smallBlobs");
            } catch (SQLException sqle) {
                // Only ignore the "table does not exist" error.
                assertSQLState("42Y55", sqle);
            }
            stmt.executeUpdate(
                    "create table smallBlobs (dBlob Blob, length int)");
            PreparedStatement smallBlobInsert = con.prepareStatement(
                    "insert into smallBlobs values (?,?)");
            // Insert 15,000 small Blobs.
            for (int blobCounter = 1; blobCounter < 15001; blobCounter++) {
                byte[] content =
                        Integer.toString(blobCounter).getBytes("US-ASCII");
                smallBlobInsert.setBytes(1, content);
                smallBlobInsert.setInt(2, content.length);
                smallBlobInsert.executeUpdate();
                if (blobCounter % 1000 == 0) {
                    con.commit();
                }
            }
            con.commit();
        }

        if (!disableLargeBlobs) {
            println("Generating large Blobs test data.");
            // Insert large Blob data.
            try {
                stmt.executeUpdate("drop table largeBlobs");
            } catch (SQLException sqle) {
                // Only ignore the "table does not exist" error.
                assertSQLState("42Y55", sqle);
            }
            stmt.executeUpdate("create table largeBlobs (" +
                    "id int unique not null, dBlob Blob, length int)");
            PreparedStatement largeBlobInsert = con.prepareStatement(
                    "insert into largeBlobs values (?,?,?)");
            // Insert some large Blobs.
            final int size = largeBlobSizeMB*1024*1024; // 15 MB default
            for (int blobCounter = 1; blobCounter < 11; blobCounter++) {
                largeBlobInsert.setInt(1, blobCounter);
                largeBlobInsert.setBinaryStream(
                        2, new LoopingAlphabetStream(size), size);
                largeBlobInsert.setInt(3, size);
                largeBlobInsert.executeUpdate();
            }
            con.commit();
        }
    }
}