A secret Air Force space plane launched on an Atlas V Thursday night at 7:52 p.m. EDT (2352 GMT) on a classified mission. The vehicle, the unmanned X-37B Orbital Test Vehicle, looks like a mini space shuttle and can remain in orbit for up to 270 days. The purpose of this vehicle – for this mission and for the future – is unknown, but the Air Force says this newest and most advanced re-entry spacecraft will demonstrate autonomous orbital flight, re-entry and landing.
Although the mission is secret, the launch was open to the media and webcast live by United Launch Alliance, with live Twitter updates from Air Force Space Command. Shortly after main engine cutoff, however, the webcast ended and no more updates were provided about the rocket and the vehicle’s activities.
The mission duration has not been disclosed, but the Air Force said technologies to be tested during the flight include advanced guidance, navigation and control, thermal protection systems, avionics, high temperature structures and seals, reusable insulation and lightweight electromechanical flight systems.
Liftoff occurred on time; the stages separated 4 minutes and 31 seconds into the flight, and engine cutoff came about 17 minutes after launch.
The X-37B is 9 meters long and 4.5 meters wide (29 x 15 ft), and its payload bay is 2.1 by 1.2 meters (7 by 4 feet). The vehicle was built at Boeing Phantom Works, based on an orbital and re-entry demonstrator design initially developed by NASA, then handed over to the Pentagon.
Rumors of an X-37B launch have been circulating since 2008.
Originally the vehicle was scheduled to launch from the payload bay of the Space Shuttle, but that plan was axed following the Columbia accident.
Comparing the X-37B to the space shuttle: the orbiter is 56 meters (184 feet) long, has a wingspan of 23 meters (78 feet), and weighs 2 million kg (4.5 million pounds).
The space shuttle can haul payloads of up to 13,380 kg (29,500 pounds), while the OTV can only handle up to 226 kg (500 pounds).
The X-37B will land on a runway in California and will be controlled remotely from the ground. In the future, the Air Force says it hopes to conduct experiments and rendezvous with other spacecraft.
See our preview article about the X-37B.
Enjoy more launch images from Alan Walters:
|
import { findFiles,
ProcessFile,
getSplitVersionParts
} from "./AppyVersionToJSONFileFunctions";
import tl = require("vsts-task-lib/task");
import fs = require("fs");
var path = tl.getInput("Path");
var versionNumber = tl.getInput("VersionNumber");
var versionRegex = tl.getInput("VersionRegex");
var field = tl.getInput("Field");
var outputversion = tl.getInput("outputversion");
var filenamePattern = tl.getInput("FilenamePattern");
var versionForJSONFileFormat = tl.getInput("versionForJSONFileFormat");
var useBuildNumberDirectly = tl.getBoolInput("useBuildNumberDirectly");
var recursion = tl.getBoolInput("recursion");
console.log (`Source Directory: ${path}`);
console.log (`Filename Pattern: ${filenamePattern}`);
console.log (`File search recursion: ${recursion}`);
console.log (`Version Number/Build Number: ${versionNumber}`);
console.log (`Use Build Number Directly: ${useBuildNumberDirectly}`);
console.log (`Version Filter to extract build number: ${versionRegex}`);
console.log (`Version Format for JSON File: ${versionForJSONFileFormat}`);
console.log (`Field to update (all if empty): ${field}`);
console.log (`Output: Version Number Parameter Name: ${outputversion}`);
// Make sure path to source code directory is available
if (!fs.existsSync(path)) {
tl.error(`Source directory does not exist: ${path}`);
process.exit(1);
}
// work out if we need to extract the version from the build
let jsonVersion = versionNumber; // set the default value
if (useBuildNumberDirectly === false) {
// Get and validate the version data
var regexp = new RegExp(versionRegex);
var versionData = regexp.exec(versionNumber);
if (!versionData) {
// extra check as we don't get zero size array but a null
tl.error(`Could not find version number data in ${versionNumber} that matches ${versionRegex}.`);
process.exit(1);
}
switch (versionData.length) {
case 0:
// this is trapped by the null check above
tl.error(`Could not find version number data in ${versionNumber} that matches ${versionRegex}.`);
process.exit(1);
case 1:
break;
default:
tl.warning(`Found more than one instance of version data in ${versionNumber} that matches ${versionRegex}.`);
tl.warning(`Will assume first instance is version.`);
break;
}
console.log (`Extracting version from the build number`);
var buildVersion = versionData[0];
console.log (`Extracted Build Version: ${buildVersion}`);
jsonVersion = getSplitVersionParts(versionRegex, versionForJSONFileFormat, buildVersion);
} else {
console.log (`Using the provided build number without any further processing`);
}
console.log (`JSON Version Name will be: ${jsonVersion}`);
// Apply the version to the assembly property files
var files = findFiles(`${path}`, filenamePattern, [], recursion); // pass an empty array rather than the still-undefined files variable
if (files.length > 0) {
console.log (`Will apply ${jsonVersion} to ${files.length} files.`);
files.forEach(file => {
ProcessFile(file, field, jsonVersion);
});
if (outputversion && outputversion.length > 0) {
console.log (`Set the output variable '${outputversion}' with the value ${jsonVersion}`);
tl.setVariable(outputversion, jsonVersion );
}
} else {
tl.warning("Found no files.");
}
|
import sys
input = sys.stdin.readline
import math
def print_ans(N, D):
"""Test Case
>>> print_ans(6, 2)
2
>>> print_ans(14, 3)
2
>>> print_ans(20, 4)
3
"""
print(math.ceil(N / (D * 2 + 1)))
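# Note on the formula (a reading inferred from the doctests above): each
# chosen position covers itself plus the D positions on either side, i.e. a
# window of 2 * D + 1, so ceil(N / (2 * D + 1)) windows cover all N positions.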
if __name__ == '__main__':
N, D = map(int,input().rstrip().split())
print_ans(N, D)
|
/*
* vqec_config_parser.h - Implements parsing of VQE-C system configuration
* files.
*
* Copyright (c) 2007-2009 by Cisco Systems, Inc.
* All rights reserved.
*/
#include "queue_plus.h"
#include "vam_types.h"
/**
* Enumeration of setting types.
*/
typedef enum vqec_config_setting_type_ {
VQEC_CONFIG_SETTING_TYPE_INVALID = 0,
VQEC_CONFIG_SETTING_TYPE_STRING,
VQEC_CONFIG_SETTING_TYPE_BOOLEAN,
VQEC_CONFIG_SETTING_TYPE_INT,
VQEC_CONFIG_SETTING_TYPE_LIST,
VQEC_CONFIG_SETTING_TYPE_GROUP,
} vqec_config_setting_type_t;
#define VQEC_CONFIG_MAX_NAME_LEN 100
/**
* Declaration for structure to be used with list element.
*/
struct vqec_config_setting_;
/**
* Structure which contains all of a particular setting's data.
*/
typedef struct vqec_config_setting_ {
/**
* The type of setting which this is.
*/
vqec_config_setting_type_t type;
/**
* The name of this particular setting.
*/
char *name;
/**
* The following value fields are stored as a union to conserve memory.
*/
union {
/**
* The string value of this setting.
*/
char *value_string;
/**
* The boolean value of this setting.
*/
boolean value_boolean;
/**
* The integer (signed) value of this setting.
*/
int value_int;
};
/**
* Queue object for setting list.
*/
VQE_TAILQ_ENTRY(vqec_config_setting_) list_qobj;
/**
* Head for setting sublist.
*/
VQE_TAILQ_HEAD(setting_sublist_head, vqec_config_setting_) subsetting_head;
} vqec_config_setting_t;
#define VQEC_CONFIG_ERROR_STRLEN 80
/**
* Structure which contains all data within a particular configuration.
*/
typedef struct vqec_config_ {
/**
* Head for setting list.
*/
vqec_config_setting_t root;
/**
* Textual information which may help indicate a problem in parsing a
* particular configuration file.
*/
char error_text[VQEC_CONFIG_ERROR_STRLEN];
/**
* Line number at which a problem occurred while parsing a particular
* configuration file.
*/
int error_line;
} vqec_config_t;
/**
* Initialize the configuration parser.
*
* @param[in] cfg Instance of configuration parser.
* @return Returns TRUE if the parser was initialized successfully; FALSE
* otherwise.
*/
boolean vqec_config_init(vqec_config_t *cfg);
/**
* Read a configuration file and parse its parameters and values.
*
* @param[in] cfg Instance of configuration parser.
* @param[in] filepath Path to the file to be read and parsed.
* @return Returns TRUE if the file was read and parsed successfully; FALSE
* otherwise. If FALSE is returned, the "error_text" and
* "error_line" fields of cfg may contain information helpful in
* determining what the problem was.
*/
boolean vqec_config_read_file(vqec_config_t *cfg,
const char *filepath);
/**
* Read a buffer containing configuration information and parse its parameters
* and values.
*
* @param[in] cfg Instance of configuration parser.
* @param[in] buffer Pointer to the buffer to be parsed.
* @return Returns TRUE if the buffer was read and parsed successfully; FALSE
* otherwise. If FALSE is returned, the "error_text" and
* "error_line" fields of cfg may contain information helpful in
* determining what the problem was.
*/
boolean vqec_config_read_buffer(vqec_config_t *cfg,
const char *buffer);
/**
* Look up a configuration setting by its parameter name.
*
* @param[in] cfg Instance of configuration parser.
* @param[in] name Name of the parameter which should be looked up.
* @return If a setting is found that matches the given parameter name, then
* a pointer to that setting is returned. Otherwise, NULL is
* returned.
*/
vqec_config_setting_t *vqec_config_lookup(vqec_config_t *cfg,
char *name);
/**
* Determine the type of a particular setting.
*
* @param[in] setting Setting to have its type determined.
* @return Returns the type of the setting.
*/
vqec_config_setting_type_t
vqec_config_setting_type(vqec_config_setting_t *setting);
/**
* Determine the length (number of elements) of a particular group or list
* format configuration setting. If the given setting is not a group or list
* type, then 0 will be returned.
*
* @param[in] setting Setting to have its length determined.
* @return Returns the number of elements in the group or list.
*/
int vqec_config_setting_length(vqec_config_setting_t *setting);
/**
* Retrieve an element of a list by its index. If the given setting is not a
* list type, then NULL will be returned.
*
* @param[in] setting List from which the element shall be retrieved.
* @param[in] index Index of the requested element within the list.
* @return Returns a pointer to the requested element if it exists. If the
* requested element does not exist, NULL is returned.
*/
vqec_config_setting_t *
vqec_config_setting_get_elem(vqec_config_setting_t *setting,
int index);
/**
* Retrieve a member of a group by its name. If the given setting is not a
* group type, then NULL will be returned.
*
* @param[in] setting Group from which the member shall be retrieved.
* @param[in] name Name of the requested member within the group.
* @return Returns a pointer to the member with the given name if it exists.
* If no member with the given name exists, NULL is returned.
*/
vqec_config_setting_t *
vqec_config_setting_get_member(vqec_config_setting_t *setting,
char *name);
/**
* Retrieve the value of a string setting. If the given setting is not a
* string type, then NULL will be returned.
*
* @param[in] setting Setting to have its value retrieved.
* @return Returns a pointer to the string value of the setting.
*/
char *vqec_config_setting_get_string(vqec_config_setting_t *setting);
/**
* Retrieve the value of a boolean setting. If the given setting is not a
* boolean type, then FALSE will be returned.
*
* @param[in] setting Setting to have its value retrieved.
* @return Returns TRUE or FALSE in accordance with the value of the setting.
*/
boolean vqec_config_setting_get_bool(vqec_config_setting_t *setting);
/**
* Retrieve the value of a signed integer setting. If the given setting is not
* an integer type, then 0 will be returned.
*
* @param[in] setting Setting to have its value retrieved.
* @return Returns the signed integer value of the setting.
*/
int vqec_config_setting_get_int(vqec_config_setting_t *setting);
/**
* Destroy all information stored in a configuration parser instance.
*
* @param[in] cfg Instance of configuration parser.
* @return Returns TRUE if the parser was destroyed successfully; FALSE
* otherwise.
*/
boolean vqec_config_destroy(vqec_config_t *cfg);
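/*
 * A minimal usage sketch (not part of this header; error handling trimmed,
 * and both the file path and the "buffer_size" parameter name below are
 * hypothetical), based on the declarations above:
 *
 *   vqec_config_t cfg;
 *   if (vqec_config_init(&cfg) &&
 *       vqec_config_read_file(&cfg, "/etc/vqec.cfg")) {
 *       vqec_config_setting_t *s = vqec_config_lookup(&cfg, "buffer_size");
 *       if (s && vqec_config_setting_type(s) == VQEC_CONFIG_SETTING_TYPE_INT) {
 *           int buffer_size = vqec_config_setting_get_int(s);
 *       }
 *   } else {
 *       printf("parse error (line %d): %s\n", cfg.error_line, cfg.error_text);
 *   }
 *   vqec_config_destroy(&cfg);
 */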
|
# CMSSiteInformation.py
# Class definition:
# CMSSiteInformation
# This class is the prototype of a site information class inheriting from SiteInformation
# Instances are generated with SiteInformationFactory via pUtil::getSiteInformation()
# Implemented as a singleton class
# http://stackoverflow.com/questions/42558/python-and-the-singleton-pattern
# import relevant python/pilot modules
import os
import re
import pickle
import commands
import shlex
import getopt
from SiteInformation import SiteInformation # Main site information class
from pUtil import tolog # Logging method that sends text to the pilot log
from pUtil import readpar # Used to read values from the schedconfig DB (queuedata)
from PilotErrors import PilotErrors # Error codes
from optparse import (OptionParser,BadOptionError)
class PassThroughOptionParser(OptionParser):
"""
An unknown option pass-through implementation of OptionParser.
When unknown arguments are encountered, bundle with largs and try again,
until rargs is depleted.
sys.exit(status) will still be called if a known argument is passed
incorrectly (e.g. missing arguments or bad argument types, etc.)
"""
def _process_args(self, largs, rargs, values):
while rargs:
try:
OptionParser._process_args(self,largs,rargs,values)
except (BadOptionError, Exception), e:
#largs.append(e.opt_str)
continue
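# A minimal usage sketch of the pass-through behavior (hypothetical option
# values), mirroring how extractJobPar below drives this parser: unknown
# options such as --unknownFlag are skipped instead of aborting the parse.
#
#   parser = PassThroughOptionParser()
#   parser.add_option('--jobNumber', dest='par', type='string')
#   (options, args) = parser.parse_args(shlex.split('--unknownFlag x --jobNumber 42'))
#   # options.par == '42'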
class CMSSiteInformation(SiteInformation):
# private data members
__experiment = "CMS"
__instance = None
# Required methods
def __init__(self):
""" Default initialization """
# not needed?
pass
def __new__(cls, *args, **kwargs):
""" Override the __new__ method to make the class a singleton """
if not cls.__instance:
cls.__instance = super(CMSSiteInformation, cls).__new__(cls, *args, **kwargs)
return cls.__instance
def getExperiment(self):
""" Return a string with the experiment name """
return self.__experiment
def isTier1(self, sitename):
""" Is the given site a Tier-1? """
return False
def isTier2(self, sitename):
""" Is the given site a Tier-2? """
return False
def isTier3(self):
""" Is the given site a Tier-3? """
return False
def allowAlternativeStageOut(self):
""" Is alternative stage-out allowed? """
# E.g. if stage-out to primary SE (at Tier-2) fails repeatedly, is it allowed to attempt stage-out to secondary SE (at Tier-1)?
return False
def belongsTo(self, value, rangeStart, rangeEnd):
if value >= rangeStart and value <= rangeEnd:
return True, rangeEnd
rangeStart=(rangeStart+1000)
return False, rangeStart
def findRange(self, job, filename):
jobNumber = int(filename.split('_')[-2]) #int(self.extractJobPar(job, '--jobNumber'))
filesPerJob = len(job.outFiles)
        nnnn_range = ((filesPerJob) * (jobNumber + 1) / 1000 + 1) * 1000
        if filename.split('.')[-1] != 'root' and filename.split('.')[-2] != 'root':
            nnnn_range = '%s/log' % str(nnnn_range)
        return str(nnnn_range)
def extractJobPar(self, job, par, ptype="string"):
strpars = job.jobPars
cmdopt = shlex.split(strpars)
parser = PassThroughOptionParser()
parser.add_option(par,\
dest='par',\
type=ptype)
(options,args) = parser.parse_args(cmdopt)
return options.par
def getProperPaths(self, error, analyJob, token, prodSourceLabel, dsname, filename, sitename, JobData, alt=False):
""" Called by LocalSiteMover, from put_data method, instead of using SiteMover.getProperPaths
needed only if LocalSiteMover is used instead of specific Mover
serve solo se utilizziamo LocalSiteMover al posto dello specifico Mover
full lfn format:
/store/user/<yourHyperNewsusername>/<primarydataset>/<publish_data_name>/<PSETHASH>/<NNNN>/<output_file_name>
"""
tolog("prodSourceLabel = %s " % prodSourceLabel)
tolog("dsname = %s " % dsname)
tolog("filename = %s " % filename)
tolog("sitename = %s " % sitename)
tolog(" analyJob = %s " % analyJob)
tolog("token = %s " % token)
job = None
remoteSE = ''
try:
import Job
pkl_file = open(JobData, 'rb')
newJobDef = pickle.load(pkl_file)
job = newJobDef['job']
#for attr in dir(newJobDef['job']):
# print "obj.%s = %s" % (attr, getattr(newJobDef['job'], attr))
remoteSE = job.fileDestinationSE
            if remoteSE.find(',') != -1:
                remoteSE = remoteSE.split(',')[0]
tolog("############# getProperPaths - remoteSE: %s - filename: %s" % (remoteSE, filename))
except Exception, e:
tolog("############# getProperPaths except!! %s !! remoteSE: %s - filename: %s" % (e, remoteSE, filename))
pass
ec = 0
pilotErrorDiag = ""
tracer_error = ""
dst_gpfn = ""
lfcdir = ""
full_lfn = ""
if "runGen" in newJobDef['job'].trf:
""" case runGen
dsname example: vmancine/GenericTTbar/AsoTest_130403_094107-v1/USER """
primarydataset = dsname.split('/')[1]
publishdataset = dsname.split('/')[2].split('-v1')[0].split('_')[0]
hnusername = dsname.split('/')[0]
psethash = 'PSETHASH'
rndcmd = 'den=(0 1 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z); nd=${#den[*]}; randj=${den[$RANDOM % $nd]}${den[$RANDOM % $nd]}${den[$RANDOM % $nd]}; echo $randj'
rnd = commands.getoutput(rndcmd)
if filename.split('.')[1] == 'root':
#output file
newfilename = '%s_%s.%s' % (filename.split('.')[0], rnd, filename.split('.')[1])
else:
#log file
newfilename = '%s%s_%s.%s' % (filename.split('.')[1], filename.split('.')[2], rnd, filename.split('.')[3])
else:
""" case CMSRunAnaly
dsname example: /RelValProdTTbar/mcinquil-test000001-psethash/USER"""
hnusername = dsname.split('/')[2].split('-')[0]
primarydataset = dsname.split('/')[1]
publishdataset = '-'.join(dsname.split('/')[2].split('-')[1:-1])
#try:
# psethash = dsname.split('/')[2].split('-')[2]
#except Exception, e:
# psethash = 'psethash'
psethash = dsname.split('/')[2].split('-')[-1]
newfilename = filename
# Calculate value of NNNN folder
if job:
try:
nnnn = self.findRange(job, newfilename)
except Exception, e:
tolog('error = %s' % e)
nnnn = 'NNNN'
else:
nnnn = 'NNNN'
full_lfn_suffix = '%s/%s/%s/%s/%s/%s' % (hnusername, primarydataset, publishdataset, psethash, nnnn, newfilename)
#here we should have a check on the async destination and the local site.
# if they are the same we should use full_lfn_prefix = '/store/user' otherwise
# we should have full_lfn_prefix = '/store/temp/user/'
full_lfn_prefix = '/store/temp/user/'
#if remoteSE and sitename:
# sitename = sitename.split('ANALY_')[1]
# if remoteSE == sitename:
# full_lfn_prefix = '/store/user/'
full_lfn = '%s%s'%( full_lfn_prefix, full_lfn_suffix )
tolog("full_lfn = %s" % full_lfn )
tolog("dst_gpfn = %s" % (None))
tolog("lfcdir = %s" % (None))
return ec, pilotErrorDiag, tracer_error, dst_gpfn, lfcdir, full_lfn
def extractAppdir(self, appdir, processingType, homePackage):
""" Called by pilot.py, runMain method """
tolog("CMSExperiment - extractAppdir - nothing to do")
return 0, ""
if __name__ == "__main__":
a = CMSSiteInformation()
tolog("Experiment: %s" % (a.getExperiment()))
|
#![feature(asm)]
#![no_std]
pub mod intrin;
pub mod time;
|
def split_csv(data_frame, train_portion=0.4, validation_portion=0.2):
    # Shuffle the row positions reproducibly, then split them into
    # train/validation/test partitions by the requested portions.
    # (Random, RANDOM_SEED, save_data_to_csv and the *_CSV_PATH constants
    # are assumed to be defined/imported elsewhere in this module.)
    rows = data_frame.index.values
    Random(RANDOM_SEED).shuffle(rows)
    n_rows = len(rows)
    n_train_data = int(n_rows * train_portion)
    n_validation_data = int(n_rows * validation_portion)
    train_data = data_frame.iloc[rows[:n_train_data], :]
    validation_data = data_frame.iloc[rows[n_train_data:n_train_data + n_validation_data], :]
    test_data = data_frame.iloc[rows[n_train_data + n_validation_data:], :]
    save_data_to_csv(train_data, TRAIN_DATASET_CSV_PATH)
    save_data_to_csv(validation_data, VALIDATION_DATASET_CSV_PATH)
    save_data_to_csv(test_data, TEST_DATASET_CSV_PATH)
|
Mark Burnett, the reality show impresario, has faced mounting demands in recent days to release old video from his series “The Apprentice,” on speculation that Donald J. Trump was captured on camera making vulgar remarks during his 11 years as the show’s host.
On Monday, Mr. Burnett broke his silence and issued a statement with a basic message: Don’t look at me.
“Despite reports to the contrary, Mark Burnett does not have the ability nor the right to release footage or other material from ‘The Apprentice,’” read a statement issued by Mr. Burnett’s public relations team and attributed to Mr. Burnett and Metro-Goldwyn-Mayer, the entertainment conglomerate that acquired a majority of his production company in 2014.
The statement did not elaborate on the reasons Mr. Burnett could not distribute the video, beyond a general description of “various contractual and legal requirements” that “restrict M.G.M.’s ability to release such material.” |
// heinrichreimer/thuringian-field-names
import { FunctionComponent, useEffect } from "react";
import { Container, Row, Col, Alert } from "react-bootstrap";
import { useHistory, useParams } from "react-router-dom";
import { useSelector } from "react-redux";
import {
selectSearchResults,
selectSearchIsLoading,
searchFieldNames,
selectSearchError,
useAppDispatch,
emptySearch,
} from "../../store";
import { SearchForm, ApiErrorAlert, SearchSnippets, LoadingAlert } from "..";
interface Parameters {
query: string;
}
/**
* Page component describing the field name search.
*
* Mounted at `/search/:query` where query is the string to search for.
*/
export const SearchPage: FunctionComponent = () => {
const params = useParams<Parameters>();
const history = useHistory();
const dispatch = useAppDispatch();
const results = useSelector(selectSearchResults);
const loading = useSelector(selectSearchIsLoading);
const error = useSelector(selectSearchError);
// Search field names whenever the query changes.
useEffect(() => {
if (params.query) {
dispatch(searchFieldNames(params.query));
} else {
dispatch(emptySearch());
}
}, [dispatch, params.query]);
function search(query: string) {
history.push(`/search/${query}`);
}
return (
<Container>
<Row>
<Col>
<SearchForm search={search} query={params.query} />
</Col>
</Row>
<Row style={{ marginTop: "1ex" }}>
<Col>
{results.length > 0 ? (
<p>
Found {results.length} results for <b>{params.query}</b>.
</p>
) : params.query ? (
<p>
No results found for <b>{params.query}</b>.
</p>
) : undefined}
</Col>
</Row>
<hr />
{!params.query ? (
<Row>
<Col>
<Alert variant="info">Type in your search query above.</Alert>
</Col>
</Row>
) : undefined}
{results.length > 0 ? (
<Row>
<Col>
{loading ? (
<Row>
<Col>
<LoadingAlert />
</Col>
</Row>
) : error ? (
<Row>
<Col>
<ApiErrorAlert error={error} />
</Col>
</Row>
) : (
<SearchSnippets snippets={results} />
)}
</Col>
</Row>
) : undefined}
</Container>
);
};
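// A hypothetical mounting of this page (react-router-dom v5 style, matching
// the useHistory/useParams imports above); the actual router setup is assumed
// to live elsewhere in the app:
//
//   <Route path="/search/:query?" component={SearchPage} />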
|
/*
 * An inner class that extends the abstract MenuItem class, overriding the
 * superclass methods to show the list of items held by a Tracker instance.
 * menuItemId() returns the position at which this menu item is called.
 */
class MenuShowAllItem extends MenuItem {
public MenuShowAllItem(String name) {
super(name);
}
public int menuItemId() {
return 3;
}
@Override
public void menuItemInfo(ConsoleInput cIn, Tracker tracker) {
for (int i = 0; i < tracker.getAll().length; i++) {
cIn.outPrintln(i + "." + tracker.getAll()[i].getName()
+ " " + tracker.getAll()[i].getDescription());
}
}
}
|
N = int(input())
MOD = 10 ** 9 + 7
# Inclusion-exclusion: count the length-N digit strings that contain both a 0
# and a 9: (all) - (no 0) - (no 9) + (neither) = 10^N - 2*9^N + 8^N.
ans = pow(10, N, MOD)
ans -= 2 * pow(9, N, MOD)
ans += pow(8, N, MOD)
print(ans % MOD)
|
Simona Halep retired from her quarterfinal matchup against Ekaterina Makarova because of the heat Friday. (Geoff Burke/USA Today Sports)
Top-seeded Simona Halep, the second-ranked women’s tennis player in the world, retired from her Citi Open quarterfinal match Friday afternoon because of illness brought on by the heat.
Halep was trailing No. 7 seed Ekaterina Makarova, 1-0, in the third set when she called the trainer over early and appeared to have her blood pressure taken, after which she immediately retired.
“I had a headache,” Halep said, “and little sick, so it was better to stop.”
[Steinberg: The Citi Open is making me lose my mind]
Halep won the first set, 6-2, before dropping the second, 6-3. She has played midafternoon matches all week and took a medical timeout for heat during her match Thursday, after which she requested to play later in the day Friday. The crowded tournament schedule, however, didn’t allow it.
The temperature was 90 degrees in Washington at the time of Halep’s match Friday.
Makarova, a Russian ranked 58th in the world, advanced to face either Sabine Lisicki or fifth-seeded Oceane Dodin in the semifinals.
“The sun just took so much energy,” Makarova said. “Simona, she’s a great player, No. 2 in the world, so it’s always tough to play against her. I was just trying to stay in the game, do what I can to fight, do what I can do in that moment as fast as I can.”
Zverev storms into semifinals
Fifth-seeded Alexander Zverev made quick work of Daniil Medvedev on Friday, taking less than an hour to beat the Russian 6-2, 6-4, to clinch his second consecutive appearance in the Citi Open semifinals. The 20-year-old advances to play either second-seeded Kei Nishikori, the 2015 champion, or former junior circuit foe Tommy Paul on Saturday.
Zverev, the eighth-ranked player in the world, is the highest-ranked player remaining in the men’s field with Dominic Thiem having been eliminated. He won 83 percent of points off his first serve and broke Medvedev three times in front of a boisterous Friday evening crowd.
“I love the atmosphere,” Zverev said after the match. “I love the people in Washington, it’s a great crowd — pretty young crowd, as well. They’re always very very loud, so it’s always fun to play.”
Bryan brothers advance
Bob and Mike Bryan’s 7-5, 6-4 win against the doubles pair of Rohan Bopanna and Donald Young propelled them into the Citi Open doubles semifinals Friday, where the fourth-seeded brothers will play the second-seeded doubles pair of Lukasz Kubot and Marcelo Melo.
The Bryan brothers are the most successful men’s duo of all time, having won more games, matches, tournaments and Grand Slams — 16 — than any other men’s pairing. They won the gold medal in the 2012 Summer Olympics in London. And they have yet to lose a set in this year’s Citi Open.
Stephens and Bouchard advance
Spectators sat expectantly on the metal benches at Grandstand 1, anxiously waiting for Sloane Stephens to serve for match point.
Some of them had their phones out. A couple had cameras. But all erupted in cheers when Monica Niculescu’s return of serve went out of bounds.
Stephens and Eugenie Bouchard hugged. They smiled. They had just toppled the highest-ranked doubles team at the Citi Open — Niculescu and Sania Mirza — 1-6, 7-5, 10-8, in the women’s doubles semifinals. Now, Bouchard and Stephens are in the finals, where they will face Shuko Aoyama and Renata Voracova.
It was a shocking result for several reasons. Bouchard and Stephens lost the first set by a huge margin. They trailed 4-3 in the second set before storming back. But perhaps the biggest reason is that they have never played together. So they’re still trying to develop their most effective strategy.
“Our strategy is actually no strategy,” Bouchard said. “So if we don’t know what we’re doing, our opponents for sure don’t know what we’re doing.”
Earlier in the day, Bouchard and Stephens won their quarterfinal match when Nigina Abduraimova and Patricia Maria Tig retired down a set and trailing 1-0 in the second because of an injury suffered by Abduraimova.
More from the Citi Open:
Tennys Sandgren enjoys a first pro tennis breakthrough
There’s an art to choosing practice partners at a tennis tournament
For Steve Johnson, tennis is no escape from grief
|
// IsDirectory returns true iff we have no error and the path points to a directory.
func (g GoPath) IsDirectory() bool {
	if info := g.FileInfo(); info != nil {
		return info.IsDir()
	}
	return false
}
// buildSrc/src/main/java/com/debughelper/tools/r8/shaking/VerticalClassMergerGraphLense.java
// Copyright (c) 2018, the R8 project authors. Please see the AUTHORS file
// for details. All rights reserved. Use of this source code is governed by a
// BSD-style license that can be found in the LICENSE file.
package com.debughelper.tools.r8.shaking;
import com.debughelper.tools.r8.graph.DexEncodedMethod;
import com.debughelper.tools.r8.graph.DexField;
import com.debughelper.tools.r8.graph.DexMethod;
import com.debughelper.tools.r8.graph.DexType;
import com.debughelper.tools.r8.graph.GraphLense;
import com.debughelper.tools.r8.ir.code.Invoke.Type;
import com.debughelper.tools.r8.ir.code.Invoke;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
// This graph lense is instantiated during vertical class merging. The graph lense is context
// sensitive in the enclosing class of a given invoke *and* the type of the invoke (e.g., invoke-
// super vs invoke-virtual). This is illustrated by the following example.
//
// public class A {
// public void m() { ... }
// }
// public class B extends A {
// @Override
// public void m() { invoke-super A.m(); ... }
//
// public void m2() { invoke-virtual A.m(); ... }
// }
//
// Vertical class merging will merge class A into class B. Since class B already has a method with
// the signature "void B.m()", the method A.m will be given a fresh name and moved to class B.
// During this process, the method corresponding to A.m will be made private such that it can be
// called via an invoke-direct instruction.
//
// For the invocation "invoke-super A.m()" in B.m, this graph lense will return the newly created,
// private method corresponding to A.m (that is now in B.m with a fresh name), such that the
// invocation will hit the same implementation as the original super.m() call.
//
// For the invocation "invoke-virtual A.m()" in B.m2, this graph lense will return the method B.m.
public class VerticalClassMergerGraphLense extends GraphLense {
private final GraphLense previousLense;
private final Map<DexField, DexField> fieldMap;
private final Map<DexMethod, DexMethod> methodMap;
private final Map<DexType, Map<DexMethod, DexMethod>> contextualVirtualToDirectMethodMaps;
public VerticalClassMergerGraphLense(
Map<DexField, DexField> fieldMap,
Map<DexMethod, DexMethod> methodMap,
Map<DexType, Map<DexMethod, DexMethod>> contextualVirtualToDirectMethodMaps,
GraphLense previousLense) {
this.previousLense = previousLense;
this.fieldMap = fieldMap;
this.methodMap = methodMap;
this.contextualVirtualToDirectMethodMaps = contextualVirtualToDirectMethodMaps;
}
@Override
public DexType lookupType(DexType type) {
return previousLense.lookupType(type);
}
@Override
public DexMethod lookupMethod(DexMethod method, DexEncodedMethod context, Invoke.Type type) {
assert isContextFreeForMethod(method) || (context != null && type != null);
DexMethod previous = previousLense.lookupMethod(method, context, type);
if (type == Invoke.Type.SUPER) {
Map<DexMethod, DexMethod> virtualToDirectMethodMap =
contextualVirtualToDirectMethodMaps.get(context.method.holder);
if (virtualToDirectMethodMap != null) {
DexMethod directMethod = virtualToDirectMethodMap.get(previous);
if (directMethod != null) {
return directMethod;
}
}
}
return methodMap.getOrDefault(previous, previous);
}
@Override
public Set<DexMethod> lookupMethodInAllContexts(DexMethod method) {
ImmutableSet.Builder<DexMethod> builder = ImmutableSet.builder();
for (DexMethod previous : previousLense.lookupMethodInAllContexts(method)) {
builder.add(methodMap.getOrDefault(previous, previous));
for (Map<DexMethod, DexMethod> virtualToDirectMethodMap :
contextualVirtualToDirectMethodMaps.values()) {
DexMethod directMethod = virtualToDirectMethodMap.get(previous);
if (directMethod != null) {
builder.add(directMethod);
}
}
}
return builder.build();
}
@Override
public DexField lookupField(DexField field) {
DexField previous = previousLense.lookupField(field);
return fieldMap.getOrDefault(previous, previous);
}
@Override
public boolean isContextFreeForMethods() {
return contextualVirtualToDirectMethodMaps.isEmpty() && previousLense.isContextFreeForMethods();
}
@Override
public boolean isContextFreeForMethod(DexMethod method) {
if (!previousLense.isContextFreeForMethod(method)) {
return false;
}
DexMethod previous = previousLense.lookupMethod(method);
for (Map<DexMethod, DexMethod> virtualToDirectMethodMap :
contextualVirtualToDirectMethodMaps.values()) {
if (virtualToDirectMethodMap.containsKey(previous)) {
return false;
}
}
return true;
}
public static class Builder {
private final ImmutableMap.Builder<DexField, DexField> fieldMapBuilder = ImmutableMap.builder();
private final ImmutableMap.Builder<DexMethod, DexMethod> methodMapBuilder =
ImmutableMap.builder();
private final Map<DexType, Map<DexMethod, DexMethod>> contextualVirtualToDirectMethodMaps =
new HashMap<>();
public Builder() {}
public GraphLense build(GraphLense previousLense) {
Map<DexField, DexField> fieldMap = fieldMapBuilder.build();
Map<DexMethod, DexMethod> methodMap = methodMapBuilder.build();
if (fieldMap.isEmpty()
&& methodMap.isEmpty()
&& contextualVirtualToDirectMethodMaps.isEmpty()) {
return previousLense;
}
return new VerticalClassMergerGraphLense(
fieldMap, methodMap, contextualVirtualToDirectMethodMaps, previousLense);
}
public void map(DexField from, DexField to) {
fieldMapBuilder.put(from, to);
}
public void map(DexMethod from, DexMethod to) {
methodMapBuilder.put(from, to);
}
public void mapVirtualMethodToDirectInType(DexMethod from, DexMethod to, DexType type) {
Map<DexMethod, DexMethod> virtualToDirectMethodMap =
contextualVirtualToDirectMethodMaps.computeIfAbsent(type, key -> new HashMap<>());
virtualToDirectMethodMap.put(from, to);
}
public void merge(VerticalClassMergerGraphLense.Builder builder) {
fieldMapBuilder.putAll(builder.fieldMapBuilder.build());
methodMapBuilder.putAll(builder.methodMapBuilder.build());
for (DexType context : builder.contextualVirtualToDirectMethodMaps.keySet()) {
Map<DexMethod, DexMethod> current = contextualVirtualToDirectMethodMaps.get(context);
Map<DexMethod, DexMethod> other = builder.contextualVirtualToDirectMethodMaps.get(context);
if (current != null) {
current.putAll(other);
} else {
contextualVirtualToDirectMethodMaps.put(context, other);
}
}
}
}
}
|
/**
* @todo Many of these tests are in need of attention for a few reasons:
*
 * - They won't execute on phoenix.dataverse.org because some tests assume
* Solr on localhost.
*
* - Each test should create its own user (or users) rather than relying on
* global users. Once this is done the "Ignore" annotations can be removed.
*
* - We've seen "PSQLException: ERROR: deadlock detected" when running these
* tests per https://github.com/IQSS/dataverse/issues/2460 .
*
* - Other tests have moved to using UtilIT.java methods and these tests should
* follow suit.
*/
public class SearchIT {
private static final Logger logger = Logger.getLogger(SearchIT.class.getCanonicalName());
private static final String builtinUserKey = "burrito";
private static final String keyString = "X-Dataverse-key";
private static final String EMPTY_STRING = "";
private static final String idKey = "id";
private static final String apiTokenKey = "apiToken";
private static final String usernameKey = "userName";
private static final String emailKey = "email";
private static TestUser homer;
private static TestUser ned;
private static TestUser clancy;
private static final String dvForPermsTesting = "dvForPermsTesting";
private static String dataset1;
private static String dataset2;
private static String dataset3;
private static Integer dataset2Id;
private static Integer dataset3Id;
private static long nedAdminOnRootAssignment;
private static String dataverseToCreateDataset1In = "root";
/**
* @todo Figure out why we sometimes get database deadlocks when all tests
* are enabled: https://github.com/IQSS/dataverse/issues/2460
*/
private static final boolean disableTestPermsonRootDv = false;
private static final boolean disableTestPermsOnNewDv = false;
private static final boolean homerPublishesVersion2AfterDeletingFile = false;
private Stopwatch timer;
private boolean haveToUseCurlForUpload = false;
public SearchIT() {
}
@BeforeClass
public static void setUpClass() {
RestAssured.baseURI = UtilIT.getRestAssuredBaseUri();
Response setSearchApiNonPublicAllowed = UtilIT.setSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed, "true");
setSearchApiNonPublicAllowed.prettyPrint();
setSearchApiNonPublicAllowed.then().assertThat()
// .body(":SearchApiNonPublicAllowed", CoreMatchers.equalTo("true")) // Invalid JSON expression?
.statusCode(200);
Response getSearchApiNonPublicAllowed = UtilIT.getSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
getSearchApiNonPublicAllowed.prettyPrint();
getSearchApiNonPublicAllowed.then().assertThat()
.body("data.message", CoreMatchers.equalTo("true"))
.statusCode(200);
Response remove = UtilIT.deleteSetting(SettingsServiceBean.Key.ThumbnailSizeLimitImage);
remove.then().assertThat()
.statusCode(200);
boolean enabled = false;
if (!enabled) {
return;
}
logger.info("Running setup...");
JsonObject homerJsonObject = createUser(getUserAsJsonString("homer", "Homer", "Simpson"));
homer = new TestUser(homerJsonObject);
int homerIdFromDatabase = getUserIdFromDatabase(homer.getUsername());
if (homerIdFromDatabase != homer.getId()) {
// should never reach here: https://github.com/IQSS/dataverse/issues/2418
homer.setId(homerIdFromDatabase);
}
Response makeSuperUserResponse = makeSuperuser(homer.getUsername());
assertEquals(200, makeSuperUserResponse.getStatusCode());
JsonObject nedJsonObject = createUser(getUserAsJsonString("ned", "Ned", "Flanders"));
ned = new TestUser(nedJsonObject);
int nedIdFromDatabase = getUserIdFromDatabase(ned.getUsername());
if (nedIdFromDatabase != ned.getId()) {
// should never reach here: https://github.com/IQSS/dataverse/issues/2418
ned.setId(nedIdFromDatabase);
}
JsonObject clancyJsonObject = createUser(getUserAsJsonString("clancy", "Clancy", "Wiggum"));
clancy = new TestUser(clancyJsonObject);
int clancyIdFromDatabase = getUserIdFromDatabase(clancy.getUsername());
if (clancyIdFromDatabase != clancy.getId()) {
// should never reach here: https://github.com/IQSS/dataverse/issues/2418
clancy.setId(clancyIdFromDatabase);
}
}
@Ignore
@Test
public void testSearchCitation() {
Response createUser = UtilIT.createRandomUser();
createUser.prettyPrint();
String username = UtilIT.getUsernameFromResponse(createUser);
String apiToken = UtilIT.getApiTokenFromResponse(createUser);
Response createDataverseResponse = UtilIT.createRandomDataverse(apiToken);
createDataverseResponse.prettyPrint();
String dataverseAlias = UtilIT.getAliasFromResponse(createDataverseResponse);
Response createDatasetResponse = UtilIT.createRandomDatasetViaNativeApi(dataverseAlias, apiToken);
createDatasetResponse.prettyPrint();
Integer datasetId = UtilIT.getDatasetIdFromResponse(createDatasetResponse);
Response solrResponse = querySolr("id:dataset_" + datasetId + "_draft");
solrResponse.prettyPrint();
Response enableNonPublicSearch = enableSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, enableNonPublicSearch.getStatusCode());
Response searchResponse = search("id:dataset_" + datasetId + "_draft", apiToken);
searchResponse.prettyPrint();
assertFalse(searchResponse.body().jsonPath().getString("data.items[0].citation").contains("href"));
assertTrue(searchResponse.body().jsonPath().getString("data.items[0].citationHtml").contains("href"));
Response deleteDatasetResponse = UtilIT.deleteDatasetViaNativeApi(datasetId, apiToken);
deleteDatasetResponse.prettyPrint();
assertEquals(200, deleteDatasetResponse.getStatusCode());
Response deleteDataverseResponse = UtilIT.deleteDataverse(dataverseAlias, apiToken);
deleteDataverseResponse.prettyPrint();
assertEquals(200, deleteDataverseResponse.getStatusCode());
makeSuperuser(username);
search("finch&show_relevance=true&show_facets=true&fq=publicationDate:2016&subtree=birds", apiToken).prettyPrint();
search("trees", apiToken).prettyPrint();
Response deleteUserResponse = UtilIT.deleteUser(username);
deleteUserResponse.prettyPrint();
assertEquals(200, deleteUserResponse.getStatusCode());
}
@Ignore
@Test
public void homerGivesNedPermissionAtRoot() {
if (disableTestPermsonRootDv) {
return;
}
Response enableNonPublicSearch = enableSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, enableNonPublicSearch.getStatusCode());
long rootDataverseId = 1;
String rootDataverseAlias = getDataverseAlias(rootDataverseId, homer.getApiToken());
if (rootDataverseAlias != null) {
dataverseToCreateDataset1In = rootDataverseAlias;
}
String xmlIn = getDatasetXml(homer.getUsername(), homer.getUsername(), homer.getUsername());
Response createDataset1Response = createDataset(xmlIn, dataverseToCreateDataset1In, homer.getApiToken());
// System.out.println(createDataset1Response.prettyPrint());
assertEquals(201, createDataset1Response.getStatusCode());
dataset1 = getGlobalId(createDataset1Response);
// String zipFileName = "1000files.zip";
String zipFileName = "trees.zip";
if (haveToUseCurlForUpload) {
Process uploadZipFileProcess = uploadZipFileWithCurl(dataset1, zipFileName, homer.getApiToken());
// printCommandOutput(uploadZipFileProcess);
} else {
try {
Response uploadZipFileResponse = uploadZipFile(dataset1, zipFileName, homer.getApiToken());
} catch (FileNotFoundException ex) {
System.out.println("Problem uploading " + zipFileName + ": " + ex.getMessage());
}
}
Integer idHomerFound = printDatasetId(dataset1, homer);
assertEquals(true, idHomerFound != null);
Integer idNedFoundBeforeBecomingAdmin = printDatasetId(dataset1, ned);
String roleToAssign = "admin";
assertEquals(null, idNedFoundBeforeBecomingAdmin);
timer = Stopwatch.createStarted();
Response grantNedAdminOnRoot = grantRole(dataverseToCreateDataset1In, roleToAssign, ned.getUsername(), homer.getApiToken());
// System.out.println(grantNedAdminOnRoot.prettyPrint());
System.out.println("Method took: " + timer.stop());
assertEquals(200, grantNedAdminOnRoot.getStatusCode());
Integer idNedFoundAfterBecomingAdmin = printDatasetId(dataset1, ned);
// Response contentDocResponse = querySolr("entityId:" + idHomerFound);
// System.out.println(contentDocResponse.prettyPrint());
// Response permDocResponse = querySolr("definitionPointDvObjectId:" + idHomerFound);
// System.out.println(idHomerFound + " was found by homer (user id " + homer.getId() + ")");
// System.out.println(idNedFoundAfterBecomingAdmin + " was found by ned (user id " + ned.getId() + ")");
assertEquals(idHomerFound, idNedFoundAfterBecomingAdmin);
nedAdminOnRootAssignment = getRoleAssignmentId(grantNedAdminOnRoot);
timer = Stopwatch.createStarted();
Response revokeNedAdminOnRoot = revokeRole(dataverseToCreateDataset1In, nedAdminOnRootAssignment, homer.getApiToken());
// System.out.println(revokeNedAdminOnRoot.prettyPrint());
System.out.println("Method took: " + timer.stop());
assertEquals(200, revokeNedAdminOnRoot.getStatusCode());
Integer idNedFoundAfterNoLongerAdmin = printDatasetId(dataset1, ned);
assertEquals(null, idNedFoundAfterNoLongerAdmin);
Response disableNonPublicSearch = deleteSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, disableNonPublicSearch.getStatusCode());
}
@Ignore
@Test
public void homerGivesNedPermissionAtNewDv() {
if (disableTestPermsOnNewDv) {
return;
}
Response enableNonPublicSearch = enableSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, enableNonPublicSearch.getStatusCode());
TestDataverse dataverseToCreate = new TestDataverse(dvForPermsTesting, dvForPermsTesting, Dataverse.DataverseType.ORGANIZATIONS_INSTITUTIONS);
Response createDvResponse = createDataverse(dataverseToCreate, homer);
assertEquals(201, createDvResponse.getStatusCode());
String xmlIn = getDatasetXml(homer.getUsername(), homer.getUsername(), homer.getUsername());
Response createDataset1Response = createDataset(xmlIn, dvForPermsTesting, homer.getApiToken());
assertEquals(201, createDataset1Response.getStatusCode());
dataset2 = getGlobalId(createDataset1Response);
Integer datasetIdHomerFound = printDatasetId(dataset2, homer);
assertEquals(true, datasetIdHomerFound != null);
dataset2Id = datasetIdHomerFound;
Map<String, String> datasetTimestampsAfterCreate = checkPermissionsOnDvObject(datasetIdHomerFound, homer.apiToken).jsonPath().getMap("data.timestamps", String.class, String.class);
assertEquals(true, datasetTimestampsAfterCreate.get(Index.contentChanged) != null);
assertEquals(true, datasetTimestampsAfterCreate.get(Index.contentIndexed) != null);
assertEquals(true, datasetTimestampsAfterCreate.get(Index.permsChanged) != null);
assertEquals(true, datasetTimestampsAfterCreate.get(Index.permsIndexed) != null);
// String zipFileName = "noSuchFile.zip";
String zipFileName = "trees.zip";
// String zipFileName = "100files.zip";
// String zipFileName = "1000files.zip";
timer = Stopwatch.createStarted();
if (haveToUseCurlForUpload) {
Process uploadZipFileProcess = uploadZipFileWithCurl(dataset2, zipFileName, homer.getApiToken());
// printCommandOutput(uploadZipFileProcess);
} else {
Response uploadZipFileResponse;
try {
uploadZipFileResponse = uploadZipFile(dataset2, zipFileName, homer.getApiToken());
} catch (FileNotFoundException ex) {
System.out.println("Problem uploading " + zipFileName + ": " + ex.getMessage());
}
}
System.out.println("Uploading zip file took " + timer.stop());
List<Integer> idsOfFilesUploaded = getIdsOfFilesUploaded(dataset2, datasetIdHomerFound, homer.getApiToken());
int numFilesFound = idsOfFilesUploaded.size();
System.out.println("num files found: " + numFilesFound);
Integer idNedFoundBeforeRoleGranted = printDatasetId(dataset2, ned);
assertEquals(null, idNedFoundBeforeRoleGranted);
String roleToAssign = "admin";
timer = Stopwatch.createStarted();
Response grantNedAdmin = grantRole(dvForPermsTesting, roleToAssign, ned.getUsername(), homer.getApiToken());
// System.out.println(grantNedAdmin.prettyPrint());
System.out.println("granting role took " + timer.stop());
assertEquals(200, grantNedAdmin.getStatusCode());
Integer idNedFoundAfterRoleGranted = printDatasetId(dataset2, ned);
assertEquals(datasetIdHomerFound, idNedFoundAfterRoleGranted);
clearIndexTimesOnDvObject(datasetIdHomerFound);
reindexDataset(datasetIdHomerFound);
Map<String, String> datasetTimestampsAfterReindex = checkPermissionsOnDvObject(datasetIdHomerFound, homer.apiToken).jsonPath().getMap("data.timestamps", String.class, String.class);
assertEquals(true, datasetTimestampsAfterReindex.get(Index.contentChanged) != null);
assertEquals(true, datasetTimestampsAfterReindex.get(Index.contentIndexed) != null);
assertEquals(true, datasetTimestampsAfterReindex.get(Index.permsChanged) != null);
assertEquals(true, datasetTimestampsAfterReindex.get(Index.permsIndexed) != null);
if (!idsOfFilesUploaded.isEmpty()) {
Random random = new Random();
int randomFileIndex = random.nextInt(numFilesFound);
System.out.println("picking random file with index of " + randomFileIndex + " from list of " + numFilesFound);
int randomFileId = idsOfFilesUploaded.get(randomFileIndex);
Set<String> expectedSet = new HashSet<>();
expectedSet.add(IndexServiceBean.getGroupPerUserPrefix() + homer.getId());
expectedSet.add(IndexServiceBean.getGroupPerUserPrefix() + ned.getId());
Response checkPermsReponse = checkPermissionsOnDvObject(randomFileId, homer.getApiToken());
// checkPermsReponse.prettyPrint();
// [0] because there's only one "permissions" Solr doc (a draft)
List<String> permListFromDebugEndpoint = JsonPath.from(checkPermsReponse.getBody().asString()).get("data.perms[0]." + SearchFields.DISCOVERABLE_BY);
Set<String> setFoundFromPermsDebug = new TreeSet<>();
for (String perm : permListFromDebugEndpoint) {
setFoundFromPermsDebug.add(perm);
}
Map<String, String> timeStamps = JsonPath.from(checkPermsReponse.getBody().asString()).get("data.timestamps");
for (Map.Entry<String, String> entry : timeStamps.entrySet()) {
String key = entry.getKey();
String value = entry.getValue();
System.out.println(key + ":" + value);
}
assertEquals(expectedSet, setFoundFromPermsDebug);
Response solrQueryPerms = querySolr(SearchFields.DEFINITION_POINT_DVOBJECT_ID + ":" + randomFileId);
// solrQueryPerms.prettyPrint();
Set<String> setFoundFromSolr = new TreeSet<>();
List<String> perms = JsonPath.from(solrQueryPerms.getBody().asString()).getList("response.docs[0]." + SearchFields.DISCOVERABLE_BY);
for (String perm : perms) {
setFoundFromSolr.add(perm);
}
// System.out.println(setFoundFromSolr + " found");
assertEquals(expectedSet, setFoundFromSolr);
Response solrQueryContent = querySolr(SearchFields.ENTITY_ID + ":" + randomFileId);
// solrQueryContent.prettyPrint();
}
long rootDataverseId = 1;
String rootDataverseAlias = getDataverseAlias(rootDataverseId, homer.getApiToken());
Response publishRootDataverseResponse = publishDataverseAsCreator(rootDataverseId);
// publishRootDataverseResponse.prettyPrint();
Response publishDataverseResponse = publishDataverse(dvForPermsTesting, homer.apiToken);
// publishDataverseResponse.prettyPrint();
Response publishDatasetResponse = publishDatasetViaNative(datasetIdHomerFound, homer.apiToken);
// publishDatasetResponse.prettyPrint();
Integer idClancyFoundAfterPublished = printDatasetId(dataset2, clancy);
assertEquals(datasetIdHomerFound, idClancyFoundAfterPublished);
if (!idsOfFilesUploaded.isEmpty()) {
Random random = new Random();
int randomFileIndex = random.nextInt(numFilesFound);
System.out.println("picking random file with index of " + randomFileIndex + " from list of " + numFilesFound);
int randomFileId = idsOfFilesUploaded.get(randomFileIndex);
Set<String> expectedSet = new HashSet<>();
expectedSet.add(IndexServiceBean.getPublicGroupString());
Response checkPermsReponse = checkPermissionsOnDvObject(randomFileId, homer.getApiToken());
// checkPermsReponse.prettyPrint();
// [0] because there's only one "permissions" Solr doc (a published file)
List<String> permListFromDebugEndpoint = JsonPath.from(checkPermsReponse.getBody().asString()).get("data.perms[0]." + SearchFields.DISCOVERABLE_BY);
Set<String> setFoundFromPermsDebug = new TreeSet<>();
for (String perm : permListFromDebugEndpoint) {
setFoundFromPermsDebug.add(perm);
}
assertEquals(expectedSet, setFoundFromPermsDebug);
Response solrQueryPerms = querySolr(SearchFields.DEFINITION_POINT_DVOBJECT_ID + ":" + randomFileId);
// solrQueryPerms.prettyPrint();
Set<String> setFoundFromSolr = new TreeSet<>();
String publishedId = IndexServiceBean.solrDocIdentifierFile + randomFileId + IndexServiceBean.discoverabilityPermissionSuffix;
List<Map> docs = with(solrQueryPerms.getBody().asString()).param("name", publishedId).get("response.docs.findAll { docs -> docs.id == name }");
List<String> permsPublished = with(solrQueryPerms.getBody().asString()).param("name", publishedId).getList("response.docs.findAll { docs -> docs.id == name }[0]." + SearchFields.DISCOVERABLE_BY);
for (String perm : permsPublished) {
setFoundFromSolr.add(perm);
}
assertEquals(expectedSet, setFoundFromSolr);
String draftId = IndexServiceBean.solrDocIdentifierFile + randomFileId + IndexServiceBean.draftSuffix + IndexServiceBean.discoverabilityPermissionSuffix;
/**
* @todo The fact that we're able to find the permissions document
* for a file that has been published is a bug. It should be
* deleted, ideally, when the dataset goes from draft to published.
*/
List<String> permsFormerDraft = with(solrQueryPerms.getBody().asString()).param("name", draftId).getList("response.docs.findAll { docs -> docs.id == name }[0]." + SearchFields.DISCOVERABLE_BY);
// System.out.println("permsDraft: " + permsFormerDraft);
Response solrQueryContent = querySolr(SearchFields.ENTITY_ID + ":" + randomFileId);
// solrQueryContent.prettyPrint();
}
Response disableNonPublicSearch = deleteSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, disableNonPublicSearch.getStatusCode());
}
@Ignore
@Test
public void testAssignRoleAtDataset() throws InterruptedException {
Response createUser1 = UtilIT.createRandomUser();
String username1 = UtilIT.getUsernameFromResponse(createUser1);
String apiToken1 = UtilIT.getApiTokenFromResponse(createUser1);
Response createDataverse1Response = UtilIT.createRandomDataverse(apiToken1);
createDataverse1Response.prettyPrint();
assertEquals(201, createDataverse1Response.getStatusCode());
String dataverseAlias1 = UtilIT.getAliasFromResponse(createDataverse1Response);
Response createDataset1Response = UtilIT.createRandomDatasetViaNativeApi(dataverseAlias1, apiToken1);
createDataset1Response.prettyPrint();
assertEquals(201, createDataset1Response.getStatusCode());
Integer datasetId1 = UtilIT.getDatasetIdFromResponse(createDataset1Response);
Response createUser2 = UtilIT.createRandomUser();
String username2 = UtilIT.getUsernameFromResponse(createUser2);
String apiToken2 = UtilIT.getApiTokenFromResponse(createUser2);
String roleToAssign = "admin";
Response grantUser2AccessOnDataset = grantRoleOnDataset(datasetId1.toString(), roleToAssign, username2, apiToken1);
grantUser2AccessOnDataset.prettyPrint();
assertEquals(200, grantUser2AccessOnDataset.getStatusCode());
sleep(500l);
Response shouldBeVisible = querySolr("id:dataset_" + datasetId1 + "_draft_permission");
shouldBeVisible.prettyPrint();
String discoverableBy = JsonPath.from(shouldBeVisible.asString()).getString("response.docs.discoverableBy");
Set actual = new HashSet<>();
for (String userOrGroup : discoverableBy.replaceAll("\\[", "").replaceAll("\\]", "").replaceAll(" ", "").split(",")) {
actual.add(userOrGroup);
}
Set expected = new HashSet<>();
createUser1.prettyPrint();
String userid1 = JsonPath.from(createUser1.asString()).getString("data.user.id");
String userid2 = JsonPath.from(createUser2.asString()).getString("data.user.id");
expected.add("group_user" + userid1);
expected.add("group_user" + userid2);
assertEquals(expected, actual);
}
@Ignore
@Test
public void testAssignGroupAtDataverse() throws InterruptedException {
Response createUser1 = UtilIT.createRandomUser();
String username1 = UtilIT.getUsernameFromResponse(createUser1);
String apiToken1 = UtilIT.getApiTokenFromResponse(createUser1);
Response createDataverse1Response = UtilIT.createRandomDataverse(apiToken1);
createDataverse1Response.prettyPrint();
assertEquals(201, createDataverse1Response.getStatusCode());
String dvAlias = UtilIT.getAliasFromResponse(createDataverse1Response);
int dvId = JsonPath.from(createDataverse1Response.asString()).getInt("data.id");
Response createDataset1Response = UtilIT.createRandomDatasetViaNativeApi(dvAlias, apiToken1);
createDataset1Response.prettyPrint();
assertEquals(201, createDataset1Response.getStatusCode());
Integer datasetId1 = UtilIT.getDatasetIdFromResponse(createDataset1Response);
Response createUser2 = UtilIT.createRandomUser();
createUser2.prettyPrint();
String username2 = UtilIT.getUsernameFromResponse(createUser2);
String apiToken2 = UtilIT.getApiTokenFromResponse(createUser2);
String aliasInOwner = "groupFor" + dvAlias;
String displayName = "Group for " + dvAlias;
String user2identifier = "@" + username2;
Response createGroup = UtilIT.createGroup(dvAlias, aliasInOwner, displayName, apiToken1);
createGroup.prettyPrint();
String groupIdentifier = JsonPath.from(createGroup.asString()).getString("data.identifier");
assertEquals(201, createGroup.getStatusCode());
List<String> roleAssigneesToAdd = new ArrayList<>();
roleAssigneesToAdd.add(user2identifier);
Response addToGroup = UtilIT.addToGroup(dvAlias, aliasInOwner, roleAssigneesToAdd, apiToken1);
addToGroup.prettyPrint();
Response grantRoleResponse = UtilIT.grantRoleOnDataverse(dvAlias, "admin", groupIdentifier, apiToken1);
grantRoleResponse.prettyPrint();
assertEquals(200, grantRoleResponse.getStatusCode());
sleep(500l);
Response shouldBeVisible = querySolr("id:dataset_" + datasetId1 + "_draft_permission");
shouldBeVisible.prettyPrint();
String discoverableBy = JsonPath.from(shouldBeVisible.asString()).getString("response.docs.discoverableBy");
Set actual = new HashSet<>();
for (String userOrGroup : discoverableBy.replaceAll("\\[", "").replaceAll("\\]", "").replaceAll(" ", "").split(",")) {
actual.add(userOrGroup);
}
Set expected = new HashSet<>();
createUser1.prettyPrint();
String userid1 = JsonPath.from(createUser1.asString()).getString("data.authenticatedUser.id");
expected.add("group_user" + userid1);
expected.add("group_" + dvId + "-" + aliasInOwner);
logger.info("expected: " + expected);
logger.info("actual: " + actual);
assertEquals(expected, actual);
Response enableNonPublicSearch = enableSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, enableNonPublicSearch.getStatusCode());
TestSearchQuery query = new TestSearchQuery("*");
JsonObjectBuilder createdUser = Json.createObjectBuilder();
createdUser.add(idKey, Integer.MAX_VALUE);
createdUser.add(usernameKey, username2);
createdUser.add(apiTokenKey, apiToken2);
JsonObject json = createdUser.build();
TestUser testUser = new TestUser(json);
Response searchResponse = search(query, testUser);
searchResponse.prettyPrint();
Set<String> titles = new HashSet<>(JsonPath.from(searchResponse.asString()).getList("data.items.name"));
System.out.println("title: " + titles);
Set<String> expectedNames = new HashSet<>();
expectedNames.add(dvAlias);
expectedNames.add("Darwin's Finches");
assertEquals(expectedNames, titles);
Response disableNonPublicSearch = deleteSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, disableNonPublicSearch.getStatusCode());
}
@Ignore
@Test
public void homerPublishesVersion2AfterDeletingFile() throws InterruptedException {
if (homerPublishesVersion2AfterDeletingFile) {
return;
}
Response enableNonPublicSearch = enableSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, enableNonPublicSearch.getStatusCode());
long rootDataverseId = 1;
String rootDataverseAlias = getDataverseAlias(rootDataverseId, homer.getApiToken());
if (rootDataverseAlias != null) {
dataverseToCreateDataset1In = rootDataverseAlias;
}
String xmlIn = getDatasetXml(homer.getUsername(), homer.getUsername(), homer.getUsername());
Response createDatasetResponse = createDataset(xmlIn, dataverseToCreateDataset1In, homer.getApiToken());
// createDatasetResponse.prettyPrint();
assertEquals(201, createDatasetResponse.getStatusCode());
dataset3 = getGlobalId(createDatasetResponse);
// System.out.println("dataset persistent id: " + dataset3);
String zipFileName = "3files.zip";
Process uploadZipFileProcess = uploadZipFileWithCurl(dataset3, zipFileName, homer.getApiToken());
// printCommandOutput(uploadZipFileProcess);
sleep(200);
Integer datasetIdHomerFound = printDatasetId(dataset3, homer);
assertEquals(true, datasetIdHomerFound != null);
dataset3Id = datasetIdHomerFound;
List<Integer> idsOfFilesUploaded = getIdsOfFilesUploaded(dataset3, datasetIdHomerFound, homer.getApiToken());
// System.out.println("file IDs: " + idsOfFilesUploaded);
Set<String> expectedInitialFilesHomer = new HashSet<String>() {
{
add("file1.txt");
add("file2.txt");
add("file3.txt");
}
};
String DRAFT = "DRAFT";
Response fileDataBeforePublishingV1Homer = getFileSearchData(dataset3, DRAFT, homer.getApiToken());
// System.out.println("Files before publishing 1.0 as seen by creator...");
// fileDataBeforePublishingV1Homer.prettyPrint();
Set<String> actualInitialFilesHomer = getFileData(fileDataBeforePublishingV1Homer);
assertEquals(expectedInitialFilesHomer, actualInitialFilesHomer);
// System.out.println("Files before publishing 1.0 as seen by non-creator...");
Response fileDataBeforePublishingV1Ned = getFileSearchData(dataset3, DRAFT, ned.getApiToken());
// fileDataBeforePublishingV1Ned.prettyPrint();
Set<String> actualInitialFilesed = getFileData(fileDataBeforePublishingV1Ned);
assertEquals(new HashSet<String>(), actualInitialFilesed);
Response publishDatasetResponse = publishDatasetViaSword(dataset3, homer.getApiToken());
// publishDatasetResponse.prettyPrint();
Response datasetAsJson = getDatasetAsJson(dataset3Id, homer.getApiToken());
// datasetAsJson.prettyPrint();
// Response fileDataAfterPublishingV1Ned = getFileSearchData(dataset3, ned.getApiToken());
Response fileDataAfterPublishingV1Guest = getFileSearchData(dataset3, DRAFT, EMPTY_STRING);
// System.out.println("Files after publishing 1.0 as seen by non-creator...");
// fileDataAfterPublishingV1Guest.prettyPrint();
Set<String> actualFilesAfterPublishingV1Guest = getFileData(fileDataAfterPublishingV1Guest);
assertEquals(expectedInitialFilesHomer, actualFilesAfterPublishingV1Guest);
// getSwordStatement(dataset3, homer.getApiToken()).prettyPrint();
// List<String> getfiles = getFileNameFromSearchDebug(dataset3, homer.getApiToken());
// System.out.println("some files: " + getfiles);
Response datasetFiles = getDatasetFilesEndpoint(dataset3Id, homer.getApiToken());
// datasetFiles.prettyPrint();
String fileToDelete = "file2.txt";
// getSwordStatement(dataset3, homer.getApiToken()).prettyPrint();
// System.out.println("### BEFORE TOUCHING PUBLISHED DATASET");
Response atomEntryBeforeDeleteResponse = getSwordAtomEntry(dataset3, homer.getApiToken());
// atomEntryBeforeDeleteResponse.prettyPrint();
/**
* @todo The "SWORD: deleting a file from a published version (not a
* draft) creates a draft but doesn't delete the file" bug at
* https://github.com/IQSS/dataverse/issues/2464 means we must first
* create a draft via the "update metadata" endpoint before deleting the
* file. Otherwise, the file won't be properly deleted!
*/
System.out.println("Updating metadata before delete because of https://github.com/IQSS/dataverse/issues/2464");
Response updateMetadataResponse = updateDatasetMetadataViaSword(dataset3, xmlIn, homer.getApiToken());
// updateMetadataResponse.prettyPrint();
// System.out.println("### AFTER UPDATING METADATA");
Response atomEntryAfterDeleteResponse = getSwordAtomEntry(dataset3, homer.getApiToken());
// atomEntryAfterDeleteResponse.prettyPrint();
int fileId = getFileIdFromDatasetEndpointFileListing(datasetFiles, fileToDelete);
Response deleteFileResponse = deleteFile(fileId, homer.getApiToken());
// deleteFileResponse.prettyPrint();
assertEquals(204, deleteFileResponse.statusCode());
// System.out.println("### AFTER DELETING FILE");
Response swordStatementAfterDelete = getSwordStatement(dataset3, homer.getApiToken());
// swordStatementAfterDelete.prettyPrint();
XmlPath xmlPath = new XmlPath(swordStatementAfterDelete.body().asString());
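// The [11] index assumes the file name is the twelfth "/"-separated segment of the SWORD entry id URL.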
String firstFileName = xmlPath.get("feed.entry[0].id").toString().split("/")[11];
// System.out.println("first file name:" + firstFileName);
String secondFileName = xmlPath.get("feed.entry[1].id").toString().split("/")[11];
// System.out.println("second file name: " + secondFileName);
Set<String> filesFoundInSwordStatement = new HashSet<>();
filesFoundInSwordStatement.add(firstFileName);
filesFoundInSwordStatement.add(secondFileName);
Set<String> expectedFilesInSwordStatementAfterDelete = new HashSet<String>() {
{
add("file1.txt");
add("file3.txt");
}
};
assertEquals(expectedFilesInSwordStatementAfterDelete, filesFoundInSwordStatement);
NodeChildrenImpl thirdFileNode = xmlPath.get("feed.entry[2].id");
/**
* If you get "java.lang.String cannot be cast to
* com.jayway.restassured.internal.path.xml.NodeChildrenImpl" here it
* means that the third file was found and not deleted! See the note
* above about https://github.com/IQSS/dataverse/issues/2464
*/
assertEquals(true, thirdFileNode.isEmpty());
Set<String> expectedV1FilesAfterDeleteGuest = new HashSet<String>() {
{
add("file1.txt");
add("file2.txt");
add("file3.txt");
}
};
String v1dot0 = "1.0";
Response fileDataAfterDelete = getFileSearchData(dataset3, v1dot0, EMPTY_STRING);
// System.out.println("Files guest sees after Homer deletes a file from 1.0, creating a draft...");
// fileDataAfterDelete.prettyPrint();
Set<String> actualFilesAfterDelete = getFileData(fileDataAfterDelete);
assertEquals(expectedV1FilesAfterDeleteGuest, actualFilesAfterDelete);
Set<String> expectedDraftFilesAfterDeleteHomerAfterIssue2455Implemented = expectedFilesInSwordStatementAfterDelete;
Response fileDataAfterDeleteHomer = getFileSearchData(dataset3, DRAFT, homer.getApiToken());
// System.out.println("Files Homer sees in draft after deleting a file from v1.0...");
// fileDataAfterDeleteHomer.prettyPrint();
Set<String> actualDraftFilesAfterDeleteHomer = getFileData(fileDataAfterDeleteHomer);
Response querySolrResponse = querySolr(SearchFields.PARENT_ID + ":" + dataset3Id);
// querySolrResponse.prettyPrint();
logger.info("files found: " + JsonPath.from(querySolrResponse.asString()).get("response.docs.name").toString());
/**
* @todo In order for this test to pass we'll probably need to change
* the indexing rules defined in "Only show draft file card if file has
* changed from published version"
* https://github.com/IQSS/dataverse/issues/528 . From the "Use Solr for
* file listing on dataset page" issue at
* https://github.com/IQSS/dataverse/issues/2455 we'd like Homer to be
* able to look at a post v1 draft and see that one of his three files
* has been deleted in that draft. With current indexing rules, this is
* not possible. There are only three files indexed into Solr and they
* all belong to the publish v1 dataset. We don't index drafts unless
* the content has changed (again per issue 528).
*/
System.out.println(new TreeSet<>(expectedDraftFilesAfterDeleteHomerAfterIssue2455Implemented) + " expected after issue 2455 implemented");
System.out.println(new TreeSet<>(actualDraftFilesAfterDeleteHomer) + " actual");
// assertEquals(expectedDraftFilesAfterDeleteHomer, actualDraftFilesAfterDeleteHomer);
Response disableNonPublicSearch = deleteSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, disableNonPublicSearch.getStatusCode());
}
@AfterClass
public static void cleanup() {
Response enableNonPublicSearch = UtilIT.enableSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
assertEquals(200, enableNonPublicSearch.getStatusCode());
Response deleteSearchApiNonPublicAllowed = UtilIT.deleteSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
deleteSearchApiNonPublicAllowed.then().assertThat()
.statusCode(200);
Response getSearchApiNonPublicAllowed = UtilIT.getSetting(SettingsServiceBean.Key.SearchApiNonPublicAllowed);
// getSearchApiNonPublicAllowed.prettyPrint();
getSearchApiNonPublicAllowed.then().assertThat()
.body("message", CoreMatchers.equalTo("Setting " + SettingsServiceBean.Key.SearchApiNonPublicAllowed + " not found"))
.statusCode(404);
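// Hard-coded switch: the full cleanup below is disabled by default; set to true to enable it.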
boolean enabled = false;
if (!enabled) {
return;
}
logger.info("Running cleanup...");
/**
* We revoke roles here just in case an assertion failed because role
* assignments are currently not deleted when you delete a user per
* https://github.com/IQSS/dataverse/issues/1929
*
* You can also delete the role assignments manually like this:
*
* "DELETE FROM roleassignment WHERE assigneeidentifier='@ned';"
*/
// Response revokeNedAdminOnRoot = revokeRole(dataverseToCreateDataset1In, nedAdminOnRootAssignment, homer.getApiToken());
// System.out.println(revokeNedAdminOnRoot.prettyPrint());
// System.out.println("cleanup - status code revoking admin on root from ned: " + revokeNedAdminOnRoot.getStatusCode());
if (!disableTestPermsonRootDv) {
Response deleteDataset1Response = deleteDataset(dataset1, homer.getApiToken());
assertEquals(204, deleteDataset1Response.getStatusCode());
}
if (!disableTestPermsOnNewDv) {
Response destroyDatasetResponse = destroyDataset(dataset2Id, homer.getApiToken());
assertEquals(200, destroyDatasetResponse.getStatusCode());
}
if (!homerPublishesVersion2AfterDeletingFile) {
Response destroyDataset = destroyDataset(dataset3Id, homer.getApiToken());
assertEquals(200, destroyDataset.getStatusCode());
}
if (!disableTestPermsOnNewDv) {
Response deleteDvResponse = deleteDataverse(dvForPermsTesting, homer);
assertEquals(200, deleteDvResponse.getStatusCode());
}
deleteUser(homer.getUsername());
deleteUser(ned.getUsername());
deleteUser(clancy.getUsername());
}
private Response enableSetting(SettingsServiceBean.Key settingKey) {
Response response = given().body("true").when().put("/api/admin/settings/" + settingKey);
return response;
}
private Response deleteSetting(SettingsServiceBean.Key settingKey) {
Response response = given().when().delete("/api/admin/settings/" + settingKey);
return response;
}
private Response checkSetting(SettingsServiceBean.Key settingKey) {
Response response = given().when().get("/api/admin/settings/" + settingKey);
return response;
}
private static String getDataverseAlias(long dataverseId, String apiToken) {
Response getDataverse = given()
.get("api/dataverses/" + dataverseId + "?key=" + apiToken);
JsonPath jsonPath = JsonPath.from(getDataverse.body().asString());
String dataverseAlias = jsonPath.get("data.alias");
return dataverseAlias;
}
private static Response createDataverse(TestDataverse dataverseToCreate, TestUser creator) {
JsonArrayBuilder contactArrayBuilder = Json.createArrayBuilder();
contactArrayBuilder.add(Json.createObjectBuilder().add("contactEmail", creator.getEmail()));
JsonArrayBuilder subjectArrayBuilder = Json.createArrayBuilder();
subjectArrayBuilder.add("Other");
JsonObject dvData = Json.createObjectBuilder()
.add("alias", dataverseToCreate.alias)
.add("name", dataverseToCreate.name)
.add("dataverseContacts", contactArrayBuilder)
.add("dataverseSubjects", subjectArrayBuilder)
.build();
Response createDataverseResponse = given()
.body(dvData.toString()).contentType(ContentType.JSON)
.when().post("/api/dataverses/:root?key=" + creator.apiToken);
return createDataverseResponse;
}
private Response createDataset(String xmlIn, String dataverseToCreateDatasetIn, String apiToken) {
Response createDatasetResponse = given()
.auth().basic(apiToken, EMPTY_STRING)
.body(xmlIn)
.contentType("application/atom+xml")
.post("/dvn/api/data-deposit/v1.1/swordv2/collection/dataverse/" + dataverseToCreateDatasetIn);
return createDatasetResponse;
}
private Response updateDatasetMetadataViaSword(String persistentId, String xmlIn, String apiToken) {
return given()
.auth().basic(apiToken, EMPTY_STRING)
.body(xmlIn)
.contentType("application/atom+xml")
.put("/dvn/api/data-deposit/v1.1/swordv2/edit/study/" + persistentId);
}
/**
* @deprecated We can't assume we'll be able to query Solr across the wire.
* For security, we shouldn't be allowed to!
*/
@Deprecated
private Response querySolr(String query) {
Response querySolrResponse = given().get("http://localhost:8983/solr/collection1/select?wt=json&indent=true&q=" + query);
return querySolrResponse;
}
private static JsonObject createUser(String jsonStr) {
JsonObjectBuilder createdUser = Json.createObjectBuilder();
Response response = createUserViaApi(jsonStr, getPassword(jsonStr));
// response.prettyPrint();
Assert.assertEquals(200, response.getStatusCode());
JsonPath jsonPath = JsonPath.from(response.body().asString());
int userId = jsonPath.getInt("data.user." + idKey);
createdUser.add(idKey, userId);
String username = jsonPath.get("data.user." + usernameKey).toString();
createdUser.add(usernameKey, username);
createdUser.add(apiTokenKey, jsonPath.get("data." + apiTokenKey).toString());
return createdUser.build();
}
private static String getPassword(String jsonStr) {
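// Test convention: the password is simply the username taken from the user JSON.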
String password = JsonPath.from(jsonStr).get(usernameKey);
return password;
}
private static String getUserAsJsonString(String username, String firstName, String lastName) {
JsonObjectBuilder builder = Json.createObjectBuilder();
builder.add(usernameKey, username);
builder.add("firstName", firstName);
builder.add("lastName", lastName);
builder.add(emailKey, getEmailFromUserName(username));
String userAsJson = builder.build().toString();
logger.fine("User to create: " + userAsJson);
return userAsJson;
}
private static String getEmailFromUserName(String username) {
return username + "@mailinator.com";
}
private static Response createUserViaApi(String jsonStr, String password) {
Response response = given().body(jsonStr).contentType(ContentType.JSON).when().post("/api/builtin-users?key=" + builtinUserKey + "&password=" + password);
return response;
}
private static Response makeSuperuser(String userToMakeSuperuser) {
Response response = given().post("/api/admin/superuser/" + userToMakeSuperuser);
return response;
}
private Response grantRole(String definitionPoint, String role, String roleAssignee, String apiToken) {
JsonObjectBuilder roleBuilder = Json.createObjectBuilder();
roleBuilder.add("assignee", "@" + roleAssignee);
roleBuilder.add("role", role);
String roleObject = roleBuilder.build().toString();
System.out.println("Granting role on dataverse alias \"" + definitionPoint + "\": " + roleObject);
return given()
.body(roleObject).contentType(ContentType.JSON)
.post("api/dataverses/" + definitionPoint + "/assignments?key=" + apiToken);
}
private Response grantRoleOnDataset(String definitionPoint, String role, String roleAssignee, String apiToken) {
System.out.println("Granting role on dataset \"" + definitionPoint + "\": " + role);
return given()
.body("@" + roleAssignee)
.post("api/datasets/" + definitionPoint + "/assignments?key=" + apiToken);
}
private static Response revokeRole(String definitionPoint, long doomed, String apiToken) {
System.out.println("Attempting to revoke role assignment id " + doomed);
/**
* OUTPUT=`curl -s -X DELETE
* "http://localhost:8080/api/dataverses/$BIRDS_DATAVERSE/assignments/$SPRUCE_ADMIN_ON_BIRDS?key=$FINCHKEY"`
*/
return given()
.delete("api/dataverses/" + definitionPoint + "/assignments/" + doomed + "?key=" + apiToken);
}
private String getGlobalId(Response createDatasetResponse) {
String xml = createDatasetResponse.body().asString();
String datasetSwordIdUrl = from(xml).get("entry.id");
/**
* @todo stop assuming the last 22 characters are the doi/globalId
*/
return datasetSwordIdUrl.substring(datasetSwordIdUrl.length() - 22);
}
/**
* Assumes you have turned on experimental non-public search
* https://github.com/IQSS/dataverse/issues/1299
*
* curl -X PUT -d true
* http://localhost:8080/api/admin/settings/:SearchApiNonPublicAllowed
*
* @return The Integer found or null.
*/
private static Integer findDatasetIdFromGlobalId(String globalId, String apiToken) {
Response searchForGlobalId = given()
.get("api/search?key=" + apiToken
+ "&q=dsPersistentId:\""
+ globalId.replace(":", "\\:")
+ "\"&show_entity_ids=true");
JsonPath jsonPath = JsonPath.from(searchForGlobalId.body().asString());
int id;
try {
id = jsonPath.get("data.items[0].entity_id");
} catch (IllegalArgumentException ex) {
return null;
}
return id;
}
private String getDatasetXml(String title, String author, String description) {
String xmlIn = "<?xml version=\"1.0\"?>\n"
+ "<entry xmlns=\"http://www.w3.org/2005/Atom\" xmlns:dcterms=\"http://purl.org/dc/terms/\">\n"
+ " <dcterms:title>" + title + "</dcterms:title>\n"
+ " <dcterms:creator>" + author + "</dcterms:creator>\n"
+ " <dcterms:description>" + description + "</dcterms:description>\n"
+ "</entry>\n"
+ "";
return xmlIn;
}
private static Response deleteDataverse(String doomed, TestUser user) {
// System.out.println("deletingn dataverse " + doomed);
return given().delete("/api/dataverses/" + doomed + "?key=" + user.getApiToken());
}
private static Response deleteDataset(String globalId, String apiToken) {
return given()
.auth().basic(apiToken, EMPTY_STRING)
.relaxedHTTPSValidation()
.delete("/dvn/api/data-deposit/v1.1/swordv2/edit/study/" + globalId);
}
private static Response destroyDataset(Integer datasetId, String apiToken) {
return given()
.header(keyString, apiToken)
.delete("/api/datasets/" + datasetId + "/destroy");
}
private static void deleteUser(String username) {
Response deleteUserResponse = given().delete("/api/admin/authenticatedUsers/" + username + "/");
assertEquals(200, deleteUserResponse.getStatusCode());
}
private static int getUserIdFromDatabase(String username) {
Response getUserResponse = given().get("/api/admin/authenticatedUsers/" + username + "/");
JsonPath getUserJson = JsonPath.from(getUserResponse.body().asString());
int userIdFromDatabase = getUserJson.getInt("data.id");
return userIdFromDatabase;
}
private long getRoleAssignmentId(Response response) {
JsonPath jsonPath = JsonPath.from(response.body().asString());
return jsonPath.getInt("data.id");
}
private Integer printDatasetId(String dataset1, TestUser user) {
Integer datasetIdFound = findDatasetIdFromGlobalId(dataset1, user.getApiToken());
// System.out.println(dataset1 + " id " + datasetIdFound + " found by " + user);
return datasetIdFound;
}
@Deprecated
private Response search(TestSearchQuery query, TestUser user) {
return given()
.get("api/search?key=" + user.getApiToken()
+ "&q=" + query.getQuery()
+ "&show_facets=" + true
);
}
@Deprecated
static Response search(String query, String apiToken) {
return given()
.header(keyString, apiToken)
.get("/api/search?q=" + query);
}
private Response uploadZipFile(String persistentId, String zipFileName, String apiToken) throws FileNotFoundException {
String pathToFileName = "scripts/search/data/binary/" + zipFileName;
Path path = Paths.get(pathToFileName);
byte[] data = null;
try {
data = Files.readAllBytes(path);
} catch (IOException ex) {
logger.info("Could not read bytes from " + path + ": " + ex);
}
Response swordStatementResponse = given()
.body(data)
.header("Packaging", "http://purl.org/net/sword/package/SimpleZip")
.header("Content-Disposition", "filename=" + zipFileName)
/**
* It's unclear why we need to add "preemptive" to auth but
* without it we can't seem to switch from .multiPart(file) to
* .body(bytes).
*
* See https://github.com/jayway/rest-assured/issues/507
*/
.auth().preemptive().basic(apiToken, EMPTY_STRING)
.post("/dvn/api/data-deposit/v1.1/swordv2/edit-media/study/" + persistentId);
return swordStatementResponse;
}
/**
* @todo Delete this once you get the REST-assured version working
*/
private Process uploadZipFileWithCurl(String globalId, String zipfilename, String apiToken) {
Process p = null;
try {
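// Note: the Content-Disposition header below hardcodes "trees.zip" even though
// the uploaded bytes come from the zipfilename argument.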
p = Runtime.getRuntime().exec(new String[]{"bash", "-c", "curl -s --insecure --data-binary @scripts/search/data/binary/" + zipfilename + " -H \"Content-Disposition: filename=trees.zip\" -H \"Content-Type: application/zip\" -H \"Packaging: http://purl.org/net/sword/package/SimpleZip\" -u " + apiToken + ": https://localhost:8181/dvn/api/data-deposit/v1.1/swordv2/edit-media/study/" + globalId});
} catch (IOException ex) {
Logger.getLogger(SearchIT.class.getName()).log(Level.SEVERE, null, ex);
}
return p;
}
private void printCommandOutput(Process p) {
try {
p.waitFor();
} catch (InterruptedException ex) {
Logger.getLogger(SearchIT.class.getName()).log(Level.SEVERE, null, ex);
}
BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
String line;
try {
while ((line = input.readLine()) != null) {
System.out.println(line);
}
} catch (IOException ex) {
Logger.getLogger(SearchIT.class.getName()).log(Level.SEVERE, null, ex);
}
try {
input.close();
} catch (IOException ex) {
Logger.getLogger(SearchIT.class.getName()).log(Level.SEVERE, null, ex);
}
}
private List<Integer> getIdsOfFilesUploaded(String persistentId, Integer datasetId, String apiToken) {
Response swordStatementResponse = getSwordStatement(persistentId, apiToken);
// swordStatementResponse.prettyPrint();
if (datasetId != null) {
List<Integer> fileList = getFilesFromDatasetEndpoint(datasetId, apiToken);
if (!fileList.isEmpty()) {
return fileList;
}
}
return Collections.emptyList();
}
private Response getSwordAtomEntry(String persistentId, String apiToken) {
Response response = given()
.auth().basic(apiToken, EMPTY_STRING)
.get("/dvn/api/data-deposit/v1.1/swordv2/edit/study/" + persistentId);
return response;
}
private Response getSwordStatement(String persistentId, String apiToken) {
Response swordStatementResponse = given()
.auth().basic(apiToken, EMPTY_STRING)
.get("/dvn/api/data-deposit/v1.1/swordv2/statement/study/" + persistentId);
return swordStatementResponse;
}
private List<Integer> getFilesFromDatasetEndpoint(Integer datasetId, String apiToken) {
List<Integer> fileList = new ArrayList<>();
Response getDatasetFilesResponse = getDatasetFilesEndpoint(datasetId, apiToken);
// getDatasetFilesResponse.prettyPrint();
JsonPath jsonPath = JsonPath.from(getDatasetFilesResponse.body().asString());
List<Map> filesMap = jsonPath.get("data.datafile");
for (Map map : filesMap) {
int fileId = (int) map.get("id");
fileList.add(fileId);
}
return fileList;
}
private Response getDatasetFilesEndpoint(Integer datasetId, String apiToken) {
Response getDatasetFilesResponse = given()
.get("api/datasets/" + datasetId + "/versions/:latest/files?key=" + apiToken);
return getDatasetFilesResponse;
}
private Response checkPermissionsOnDvObject(int dvObjectId, String apiToken) {
Response debugPermsResponse = given()
.get("api/admin/index/permsDebug/?id=" + dvObjectId + "&key=" + apiToken);
// debugPermsResponse.prettyPrint();
return debugPermsResponse;
}
private Response clearIndexTimesOnDvObject(int dvObjectId) {
Response debugPermsResponse = given()
.delete("api/admin/index/timestamps/" + dvObjectId);
return debugPermsResponse;
}
private Response reindexDataset(int datasetId) {
return given().get("api/admin/index/datasets/" + datasetId);
}
private Response publishDataverse(String alias, String apiToken) {
return given()
.header(keyString, apiToken)
.urlEncodingEnabled(false)
.post("/api/dataverses/" + alias + "/actions/:publish");
}
private Response publishDataverseAsCreator(long id) {
return given()
.post("/api/admin/publishDataverseAsCreator/" + id);
}
private Response getDatasetAsJson(long datasetId, String apiToken) {
return given()
.header(keyString, apiToken)
.urlEncodingEnabled(false)
.get("/api/datasets/" + datasetId);
}
private Response publishDatasetViaSword(String persistentId, String apiToken) {
return given()
.auth().basic(apiToken, EMPTY_STRING)
.header("In-Progress", "false")
.post("/dvn/api/data-deposit/v1.1/swordv2/edit/study/" + persistentId);
}
private Response publishDatasetViaNative(long datasetId, String apiToken) {
/**
* This should probably be a POST rather than a GET:
* https://github.com/IQSS/dataverse/issues/2431
*/
return given()
.header(keyString, apiToken)
.urlEncodingEnabled(false)
.get("/api/datasets/" + datasetId + "/actions/:publish?type=minor");
}
private Response getFileSearchData(String persistentId, String semanticVersion, String apiToken) {
/**
* Note In all commands below, dataset versions can be referred to as:
*
* :draft the draft version, if any
*
* :latest either a draft (if exists) or the latest published version.
*
* :latest-published the latest published version
*
* x.y a specific version, where x is the major version number and y is
* the minor version number.
*
* x same as x.0
*
* http://guides.dataverse.org/en/latest/api/native-api.html#datasets
*/
// String semanticVersion = null;
return given()
.header(keyString, apiToken)
.urlEncodingEnabled(false)
.get("/api/admin/index/filesearch?persistentId=" + persistentId + "&semanticVersion=" + semanticVersion);
}
private Response deleteFile(int fileId, String apiToken) {
// System.out.println("deleting file id " + fileId);
return given()
.auth().basic(apiToken, EMPTY_STRING)
.delete("/dvn/api/data-deposit/v1.1/swordv2/edit-media/file/" + fileId);
}
private List<String> getFileNameFromSearchDebug(String datasetPersistentId, String apiToken) {
Response fileDataResponse = getFileSearchData(datasetPersistentId, "DRAFT", apiToken);
// fileDataResponse.prettyPrint();
return JsonPath.from(fileDataResponse.body().asString()).getList("data.cards");
}
private int getFileIdFromDatasetEndpointFileListing(Response datasetFiles, String filename) {
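// Assumes a files listing JSON of the form:
// {"data": [{"label": "file2.txt", "datafile": {"id": 123}}, ...]}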
return with(datasetFiles.getBody().asString())
.param("name", filename)
.getInt("data.findAll { data -> data.label == name }[0].datafile.id");
}
private Set<String> getFileData(Response fileDataResponse) {
Set<String> filesFound = new HashSet<>();
List<String> files1 = JsonPath.from(fileDataResponse.body().asString()).getList("data.cards");
for (String file : files1) {
filesFound.add(file);
}
return filesFound;
}
private static class TestUser {
private long id;
private String username;
private String apiToken;
private TestUser(JsonObject json) {
this.id = json.getInt(idKey);
this.username = json.getString(usernameKey);
this.apiToken = json.getString(apiTokenKey);
}
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
public String getUsername() {
return username;
}
public String getApiToken() {
return apiToken;
}
public String getEmail() {
return getEmailFromUserName(username);
}
@Override
public String toString() {
return "TestUser{" + "id=" + id + ", username=" + username + '}';
}
}
private static class TestDataverse {
String alias;
String name;
Dataverse.DataverseType category;
public TestDataverse(String alias, String name, Dataverse.DataverseType category) {
this.alias = alias;
this.name = name;
this.category = category;
}
}
private static class TestSearchQuery {
private String query;
private List<String> filterQueries = new ArrayList<>();
private TestSearchQuery(String query) {
this.query = query;
}
public TestSearchQuery(String query, List<String> filterQueries) {
this.query = query;
if (!filterQueries.isEmpty()) {
this.filterQueries = filterQueries;
}
}
public String getQuery() {
return query;
}
public List<String> getFilterQueries() {
return filterQueries;
}
}
@Test
public void testDatasetThumbnail() {
logger.info("BEGIN testDatasetThumbnail");
Response createUser = UtilIT.createRandomUser();
createUser.prettyPrint();
String username = UtilIT.getUsernameFromResponse(createUser);
String apiToken = UtilIT.getApiTokenFromResponse(createUser);
Response createDataverseResponse = UtilIT.createRandomDataverse(apiToken);
createDataverseResponse.prettyPrint();
String dataverseAlias = UtilIT.getAliasFromResponse(createDataverseResponse);
Response createDatasetResponse = UtilIT.createRandomDatasetViaNativeApi(dataverseAlias, apiToken);
createDatasetResponse.prettyPrint();
Integer datasetId = UtilIT.getDatasetIdFromResponse(createDatasetResponse);
Response search1 = UtilIT.search("id:dataset_" + datasetId + "_draft", apiToken);
search1.prettyPrint();
search1.then().assertThat()
.body("data.items[0].name", CoreMatchers.equalTo("Darwin's Finches"))
.body("data.items[0].thumbnailFilename", CoreMatchers.equalTo(null))
.body("data.items[0].datasetThumbnailBase64image", CoreMatchers.equalTo(null))
.statusCode(200);
Response datasetAsJson = UtilIT.nativeGet(datasetId, apiToken);
datasetAsJson.prettyPrint();
String protocol = JsonPath.from(datasetAsJson.getBody().asString()).getString("data.protocol");
String authority = JsonPath.from(datasetAsJson.getBody().asString()).getString("data.authority");
String identifier = JsonPath.from(datasetAsJson.getBody().asString()).getString("data.identifier");
String datasetPersistentId = protocol + ":" + authority + "/" + identifier;
long datasetVersionId = JsonPath.from(datasetAsJson.getBody().asString()).getLong("data.latestVersion.id");
Response createNoSpecialAccessUser = UtilIT.createRandomUser();
createNoSpecialAccessUser.prettyPrint();
String noSpecialAccessUsername = UtilIT.getUsernameFromResponse(createNoSpecialAccessUser);
String noSpecialAccessApiToken = UtilIT.getApiTokenFromResponse(createNoSpecialAccessUser);
logger.info("Dataset created, no thumbnail expected:");
Response getThumbnail1 = UtilIT.getDatasetThumbnailMetadata(datasetId, apiToken);
getThumbnail1.prettyPrint();
JsonObject emptyObject = Json.createObjectBuilder().build();
getThumbnail1.then().assertThat()
// .body("data", CoreMatchers.equalTo(emptyObject))
.body("data.isUseGenericThumbnail", CoreMatchers.equalTo(false))
.body("data.dataFileId", CoreMatchers.equalTo(null))
.body("data.datasetLogoPresent", CoreMatchers.equalTo(false))
.statusCode(200);
String thumbnailUrl = RestAssured.baseURI + "/api/datasets/" + datasetId + "/thumbnail";
InputStream inputStream1creator = UtilIT.getInputStreamFromUnirest(thumbnailUrl, apiToken);
assertNull(inputStream1creator);
InputStream inputStream1guest = UtilIT.getInputStreamFromUnirest(thumbnailUrl, noSpecialAccessApiToken);
assertNull(inputStream1guest);
Response getThumbnailImage1 = UtilIT.getDatasetThumbnail(datasetPersistentId, apiToken);
getThumbnailImage1.prettyPrint();
getThumbnailImage1.then().assertThat()
.contentType("")
.statusCode(NO_CONTENT.getStatusCode());
Response attemptToGetThumbnailCandidates = UtilIT.showDatasetThumbnailCandidates(datasetPersistentId, noSpecialAccessApiToken);
attemptToGetThumbnailCandidates.prettyPrint();
attemptToGetThumbnailCandidates.then().assertThat()
.body("message", CoreMatchers.equalTo("You are not permitted to list dataset thumbnail candidates."))
.statusCode(FORBIDDEN.getStatusCode());
Response thumbnailCandidates1 = UtilIT.showDatasetThumbnailCandidates(datasetPersistentId, apiToken);
thumbnailCandidates1.prettyPrint();
JsonArray emptyArray = Json.createArrayBuilder().build();
thumbnailCandidates1.then().assertThat()
.body("data", CoreMatchers.equalTo(emptyArray))
.statusCode(200);
Response getThumbnailImageNoAccess1 = UtilIT.getDatasetThumbnail(datasetPersistentId, noSpecialAccessApiToken);
getThumbnailImageNoAccess1.prettyPrint();
getThumbnailImageNoAccess1.then().assertThat()
.contentType("")
.statusCode(NO_CONTENT.getStatusCode());
Response uploadFile = UtilIT.uploadFile(datasetPersistentId, "trees.zip", apiToken);
uploadFile.prettyPrint();
Response getDatasetJson1 = UtilIT.nativeGetUsingPersistentId(datasetPersistentId, apiToken);
Long dataFileId1 = JsonPath.from(getDatasetJson1.getBody().asString()).getLong("data.latestVersion.files[0].dataFile.id");
System.out.println("datafileId: " + dataFileId1);
getDatasetJson1.then().assertThat()
.statusCode(200);
logger.info("DataFile uploaded, should automatically become the thumbnail:");
File trees = new File("scripts/search/data/binary/trees.png");
String treesAsBase64 = ImageThumbConverter.generateImageThumbnailFromFileAsBase64(trees, ImageThumbConverter.DEFAULT_CARDIMAGE_SIZE);
if (treesAsBase64 == null) {
Logger.getLogger(SearchIT.class.getName()).log(Level.SEVERE, "Failed to generate a base64 thumbnail from the file trees.png");
}
Response search2 = UtilIT.search("id:dataset_" + datasetId + "_draft", apiToken);
search2.prettyPrint();
search2.then().assertThat()
.body("data.items[0].name", CoreMatchers.equalTo("Darwin's Finches"))
.statusCode(200);
Response getThumbnail2 = UtilIT.getDatasetThumbnailMetadata(datasetId, apiToken);
getThumbnail2.prettyPrint();
getThumbnail2.then().assertThat()
// .body("data.datasetThumbnail", CoreMatchers.equalTo("randomFromDataFile" + dataFileId1))
.body("data.datasetThumbnailBase64image", CoreMatchers.equalTo(treesAsBase64))
.body("data.isUseGenericThumbnail", CoreMatchers.equalTo(false))
// The dataFileId reflects the file that was automatically selected as the thumbnail.
.body("data.dataFileId", CoreMatchers.equalTo(dataFileId1.toString()))
.body("data.datasetLogoPresent", CoreMatchers.equalTo(false))
.statusCode(200);
InputStream inputStream2creator = UtilIT.getInputStreamFromUnirest(thumbnailUrl, apiToken);
assertNotNull(inputStream2creator);
assertEquals(treesAsBase64, UtilIT.inputStreamToDataUrlSchemeBase64Png(inputStream2creator));
InputStream inputStream2guest = UtilIT.getInputStreamFromUnirest(thumbnailUrl, noSpecialAccessApiToken);
assertEquals(treesAsBase64, UtilIT.inputStreamToDataUrlSchemeBase64Png(inputStream2guest));
String leadingStringToRemove = FileUtil.DATA_URI_SCHEME;
System.out.println("before: " + treesAsBase64);
String encodedImg = treesAsBase64.substring(leadingStringToRemove.length());
System.out.println("after: " + encodedImg);
byte[] decodedImg = null;
try {
decodedImg = Base64.getDecoder().decode(encodedImg.getBytes("UTF-8"));
} catch (UnsupportedEncodingException ex) {
    Logger.getLogger(SearchIT.class.getName()).log(Level.SEVERE, "UTF-8 should always be supported", ex);
}
Response getThumbnailImage2 = UtilIT.getDatasetThumbnail(datasetPersistentId, apiToken);
getThumbnailImage2.prettyPrint();
getThumbnailImage2.then().assertThat()
// .body(CoreMatchers.equalTo(decodedImg))
.contentType("image/png")
/**
* @todo Why can't we assert the content here? Why do we have to
* use Unirest instead? How do you download the bytes of the
* image using REST Assured?
*/
// .content(CoreMatchers.equalTo(decodedImg))
.statusCode(200);
String pathToFile = "src/main/webapp/resources/images/dataverseproject.png";
Response uploadSecondImage = UtilIT.uploadFileViaNative(datasetId.toString(), pathToFile, apiToken);
uploadSecondImage.prettyPrint();
uploadSecondImage.then().assertThat()
.statusCode(200);
Response getDatasetJson2 = UtilIT.nativeGetUsingPersistentId(datasetPersistentId, apiToken);
//odd that [0] gets the second uploaded file... replace with a find for dataverseproject.png
Long dataFileId2 = JsonPath.from(getDatasetJson2.getBody().asString()).getLong("data.latestVersion.files[0].dataFile.id");
System.out.println("datafileId2: " + dataFileId2);
getDatasetJson2.then().assertThat()
.statusCode(200);
File dataverseProjectLogo = new File(pathToFile);
String dataverseProjectLogoAsBase64 = ImageThumbConverter.generateImageThumbnailFromFileAsBase64(dataverseProjectLogo, ImageThumbConverter.DEFAULT_CARDIMAGE_SIZE);
if (dataverseProjectLogoAsBase64 == null) {
Logger.getLogger(SearchIT.class.getName()).log(Level.SEVERE, "Failed to generate a base64 thumbnail from the file dataverseproject.png");
}
Response switchToSecondDataFileThumbnail = UtilIT.useThumbnailFromDataFile(datasetPersistentId, dataFileId2, apiToken);
switchToSecondDataFileThumbnail.prettyPrint();
switchToSecondDataFileThumbnail.then().assertThat()
.body("data.message", CoreMatchers.equalTo("Thumbnail set to " + dataverseProjectLogoAsBase64))
.statusCode(200);
logger.info("Second DataFile has been uploaded and switched to as the thumbnail:");
Response getThumbnail3 = UtilIT.getDatasetThumbnailMetadata(datasetId, apiToken);
getThumbnail3.prettyPrint();
getThumbnail3.then().assertThat()
// .body("data.datasetThumbnail", CoreMatchers.equalTo("dataverseproject.png"))
.body("data.datasetThumbnailBase64image", CoreMatchers.equalTo(dataverseProjectLogoAsBase64))
.body("data.isUseGenericThumbnail", CoreMatchers.equalTo(false))
.body("data.dataFileId", CoreMatchers.equalTo(dataFileId2.toString()))
.body("data.datasetLogoPresent", CoreMatchers.equalTo(false))
.statusCode(200);
InputStream inputStream3creator = UtilIT.getInputStreamFromUnirest(thumbnailUrl, apiToken);
assertEquals(dataverseProjectLogoAsBase64, UtilIT.inputStreamToDataUrlSchemeBase64Png(inputStream3creator));
InputStream inputStream3guest = UtilIT.getInputStreamFromUnirest(thumbnailUrl, noSpecialAccessApiToken);
assertEquals(dataverseProjectLogoAsBase64, UtilIT.inputStreamToDataUrlSchemeBase64Png(inputStream3guest));
Response search3 = UtilIT.search("id:dataset_" + datasetId + "_draft", apiToken);
search3.prettyPrint();
search3.then().assertThat()
.body("data.items[0].name", CoreMatchers.equalTo("Darwin's Finches"))
.statusCode(200);
Response thumbnailCandidates2 = UtilIT.showDatasetThumbnailCandidates(datasetPersistentId, apiToken);
thumbnailCandidates2.prettyPrint();
thumbnailCandidates2.then().assertThat()
.body("data[0].base64image", CoreMatchers.equalTo(dataverseProjectLogoAsBase64))
.body("data[0].dataFileId", CoreMatchers.equalTo(dataFileId2.intValue()))
.body("data[1].base64image", CoreMatchers.equalTo(treesAsBase64))
.body("data[1].dataFileId", CoreMatchers.equalTo(dataFileId1.intValue()))
.statusCode(200);
//Add Failing Test logo file too big
//Size limit hardcoded in systemConfig.getUploadLogoSizeLimit
String tooBigLogo = "src/test/resources/images/coffeeshop.png";
Response overrideThumbnailFail = UtilIT.uploadDatasetLogo(datasetPersistentId, tooBigLogo, apiToken);
overrideThumbnailFail.prettyPrint();
overrideThumbnailFail.then().assertThat()
.body("message", CoreMatchers.equalTo("File is larger than maximum size: 500000."))
/**
* @todo We want this to expect 400 (BAD_REQUEST), not 403
* (FORBIDDEN).
*/
// .statusCode(400);
.statusCode(FORBIDDEN.getStatusCode());
String datasetLogo = "src/main/webapp/resources/images/cc0.png";
File datasetLogoFile = new File(datasetLogo);
String datasetLogoAsBase64 = ImageThumbConverter.generateImageThumbnailFromFileAsBase64(datasetLogoFile, ImageThumbConverter.DEFAULT_CARDIMAGE_SIZE);
if (datasetLogoAsBase64 == null) {
Logger.getLogger(SearchIT.class.getName()).log(Level.SEVERE, "Failed to generate a base64 thumbnail from the file cc0.png");
}
Response overrideThumbnail = UtilIT.uploadDatasetLogo(datasetPersistentId, datasetLogo, apiToken);
overrideThumbnail.prettyPrint();
overrideThumbnail.then().assertThat()
.body("data.message", CoreMatchers.equalTo("Thumbnail is now " + datasetLogoAsBase64))
.statusCode(200);
logger.info("Dataset logo has been uploaded and becomes the thumbnail:");
Response getThumbnail4 = UtilIT.getDatasetThumbnailMetadata(datasetId, apiToken);
getThumbnail4.prettyPrint();
getThumbnail4.then().assertThat()
// .body("data.datasetThumbnail", CoreMatchers.equalTo(null))
.body("data.isUseGenericThumbnail", CoreMatchers.equalTo(false))
.body("data.datasetThumbnailBase64image", CoreMatchers.equalTo(datasetLogoAsBase64))
.body("data.datasetLogoPresent", CoreMatchers.equalTo(false))
.statusCode(200);
InputStream inputStream4creator = UtilIT.getInputStreamFromUnirest(thumbnailUrl, apiToken);
assertEquals(datasetLogoAsBase64, UtilIT.inputStreamToDataUrlSchemeBase64Png(inputStream4creator));
InputStream inputStream4guest = UtilIT.getInputStreamFromUnirest(thumbnailUrl, noSpecialAccessApiToken);
assertEquals(datasetLogoAsBase64, UtilIT.inputStreamToDataUrlSchemeBase64Png(inputStream4guest));
Response search4 = UtilIT.search("id:dataset_" + datasetId + "_draft", apiToken);
search4.prettyPrint();
search4.then().assertThat()
.body("data.items[0].name", CoreMatchers.equalTo("Darwin's Finches"))
.statusCode(200);
Response thumbnailCandidates3 = UtilIT.showDatasetThumbnailCandidates(datasetPersistentId, apiToken);
thumbnailCandidates3.prettyPrint();
logger.fine("datasetLogoAsBase64: " + datasetLogoAsBase64);
logger.fine("dataverseProjectLogoAsBase64: " + dataverseProjectLogoAsBase64);
logger.fine("treesAsBase64: " + treesAsBase64);
thumbnailCandidates3.then().assertThat()
.body("data[0].base64image", CoreMatchers.equalTo(datasetLogoAsBase64))
.body("data[0].dataFileId", CoreMatchers.equalTo(null))
.body("data[1].base64image", CoreMatchers.equalTo(dataverseProjectLogoAsBase64))
.body("data[1].dataFileId", CoreMatchers.equalTo(dataFileId2.intValue()))
.body("data[2].base64image", CoreMatchers.equalTo(treesAsBase64))
.body("data[2].dataFileId", CoreMatchers.equalTo(dataFileId1.intValue()))
.statusCode(200);
Response deleteDatasetLogo = UtilIT.removeDatasetThumbnail(datasetPersistentId, apiToken);
deleteDatasetLogo.prettyPrint();
deleteDatasetLogo.then().assertThat()
.body("data.message", CoreMatchers.equalTo("Dataset thumbnail removed."))
.statusCode(200);
logger.info("Deleting the dataset logo means that the thumbnail is not set. It should be the generic icon:");
Response getThumbnail5 = UtilIT.getDatasetThumbnailMetadata(datasetId, apiToken);
getThumbnail5.prettyPrint();
getThumbnail5.then().assertThat()
// .body("data.datasetThumbnail", CoreMatchers.equalTo(null))
.body("data.isUseGenericThumbnail", CoreMatchers.equalTo(true))
.body("data.datasetLogoPresent", CoreMatchers.equalTo(false))
.statusCode(200);
InputStream inputStream5creator = UtilIT.getInputStreamFromUnirest(thumbnailUrl, apiToken);
assertNull(inputStream5creator);
InputStream inputStream5guest = UtilIT.getInputStreamFromUnirest(thumbnailUrl, noSpecialAccessApiToken);
assertNull(inputStream5guest);
Response search5 = UtilIT.search("id:dataset_" + datasetId + "_draft", apiToken);
search5.prettyPrint();
search5.then().assertThat()
.body("data.items[0].name", CoreMatchers.equalTo("Darwin's Finches"))
.body("data.items[0].thumbnailFilename", CoreMatchers.equalTo(null))
.body("data.items[0].datasetThumbnailBase64image", CoreMatchers.equalTo(null))
.statusCode(200);
Response thumbnailCandidates4 = UtilIT.showDatasetThumbnailCandidates(datasetPersistentId, apiToken);
thumbnailCandidates4.prettyPrint();
thumbnailCandidates4.then().assertThat()
.body("data[0].base64image", CoreMatchers.equalTo(dataverseProjectLogoAsBase64))
.body("data[0].dataFileId", CoreMatchers.equalTo(dataFileId2.intValue()))
.body("data[1].base64image", CoreMatchers.equalTo(treesAsBase64))
.body("data[1].dataFileId", CoreMatchers.equalTo(dataFileId1.intValue()))
.statusCode(200);
Response switchtoFirstDataFileThumbnail = UtilIT.useThumbnailFromDataFile(datasetPersistentId, dataFileId1, apiToken);
switchtoFirstDataFileThumbnail.prettyPrint();
switchtoFirstDataFileThumbnail.then().assertThat()
.body("data.message", CoreMatchers.equalTo("Thumbnail set to " + treesAsBase64))
.statusCode(200);
Response publishDataverse = UtilIT.publishDataverseViaSword(dataverseAlias, apiToken);
publishDataverse.prettyPrint();
publishDataverse.then().assertThat()
.statusCode(OK.getStatusCode());
Response publishDataset = UtilIT.publishDatasetViaNativeApi(datasetId, "major", apiToken);
publishDataset.prettyPrint();
publishDataset.then().assertThat()
.statusCode(OK.getStatusCode());
Response getThumbnailImageNoSpecialAccess99 = UtilIT.getDatasetThumbnail(datasetPersistentId, noSpecialAccessApiToken);
// getThumbnailImageNoSpecialAccess99.prettyPrint();
getThumbnailImageNoSpecialAccess99.then().assertThat()
.contentType("image/png")
.statusCode(OK.getStatusCode());
InputStream inputStream99creator = UtilIT.getInputStreamFromUnirest(thumbnailUrl, apiToken);
assertEquals(treesAsBase64, UtilIT.inputStreamToDataUrlSchemeBase64Png(inputStream99creator));
InputStream inputStream99guest = UtilIT.getInputStreamFromUnirest(thumbnailUrl, noSpecialAccessApiToken);
assertEquals(treesAsBase64, UtilIT.inputStreamToDataUrlSchemeBase64Png(inputStream99guest));
Response searchResponse = UtilIT.search("id:dataset_" + datasetId, noSpecialAccessApiToken);
searchResponse.prettyPrint();
searchResponse.then().assertThat()
.statusCode(OK.getStatusCode());
/**
* @todo What happens when you delete a dataset? Does the thumbnail
* created based on the logo get deleted too? Should it?
*/
}
@After
public void tearDownDataverse() {
File treesThumb = new File("scripts/search/data/binary/trees.png.thumb48");
treesThumb.delete();
File cc0Thumb = new File("src/main/webapp/resources/images/cc0.png.thumb48");
cc0Thumb.delete();
File dataverseprojectThumb = new File("src/main/webapp/resources/images/dataverseproject.png.thumb48");
dataverseprojectThumb.delete();
}
} |
/*------------------------------------------------------------------------
* EXPORTED SRXAFSCB_ProbeUuid
*
* Description:
 * Routine called by the server-side callback RPC interface to
 * implement ``probing'' the Cache Manager, just making sure it's
 * still there and is still the same client it used to be.
*
* Arguments:
* a_call : Ptr to Rx call on which this request came in.
* a_uuid : Ptr to UUID that must match the client's UUID.
*
* Returns:
* 0 if a_uuid matches the UUID for this client
 * Non-zero otherwise.
*
* Environment:
* Nothing interesting.
*
* Side Effects:
* As advertised.
*------------------------------------------------------------------------*/
int
SRXAFSCB_ProbeUuid(struct rx_call *a_call, afsUUID * a_uuid)
{
int code = 0;
XSTATS_DECLS;
RX_AFS_GLOCK();
AFS_STATCNT(SRXAFSCB_Probe);
XSTATS_START_CMTIME(AFS_STATS_CM_RPCIDX_PROBE);
if (!afs_uuid_equal(a_uuid, &afs_cb_interface.uuid))
code = 1;
XSTATS_END_TIME;
RX_AFS_GUNLOCK();
return code;
} |
<filename>vendor/github.com/Arvinderpal/matra/common/types/container.go
package types
// dTypes "github.com/docker/docker/api/types"
type Container struct {
// dTypes.ContainerJSON
}
|
<reponame>dvdbrink/xudp
package com.danielvandenbrink.xudp;
public class PacketException extends RuntimeException {
public PacketException(String message) {
super(message);
}
public PacketException(Throwable cause) {
super(cause);
}
public PacketException(String message, Throwable cause) {
super(message, cause);
}
}
|
// NOTE: this is a fragment; the include and globals below are assumed
// context, with names and types inferred from their usage in reduce().
#include <iostream>
using namespace std;

extern int n;          // number of runs
extern int len[];      // length of each run
extern char charat[];  // character of each run

/**
 * Appears to return the maximum reduction in encoded length achievable
 * from run `index` onward, given a remaining deletion budget of
 * `cur_budget` characters; returns zero at the end of the input.
 */
int reduce(int index, int cur_budget)
{
int red = 0;
int new_budget = cur_budget;
if (index >= n) {
return 0;
}
bool merge = false;
if (len[index] <= cur_budget) {
new_budget -= len[index];
if ((index - 1 >= 0) && (index + 1 < n) && (charat[index - 1] == charat[index + 1])) {
red = 4 + reduce(index + 2, new_budget);
merge = true;
} else {
red = 2 + reduce(index + 1, new_budget);
}
}
int without_red = reduce(index + 1, cur_budget);
if (red > without_red) {
cout << index << ";merge=" << merge << ";red=" << red << ";new_bud=" << new_budget << endl;
return red;
} else {
cout << index << ";noremove=" << without_red << ";cur_bud=" << cur_budget << endl;
return without_red;
}
} |
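// Hypothetical usage sketch (not part of the original fragment): define the
// assumed globals for a small run-length encoding of "aabaa" as three runs
// and call reduce(). Values here are illustrative only.
int n = 3;
int len[] = {2, 1, 2};
char charat[] = {'a', 'b', 'a'};

int main()
{
    // With a budget of 1 character we can delete the lone 'b' run, which
    // appears to let the neighboring 'a' runs merge (the "merge" branch above).
    cout << reduce(0, 1) << endl; // prints 4 (plus the debug trace)
    return 0;
}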
<gh_stars>1-10
use crate::*;
pub(crate) fn get_contract_token_id(contract_id: &AccountId, token_id: &str) -> String{
format!("{}{}{}", contract_id, DELIMETER, token_id)
}
pub(crate) fn is_promise_success() -> bool {
    require!(env::promise_results_count() == 1, "expected exactly one promise result");
match env::promise_result(0) {
PromiseResult::Successful(_) => true,
_ => false,
}
}
/// Enumeration helpers: generic pagination over a persistent `Vector`.
pub(crate) fn paginate<V>(
values: &Vector<V>,
from_index: Option<U128>,
limit: Option<u64>,
) -> Vec<V> where V: BorshSerialize + BorshDeserialize {
let len = values.len();
if len == 0 {
return vec![];
}
let limit = limit.map(|v| v as usize).unwrap_or(usize::MAX);
    assert_ne!(limit, 0, "limit cannot be zero");
let start_index: u128 = from_index.map(From::from).unwrap_or_default();
    assert!(
        len as u128 > start_index,
        "start_index must be less than the collection length"
    );
    values
        .iter()
        .skip(start_index as usize)
        .take(limit)
        .collect()
}
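// Example (hypothetical values): fetch ten items starting at index 100 of a
// persistent `Vector`:
//
//     let page: Vec<Offer> = paginate(&offers, Some(U128(100)), Some(10));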
pub(crate) fn unordered_map_val_pagination<K, V>(
map: &UnorderedMap<K, V>,
from_index: Option<U128>,
limit: Option<u64>,
) -> Vec<V> where K: BorshSerialize + BorshDeserialize, V: BorshSerialize + BorshDeserialize {
paginate(map.values_as_vector(), from_index, limit)
}
pub(crate) fn unordered_map_key_pagination<K, V>(
map: &UnorderedMap<K, V>,
from_index: Option<U128>,
limit: Option<u64>,
) -> Vec<K> where K: BorshSerialize + BorshDeserialize, V: BorshSerialize + BorshDeserialize {
paginate(map.keys_as_vector(), from_index, limit)
}
/// Set management: helpers for maintaining maps of `UnorderedSet`s.
pub(crate) fn map_set_insert<K, V> (
map: &mut LookupMap<K, UnorderedSet<V>>,
map_key: &K,
storage_key: StorageKey,
val: V,
) where K: BorshSerialize + BorshDeserialize, V: BorshSerialize + BorshDeserialize {
let mut set = map.get(map_key).unwrap_or_else(|| {
UnorderedSet::new(storage_key)
});
set.insert(&val);
map.insert(&map_key, &set);
}
pub(crate) fn map_set_remove<K, V> (
map: &mut LookupMap<K, UnorderedSet<V>>,
map_key: &K,
val: V,
) where K: BorshSerialize + BorshDeserialize, V: BorshSerialize + BorshDeserialize {
let mut set = map.get(map_key);
if let Some(set) = set.as_mut() {
set.remove(&val);
if set.len() == 0 {
map.remove(&map_key);
return;
}
map.insert(&map_key, &set);
}
}
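// Example: these helpers maintain a LookupMap<K, UnorderedSet<V>>, as
// internal_add_offer below does for offer ids per maker:
//
//     map_set_insert(&mut self.offers_by_maker_id, &offer.maker_id,
//         StorageKey::OfferByMakerIdInner { maker_id: offer.maker_id.clone() },
//         self.offer_id);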
impl Contract {
pub(crate) fn id_to_offer(&self, set: Vec<u64>) -> Vec<Offer> {
set.iter()
.map(|offer_id| self.offer_by_id.get(&offer_id).unwrap())
.collect()
}
// Add the offer to the contract state
pub(crate) fn internal_add_offer(&mut self, offer: Offer) {
self.offer_id += 1;
self.offer_by_id.insert(&self.offer_id, &offer);
map_set_insert(
&mut self.offers_by_maker_id,
&offer.maker_id,
StorageKey::OfferByMakerIdInner { maker_id: offer.maker_id.clone() },
self.offer_id
);
map_set_insert(
&mut self.offers_by_taker_id,
&offer.taker_id,
StorageKey::OfferByTakerIdInner { taker_id: offer.taker_id.clone() },
self.offer_id
);
let contract_token_id = get_contract_token_id(&offer.contract_id, &offer.token_id);
self.offer_by_contract_token_id.insert(
&contract_token_id.clone(),
&self.offer_id
);
}
    // Moves the offer from the previous maker to the new maker
pub(crate) fn internal_swap_offer_maker(&mut self, offer_id: u64, prev_maker_id: &AccountId, new_maker_id: &AccountId) {
map_set_remove(
&mut self.offers_by_maker_id,
&prev_maker_id,
offer_id,
);
self.internal_decrement_storage(prev_maker_id, Some(1));
map_set_insert(
&mut self.offers_by_maker_id,
&new_maker_id,
StorageKey::OfferByMakerIdInner { maker_id: new_maker_id.clone() },
offer_id
);
}
// Removes the offer from the contract state
pub(crate) fn internal_remove_offer_state(&mut self, offer_id: u64, offer: &Offer) {
//remove the offer from its ID
self.offer_by_id.remove(&offer_id);
//remove the offer ID from the maker
map_set_remove(
&mut self.offers_by_maker_id,
&offer.maker_id,
offer_id,
);
//remove the offer ID from the taker
map_set_remove(
&mut self.offers_by_taker_id,
&offer.taker_id,
offer_id,
);
//remove the offer from the contract and token ID
let contract_token_id = get_contract_token_id(&offer.contract_id, &offer.token_id);
self.offer_by_contract_token_id.remove(&contract_token_id);
}
// Removes an offer and handles a storage payback with or without a refund of the offer amount
pub(crate) fn internal_remove_offer(&mut self, offer_id: u64, offer: &Offer, refund: bool) {
self.internal_remove_offer_state(offer_id, offer);
self.internal_decrement_storage(&offer.maker_id, Some(1));
let mut payout = self.offer_storage_amount;
if refund {
payout += offer.amount.0;
}
// refund the offer maker the offer amount + the amount they added for storage
Promise::new(offer.maker_id.clone()).transfer(payout);
}
pub(crate) fn internal_accept_offer(
&mut self,
offer_id: u64,
offer: Offer
) {
if offer.taker_id == offer.maker_id {
env::panic_str("cannot accept your own offer");
}
// make sure there's an approval ID.
let approval_id = offer.approval_id.unwrap_or_else(|| env::panic_str("Cannot accept an offer that has no approval ID"));
// get market amount
let market_amount = self.market_royalty as u128 * offer.amount.0 / 10_000u128;
// subtract from payout amount
let payout_amount = U128(offer.amount.0.checked_sub(market_amount).unwrap_or_else(|| env::panic_str("Market amount too high.")));
// remove the offer from state and do not refund funds
self.internal_remove_offer(offer_id, &offer, false);
//initiate a cross contract call to the nft contract. This will transfer the token to the buyer and return
//a payout object used for the market to distribute funds to the appropriate accounts.
ext_contract::nft_transfer_payout(
offer.maker_id.clone(), //maker of the offer (person to transfer the NFT to)
offer.token_id.clone(), //token ID to transfer
approval_id, //market contract's approval ID in order to transfer the token on behalf of the owner
"payout from market".to_string(), //memo (to include some context)
/*
the price that the token was offered for. This will be used in conjunction with the royalty percentages
for the token in order to determine how much money should go to which account.
*/
payout_amount,
10, //the maximum amount of accounts the market can payout at once (this is limited by GAS)
offer.contract_id.clone(), //contract to initiate the cross contract call to
1, //yoctoNEAR to attach to the call
GAS_FOR_NFT_TRANSFER, //GAS to attach to the call
)
//after the transfer payout has been initiated, we resolve the promise by calling our own resolve_offer function.
//resolve offer will take the payout object returned from the nft_transfer_payout and actually pay the accounts
.then(ext_self::resolve_offer(
offer.maker_id,
offer.taker_id,
offer.token_id,
offer.contract_id,
offer.amount,
offer.updated_at,
payout_amount,
market_amount,
env::current_account_id(), //we are invoking this function on the current contract
NO_DEPOSIT, //don't attach any deposit
GAS_FOR_ROYALTIES, //GAS attached to the call to payout royalties
));
}
}
|
The following review contains spoilers, as that is the only way to discuss how the film reflects contemporary realities.
Gene Roddenberry, when he developed Star Trek, acknowledged that he created a “new world with new rules,” which he could use to examine contemporary issues in society. Director J.J. Abrams and the writers of the latest film in the franchise, Star Trek Into Darkness, appear to have decided to honor the spirit of Roddenberry by drawing upon the pressing societal issue of militarization.
On the planet Nibiru, the crew of the USS Enterprise is on a mission to observe a primitive civilization. First Officer Spock (Zachary Quinto) is endangered by an erupting volcano, and Captain James T. Kirk (Chris Pine) decides to violate the Prime Directive, the principle that the Federation will not interfere with the alien civilizations it discovers or observes.
The violation compels Spock to submit a report to Admiral Christopher Pike (Bruce Greenwood) informing him that, even though Kirk saved his life, the Prime Directive was violated. Kirk is demoted, and Pike reassumes command of the Enterprise, though he wants Kirk to remain an officer in his crew.
An act of domestic terrorism is committed against a secret installation. The suspect is believed to be one of Starfleet's own, John Harrison (Benedict Cumberbatch). Kirk and Pike attend an emergency meeting at Starfleet headquarters. Harrison apparently blew up a library archive, but why? Just as Pike is working out why anyone would want to destroy an archive, the headquarters itself comes under attack from Harrison, who knew an emergency meeting would be called and wanted to eliminate high-ranking officers in the command.
Harrison flees to Kronos, the Klingon home world. Admiral Alexander Marcus (Peter Weller) orders Kirk to lead an operation to target and kill Harrison with a new kind of photon torpedo. Marcus acknowledges this could start a war with the Klingons, but he believes war is inevitable, an allusion to how President George W. Bush was willing to exploit the 9/11 terror attacks to go to war in Iraq.
Spock opposes assassinating Harrison with photon torpedoes, as it would deprive him of life without charge or trial. He also argues that extrajudicially killing Harrison would violate Kronos' sovereignty. Though Kirk was ordered to kill Harrison once they reached Kronos, he ultimately defies Marcus and leads a team to capture Harrison alive.
Aboard the Enterprise, Harrison reveals his true identity. He is a genetically engineered superhuman named Khan. Marcus woke him from a 300-year cryogenic sleep to take advantage of his savagery and have him develop weapons for a war against the Klingons. The photon torpedoes, moreover, actually contained 72 of Khan's cryogenically frozen colleagues, who would have died had they all been fired at him.
Marcus wanted to weaponize Starfleet and approach the universe as if it was a battlefield. The Federation’s purpose, however, had been to keep peace and not start wars.
Militarization is the film's central theme. At the end, Kirk delivers a speech warning his fellow Starfleet officers to be wary of the thirst for revenge and of awakening evil within themselves.
The classic line, "Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before," directly repudiates the direction in which Marcus wanted to take Starfleet.
As Roberto Orci, one of the writers told the Wall Street Journal, “The original Star Trek mirrored the Civil Rights movement. It mirrored some progressive ideas that were not exactly popular at the time, like relations with the Soviet Union during the Cold War as represented by having a Russian officer. We felt that Star Trek was always embedded in its best forms in the world that we live in. The world that we happen to currently live in involves issues of terrorism and of war and of sovereignty. So surely recent events and the things happening in the century were part of our calculus.”
In other words, the elements of the movie that seem to comment on war and sovereignty are, while subtle, intended to explore such issues. (There’s even a message before the credits that pays tribute to post-9/11 veterans, which could be considered an injection of reality into the story world.)
It stands in sharp contrast to the recent film Zero Dark Thirty, which celebrated vigilantism and war-on-terrorism policies that have transformed the world into a battlefield. Those on the raid to kill Osama bin Laden show little restraint in killing people in his compound who have not fired a weapon at them, and they take no care to capture bin Laden alive; in fact, the team is pleased that a bullet hit him in the head and killed him.
The capture of Khan, by contrast, has real value in the film: the crew would never have discovered the threat Marcus posed to them had they not taken Khan aboard.
The above speaks only to how the film reflects our time; it does not delve into all the homages to previous entries in the Star Trek franchise. Yet the effort to speak to the issues of the day could be considered an homage to Star Trek in and of itself.
Quinto said on “Real Time w/ Bill Maher,” “Roddenberry really believed ultimately in humanity and with a lot of faith and optimism, but the stories always reflected the society in which they took place. So, he was really allegorically tackling a lot of social issues in the ’60s that weren’t really openly discussed.
“I think the darkness is a reflection of our time, for better or for worse. I think people go to the movies to be confronted or immersed in things that maybe in their real lives they’re a little less eager to look at.”
As someone who regularly writes about the current administration's global assassination policy, I find it refreshing that there is a blockbuster science fiction film this summer that confronts Americans with a scenario that has unfolded multiple times in reality.
I don’t expect the tie-in to reality to change anyone’s mind, but I do think it will give Americans an opportunity to imagine that it is possible to respond to threats or attacks without aggressively starting and waging perpetual war. And, since the genre of science fiction has always been most powerful when it draws from contemporary issues, I consider this film to be much more than just another movie for longtime fans of the Star Trek universe. |
package de.peerthing.scenarioeditor.model;
public interface IListen extends ICommand, IScenarioObject {
    /**
     * sets the distribution that says how long the nodes have to listen
     *
     * @author Patrik
     */
    public void setDistribution(IDistribution distribution);
    /**
     * gets the distribution that says how long the nodes have to listen
     */
    public IDistribution getDistribution();
    /**
     * gets a string that describes the event that is listened for
     */
    public String getEvent();
    /**
     * sets the event that is listened for to the given string
     */
    public void setEvent(String event);
}
|
package util_test
import (
"testing"
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"go-starter-example/internal/util"
)
func TestLogLevelFromString(t *testing.T) {
t.Parallel()
res := util.LogLevelFromString("panic")
assert.Equal(t, zerolog.PanicLevel, res)
res = util.LogLevelFromString("warn")
assert.Equal(t, zerolog.WarnLevel, res)
res = util.LogLevelFromString("foo")
assert.Equal(t, zerolog.DebugLevel, res)
}
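The test pins down the contract of util.LogLevelFromString: known level names map to their zerolog constants and anything unrecognized falls back to debug. A minimal implementation consistent with these assertions might look like the following (a sketch, not necessarily the project's actual code; it relies on zerolog.ParseLevel from the zerolog API):

package util

import "github.com/rs/zerolog"

// LogLevelFromString maps a level name to its zerolog.Level,
// falling back to debug for anything unrecognized.
func LogLevelFromString(level string) zerolog.Level {
	l, err := zerolog.ParseLevel(level)
	if err != nil {
		return zerolog.DebugLevel
	}
	return l
}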
|
import { SessionInterface as SessionInterfaceSession } from "../../../../shopify/session"
import { missingScopes } from "../callback"
describe("missingScopes", () => {
it("should return an empty array when the scopes are the same", () => {
const requiredScopes = ["read_products", "write_products"]
const diff = missingScopes(
{ onlineAccessInfo: { associated_user_scope: "read_products,write_products" } } as SessionInterfaceSession,
requiredScopes
)
expect(diff).toEqual([])
})
it("should return the difference between the scopes and the user's scopes", () => {
const requiredScopes = ["read_products", "write_products", "read_orders", "write_orders"]
const diff = missingScopes(
{ onlineAccessInfo: { associated_user_scope: "read_products,write_orders" } } as SessionInterfaceSession,
requiredScopes
)
expect(diff).toEqual(["write_products", "read_orders"])
})
})
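The tests pin down the contract: granted scopes live in session.onlineAccessInfo.associated_user_scope as a comma-separated string, and the function returns the required scopes that are missing, preserving order. A sketch consistent with these assertions (not necessarily the actual implementation in ../callback):

export function missingScopes(session: SessionInterfaceSession, requiredScopes: string[]): string[] {
  // Collect the user's granted scopes into a set for O(1) membership checks
  const granted = new Set((session.onlineAccessInfo?.associated_user_scope ?? "").split(","))
  return requiredScopes.filter((scope) => !granted.has(scope))
}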
|
# Imports required by this method (they would sit at the top of the module):
import uuid
from collections import Counter
from datetime import datetime, timedelta
from os import remove

import discord
import matplotlib.pyplot as plt

async def member_log(self, ctx):
filename = f'{uuid.uuid1()}.png'
author = ctx.message.author
guild = author.guild
async with ctx.typing():
joined_dates = Counter([member.joined_at.date() for member in guild.members])
start_date = min(joined_dates)
end_date = datetime.today().date()
delta = end_date - start_date
dates = [(start_date + timedelta(days = i)) for i in range(delta.days + 1)]
values = [joined_dates.get(date, 0) for date in dates]
for i in range(1, len(values)):
values[i] = values[i] + values[i-1]
fig, ax = plt.subplots()
ax.plot(dates, values)
ax.set_ylabel('Members')
ax.set_xlabel('Date')
ax.set_title('Members over time')
ax.grid(True)
fig.autofmt_xdate()
fig.tight_layout()
        fig.savefig(filename)
        plt.close(fig)  # release the figure so repeated invocations don't leak memory
file = discord.File(filename, filename = 'image.png')
embed = discord.Embed(title = 'Member Join Log', colour = discord.Colour.blue())
embed.set_thumbnail(url = guild.icon_url)
embed.set_image(url = 'attachment://image.png')
embed.add_field(name = 'Total Members', value = guild.member_count, inline = False)
        # Pick the single busiest day (the original loop added a field per tied day
        # and recomputed max() on every iteration)
        busiest_date, busiest_count = max(joined_dates.items(), key = lambda kv: kv[1])
        embed.add_field(name = 'Most Active Day', value = f'{busiest_count} people on {busiest_date}', inline = False)
embed.add_field(name = 'Average joins/day', value = round(sum(joined_dates.values()) / len(dates), 3), inline = False)
await ctx.send(file = file, embed = embed)
remove(filename) |
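As written, member_log is a bare coroutine method with no command decorator; in a discord.py bot it would presumably be registered inside a cog. A hypothetical wiring (the cog and command names are illustrative):

from discord.ext import commands

class MemberStats(commands.Cog):  # hypothetical cog name
    def __init__(self, bot):
        self.bot = bot

    @commands.command(name = 'memberlog')
    async def member_log(self, ctx):
        ...  # body as above

def setup(bot):
    bot.add_cog(MemberStats(bot))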
WASHINGTON, D.C. — From the Carolinas to Martha's Vineyard, the earthquake that shook the East Coast Tuesday rattled the nerves of many people who had never felt an earthquake.
Just after noon, buildings began to shake along the Eastern seaboard. The 5.9 earthquake was the most powerful in Virginia in decades.
The shaking damaged the Latter-day Saint temple in Washington, D.C., knocking the tips off four of its spires along with some pieces of granite on the temple facing.
"We started finding chunks of marble and spires laying on the ground. They are about four feet long; the base of them are probably 4 inches square, and it comes up to a point," said Doug Wiggins, a North Carolina resident who was at the temple when the earthquake hit.
Don Olson, director of the temple's visitor center, called the quake a "pretty healthy shake," but said there was no serious internal damage to the temple of which he was aware.
The quake was centered northwest of Richmond, Va. At a new Walmart in King George, Va., former Utahn Sandy Miller and her daughter Eilee were startled while shopping.
"All of a sudden we could feel, like, waves; and we looked up and we could see the ceiling shaking and the walls," Miller said. "I was thinking our new Walmart wasn't built strong enough and it was going to collapse or something."
"It scared me," Eliee said.
The earthquake also shook New York City, where crowds felt it in Times Square and on the floor of the New York Stock Exchange, which cleared out.
"What is unusual about this one is the size of the event; this one is significantly larger," said Dr. Harley Benz, of the U.S. Geological Survey.
"What is unusual about this one is the size of the event; this was significantly larger," said Dr. Harley Benz, with the U.S. Geological Survey.
There have been reports of injuries, but no one died in the quake.
----
Written with contributions from Carole Mikita and the ksl.com news team.
/**
 * utility to make a double. If the string ends with '%' then we take a percentage of the baseValue
*
* @param s string
* @param baseValue Used if s is a percentage
*
* @return double value
*/
private double toDouble(String s, double baseValue) {
if (s.endsWith("%")) {
double percent = Misc.toDouble(s.substring(0, s.length() - 1));
return (percent / 100.0) * baseValue;
}
    return Double.parseDouble(s);
} |
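For reference, the two branches behave as follows (illustrative values):

toDouble("3.5", 200.0);  // -> 3.5  (plain number; baseValue is ignored)
toDouble("25%", 200.0);  // -> 50.0 (25 percent of the base value)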
package com.nytimes.android.external.cache3;
import javax.annotation.Nonnull;
public final class Ascii {
private Ascii() {
}
/**
* Returns a copy of the input string in which all {@linkplain #isUpperCase(char) uppercase ASCII
* characters} have been converted to lowercase. All other characters are copied without
* modification.
*/
@Nonnull
public static String toLowerCase(@Nonnull String string) {
int length = string.length();
for (int i = 0; i < length; i++) {
if (isUpperCase(string.charAt(i))) {
char[] chars = string.toCharArray();
for (; i < length; i++) {
char c = chars[i];
if (isUpperCase(c)) {
chars[i] = (char) (c ^ 0x20);
}
}
return String.valueOf(chars);
}
}
return string;
}
/**
* Indicates whether {@code c} is one of the twenty-six uppercase ASCII alphabetic characters
* between {@code 'A'} and {@code 'Z'} inclusive. All others (including non-ASCII characters)
* return {@code false}.
*/
public static boolean isUpperCase(char c) {
return (c >= 'A') && (c <= 'Z');
}
}
|
Apple's iOS 8 may not look too different from the version that preceded it, but trust us: There are plenty of new bits and bobs to get familiar with once you start poking around. Now that you've had some time to dig into our full review, you can take iOS 8 for a spin yourself -- Apple has just pushed the update live, so check your iDevice's settings to see if it's your time to shine. Just keep a few things in mind before you enter the breach: The update will only install on the iPhone 4s and newer, the iPad 2 and newer and the fifth-generation iPod touch. Oh, and it looks like Apple is having some HealthKit trouble at the moment, so all HealthKit compatible apps have been temporarily removed from the App Store. According to tweets from Carrot Fit developer Brian Mueller, Apple has been saying that a fix is in the works, but there's no ETA on when it'll actually take effect. Nothing like a few hiccups to kick off a massive software launch, no?
Photos by Will Lipman. |
from functools import reduce
from operator import add
from itertools import accumulate
import sys
from typing import Generator
INSTRUCTION_MAP = {'(': 1, ')': -1}
def translate(instructions: str) -> Generator[int, None, None]:
return (INSTRUCTION_MAP[i] for i in instructions)
def final_floor(instructions: str) -> int:
steps = translate(instructions)
return reduce(add, steps, 0)
def first_negative(instructions: str) -> int:
steps = translate(instructions)
return next(i for i, floor in enumerate(accumulate(steps, add), 1) if floor < 0)
def main():
with open(sys.argv[1], 'r') as f:
instructions = f.read().strip()
print("The final floor is: {}".format(final_floor(instructions)))
print("The first basement is on step: {}".format(first_negative(instructions)))
if __name__ == "__main__":
main()
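The semantics match Advent of Code 2015, day 1: '(' goes up one floor, ')' goes down one, and first_negative is 1-indexed (note the enumerate(..., 1)). A few checks that follow directly from the code above:

assert final_floor("(())") == 0
assert final_floor("))(((((") == 3
assert first_negative(")") == 1
assert first_negative("()())") == 5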
|
    def allLegalMoves(self, BlackSide):  # BlackSide is accepted but currently unused
        """Return the legal moves from the current node as (currentNode, childNode) pairs."""
        childNodes = self._moves[self._currentNode]
legalMoves = []
for cNode in childNodes:
legalMoves.append((self._currentNode,cNode))
return legalMoves |
CaMKIIα promoter-controlled circuit manipulations target both pyramidal cells and inhibitory interneurons in cortical networks
A key assumption in studies of cortical functions is that excitatory principal neurons, but not inhibitory cells, express calcium/calmodulin-dependent protein kinase II subunit α (CaMKIIα), resulting in a widespread use of CaMKIIα promoter-driven protein expression for manipulating principal cells and monitoring their activities. Using neuroanatomical and electrophysiological methods we demonstrate that in addition to pyramidal neurons, multiple types of cortical GABAergic cells are targeted by adeno-associated viral vector (AAV) carrying the CaMKIIα-Channelrhodopsin 2-mCherry construct. We show that the reporter protein, mCherry, can visualize a large fraction of different interneuron types, including parvalbumin (PV), somatostatin (SST), neuronal nitric oxide synthase (nNOS) and neuropeptide Y (NPY)-containing GABAergic cells, which altogether cover around 50% of the whole inhibitory cell population in cortical structures. Importantly, the expression of the excitatory opsin Channelrhodopsin 2 in the interneurons effectively drives spiking of infected GABAergic cells even if the detectability of reporter proteins is ambiguous. Thus, our results challenge the use of CaMKIIα promoter-driven protein expression as a selective tool in targeting cortical glutamatergic neurons using viral vectors.
/**
 * Wraps a `JavaVM` pointer and provides some machinery to keep
* track of JVM state.
*/
class java_vm_ptr {
sl::support::observer_ptr<JavaVM> jvm;
std::atomic<bool> init_flag;
sl::concurrent::countdown_latch init_latch;
std::atomic<bool> shutdown_flag;
sl::concurrent::condition_latch shutdown_latch;
public:
java_vm_ptr(JavaVM* vm) :
jvm(vm),
init_latch(1),
shutdown_flag(false),
shutdown_latch([this] {
return !this->running();
}) { }
java_vm_ptr(const java_vm_ptr&) = delete;
java_vm_ptr& operator=(const java_vm_ptr&) = delete;
JavaVM* operator->() {
return jvm.get();
}
JavaVM* get() {
return jvm.get();
}
bool running() {
return !shutdown_flag.load(std::memory_order_acquire);
}
void await_init_complete() {
init_latch.await();
}
void notify_init_complete() {
init_flag.store(true, std::memory_order_release);
init_latch.count_down();
}
bool init_complete() {
return init_flag.load(std::memory_order_acquire);
}
void thread_sleep_before_shutdown(std::chrono::milliseconds millis) {
shutdown_latch.await(millis);
}
void notify_shutdown() {
shutdown_flag.store(true, std::memory_order_release);
init_latch.reset();
shutdown_latch.notify_all();
}
};
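A hypothetical usage sketch, not taken from the surrounding project: the holder would typically be created in JNI_OnLoad, with notify_init_complete called once startup work finishes so that threads blocked in await_init_complete can proceed.

#include <jni.h>

static java_vm_ptr* vm_holder = nullptr;

extern "C" JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM* vm, void* /* reserved */) {
    vm_holder = new java_vm_ptr(vm);
    // ... kick off startup work, then signal readiness ...
    vm_holder->notify_init_complete();
    return JNI_VERSION_1_6;
}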
// REQUIRES: x86-registered-target
// RUN: rm -rf %t && mkdir %t && cd %t
// RUN: cp %s debug-info-objname.cpp
/// No output file provided, input file is relative, we emit an absolute path (MSVC behavior).
// RUN: %clang_cl --target=x86_64-windows-msvc /c /Z7 -nostdinc debug-info-objname.cpp
// RUN: llvm-pdbutil dump -all debug-info-objname.obj | FileCheck %s --check-prefix=ABSOLUTE
/// No output file provided, input file is absolute, we emit an absolute path (MSVC behavior).
// RUN: %clang_cl --target=x86_64-windows-msvc /c /Z7 -nostdinc -- %t/debug-info-objname.cpp
// RUN: llvm-pdbutil dump -all debug-info-objname.obj | FileCheck %s --check-prefix=ABSOLUTE
/// The output file is provided as an absolute path, we emit an absolute path.
// RUN: %clang_cl --target=x86_64-windows-msvc /c /Z7 -nostdinc /Fo%t/debug-info-objname.obj -- %t/debug-info-objname.cpp
// RUN: llvm-pdbutil dump -all debug-info-objname.obj | FileCheck %s --check-prefix=ABSOLUTE
/// The output file is provided as relative path, -working-dir is provided, we emit an absolute path.
// RUN: %clang_cl --target=x86_64-windows-msvc /c /Z7 -nostdinc -working-dir=%t debug-info-objname.cpp
// RUN: llvm-pdbutil dump -all debug-info-objname.obj | FileCheck %s --check-prefix=ABSOLUTE
/// The input file name is relative and we specify -fdebug-compilation-dir, we emit a relative path.
// RUN: %clang_cl --target=x86_64-windows-msvc /c /Z7 -nostdinc -fdebug-compilation-dir=. debug-info-objname.cpp
// RUN: llvm-pdbutil dump -all debug-info-objname.obj | FileCheck %s --check-prefix=RELATIVE
/// Ensure /FA emits an .asm file which contains the path to the final .obj, not the .asm
// RUN: %clang_cl --target=x86_64-windows-msvc /c /Z7 -nostdinc -fdebug-compilation-dir=. /FA debug-info-objname.cpp
// RUN: FileCheck --input-file=debug-info-objname.asm --check-prefix=ASM %s
/// Same thing for -save-temps
// RUN: %clang_cl --target=x86_64-windows-msvc /c /Z7 -nostdinc -fdebug-compilation-dir=. /clang:-save-temps debug-info-objname.cpp
// RUN: FileCheck --input-file=debug-info-objname.asm --check-prefix=ASM %s
int main() {
return 1;
}
// ABSOLUTE: S_OBJNAME [size = [[#]]] sig=0, `{{.+}}debug-info-objname.obj`
// RELATIVE: S_OBJNAME [size = [[#]]] sig=0, `debug-info-objname.obj`
// ASM: Record kind: S_OBJNAME
// ASM-NEXT: .long 0
// ASM-NEXT: .asciz "debug-info-objname.obj"
|
Vasoactive Intestinal Peptide Downregulates Proinflammatory TLRs While Upregulating Anti-Inflammatory TLRs in the Infected Cornea
TLRs recognize microbial pathogens and trigger an immune response, but their regulation by neuropeptides, such as vasoactive intestinal peptide (VIP), during Pseudomonas aeruginosa corneal infection remains unexplored. Therefore, C57BL/6 (B6) mice were injected i.p. with VIP, and mRNA, protein, and immunostaining assays were performed. After VIP treatment, PCR array and real-time RT-PCR demonstrated that proinflammatory TLRs (conserved helix-loop-helix ubiquitous kinase, IRAK1, TLR1, TLR4, TLR6, TLR8, TLR9, and TNFR-associated factor 6) were downregulated, whereas anti-inflammatory TLRs (single Ig IL-1–related receptor and ST2) were upregulated. ELISA showed that VIP modestly downregulated phosphorylated inhibitor of NF-κB kinase subunit α but upregulated ST2 ~2-fold. SIGIRR was also upregulated, whereas TLR4 immunostaining was reduced in cornea; all confirmed the mRNA data. To determine whether VIP effects were cAMP dependent, mice were injected with small interfering RNA for type 7 adenylate cyclase (AC7), with or without VIP treatment. After silencing AC7, changes in mRNA levels of TLR1, TNFR-associated factor 6, and ST2 were seen and unchanged with addition of VIP, indicating that their regulation was cAMP dependent. In contrast, changes were seen in mRNA levels of conserved helix-loop-helix ubiquitous kinase, IRAK1, 2, TLR4, 9 and SIGIRR following AC7 silencing alone; these were modified by VIP addition, indicating their cAMP independence. In vitro studies assessed the effects of VIP on TLR regulation in macrophages and Langerhans cells. VIP downregulated mRNA expression of proinflammatory TLRs while upregulating anti-inflammatory TLRs in both cell types. Collectively, the data provide evidence that VIP downregulates proinflammatory TLRs and upregulates anti-inflammatory TLRs and that this regulation is both cAMP dependent and independent and involves immune cell types found in the infected cornea. |
from django.contrib import admin
from .models import Message
# from .models import smsSendingSetting
# Register your models here.
@admin.register(Message)
class MessageAdmin(admin.ModelAdmin):
pass
# @admin.register(smsSendingSetting)
# class smsSendingSettingAdmin(admin.ModelAdmin):
# pass
|
/// Gets the current VR extended display interface (initialization is required beforehand).
pub fn extended_display() -> Result<IVRExtendedDisplay, openvr_sys::HmdError> {
let mut err = EVRInitError_VRInitError_None;
let name = std::ffi::CString::new("FnTable:IVRExtendedDisplay_001").unwrap();
let ptr = unsafe {
openvr_sys::VR_GetGenericInterface(name.as_ptr(), &mut err)
};
match err {
EVRInitError_VRInitError_None => {
unsafe {
return Ok(IVRExtendedDisplay::from_raw(ptr as *const ()));
}
},
_ => {
return Err(err);
}
}
} |
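An illustrative call site (it assumes the OpenVR runtime was initialized beforehand, as the doc comment requires; the error handling is only a sketch):

match extended_display() {
    Ok(display) => {
        // the FnTable-style interface is now usable, e.g. for window bounds queries
    }
    Err(err) => eprintln!("IVRExtendedDisplay unavailable: {:?}", err),
}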
/**
 * This is not a test per se, but it cleans up old data in the root folder so
 * that there won't be any performance issues after a while.
*
* @throws BoxSDKServiceException
*/
@Test
@Category(BoxSDKTest.class)
public void removeContentOlderThanADayInRootFolder() throws BoxSDKServiceException {
BoxFolder rootFolder = boxSDKService.getRootFolder();
DateTime dateTime = new DateTime();
dateTime = dateTime.minusDays(1);
boxSDKService.deleteFolderContentOlderThan(rootFolder.getID(), dateTime);
} |
/**
* Creates a copy of a static field expression
*
* @return The copy
*/
@Override
public Expr copy()
{
return new StaticFieldExpr(owner, name, desc);
} |
/*
* @Author: zhangyang
* @Date: 2021-04-08 14:10:59
* @LastEditTime: 2021-04-08 14:13:08
 * @Description: base Koa controller exposing a uniform JSON respond() helper
*/
import { Context } from 'koa';
type RespondType = 'success' | 'fail' | 'unknown error';
export class BaseController {
respond(ctx: Context, data: any, type: RespondType) {
    switch (type) {
      case 'success':
        ctx.body = { status: 0, data, msg: '成功' }; // '成功' = success
        break;
      case 'fail':
        ctx.body = { status: -1, data, msg: '失败' }; // '失败' = failure
        break;
      case 'unknown error':
      default:
        ctx.body = { status: 99999, data, msg: '未知错误' }; // '未知错误' = unknown error
        break;
    }
}
} |
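A hypothetical subclass showing the intended call pattern (the route wiring and the user lookup are illustrative):

import { Context } from 'koa';

class UserController extends BaseController {
  async show(ctx: Context) {
    try {
      const user = { id: 1, name: 'demo' }; // stand-in for a real lookup
      this.respond(ctx, user, 'success');
    } catch (err) {
      this.respond(ctx, err, 'unknown error');
    }
  }
}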
/**
* Test cases for the WebSocket Client implementation.
*/
public class WebSocketClientFunctionalityTestCase {
private static final Logger LOG = LoggerFactory.getLogger(WebSocketClientFunctionalityTestCase.class);
private DefaultHttpWsConnectorFactory httpConnectorFactory = new DefaultHttpWsConnectorFactory();
private WebSocketClientConnector clientConnector;
private WebSocketRemoteServer remoteServer;
@BeforeClass
public void setup() throws InterruptedException {
remoteServer = new WebSocketRemoteServer(WEBSOCKET_REMOTE_SERVER_PORT, "xml, json");
remoteServer.run();
WebSocketClientConnectorConfig configuration = new WebSocketClientConnectorConfig(WEBSOCKET_REMOTE_SERVER_URL);
configuration.setAutoRead(true);
clientConnector = httpConnectorFactory.createWsClientConnector(configuration);
}
@Test(description = "Test the WebSocket handshake and sending and receiving text messages.")
public void testTextSendAndReceive() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
String textSent = "testText";
sendTextMessageAndReceiveResponse(textSent, connectorListener, webSocketConnection);
WebSocketTextMessage textMessage = connectorListener.getReceivedTextMessageToClient();
assertMessageProperties(textMessage);
Assert.assertEquals(textMessage.getText(), textSent);
Assert.assertTrue(textMessage.isFinalFragment());
}
@Test(description = "This is to test whether the text frame continuation is working as expected.")
public void testFrameContinuationForText() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
String continuationText1 = "continuation text 1";
String continuationText2 = "continuation text 2";
String finalText = "final text";
int loopCount = 2;
for (int i = 0; i < loopCount; i++) {
WebSocketTextMessage textMessage;
sendTextMessageAndReceiveResponse(continuationText1, false, connectorListener, webSocketConnection);
textMessage = connectorListener.getReceivedTextMessageToClient();
Assert.assertEquals(textMessage.getText(), continuationText1);
Assert.assertFalse(textMessage.isFinalFragment());
sendTextMessageAndReceiveResponse(continuationText2, false, connectorListener, webSocketConnection);
textMessage = connectorListener.getReceivedTextMessageToClient();
Assert.assertEquals(textMessage.getText(), continuationText2);
Assert.assertFalse(textMessage.isFinalFragment());
sendTextMessageAndReceiveResponse(finalText, true, connectorListener, webSocketConnection);
textMessage = connectorListener.getReceivedTextMessageToClient();
Assert.assertEquals(textMessage.getText(), finalText);
Assert.assertTrue(textMessage.isFinalFragment());
}
}
@Test(description = "This is to test whether timeout occurs when write timeout handler is added.")
public void testFrameContinuationTimeoutForText() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
WriteTimeOutTestListener timeoutHandler = new WriteTimeOutTestListener();
// set the timeout as 1 sec.
webSocketConnection.addWriteIdleStateHandler(timeoutHandler, 1);
// Wait till idle timeout occurs and then send the frame.
Thread.sleep(2000);
sendTextMessageAndReceiveResponse("Hello", true, connectorListener, webSocketConnection);
Assert.assertTrue(timeoutHandler.getTimedOut());
webSocketConnection.removeWriteIdleStateHandler();
}
@Test(description = "Test binary message sending and receiving.")
public void testBinarySendAndReceive() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
byte[] bytes = {1, 2, 3, 4, 5};
ByteBuffer bufferSent = ByteBuffer.wrap(bytes);
sendAndReceiveBinaryMessage(bufferSent, connectorListener, webSocketConnection);
WebSocketBinaryMessage receivedBinaryMessage = connectorListener.getReceivedBinaryMessageToClient();
assertMessageProperties(receivedBinaryMessage);
Assert.assertEquals(receivedBinaryMessage.getByteBuffer(), bufferSent);
Assert.assertEquals(receivedBinaryMessage.getByteArray(), bytes);
}
@Test(description = "This is to test whether the binary frame continuation is working as expected.")
public void testFrameContinuationForBinary() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
ByteBuffer continuationBuffer1 = ByteBuffer.wrap("continuation text 1".getBytes(StandardCharsets.UTF_8));
ByteBuffer continuationBuffer2 = ByteBuffer.wrap("continuation text 2".getBytes(StandardCharsets.UTF_8));
ByteBuffer finalBuffer = ByteBuffer.wrap("final text".getBytes(StandardCharsets.UTF_8));
int loopCount = 2;
for (int i = 0; i < loopCount; i++) {
WebSocketBinaryMessage binaryMessage;
sendAndReceiveBinaryMessage(continuationBuffer1, false, connectorListener, webSocketConnection);
binaryMessage = connectorListener.getReceivedBinaryMessageToClient();
Assert.assertEquals(binaryMessage.getByteBuffer(), continuationBuffer1);
Assert.assertFalse(binaryMessage.isFinalFragment());
sendAndReceiveBinaryMessage(continuationBuffer2, false, connectorListener, webSocketConnection);
binaryMessage = connectorListener.getReceivedBinaryMessageToClient();
Assert.assertEquals(binaryMessage.getByteBuffer(), continuationBuffer2);
Assert.assertFalse(binaryMessage.isFinalFragment());
sendAndReceiveBinaryMessage(finalBuffer, true, connectorListener, webSocketConnection);
binaryMessage = connectorListener.getReceivedBinaryMessageToClient();
Assert.assertEquals(binaryMessage.getByteBuffer(), finalBuffer);
Assert.assertTrue(binaryMessage.isFinalFragment());
}
}
@Test(description = "See if an error is thrown if a binary message is sent during text continuation.",
expectedExceptions = IllegalStateException.class,
expectedExceptionsMessageRegExp = "Cannot interrupt WebSocket text frame continuation")
public void testIllegalTextFrameContinuation() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
ByteBuffer buffer = ByteBuffer.wrap("continuation text 1".getBytes(StandardCharsets.UTF_8));
String text = "continuation text 2";
sendTextMessageAndReceiveResponse(text, false, connectorListener, webSocketConnection);
sendAndReceiveBinaryMessage(buffer, false, connectorListener, webSocketConnection);
}
@Test(description = "See if an error is thrown if a text message is sent during binary continuation.",
expectedExceptions = IllegalStateException.class,
expectedExceptionsMessageRegExp = "Cannot interrupt WebSocket binary frame continuation")
public void testIllegalBinaryFrameContinuation() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
ByteBuffer buffer = ByteBuffer.wrap("continuation text 1".getBytes(StandardCharsets.UTF_8));
String text = "continuation text 2";
sendAndReceiveBinaryMessage(buffer, false, connectorListener, webSocketConnection);
sendTextMessageAndReceiveResponse(text, false, connectorListener, webSocketConnection);
}
@Test(description = "Push text after connection closure should throw an exception.",
expectedExceptions = IllegalStateException.class,
expectedExceptionsMessageRegExp = "Close frame already sent. Cannot push binary data.")
public void testPushBinaryAfterConnectionClosure() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
webSocketConnection.terminateConnection(1000, "");
ByteBuffer buffer = ByteBuffer.wrap("continuation text 1".getBytes(StandardCharsets.UTF_8));
sendAndReceiveBinaryMessage(buffer, true, connectorListener, webSocketConnection);
}
@Test(description = "Push text after connection closure should throw an exception.",
expectedExceptions = IllegalStateException.class,
expectedExceptionsMessageRegExp = "Close frame already sent. Cannot push text data.")
public void testPushTextAfterConnectionClosure() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
webSocketConnection.terminateConnection(1000, "");
String text = "continuation text 1";
sendTextMessageAndReceiveResponse(text, true, connectorListener, webSocketConnection);
}
@Test(description = "Closing connection twice leads to illegal state.",
expectedExceptions = IllegalStateException.class,
expectedExceptionsMessageRegExp = "Close frame already sent. Cannot send close frame again.")
public void testConnectionClosureTwice() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
webSocketConnection.terminateConnection(1000, "");
webSocketConnection.initiateConnectionClosure(1000, "");
}
@Test(description = "Closing connection twice leads to illegal state.",
expectedExceptions = IllegalStateException.class,
expectedExceptionsMessageRegExp = "Close frame already sent. Cannot send close frame again.")
public void testTerminateConnectionClosureTwice() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
webSocketConnection.terminateConnection(1000, "");
webSocketConnection.terminateConnection(1000, "");
}
private void assertMessageProperties(WebSocketMessage webSocketMessage) {
Assert.assertEquals(webSocketMessage.getTarget(), "ws://localhost:9010/websocket");
Assert.assertFalse(webSocketMessage.isServerMessage());
}
private void sendAndReceiveBinaryMessage(ByteBuffer bufferSent,
WebSocketTestClientConnectorListener connectorListener,
WebSocketConnection webSocketConnection)
throws InterruptedException {
CountDownLatch countDownLatch = new CountDownLatch(1);
connectorListener.setCountDownLatch(countDownLatch);
webSocketConnection.pushBinary(bufferSent);
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
}
private void sendAndReceiveBinaryMessage(ByteBuffer bufferSent, boolean finalFragment,
WebSocketTestClientConnectorListener connectorListener,
WebSocketConnection webSocketConnection)
throws InterruptedException {
CountDownLatch countDownLatch = new CountDownLatch(1);
connectorListener.setCountDownLatch(countDownLatch);
webSocketConnection.pushBinary(bufferSent, finalFragment);
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
}
@Test(description = "Test ping received from the server.")
public void testPing() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
sendTextMessageAndReceiveResponse("ping", connectorListener, webSocketConnection);
Assert.assertTrue(connectorListener.isPingReceived(), "Ping message should be received");
}
private void sendTextMessageAndReceiveResponse(String textSent,
WebSocketTestClientConnectorListener connectorListener,
WebSocketConnection webSocketConnection)
throws Throwable {
CountDownLatch countDownLatch = new CountDownLatch(1);
connectorListener.setCountDownLatch(countDownLatch);
webSocketConnection.pushText(textSent);
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
}
private void sendTextMessageAndReceiveResponse(String textSent, boolean finalFragment,
WebSocketTestClientConnectorListener connectorListener,
WebSocketConnection webSocketConnection)
throws Throwable {
CountDownLatch countDownLatch = new CountDownLatch(1);
connectorListener.setCountDownLatch(countDownLatch);
webSocketConnection.pushText(textSent, finalFragment);
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
}
@Test(description = "Test pong received from the server after pinging the server.")
public void testPong() throws Throwable {
WebSocketTestClientConnectorListener pongConnectorListener = handshakeAndPing();
Assert.assertTrue(pongConnectorListener.isPongReceived(), "Pong message should be received");
}
private WebSocketTestClientConnectorListener handshakeAndPing() throws InterruptedException {
CountDownLatch pongLatch = new CountDownLatch(1);
WebSocketTestClientConnectorListener pongConnectorListener =
new WebSocketTestClientConnectorListener(pongLatch);
handshake(pongConnectorListener).setClientHandshakeListener(new ClientHandshakeListener() {
@Override
public void onSuccess(WebSocketConnection webSocketConnection, HttpCarbonResponse response) {
byte[] bytes = {1, 2, 3, 4, 5};
ByteBuffer buffer = ByteBuffer.wrap(bytes);
webSocketConnection.ping(buffer);
}
@Override
public void onError(Throwable t, HttpCarbonResponse response) {
LOG.error(t.getMessage());
Assert.fail(t.getMessage());
}
});
pongLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
return pongConnectorListener;
}
@Test
public void testConnectionClosureFromServerSide() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
sendTextMessageAndReceiveResponse("close", connectorListener, webSocketConnection);
WebSocketCloseMessage closeMessage = connectorListener.getCloseMessage();
Assert.assertTrue(connectorListener.isClosed());
Assert.assertEquals(closeMessage.getCloseCode(), 1000);
Assert.assertEquals(closeMessage.getCloseReason(), "Close on request");
webSocketConnection.finishConnectionClosure(closeMessage.getCloseCode(), null).sync();
Assert.assertFalse(webSocketConnection.isOpen());
}
@Test
public void testConnectionClosureWithoutCloseCodeFromServerSide() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
sendTextMessageAndReceiveResponse("close-without-status-code", connectorListener, webSocketConnection);
WebSocketCloseMessage closeMessage = connectorListener.getCloseMessage();
Assert.assertTrue(connectorListener.isClosed());
Assert.assertEquals(closeMessage.getCloseCode(), 1005);
Assert.assertEquals(closeMessage.getCloseReason(), "");
webSocketConnection.finishConnectionClosure().sync();
Assert.assertFalse(webSocketConnection.isOpen());
}
@Test
public void testConnectionClosureFromServerSideWithoutCloseFrame() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
sendTextMessageAndReceiveResponse("close-without-frame", connectorListener, webSocketConnection);
WebSocketCloseMessage closeMessage = connectorListener.getCloseMessage();
Assert.assertTrue(connectorListener.isClosed());
Assert.assertEquals(closeMessage.getCloseCode(), 1000);
Assert.assertEquals(closeMessage.getCloseReason(), "Bye");
}
@Test(description = "Test connection termination using WebSocketConnection without sending a close frame.")
public void testConnectionTerminationWithoutCloseFrame() throws Throwable {
WebSocketConnection webSocketConnection =
getWebSocketConnection(new WebSocketTestClientConnectorListener());
CountDownLatch countDownLatch = new CountDownLatch(1);
ChannelFuture closeFuture = webSocketConnection.terminateConnection().addListener(
future -> countDownLatch.countDown());
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
Assert.assertNull(closeFuture.cause());
Assert.assertTrue(closeFuture.isDone());
Assert.assertTrue(closeFuture.isSuccess());
Assert.assertFalse(webSocketConnection.isOpen());
}
@Test(description = "Test connection termination using WebSocketConnection with a close frame.")
public void testConnectionTerminationWithCloseFrame() throws Throwable {
WebSocketConnection webSocketConnection =
getWebSocketConnection(new WebSocketTestClientConnectorListener());
CountDownLatch countDownLatch = new CountDownLatch(1);
ChannelFuture closeFuture = webSocketConnection.terminateConnection(1011, "Unexpected failure").addListener(
future -> countDownLatch.countDown());
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
Assert.assertNull(closeFuture.cause());
Assert.assertTrue(closeFuture.isDone());
Assert.assertTrue(closeFuture.isSuccess());
Assert.assertFalse(webSocketConnection.isOpen());
}
// TODO disabled due to https://github.com/ballerina-platform/module-ballerina-http/issues/78
@Test(enabled = false)
public void testClientInitiatedClosure() throws Throwable {
WebSocketConnection webSocketConnection =
getWebSocketConnection(new WebSocketTestClientConnectorListener());
CountDownLatch countDownLatch = new CountDownLatch(1);
ChannelFuture closeFuture = webSocketConnection.initiateConnectionClosure(1001, "Going away").addListener(
future -> countDownLatch.countDown());
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
Assert.assertNull(closeFuture.cause());
Assert.assertTrue(closeFuture.isDone());
Assert.assertTrue(closeFuture.isSuccess());
}
// TODO disabled due to https://github.com/ballerina-platform/module-ballerina-http/issues/78
@Test(enabled = false)
public void testClientInitiatedClosureWithoutCloseCode() throws Throwable {
WebSocketConnection webSocketConnection =
getWebSocketConnection(new WebSocketTestClientConnectorListener());
CountDownLatch countDownLatch = new CountDownLatch(1);
ChannelFuture closeFuture = webSocketConnection.initiateConnectionClosure().addListener(
future -> countDownLatch.countDown());
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
Assert.assertNull(closeFuture.cause());
Assert.assertTrue(closeFuture.isDone());
Assert.assertTrue(closeFuture.isSuccess());
}
@Test
public void testExceptionCaught() throws Throwable {
WebSocketTestClientConnectorListener connectorListener = new WebSocketTestClientConnectorListener();
WebSocketConnection webSocketConnection = getWebSocketConnection(connectorListener);
CountDownLatch countDownLatch = new CountDownLatch(1);
connectorListener.setCountDownLatch(countDownLatch);
webSocketConnection.pushText("send-corrupted-frame");
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
Throwable throwable = connectorListener.getError();
Assert.assertNotNull(webSocketConnection);
Assert.assertNotNull(throwable);
Assert.assertTrue(throwable instanceof CorruptedFrameException);
Assert.assertEquals(throwable.getMessage(), "received continuation data frame outside fragmented message");
Assert.assertFalse(webSocketConnection.isOpen());
}
@AfterClass
public void cleanUp() throws ServerConnectorException, InterruptedException {
remoteServer.stop();
httpConnectorFactory.shutdown();
}
private ClientHandshakeFuture handshake(WebSocketConnectorListener connectorListener) {
ClientHandshakeFuture clientHandshakeFuture = clientConnector.connect();
clientHandshakeFuture.setWebSocketConnectorListener(connectorListener);
return clientHandshakeFuture;
}
private WebSocketConnection getWebSocketConnection(WebSocketConnectorListener connectorListener)
throws Throwable {
CountDownLatch countDownLatch = new CountDownLatch(1);
AtomicReference<WebSocketConnection> webSocketConnectionAtomicReference = new AtomicReference<>();
AtomicReference<Throwable> throwableAtomicReference = new AtomicReference<>();
handshake(connectorListener).setClientHandshakeListener(new ClientHandshakeListener() {
@Override
public void onSuccess(WebSocketConnection webSocketConnection, HttpCarbonResponse response) {
webSocketConnectionAtomicReference.set(webSocketConnection);
countDownLatch.countDown();
}
@Override
public void onError(Throwable throwable, HttpCarbonResponse response) {
throwableAtomicReference.set(throwable);
countDownLatch.countDown();
}
});
countDownLatch.await(WEBSOCKET_TEST_IDLE_TIMEOUT, SECONDS);
if (throwableAtomicReference.get() != null) {
throw throwableAtomicReference.get();
}
return webSocketConnectionAtomicReference.get();
}
} |
import networkx as nx
import numpy as np


def gen_balanced_tree(height=2, branch=3, directed=False):
    """Return the adjacency matrix of a balanced tree (upper triangle only if directed)."""
G = nx.balanced_tree(r=branch, h=height)
adj_mat = nx.adjacency_matrix(G).todense()
if directed:
return np.triu(adj_mat)
else:
return np.array(adj_mat) |
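As a quick sanity check derived from the code above: a balanced tree with branch factor 3 and height 2 has 1 + 3 + 9 = 13 nodes, so the returned matrix is 13 by 13, and its entries sum to twice the 12 tree edges in the undirected case.

adj = gen_balanced_tree(height=2, branch=3)
print(adj.shape)  # (13, 13)
print(adj.sum())  # 24 -> each of the 12 edges counted twice in the symmetric matrix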
#include<bits/stdc++.h>
using namespace std;
#define sci(n) scanf("%d",&n)
#define scl(n) scanf("%lld",&n)
#define scd(n) scanf("%lf",&n)
#define FOR(i,n) for(ll i=1;i<=n;i++)
#define LOOP(i,n) for(ll i=0;i<n;i++)
#define loop(a,b) for(ll i=a;i<=b;i++)
#define ll long long
#define pb push_back
#define mp make_pair
int main()
{
ll t;
scl(t);
while(t--)
{
ll n;
scl(n);
ll a[n+5];
LOOP(i,n)
{
scl(a[i]);
}
        ll cnt=0;
        ll samon=0,pichon=n+4;
        // Left scan: walk the longest prefix where a[i] >= i and remember
        // the last index that satisfies it (samon).
        LOOP(i,n)
        {
            if(a[i]>=cnt)
            {
                samon=cnt;
                cnt++;
}
else
{
break;
}
}
        cnt=0;
        // Right scan: walk the longest suffix where a[i] >= (n-1-i) and
        // remember the smallest index it reaches (pichon).
for(ll i=n-1;i>=0;i--)
{
if(a[i]>=cnt)
{
pichon=i;
cnt++;
}
else
{
break;
}
}
        // Answer is Yes when the two scans overlap, i.e. the prefix
        // staircase reaches at least as far back as the suffix staircase.
        if(samon>=pichon)
{
cout << "Yes" << endl;
}
else
{
cout << "No" << endl;
}
}
}
|
/**
 * Saves the current display strategy for undo, then switches the display state to a new FourUp strategy
*/
@Override
public void execute() {
prevStrat = dState.getCurStrategy();
dState.setStrategy(new FourUpStrategy());
} |
package models
import "time"
type ApplicationSettings struct {
Id int
	ExpiryDuration int // Time in seconds after which the token will expire
Added time.Time `orm:"auto_now_add;type(datetime)"`
LastUpdated time.Time `orm:"auto_now;type(datetime)"`
Application *Application `orm:"reverse(one)"` // Reverse relationship (optional)
}
//TODO ADD CRUD METHODS HERE |
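The orm struct tags suggest Beego's ORM, so one of the TODO'd CRUD helpers might look roughly like this (a sketch; the beego orm import path and prior model registration are assumptions):

import "github.com/astaxie/beego/orm"

// AddApplicationSettings inserts a new settings row and returns its id.
func AddApplicationSettings(s *ApplicationSettings) (int64, error) {
	o := orm.NewOrm()
	return o.Insert(s)
}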
/**
* Respond to one or more join requests to a restricted group.
*
* @param gid The SteamID of the group you want to manage
* @param steamIDs The SteamIDs of the users you want to approve or deny membership for (or a single value)
* @param approve True to put them in the group, false to deny their membership
*/
public void respondToGroupJoinRequests(SteamID gid, Collection<SteamID> steamIDs, boolean approve) {
final String rgAccounts = steamIDs.stream()
.map(steamID -> String.valueOf(steamID.convertToUInt64()))
.collect(Collectors.joining(","));
final Map<String, String> params = MapHelper.newHashMapWithExpectedSize(4);
params.put("rgAccounts", rgAccounts);
params.put("bapprove", approve ? "1" : "0");
params.put("json", "1");
params.put("sessionID", bot.getSteamWeb().getSessionId());
bot.getSteamWeb().fetch("https://steamcommunity.com/gid/" + gid.convertToUInt64() + "/joinRequestsManage",
new HttpParameters(params, HttpMethod.POST));
} |
use error;
use error::*;
use simplisp::Environment as LispEnvironment;
use simplisp::ExecutionTreeObject;
use simplisp::Result as LispResult;
use simplisp::WrapErr;
use simplisp::WrapError;
use std::ops::BitAnd;
pub unsafe fn bitand<TEnvironment>(
environment: &TEnvironment,
lisp_environment: &mut LispEnvironment<TEnvironment>,
args: Vec<&ExecutionTreeObject>) -> LispResult<ExecutionTreeObject> {
let func_name = "bitand".to_string();
let num_args = 1;
let arg_index = 0;
let mut rest = args.into_iter();
let first_arg = {
match rest.next() {
Some(first_arg) => try!(lisp_environment.evaluate(environment, &first_arg)),
None => {
let err: Error = ErrorKind::LispArgNotFound(func_name, arg_index, num_args).into();
return err.wrap_error_to_err();
},
}
};
let result =
match first_arg {
ExecutionTreeObject::Bool(first) => bitand_bool(environment, lisp_environment, first, rest),
ExecutionTreeObject::I8(first) => bitand_i8(environment, lisp_environment, first, rest),
ExecutionTreeObject::I16(first) => bitand_i16(environment, lisp_environment, first, rest),
ExecutionTreeObject::I32(first) => bitand_i32(environment, lisp_environment, first, rest),
ExecutionTreeObject::I64(first) => bitand_i64(environment, lisp_environment, first, rest),
ExecutionTreeObject::ISize(first) => bitand_isize(environment, lisp_environment, first, rest),
ExecutionTreeObject::U8(first) => bitand_u8(environment, lisp_environment, first, rest),
ExecutionTreeObject::U16(first) => bitand_u16(environment, lisp_environment, first, rest),
ExecutionTreeObject::U32(first) => bitand_u32(environment, lisp_environment, first, rest),
ExecutionTreeObject::U64(first) => bitand_u64(environment, lisp_environment, first, rest),
ExecutionTreeObject::USize(first) => bitand_usize(environment, lisp_environment, first, rest),
_ => Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()),
};
result.wrap_err_to_err()
}
math_op_2!(bitand_bool, bool, Bool, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_i8, i8, I8, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_i16, i16, I16, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_i32, i32, I32, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_i64, i64, I64, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_isize, isize, ISize, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_u8, u8, U8, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_u16, u16, U16, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_u32, u32, U32, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_u64, u64, U64, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
math_op_2!(bitand_usize, usize, USize, bitand, { return Err(ErrorKind::Msg("Value cannot be bitanded.".to_string()).into()); });
|
/******************************************************************************
This source file is part of the Avogadro project.
Copyright (C) 2010 <NAME>
This source code is released under the New BSD License, (the "License").
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
******************************************************************************/
#ifndef QTAIMMATHUTILITIES_H
#define QTAIMMATHUTILITIES_H
#include <QtGlobal>
#include <Eigen/Core>
using namespace Eigen;
namespace Avogadro {
namespace QtPlugins {
namespace QTAIMMathUtilities {
Matrix<qreal, 3, 1> eigenvaluesOfASymmetricThreeByThreeMatrix(
const Matrix<qreal, 3, 3>& A);
Matrix<qreal, 3, 3> eigenvectorsOfASymmetricThreeByThreeMatrix(
const Matrix<qreal, 3, 3>& A);
Matrix<qreal, 4, 1> eigenvaluesOfASymmetricFourByFourMatrix(
const Matrix<qreal, 4, 4>& A);
Matrix<qreal, 4, 4> eigenvectorsOfASymmetricFourByFourMatrix(
const Matrix<qreal, 4, 4>& A);
qint64 signOfARealNumber(qreal x);
qint64 signatureOfASymmetricThreeByThreeMatrix(const Matrix<qreal, 3, 3>& A);
qreal ellipticityOfASymmetricThreeByThreeMatrix(const Matrix<qreal, 3, 3>& A);
qreal distance(const Matrix<qreal, 3, 1>& a, const Matrix<qreal, 3, 1>& b);
Matrix<qreal, 3, 1> sphericalToCartesian(const Matrix<qreal, 3, 1>& rtp,
const Matrix<qreal, 3, 1>& x0y0z0);
Matrix<qreal, 3, 1> sphericalToCartesian(const Matrix<qreal, 3, 1>& rtp);
Matrix<qreal, 3, 1> cartesianToSpherical(const Matrix<qreal, 3, 1>& xyz,
const Matrix<qreal, 3, 1>& x0y0z0);
Matrix<qreal, 3, 1> cartesianToSpherical(const Matrix<qreal, 3, 1>& xyz);
// Cerjan-Miller-Baker-Popelier Methods
// A small number to prevent divide by zero in CMBP routines
#define SMALL 1.e-10
Matrix<qreal, 3, 1> minusThreeSignatureLocatorGradient(
const Matrix<qreal, 3, 1>& g, const Matrix<qreal, 3, 3>& H);
Matrix<qreal, 3, 1> minusOneSignatureLocatorGradient(
const Matrix<qreal, 3, 1>& g, const Matrix<qreal, 3, 3>& H);
Matrix<qreal, 3, 1> plusOneSignatureLocatorGradient(
const Matrix<qreal, 3, 1>& g, const Matrix<qreal, 3, 3>& H);
Matrix<qreal, 3, 1> plusThreeSignatureLocatorGradient(
const Matrix<qreal, 3, 1>& g, const Matrix<qreal, 3, 3>& H);
}
} // namespace QtPlugins
} // namespace Avogadro
#endif // QTAIMMATHUTILITIES_H
|
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package dk.nordfalk.nepalspil.kontrol;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;
import com.jme3.scene.control.Control;
/**
*
* @author j
*/
public class BrikRoterKontrol extends AbstractControl {
    float rotTid = 1; // elapsed spin time in seconds; values > 1 mean the spin is finished
@Override
protected void controlUpdate(float tpf) {
        rotTid += tpf;
        if (rotTid > 1) {
            // one second of spinning is done; disable further updates
            setEnabled(false);
            spatial.rotate(0, 0, 0); // no-op rotation, kept from the original code
        } else {
            spatial.rotate(0, 10 * tpf, 0); // spin around the Y axis
        }
}
public void start() {
rotTid = 0;
setEnabled(true);
}
@Override
protected void controlRender(RenderManager rm, ViewPort vp) {
}
}
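Presumably the control gets attached to a game piece's spatial and triggered whenever the piece should spin; a hypothetical call site (the spatial name is illustrative):

BrikRoterKontrol roter = new BrikRoterKontrol();
brikSpatial.addControl(roter); // brikSpatial is the piece's Spatial
roter.start();                 // spin around the Y axis for one second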
|
/**
* This should be called by the host Activity when its onRequestPermissionsResult method is called. The call will be forwarded
* to the {@link Controller} with the instanceId passed in.
*
* @param instanceId The instanceId of the Controller to which this result should be forwarded
* @param requestCode The Activity's onRequestPermissionsResult requestCode
* @param permissions The Activity's onRequestPermissionsResult permissions
* @param grantResults The Activity's onRequestPermissionsResult grantResults
*/
public void onRequestPermissionsResult(@NonNull String instanceId, int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
Controller controller = getControllerWithInstanceId(instanceId);
if (controller != null) {
controller.requestPermissionsResult(requestCode, permissions, grantResults);
}
} |
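For context, a hypothetical host-Activity override that forwards the callback; how the Activity maps a requestCode back to an instanceId is project-specific and only sketched here:

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    String instanceId = instanceIdForRequestCode(requestCode); // hypothetical lookup
    router.onRequestPermissionsResult(instanceId, requestCode, permissions, grantResults);
}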
def serving_output(output):
    """Prepare model output for serving; must be implemented by subclasses."""
    raise NotImplementedError
import React from 'react'
import { Story, Meta } from '@storybook/react'
import { Button, ButtonProps } from '.'
import StoryContainer from '../../helpers/StoryContainer'
const componentStatus = `
---
**NOTE FOR UXs**: This component is available in the following variants:
- ✅ \`contained\`
- ✅ \`outlined\`
- ✅ \`text\`
With the following attribute status:
- **Size**
- ✅ \`semi\`
- ✅ \`semiX\`
- ✅ \`medium\`
- ✅ **Icon**
- ✅ **Disabled**
- **Display**
- ✅ \`inline\`
- ✅ \`block\`
---
`
export default {
title: 'Components/Button',
component: Button,
parameters: {
    componentSubtitle: 'Buttons allow users to take actions and make choices with a single tap',
docs: { description: { component: componentStatus } },
actions: { argTypesRegex: '^on.*' }
}
} as Meta
export const Playground: Story<ButtonProps> = (args) => <Button {...args} />
Playground.args = {
children: 'button',
onClick: () => console.log('clicked!')
}
export const Variants: Story<ButtonProps> = (args) => (
<StoryContainer>
<Button {...args} variant="contained" />
<Button {...args} variant="outlined" />
<Button {...args} variant="text" />
</StoryContainer>
)
Variants.args = { ...Playground.args }
export const Sizes: Story<ButtonProps> = (args) => (
<div style={{ display: 'flex', flexDirection: 'column', gap: 8 }}>
<div style={{ display: 'flex', gap: 8 }}>
<Button {...args} size="semi" />
<Button {...args} size="semiX" />
<Button {...args} size="medium" />
</div>
<div style={{ display: 'flex', gap: 8 }}>
<Button {...args} variant="outlined" size="semi" />
<Button {...args} variant="outlined" size="semiX" />
<Button {...args} variant="outlined" size="medium" />
</div>
<div style={{ display: 'flex', gap: 8 }}>
<Button {...args} variant="text" size="semi" />
<Button {...args} variant="text" size="semiX" />
<Button {...args} variant="text" size="medium" />
</div>
</div>
)
Sizes.args = { ...Playground.args }
export const Disabled: Story<ButtonProps> = (args) => (
<StoryContainer>
<Button {...args} variant="contained" />
<Button {...args} variant="outlined" />
<Button {...args} variant="text" />
</StoryContainer>
)
Disabled.args = { ...Playground.args, disabled: true }
export const Icon: Story<ButtonProps> = (args) => (
<div style={{ display: 'flex', flexDirection: 'column', gap: 8 }}>
<div style={{ display: 'flex', gap: 8 }}>
<Button {...args} showIcon iconName="outlined-default-mockup" iconPosition="left" />
<Button {...args} showIcon iconName="outlined-default-mockup" />
</div>
<div style={{ display: 'flex', gap: 8 }}>
<Button {...args} variant="outlined" showIcon iconName="outlined-default-mockup" iconPosition="left" />
<Button {...args} variant="outlined" showIcon iconName="outlined-default-mockup" />
</div>
<div style={{ display: 'flex', gap: 8 }}>
<Button {...args} variant="text" showIcon iconName="outlined-default-mockup" iconPosition="left" />
<Button {...args} variant="text" showIcon iconName="outlined-default-mockup" />
</div>
</div>
)
Icon.args = { ...Playground.args }
export const FullWidth: Story<ButtonProps> = (args) => (
<div style={{ display: 'flex', gap: 8, flexDirection: 'column' }}>
<Button {...args} />
<Button {...args} variant="outlined" />
<Button {...args} variant="text" />
</div>
)
FullWidth.args = { ...Playground.args, fullWidth: true }
|
package k8s
import (
"context"
v1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/project-flotta/flotta-operator/api/v1alpha1"
"github.com/project-flotta/flotta-operator/internal/common/repository/edgedevice"
"github.com/project-flotta/flotta-operator/internal/common/repository/edgedeviceset"
"github.com/project-flotta/flotta-operator/internal/common/repository/edgedevicesignedrequest"
"github.com/project-flotta/flotta-operator/internal/common/repository/edgeworkload"
"github.com/project-flotta/flotta-operator/internal/edgeapi/k8sclient"
)
type EdgeDeviceRepository interface {
GetEdgeDevice(ctx context.Context, name, namespace string) (*v1alpha1.EdgeDevice, error)
PatchEdgeDeviceStatus(ctx context.Context, edgeDevice *v1alpha1.EdgeDevice, patch *client.Patch) error
UpdateEdgeDeviceLabels(ctx context.Context, device *v1alpha1.EdgeDevice, labels map[string]string) error
PatchEdgeDevice(ctx context.Context, old, new *v1alpha1.EdgeDevice) error
}
type EdgeDeviceSignedRequestRepository interface {
GetEdgeDeviceSignedRequest(ctx context.Context, name string, namespace string) (*v1alpha1.EdgeDeviceSignedRequest, error)
CreateEdgeDeviceSignedRequest(ctx context.Context, edgeDeviceSignedRequest *v1alpha1.EdgeDeviceSignedRequest) error
}
type EdgeWorkloadRepository interface {
GetEdgeWorkload(ctx context.Context, name string, namespace string) (*v1alpha1.EdgeWorkload, error)
}
type EdgeDeviceSetRepository interface {
GetEdgeDeviceSet(ctx context.Context, name string, namespace string) (*v1alpha1.EdgeDeviceSet, error)
}
type CoreRepository interface {
GetSecret(ctx context.Context, name string, namespace string) (*v1.Secret, error)
GetConfigMap(ctx context.Context, name string, namespace string) (*v1.ConfigMap, error)
}
//go:generate mockgen -package=k8s -destination=mock_repository_facade.go . RepositoryFacade
type RepositoryFacade interface {
EdgeDeviceRepository
EdgeDeviceSignedRequestRepository
EdgeWorkloadRepository
EdgeDeviceSetRepository
CoreRepository
}
type repositoryFacade struct {
deviceSignedRequestRepository edgedevicesignedrequest.Repository
deviceRepository edgedevice.Repository
workloadRepository edgeworkload.Repository
deviceSetRepository edgedeviceset.Repository
client k8sclient.K8sClient
}
func NewRepository(deviceSignedRequestRepository edgedevicesignedrequest.Repository,
deviceRepository edgedevice.Repository,
workloadRepository edgeworkload.Repository,
deviceSetRepository edgedeviceset.Repository,
client k8sclient.K8sClient) RepositoryFacade {
return &repositoryFacade{
deviceSignedRequestRepository: deviceSignedRequestRepository,
deviceRepository: deviceRepository,
deviceSetRepository: deviceSetRepository,
workloadRepository: workloadRepository,
client: client,
}
}
func (b *repositoryFacade) GetEdgeDevice(ctx context.Context, name, namespace string) (*v1alpha1.EdgeDevice, error) {
return b.deviceRepository.Read(ctx, name, namespace)
}
func (b *repositoryFacade) PatchEdgeDeviceStatus(ctx context.Context, edgeDevice *v1alpha1.EdgeDevice, patch *client.Patch) error {
return b.deviceRepository.PatchStatus(ctx, edgeDevice, patch)
}
func (b *repositoryFacade) UpdateEdgeDeviceLabels(ctx context.Context, device *v1alpha1.EdgeDevice, labels map[string]string) error {
return b.deviceRepository.UpdateLabels(ctx, device, labels)
}
func (b *repositoryFacade) PatchEdgeDevice(ctx context.Context, old, new *v1alpha1.EdgeDevice) error {
return b.deviceRepository.Patch(ctx, old, new)
}
func (b *repositoryFacade) GetEdgeDeviceSignedRequest(ctx context.Context, name string, namespace string) (*v1alpha1.EdgeDeviceSignedRequest, error) {
return b.deviceSignedRequestRepository.Read(ctx, name, namespace)
}
func (b *repositoryFacade) CreateEdgeDeviceSignedRequest(ctx context.Context, edgedeviceSignedRequest *v1alpha1.EdgeDeviceSignedRequest) error {
return b.deviceSignedRequestRepository.Create(ctx, edgedeviceSignedRequest)
}
func (b *repositoryFacade) GetEdgeWorkload(ctx context.Context, name string, namespace string) (*v1alpha1.EdgeWorkload, error) {
return b.workloadRepository.Read(ctx, name, namespace)
}
func (b *repositoryFacade) GetEdgeDeviceSet(ctx context.Context, name string, namespace string) (*v1alpha1.EdgeDeviceSet, error) {
return b.deviceSetRepository.Read(ctx, name, namespace)
}
func (b *repositoryFacade) GetSecret(ctx context.Context, name string, namespace string) (*v1.Secret, error) {
secret := v1.Secret{}
err := b.client.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, &secret)
if err != nil {
return nil, err
}
return &secret, nil
}
func (b *repositoryFacade) GetConfigMap(ctx context.Context, name string, namespace string) (*v1.ConfigMap, error) {
configMap := v1.ConfigMap{}
err := b.client.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, &configMap)
if err != nil {
return nil, err
}
return &configMap, nil
}
|
/**
* @fileoverview Tests what happens when a rest args (...x) param is
* instantiated in a context where it creates a zero-argument function.
*/
export {};
function returnsRestArgFn<A extends unknown[]>(fn: (...args: A) => void):
(...args: A) => void {
return fn;
}
const zeroRestArguments = returnsRestArgFn(() => {});
console.log(zeroRestArguments);
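// For contrast (added sketch, not part of the original test): when the callback
// takes parameters, A is inferred as the tuple [number, string] and the
// returned function keeps that exact arity.
const twoArguments = returnsRestArgFn((n: number, s: string) => {
  console.log(n, s);
});
twoArguments(42, "hello");
// twoArguments(); // compile error: expected 2 arguments, got 0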
|
Image caption (AP): Alphonso mangoes, shown on sale in Mumbai, are popular in the UK
UK Prime Minister David Cameron says he is "looking forward" to discussing the EU ban on Indian mango imports with the country's new prime minister.
Mr Cameron said the ban was a "serious issue", there were concerns about possible cross-contamination and "we must make sure that that is got right".
But he told MPs he understood the strength of feeling on the matter.
Labour MP Keith Vaz had urged him to reverse the ban, which he said was harming hundreds of businesses.
Mr Vaz, who raised the subject at Prime Minister's Questions, said the ban, which came into force last week, would cost firms millions of pounds.
Media caption: Mr Vaz said hundreds of businesses in his constituency of Leicester, and across the UK, would suffer millions of pounds in losses
He said "there was no consultation with this house and no vote by British ministers" and noted that Mr Cameron would have his first conversation with India's new prime minister next week.
"Will he do his best to reverse this ban so we can keep the special relationship with India which his predecessors and he have worked so hard to maintain, and so we can have our delicious mangos once again?"
Mr Cameron began his answer by saying he was grateful for the box of alphonso mangoes delivered to Number 10 by Mr Vaz just before the ban came into force.
He added: "The European Union has to look on the basis of the science and the evidence and there are concerns about particular cross contamination in terms of British crops and British interests so we have to make that that is got right.
"But I understand how strongly he feels and how strongly the Indian community in this country feels and indeed I look forward to discussing it with the new Indian prime minister."
The ban, which began on 1 May, also includes aubergines, two types of squash, and a type of leaf used in Indian cooking.
It was brought in after non-European food pests were found in 207 shipments of fruit and vegetables in 2013.
Indian mango exporters said they have put checks in place and have approached the authorities in Brussels to try to get the ban lifted.
The UK imports around £6.3m worth of Indian mangoes per year. Similar types of mango imported from Pakistan and Bangladesh have not been banned.
Premium Alphonso mangoes, which are popular in the UK, were in season as the ban came into force.
The Department for Environment, Food and Rural Affairs (Defra), which voted to put the ban in place, is working with Indian authorities and the European Commission to try to get the ban lifted.
The ban includes imports of Momordica and Snake Gourd squashes, and Patra leaves, which are used in a dish called Patra.
India is currently in the process of holding its month-long general election. Votes are due to be counted on 16 May. |
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from mock import Mock
from mock import patch
from netaddr import IPAddress
from netaddr import IPNetwork
from netaddr import IPRange
from sqlalchemy import not_
import nailgun
from nailgun import objects
from nailgun.db.sqlalchemy.models import IPAddr
from nailgun.db.sqlalchemy.models import IPAddrRange
from nailgun.db.sqlalchemy.models import NetworkGroup
from nailgun.db.sqlalchemy.models import Node
from nailgun.db.sqlalchemy.models import NodeNICInterface
from nailgun.network.neutron import NeutronManager
from nailgun.network.nova_network import NovaNetworkManager
from nailgun.test.base import BaseIntegrationTest
from nailgun.test.base import fake_tasks
class TestNetworkManager(BaseIntegrationTest):
@fake_tasks(fake_rpc=False, mock_rpc=False)
@patch('nailgun.rpc.cast')
def test_assign_ips(self, mocked_rpc):
self.env.create(
cluster_kwargs={},
nodes_kwargs=[
{"pending_addition": True, "api": True},
{"pending_addition": True, "api": True}
]
)
nailgun.task.task.Cobbler = Mock()
self.env.network_manager.assign_ips(
self.env.nodes,
"management"
)
management_net = self.db.query(NetworkGroup).\
filter(
NetworkGroup.group_id ==
objects.Cluster.get_default_group(self.env.clusters[0]).id
).filter_by(
name='management'
).first()
assigned_ips = []
for node in self.env.nodes:
ips = self.db.query(IPAddr).\
filter_by(node=node.id).\
filter_by(network=management_net.id).all()
self.assertEqual(1, len(ips))
self.assertEqual(
True,
self.env.network_manager.check_ip_belongs_to_net(
ips[0].ip_addr,
management_net
)
)
assigned_ips.append(ips[0].ip_addr)
# check for uniqueness of IPs:
self.assertEqual(len(assigned_ips), len(list(set(assigned_ips))))
# check it doesn't contain broadcast and other special IPs
net_ip = IPNetwork(management_net.cidr)[0]
gateway = management_net.gateway
broadcast = IPNetwork(management_net.cidr)[-1]
self.assertEqual(False, net_ip in assigned_ips)
self.assertEqual(False, gateway in assigned_ips)
self.assertEqual(False, broadcast in assigned_ips)
@fake_tasks(fake_rpc=False, mock_rpc=False)
@patch('nailgun.rpc.cast')
def test_assign_ips_idempotent(self, mocked_rpc):
self.env.create(
cluster_kwargs={},
nodes_kwargs=[
{
"pending_addition": True,
"api": True,
"status": "discover"
}
]
)
node_db = self.env.nodes[0]
self.env.network_manager.assign_ips(
[node_db],
"management"
)
self.env.network_manager.assign_ips(
[node_db],
"management"
)
self.db.refresh(node_db)
self.assertEqual(
len(
filter(
lambda n: n['name'] == 'management',
self.env.network_manager.get_node_networks(
node_db
)
)
),
1
)
def test_assign_vip_is_idempotent(self):
cluster = self.env.create_cluster(api=True)
vip = self.env.network_manager.assign_vip(
cluster['id'],
"management"
)
vip2 = self.env.network_manager.assign_vip(
cluster['id'],
"management"
)
self.assertEqual(vip, vip2)
def test_get_node_networks_for_vlan_manager(self):
cluster = self.env.create(
cluster_kwargs={},
nodes_kwargs=[
{"pending_addition": True},
]
)
networks_data = \
{'networking_parameters': {'net_manager': 'VlanManager'}}
resp = self.env.nova_networks_put(cluster['id'], networks_data)
self.assertEqual(resp.json_body['status'], 'ready')
network_data = self.env.network_manager.get_node_networks(
self.env.nodes[0]
)
self.assertEqual(len(network_data), 4)
fixed_nets = filter(lambda net: net['name'] == 'fixed', network_data)
self.assertEqual(fixed_nets, [])
def test_assign_admin_ip_multiple_groups(self):
self.env.create(
cluster_kwargs={
'api': False,
'net_provider': 'neutron',
'net_segment_type': 'gre'
},
nodes_kwargs=[{}, {}]
)
node_group = self.env.create_node_group()
self.env.nodes[1].group_id = node_group.json_body['id']
self.db().flush()
admin_net =\
self.env.network_manager.get_admin_network_group(
self.env.nodes[1].id
)
mock_range = IPAddrRange(
first='172.16.31.10',
last='192.168.127.12',
network_group_id=admin_net.id
)
self.db.add(mock_range)
self.db.commit()
self.env.network_manager.assign_admin_ips(self.env.nodes)
for n in self.env.nodes:
admin_net = self.env.network_manager.get_admin_network_group(n.id)
ip = self.db.query(IPAddr).\
filter_by(network=admin_net.id).\
filter_by(node=n.id).first()
self.assertIn(
IPAddress(ip.ip_addr),
IPNetwork(admin_net.cidr)
)
def test_assign_ip_multiple_groups(self):
self.env.create(
cluster_kwargs={
'api': False,
'net_provider': 'neutron',
'net_segment_type': 'gre'
},
nodes_kwargs=[{}, {}]
)
node_group = self.env.create_node_group()
self.env.nodes[1].group_id = node_group.json_body['id']
self.db().flush()
mgmt_net = self.db.query(NetworkGroup).\
filter(
NetworkGroup.group_id == node_group.json_body["id"]
).filter_by(
name='management'
).first()
mock_range = IPAddrRange(
first='172.16.31.10',
last='192.168.127.12',
network_group_id=mgmt_net.id
)
self.db.add(mock_range)
self.db.commit()
self.env.network_manager.assign_ips(self.env.nodes, "management")
for n in self.env.nodes:
mgmt_net = self.db.query(NetworkGroup).\
filter(
NetworkGroup.group_id == n.group_id
).filter_by(
name='management'
).first()
ip = self.db.query(IPAddr).\
filter_by(network=mgmt_net.id).\
filter_by(node=n.id).first()
self.assertIn(
IPAddress(ip.ip_addr),
IPNetwork(mgmt_net.cidr)
)
def test_ipaddr_joinedload_relations(self):
self.env.create(
cluster_kwargs={},
nodes_kwargs=[
{"pending_addition": True, "api": True},
{"pending_addition": True, "api": True}
]
)
self.env.network_manager.assign_ips(
self.env.nodes,
"management"
)
ips = self.env.network_manager._get_ips_except_admin(joined=True)
self.assertEqual(len(ips), 2)
self.assertTrue(isinstance(ips[0].node_data, Node))
self.assertTrue(isinstance(ips[0].network_data, NetworkGroup))
def test_nets_empty_list_if_node_does_not_belong_to_cluster(self):
node = self.env.create_node(api=False)
network_data = self.env.network_manager.get_node_networks(node)
self.assertEqual(network_data, [])
def test_assign_admin_ips(self):
node = self.env.create_node()
self.env.network_manager.assign_admin_ips([node])
admin_ng_id = self.env.network_manager.get_admin_network_group_id()
admin_network_range = self.db.query(IPAddrRange).\
filter_by(network_group_id=admin_ng_id).all()[0]
admin_ip = self.db.query(IPAddr).\
filter_by(node=node.id).\
filter_by(network=admin_ng_id).all()
self.assertEqual(len(admin_ip), 1)
self.assertIn(
IPAddress(admin_ip[0].ip_addr),
IPRange(admin_network_range.first, admin_network_range.last))
def test_assign_admin_ips_idempotent(self):
node = self.env.create_node()
self.env.network_manager.assign_admin_ips([node])
admin_net_id = self.env.network_manager.get_admin_network_group_id()
admin_ips = set([i.ip_addr for i in self.db.query(IPAddr).
filter_by(node=node.id).
filter_by(network=admin_net_id).all()])
self.env.network_manager.assign_admin_ips([node])
admin_ips2 = set([i.ip_addr for i in self.db.query(IPAddr).
filter_by(node=node.id).
filter_by(network=admin_net_id).all()])
self.assertEqual(admin_ips, admin_ips2)
def test_assign_admin_ips_only_one(self):
map(self.db.delete, self.db.query(IPAddrRange).all())
admin_net_id = self.env.network_manager.get_admin_network_group_id()
mock_range = IPAddrRange(
first='10.0.0.1',
last='10.0.0.1',
network_group_id=admin_net_id
)
self.db.add(mock_range)
self.db.commit()
node = self.env.create_node()
self.env.network_manager.assign_admin_ips([node])
admin_net_id = self.env.network_manager.get_admin_network_group_id()
admin_ips = self.db.query(IPAddr).\
filter_by(node=node.id).\
filter_by(network=admin_net_id).all()
self.assertEqual(len(admin_ips), 1)
self.assertEqual(admin_ips[0].ip_addr, '10.0.0.1')
def test_assign_admin_ips_for_many_nodes(self):
map(self.db.delete, self.db.query(IPAddrRange).all())
admin_net_id = self.env.network_manager.get_admin_network_group_id()
mock_range = IPAddrRange(
first='10.0.0.1',
last='10.0.0.2',
network_group_id=admin_net_id
)
self.db.add(mock_range)
self.db.commit()
n1 = self.env.create_node()
n2 = self.env.create_node()
nc = [n1, n2]
self.env.network_manager.assign_admin_ips(nc)
admin_net_id = self.env.network_manager.get_admin_network_group_id()
for node, ip in zip(nc, ['10.0.0.1', '10.0.0.2']):
admin_ips = self.db.query(IPAddr).\
filter_by(node=node.id).\
filter_by(network=admin_net_id).all()
self.assertEqual(len(admin_ips), 1)
self.assertEqual(admin_ips[0].ip_addr, ip)
@fake_tasks(fake_rpc=False, mock_rpc=False)
@patch('nailgun.rpc.cast')
def test_admin_ip_cobbler(self, mocked_rpc):
node_1_meta = {}
self.env.set_interfaces_in_meta(node_1_meta, [{
"name": "eth0",
"mac": "00:00:00:00:00:00",
}, {
"name": "eth1",
"mac": "00:00:00:00:00:01"}])
node_2_meta = {}
self.env.set_interfaces_in_meta(node_2_meta, [{
"name": "eth0",
"mac": "00:00:00:00:00:02",
}, {
"name": "eth1",
"mac": "00:00:00:00:00:03"}])
self.env.create(
cluster_kwargs={},
nodes_kwargs=[
{
"api": True,
"pending_addition": True,
"mac": "00:00:00:00:00:00",
"meta": node_1_meta
},
{
"api": True,
"pending_addition": True,
"mac": "00:00:00:00:00:02",
"meta": node_2_meta
}
]
)
self.env.launch_deployment()
rpc_nodes_provision = nailgun.task.manager.rpc.cast. \
call_args_list[0][0][1][0]['args']['provisioning_info']['nodes']
admin_ng_id = self.env.network_manager.get_admin_network_group_id()
admin_network_range = self.db.query(IPAddrRange).\
filter_by(network_group_id=admin_ng_id).all()[0]
map(
lambda (x, y): self.assertIn(
IPAddress(
rpc_nodes_provision[x]['interfaces'][y]['ip_address']
),
IPRange(
admin_network_range.first,
admin_network_range.last
)
),
itertools.product((0, 1), ('eth0',))
)
class TestNovaNetworkManager(BaseIntegrationTest):
def setUp(self):
super(TestNovaNetworkManager, self).setUp()
self.env.create(
cluster_kwargs={},
nodes_kwargs=[
{'api': True,
'pending_addition': True}
])
self.node_db = self.env.nodes[0]
def test_get_default_nic_assignment(self):
admin_nic_id = self.node_db.admin_interface.id
admin_nets = [n.name for n in self.db.query(
NodeNICInterface).get(admin_nic_id).assigned_networks_list]
other_nic = self.db.query(NodeNICInterface).filter_by(
node_id=self.node_db.id
).filter(
not_(NodeNICInterface.id == admin_nic_id)
).first()
other_nets = [n.name for n in other_nic.assigned_networks_list]
nics = NovaNetworkManager.get_default_networks_assignment(self.node_db)
def_admin_nic = [n for n in nics if n['id'] == admin_nic_id]
def_other_nic = [n for n in nics if n['id'] == other_nic.id]
self.assertEqual(len(def_admin_nic), 1)
self.assertEqual(len(def_other_nic), 1)
self.assertEqual(
set(admin_nets),
set([n['name'] for n in def_admin_nic[0]['assigned_networks']]))
self.assertEqual(
set(other_nets),
set([n['name'] for n in def_other_nic[0]['assigned_networks']]))
class TestNeutronManager(BaseIntegrationTest):
def check_networks_assignment(self, node_db):
node_nics = self.db.query(NodeNICInterface).filter_by(
node_id=node_db.id
).all()
def_nics = NeutronManager.get_default_networks_assignment(node_db)
self.assertEqual(len(node_nics), len(def_nics))
for n_nic in node_nics:
n_assigned = set(n['name'] for n in n_nic.assigned_networks)
for d_nic in def_nics:
if d_nic['id'] == n_nic.id:
d_assigned = set(n['name']
for n in d_nic['assigned_networks']) \
if d_nic.get('assigned_networks') else set()
self.assertEqual(n_assigned, d_assigned)
break
else:
self.fail("NIC is not found")
def test_gre_get_default_nic_assignment(self):
self.env.create(
cluster_kwargs={
'net_provider': 'neutron',
'net_segment_type': 'gre'},
nodes_kwargs=[
{'api': True,
'pending_addition': True}
])
self.check_networks_assignment(self.env.nodes[0])
def test_vlan_get_default_nic_assignment(self):
meta = self.env.default_metadata()
self.env.set_interfaces_in_meta(
meta,
[{'name': 'eth0', 'mac': '00:00:00:00:00:11'},
{'name': 'eth1', 'mac': '00:00:00:00:00:22'},
{'name': 'eth2', 'mac': '00:00:00:00:00:33'}])
self.env.create(
cluster_kwargs={
'net_provider': 'neutron',
'net_segment_type': 'vlan'},
nodes_kwargs=[
{'api': True,
'meta': meta,
'pending_addition': True}
])
self.check_networks_assignment(self.env.nodes[0])
|
Problem drinking in middle age doubles risk of memory loss later in life, study finds
A history of problem drinking in middle age more than doubles the risk of developing severe memory problems later in life, a new study has found. The study was carried out by researchers from the University of Exeter Medical School in the United Kingdom and was published in the American Journal of Geriatric Psychiatry.1
The paper’s lead author was Elzbieta Kuzma, a research fellow in neuroepidemiology, and the senior author was Iain Lang, senior lecturer in public health.
Researchers looked at the association between a history of alcohol use disorder in middle age and participants’ memory and cognitive function later in life. The study considered 6542 middle-aged men and women born from 1931 … |
use integer_sqrt::IntegerSquareRoot as _;
use std::convert::TryInto;
use types::helper_functions_types::Error;
// interface has changed
pub fn xor_str(bytes_1: &str, bytes_2: &str) -> String {
    if bytes_1.chars().count() != 32 || bytes_2.chars().count() != 32 {
        panic!("One of the input arguments is too short to be a sha256 hash.");
    }
    if bytes_1.len() != 32 || bytes_2.len() != 32 {
        panic!("Illegal characters in one of the input strings");
    }
    let mut string_to_return = String::new();
    let bytes_1_as_bytes = bytes_1.as_bytes();
    let bytes_2_as_bytes = bytes_2.as_bytes();
    // Note: this emits '1' where the corresponding bytes match and '0' where they differ.
    for i in 0..32 {
if bytes_1_as_bytes[i] == bytes_2_as_bytes[i] {
string_to_return += "1";
} else {
string_to_return += "0";
}
}
string_to_return
}
pub fn xor(bytes_1: &[u8; 32], bytes_2: &[u8; 32]) -> Vec<u8> {
let mut vec_to_return: Vec<u8> = Vec::new();
for i in 0..32 {
vec_to_return.push(bytes_1[i] ^ bytes_2[i]);
}
vec_to_return
}
pub fn integer_squareroot(n: u64) -> u64 {
/*
let sqrt = (n as f64).sqrt();
let mut sqrt_floor = sqrt as u64;
if (sqrt_floor + 1) * (sqrt_floor + 1) <= n {
sqrt_floor += 1;
}
sqrt_floor
*/
n.integer_sqrt()
}
pub fn int_to_bytes(n: u64, length: usize) -> Result<Vec<u8>, Error> {
    // After this computation `capacity` equals 256^length - 1, the largest
    // value representable in `length` little-endian bytes.
    let mut capacity = 1;
    for _i in 0..length - 1 {
        capacity *= 256;
    }
    capacity = capacity - 1 + 255 * capacity;
if n > capacity {
return Err(Error::NumberExceedsCapacity);
}
let mut rez_vec: Vec<u8> = Vec::with_capacity(length);
let mut num = n;
for _i in 0..length {
rez_vec.push((num % 256).try_into().expect(""));
num /= 256;
}
Ok(rez_vec)
}
pub fn bytes_to_int(bytes: &[u8]) -> Result<u64, Error> {
let length = bytes.len();
let mut result: u64 = 0;
let mut mult = 1;
let mut i = 0;
let iter = bytes.iter().take(length);
for j in iter {
result += mult * (u64::from(*j));
if i < length - 1 {
mult *= 256;
i += 1;
}
}
Ok(result)
}
#[cfg(test)]
mod tests {
use super::*;
use ethereum_types::U256;
#[test]
fn test_xor_str() {
let test_str_1: &str = "A4x3A4x3A4x3A4x3A4x3A4x3A4x3A4x3";
let mut test_str_2: &str = "A4x3A4x3A4x3A4x3A4x3A4x3A4x3A4x3";
assert_eq!(
xor_str(test_str_1, test_str_2),
"11111111111111111111111111111111",
);
test_str_2 = "AAAABBBBCCCCDDDDAAAABBBBCCCCDDDD";
assert_eq!(
xor_str(test_str_1, test_str_2),
"10000000000000001000000000000000",
);
assert_ne!(
xor_str(test_str_1, test_str_2),
"11000000000000001000000000000000",
);
}
#[test]
#[should_panic]
fn test_too_short_hashes() {
let test_str_1: &str = "ABC";
let test_str_2: &str = "ABC";
assert_eq!(xor_str(test_str_1, test_str_2), "111");
}
#[test]
#[should_panic]
fn test_invalid_symbols_in_hashes() {
let test_str_1: &str = "\u{104}\u{104}\u{104}\u{104}\u{118}\u{118}\u{118}\u{118}\u{12e}\u{12e}\u{12e}\u{12e}\u{160}\u{160}\u{160}\u{160}\u{104}\u{104}\u{104}\u{104}\u{118}\u{118}\u{118}\u{118}\u{12e}\u{12e}\u{12e}\u{12e}\u{160}\u{160}\u{160}\u{160}";
let test_str_2: &str = "\u{104}\u{104}\u{104}\u{104}\u{118}\u{118}\u{118}\u{118}\u{12e}\u{12e}\u{12e}\u{12e}\u{160}\u{160}\u{160}\u{160}\u{104}\u{104}\u{104}\u{104}\u{118}\u{118}\u{118}\u{118}\u{12e}\u{12e}\u{12e}\u{12e}\u{160}\u{160}\u{160}\u{160}";
assert_eq!(
xor_str(test_str_1, test_str_2),
"11111111111111111111111111111111",
);
}
#[test]
fn test_xor() {
// let expected_vec: Vec<u8> = vec![
// 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0,
// 0, 1, 1,
// ];
let v1: [u8; 32] = [
255, 255, 1, 2, 254, 254, 3, 4, 253, 253, 5, 6, 252, 252, 7, 8, 251, 251, 9, 10, 250,
250, 11, 12, 249, 249, 13, 14, 248, 248, 15, 16,
];
let v2: [u8; 32] = [
255, 255, 10, 20, 254, 254, 30, 40, 253, 253, 50, 60, 252, 252, 70, 80, 251, 251, 90,
100, 250, 250, 110, 120, 249, 249, 130, 140, 248, 248, 150, 160,
];
let v1_int = U256::from(v1);
let v2_int = U256::from(v2);
let expected = v1_int ^ v2_int;
assert_eq!(expected, U256::from(xor(&v1, &v2).as_slice()));
}
#[test]
fn test_int_to_bytes() {
let test_vec: Vec<u8> = vec![0, 2, 2];
let vec_from_func: Vec<u8> = int_to_bytes(514, 3).expect("");
assert_eq!(test_vec, vec_from_func);
}
#[test]
#[should_panic]
fn test_int_to_bytes_overflow() {
let _vec_from_func: Vec<u8> = int_to_bytes(256, 1).expect("");
}
#[test]
fn test_bytes_to_int() {
let num: u64 = bytes_to_int(&[1, 1]).expect("");
assert_eq!(num, 257);
}
}
|
def delete_network(self, context, id):
LOG.debug(_("NECPluginV2.delete_network() called, id=%s ."), id)
net = super(NECPluginV2, self).get_network(context, id)
tenant_id = net['tenant_id']
if self.packet_filter_enabled:
filters = dict(network_id=[id])
pfs = (super(NECPluginV2, self).
get_packet_filters(context, filters=filters))
super(NECPluginV2, self).delete_network(context, id)
try:
self.ofc.delete_ofc_network(tenant_id, id)
except (nexc.OFCException, nexc.OFCConsistencyBroken) as exc:
reason = _("delete_network() failed due to %s") % exc
LOG.warn(reason)
if self.packet_filter_enabled:
for pf in pfs:
self.delete_packet_filter(context, pf['id'])
filters = dict(tenant_id=[tenant_id])
nets = super(NECPluginV2, self).get_networks(context, filters=filters)
if len(nets) == 0:
try:
self.ofc.delete_ofc_tenant(tenant_id)
except (nexc.OFCException, nexc.OFCConsistencyBroken) as exc:
reason = _("delete_ofc_tenant() failed due to %s") % exc
LOG.warn(reason) |
import java.util.ArrayList;
import java.util.Scanner;
public class A711{
	public static void main(String[] args) {
Scanner scan = new Scanner(System.in);
int n = Integer.parseInt(scan.nextLine());
ArrayList<String> arr = new ArrayList<String>();
boolean oolean = false;
while(scan.hasNextLine()) {
arr.add(scan.nextLine());
}
for(int i =0 ; i < n; i ++) {
String current = arr.get(i);
char[] bar = current.toCharArray();
for(int j = 0; j < bar.length-1; j ++) {
if(bar[j] == 'O' && bar[j + 1] == 'O') {
oolean = true;
bar[j] = '+';
bar[j + 1] = '+';
break;
}
}
String result = "";
for(int j =0 ; j < bar.length; j ++) {
result += bar[j];
}
if(oolean) {
arr.set(i, result);
break;
}
}
if(oolean) {
System.out.println("YES");
for(int i =0 ;i < n; i ++) {
System.out.println(arr.get(i));
}
}else {
System.out.println("NO");
}
}
}
|
def check_image_alignment(self):
if self.raw_imgs is not None:
print(f'Raw image shape: {self.raw_imgs.shape}')
if self.raw_imgs.shape[0] < self.max_track_len:
raise ValueError(f'Got images with {self.raw_imgs.shape[0]} frames but tracks with {self.max_track_len} timepoints')
if self.act_imgs is not None:
print(f'Act image shape: {self.act_imgs.shape}')
if self.act_imgs.shape[0] < self.max_track_len:
raise ValueError(f'Got activations with {self.act_imgs.shape[0]} frames but tracks with {self.max_track_len} timepoints')
if self.act_imgs is not None and self.raw_imgs is not None:
if self.act_imgs.shape != self.raw_imgs.shape:
raise ValueError(f'Images have shape {self.raw_imgs.shape} but activations are shape {self.act_imgs.shape}') |
/*
* Copyright (c) 2020 Samsung Electronics Co., Ltd.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include <dali-test-suite-utils.h>
#include <dali/public-api/dali-core.h>
#include <stdlib.h>
#include <iomanip>
#include <iostream>
using namespace Dali;
int UtcDaliPropertyTypesGetNameP(void)
{
DALI_TEST_EQUALS("NONE", Dali::PropertyTypes::GetName(Property::NONE), TEST_LOCATION);
DALI_TEST_EQUALS("BOOLEAN", Dali::PropertyTypes::GetName(Property::BOOLEAN), TEST_LOCATION);
DALI_TEST_EQUALS("FLOAT", Dali::PropertyTypes::GetName(Property::FLOAT), TEST_LOCATION);
DALI_TEST_EQUALS("INTEGER", Dali::PropertyTypes::GetName(Property::INTEGER), TEST_LOCATION);
DALI_TEST_EQUALS("VECTOR2", Dali::PropertyTypes::GetName(Property::VECTOR2), TEST_LOCATION);
DALI_TEST_EQUALS("VECTOR3", Dali::PropertyTypes::GetName(Property::VECTOR3), TEST_LOCATION);
DALI_TEST_EQUALS("VECTOR4", Dali::PropertyTypes::GetName(Property::VECTOR4), TEST_LOCATION);
DALI_TEST_EQUALS("MATRIX3", Dali::PropertyTypes::GetName(Property::MATRIX3), TEST_LOCATION);
DALI_TEST_EQUALS("MATRIX", Dali::PropertyTypes::GetName(Property::MATRIX), TEST_LOCATION);
DALI_TEST_EQUALS("RECTANGLE", Dali::PropertyTypes::GetName(Property::RECTANGLE), TEST_LOCATION);
DALI_TEST_EQUALS("ROTATION", Dali::PropertyTypes::GetName(Property::ROTATION), TEST_LOCATION);
DALI_TEST_EQUALS("STRING", Dali::PropertyTypes::GetName(Property::STRING), TEST_LOCATION);
DALI_TEST_EQUALS("ARRAY", Dali::PropertyTypes::GetName(Property::ARRAY), TEST_LOCATION);
DALI_TEST_EQUALS("MAP", Dali::PropertyTypes::GetName(Property::MAP), TEST_LOCATION);
END_TEST;
}
int UtcDaliPropertyTypesGet02P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<bool>() == Property::BOOLEAN);
END_TEST;
}
int UtcDaliPropertyTypesGet03P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<float>() == Property::FLOAT);
END_TEST;
}
int UtcDaliPropertyTypesGet04P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<int>() == Property::INTEGER);
END_TEST;
}
int UtcDaliPropertyTypesGet06P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Vector2>() == Property::VECTOR2);
END_TEST;
}
int UtcDaliPropertyTypesGet07P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Vector3>() == Property::VECTOR3);
END_TEST;
}
int UtcDaliPropertyTypesGet08P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Vector4>() == Property::VECTOR4);
END_TEST;
}
int UtcDaliPropertyTypesGet09P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Matrix3>() == Property::MATRIX3);
END_TEST;
}
int UtcDaliPropertyTypesGet10(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Matrix>() == Property::MATRIX);
END_TEST;
}
int UtcDaliPropertyTypesGet11P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::AngleAxis>() == Property::ROTATION);
END_TEST;
}
int UtcDaliPropertyTypesGet12P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Quaternion>() == Property::ROTATION);
END_TEST;
}
int UtcDaliPropertyTypesGet13P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<std::string>() == Property::STRING);
END_TEST;
}
int UtcDaliPropertyTypesGet14P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Rect<int> >() == Property::RECTANGLE);
END_TEST;
}
int UtcDaliPropertyTypesGet15P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Property::Map>() == Property::MAP);
END_TEST;
}
int UtcDaliPropertyTypesGet16P(void)
{
DALI_TEST_CHECK(Dali::PropertyTypes::Get<Dali::Property::Array>() == Property::ARRAY);
END_TEST;
}
|
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.search.aggregations.metrics.percentiles;
import com.carrotsearch.hppc.DoubleArrayList;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.search.SearchParseException;
import org.elasticsearch.search.aggregations.Aggregator;
import org.elasticsearch.search.aggregations.AggregatorFactory;
import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.TDigest;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSourceParser;
import org.elasticsearch.search.internal.SearchContext;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
/**
*
*/
public class PercentilesParser implements Aggregator.Parser {
private final static double[] DEFAULT_PERCENTS = new double[] { 1, 5, 25, 50, 75, 95, 99 };
@Override
public String type() {
return InternalPercentiles.TYPE.name();
}
/**
* We must override the parse method because we need to allow custom parameters
* (execution_hint, etc) which is not possible otherwise
*/
@Override
public AggregatorFactory parse(String aggregationName, XContentParser parser, SearchContext context) throws IOException {
ValuesSourceParser<ValuesSource.Numeric> vsParser = ValuesSourceParser.numeric(aggregationName, InternalPercentiles.TYPE, context)
.requiresSortedValues(true)
.build();
double[] percents = DEFAULT_PERCENTS;
boolean keyed = true;
Map<String, Object> settings = null;
XContentParser.Token token;
String currentFieldName = null;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
currentFieldName = parser.currentName();
} else if (vsParser.token(currentFieldName, token, parser)) {
continue;
} else if (token == XContentParser.Token.START_ARRAY) {
if ("percents".equals(currentFieldName)) {
DoubleArrayList values = new DoubleArrayList(10);
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
double percent = parser.doubleValue();
if (percent < 0 || percent > 100) {
throw new SearchParseException(context, "the percents in the percentiles aggregation [" +
aggregationName + "] must be in the [0, 100] range");
}
values.add(percent);
}
percents = values.toArray();
// Some impls rely on the fact that percents are sorted
Arrays.sort(percents);
} else {
throw new SearchParseException(context, "Unknown key for a " + token + " in [" + aggregationName + "]: [" + currentFieldName + "].");
}
} else if (token.isValue()) {
if (token == XContentParser.Token.VALUE_BOOLEAN && "keyed".equals(currentFieldName)) {
keyed = parser.booleanValue();
} else {
if (settings == null) {
settings = new HashMap<>();
}
settings.put(currentFieldName, parser.objectText());
}
} else {
throw new SearchParseException(context, "Unexpected token " + token + " in [" + aggregationName + "].");
}
}
PercentilesEstimator.Factory estimatorFactory = EstimatorType.TDIGEST.estimatorFactory(settings);
return new PercentilesAggregator.Factory(aggregationName, vsParser.config(), percents, estimatorFactory, keyed);
}
/**
*
*/
public static enum EstimatorType {
TDIGEST() {
@Override
public PercentilesEstimator.Factory estimatorFactory(Map<String, Object> settings) {
return new TDigest.Factory(settings);
}
};
public abstract PercentilesEstimator.Factory estimatorFactory(Map<String, Object> settings);
public static EstimatorType resolve(String name, SearchContext context) {
if (name.equals("tdigest")) {
return TDIGEST;
}
throw new SearchParseException(context, "Unknown percentile estimator [" + name + "]");
}
}
}
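For reference, a sketch (written as a TypeScript object literal) of the aggregation request shape the parser above accepts. The `percents` and `keyed` field names come from the code; `field` is the usual value-source selector, and `compression` is a hypothetical example of a custom parameter that would end up in the `settings` map passed to the estimator factory:
const percentilesAgg = {
  percentiles: {
    field: "load_time",                    // consumed by the ValuesSourceParser
    percents: [1, 5, 25, 50, 75, 95, 99],  // each must lie in [0, 100]; sorted after parsing
    keyed: true,                           // explicit boolean handled by the parser
    compression: 200,                      // hypothetical custom parameter -> settings map
  },
};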
|
import static org.junit.Assert.assertTrue;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.Before;
import org.junit.Test;

/**
 * @author Ibrahima Diarra
 *
 */
public class CircularQueueTest {
final private String[] data = new String[]{"to", "be", "or"};
final private CircularQueue<String> queue = new CircularQueue<String>();
final private Logger logger = LogManager.getLogger(CircularQueueTest.class);
@Before
public void setUp() throws Exception {
for(String d : data)
queue.enqueue(d);
}
@Test
public final void testEnqueue() {
assertTrue(!queue.isEmpty());
}
@Test
public final void testDequeue() {
while(!queue.isEmpty())
logger.debug(queue.dequeue());
assertTrue(queue.isEmpty());
}
@Test
public final void testSize() {
assertTrue(queue.size() > 0);
}
@Test
public final void testIterator() {
for (String s : queue)
logger.debug(s);
}
} |
import java.util.concurrent.atomic.AtomicReference;

/**
 * Items are kept in a list of nodes.
 */
public class Node<T> {
	/**
	 * Item kept by this node.
	 */
	public T value;
	/**
	 * Next node in the queue.
	 */
	public AtomicReference<Node<T>> next;
	/**
	 * Create a new node.
	 */
	public Node(T value) {
		this.next = new AtomicReference<Node<T>>(null);
		this.value = value;
	}
} |
min_num, max_num, mul = map(int, input().split())
numbers = set(range(min_num, max_num+1))
multiples = set(range(mul, max_num+1, mul))
ans = len(numbers & multiples)
print(ans)
|
// marirs/rocketapi
use rocket::{http::Status, request::Request, serde::json::Value};
#[catch(400)]
pub async fn bad_request(req: &Request<'_>) -> (Status, Value) {
json_response!(
400,
"request not understood",
"request_uri" => req.uri().to_string()
)
}
#[catch(401)]
pub async fn not_authorized(req: &Request<'_>) -> (Status, Value) {
json_response!(
401,
"not authorized",
"request_uri" => req.uri().to_string()
)
}
#[catch(403)]
pub async fn forbidden(req: &Request<'_>) -> (Status, Value) {
json_response!(
403,
"forbidden",
"request_uri" => req.uri().to_string()
)
}
#[catch(404)]
pub async fn not_found(req: &Request<'_>) -> (Status, Value) {
json_response!(
404,
"not found",
"request_uri" => req.uri().to_string()
)
}
#[catch(422)]
pub async fn unprocessed_entity(_req: &Request<'_>) -> (Status, Value) {
json_response!(422, "Check your input data".to_string())
}
#[catch(429)]
pub async fn too_many_requests(req: &Request<'_>) -> (Status, Value) {
json_response!(
429,
"too many requests",
"request_uri" => req.uri().to_string()
)
}
#[catch(500)]
pub async fn internal_server_error(req: &Request<'_>) -> (Status, Value) {
json_response!(
500,
"internal server error",
"request_uri" => req.uri().to_string()
)
}
|
import { request } from '@test/init'
import * as Debug from 'debug'
const debug = Debug('test:api:index')
describe('GET /v1/users', () => {
it('it should return 200', async () => {
let response = await request().get('/v1/users')
debug('index %j', response.body)
expect(response.status).toBe(200)
})
})
|
package com.tazine.ipaddr.controller;
import com.tazine.ipaddr.entity.IPInfo;
import com.tazine.ipaddr.service.IPaddrService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* IP 查询服务控制器
*
* @author frank
* @since 1.0.0
*/
@RestController
public class IPaddrController {
@Value(value = "server.port")
private String port;
@Autowired
private IPaddrService ipService;
@GetMapping(value = "/find")
public IPInfo find(@RequestParam("ip") String ip) {
return ipService.ipSeek(ip);
}
@GetMapping(value = "/hi")
public String hi(@RequestParam("name") String name){
return "Hi " + name + ", I am a service provider at port " + port;
}
}
|
-- src/Main.hs
{-# LANGUAGE OverloadedStrings, RankNTypes, DataKinds #-}
module Main where
import Config
import Control.Monad
import Control.Monad.Logger (runNoLoggingT)
import Control.Monad.Trans
import Data.HVect (HVect(..))
import Data.IORef
import Data.Maybe (fromMaybe, fromJust)
import Data.Monoid
import Data.Text (Text)
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE
import qualified Data.Text.Lazy as TL
import Data.Time ( getCurrentTime )
import Data.Time.LocalTime (getCurrentTimeZone)
import Database.Persist.Postgresql (SqlBackend, createPostgresqlPool)
import Layout (Page, renderPage)
import Lucid (Html)
import qualified Lucid.Html5 as H
import Models.BlogCategory
import Models.BlogIndex (monthView)
import qualified Models.BlogIndex as Index
import Models.BlogPost
import Models.Events (EventHandler, Category(..), forwardEventHandlers)
import Network.HTTP.Types (urlDecode, notFound404, internalServerError500 )
import Network.Wai (Middleware, Application)
import Network.Wai.Handler.Warp (run)
import Network.Wai.Middleware.Static (staticPolicy, addBase)
import Routes
import Session
import Utils.Database (initializePool, runOnPool, runSqlAction)
import Utils.Password (Password(..), PasswordHash)
import qualified Utils.Password as Pwd
import Views.AboutMe
import Views.BlogIndex
import Views.BlogPost
import Views.EditPost
import Views.Error
import Views.Login
import Web.Routing.Combinators (PathState(..))
import Web.Spock
import Web.Spock.Config
import Control.Monad.Catch (catch)
import Database.PostgreSQL.Simple (SqlError(..))
main :: IO ()
main = do
cfg <- defaultAppConfig eventHandlers
pool <- initializePool cfg
spockCfg <- config cfg pool
runOnPool pool $ forwardEventHandlers Index.blogIndexHandler
runSpock 8080 (spock spockCfg app)
config cfg pool = do
spockCfg <- defaultSpockCfg emptySession (PCPool pool) (SiteState cfg)
return spockCfg { spc_sessionCfg = sessionCfg (spc_sessionCfg spockCfg)
, spc_errorHandler = viewErrorPage
}
where
sessionCfg sc = sc { sc_cookieName = "Carsten-Koenig-Cookie" }
viewErrorPage st =
renderPage (Error st) $ Views.Error.page st
eventHandlers :: [EventHandler]
eventHandlers =
[ Index.blogIndexHandler
, blogCategoryHandler
]
app :: SiteApp
app = prehook baseHook $ do
adminHash <- adminPwdHash . appConfig <$> getState
-- serve static files from local static folder
middleware serveStatic
timeZone <- liftIO getCurrentTimeZone
ex <- liftIO example
get root $ renderPage Home $
Views.BlogPost.page timeZone ex
get showPostIdR $ \id -> do
findPost <- getBlogPostId id
case findPost of
Right (Just post) -> renderPage (ShowId id) $
Views.BlogPost.page timeZone post
Right Nothing ->
viewErrorPage notFound404
Left _ ->
viewErrorPage internalServerError500
get showPostPathR $ \year month title -> do
findPost <- getBlogPostPath year month title
case findPost of
Right (Just post) -> renderPage (ShowPath year month title) $
Views.BlogPost.page timeZone post
Right Nothing ->
viewErrorPage notFound404
Left _ ->
viewErrorPage internalServerError500
get showMonthPathR $ \year month -> do
index <- runSqlAction $ monthView year month
case index of
Right index ->
renderPage (ShowMonth year month) $
Views.BlogIndex.monthPage timeZone year month index
Left _ ->
viewErrorPage internalServerError500
get aboutMeR $
renderPage AboutMe Views.AboutMe.page
getpost logoutR logout
get loginR $ do
redTo <- fmap RedirectTo <$> param "redirect"
renderPage Login $ Views.Login.page redTo
post loginR $ do
pwd <- fromMaybe "" <$> param "pwd"
redTo <- fmap RedirectTo <$> param "redirect"
adminLogon redTo adminHash $ Password pwd
prehook adminHook $ do
get adminR $ text "hi Admin"
get newPostR $ renderPage New $
Views.EditPost.page Nothing Nothing
post newPostR $ do
title <- fromJust <$> param "title"
content <- fromJust <$> param "content"
cats <- splitKomma . fromMaybe "" <$> param "categories"
now <- liftIO getCurrentTime
id <- insertBlogPost title content now cats
case id of
Right id ->
redirect (routeLinkText $ ShowId id)
Left _ ->
viewErrorPage internalServerError500
get editPostR $ \id -> do
findPost <- getBlogPostId id
case findPost of
Right findPost ->
renderPage (Edit id) $ Views.EditPost.page (Just id) findPost
Left _ ->
viewErrorPage internalServerError500
post editPostR $ \id -> do
title <- fromJust <$> param "title"
content <- fromJust <$> param "content"
cats <- splitKomma . fromMaybe "" <$> param "categories"
now <- liftIO getCurrentTime
updateBlogPost id title content now cats
redirect (routeLinkText $ ShowId id)
splitKomma :: Text -> [Category]
splitKomma = map Category . filter (not . T.null) . fmap T.strip . T.split isSep
where isSep ',' = True
isSep ';' = True
isSep _ = False
example :: IO BlogPost
example = do
time <- getCurrentTime
return $
BlogPost
"#Hey you\n`what is up`?\n\n```haskell\nf :: Int -> Bool\nf 0 = True\nf _ = False\n```"
"Testblogpost" time [Category "Test"]
serveStatic :: Middleware
serveStatic = staticPolicy (addBase "./static")
baseHook :: SiteAction () (HVect '[])
baseHook = return HNil
|
// This function makes sure that the apex_available property is valid
func (m *ApexModuleBase) checkApexAvailableProperty(mctx BaseModuleContext) {
for _, n := range m.ApexProperties.Apex_available {
if n == AvailableToPlatform || n == AvailableToAnyApex || n == AvailableToGkiApex {
continue
}
if !mctx.OtherModuleExists(n) && !mctx.Config().AllowMissingDependencies() {
mctx.PropertyErrorf("apex_available", "%q is not a valid module name", n)
}
}
} |
/**
* Copy a host_t as sockaddr_t to the given memory location and map the port to
* ICMP/ICMPv6 message type/code as the Linux kernel expects it, that is, the
* type in the source and the code in the destination address.
* @return the number of bytes copied
*/
static size_t hostcpy_icmp(void *dest, host_t *host, u_int16_t type)
{
size_t len;
len = hostcpy(dest, host, TRUE);
if (type == SADB_EXT_ADDRESS_SRC)
{
set_port(dest, traffic_selector_icmp_type(host->get_port(host)));
}
else
{
set_port(dest, traffic_selector_icmp_code(host->get_port(host)));
}
return len;
} |
def analyze(self, binsI=50, binsQ=50, cluster_method='gmm', plot=True):
if cluster_method == 'gmm':
self._gaussian_mixture_clustering()
x0, bounds = self._prepare_x0_bounds()
fit = least_squares(self._gaussian_mixture_residual_function, x0,
bounds=bounds,
args=(binsI, binsQ))
self.means, self.variances, self.weights = self._x0_to_meas_vars_weights(fit.x)
logprob = np.zeros((self.num_of_states,
self.num_of_points,
self.num_of_states))
for i in range(self.num_of_states):
logprob[:, :, i] = \
(- (np.abs(self.signal - self.means[i]) ** 2 /
(2 * self.variances[i])) - np.log(self.variances[i]))
self.state_prediction = np.argmax(logprob, axis=-1)
confusion = (self._count_occurences(self.state_prediction) /
self.num_of_points)
self.confusion = confusion
self.fidelity = np.sum(np.diagonal(confusion)) / self.num_of_states
comb_list = list(combinations(range(self.num_of_states), 2))
snr_list = []
for c in comb_list:
s1, s2 = c
signal = np.abs(self.means[s1] - self.means[s2])
noise = np.sqrt(0.5 * (self.variances[s1] + self.variances[s2]))
snr = signal / noise
snr_list.append(snr)
self.signal_to_noise_ratio = dict(zip(comb_list, snr_list))
self.is_analyzed = True
if plot:
self.plot_result() |
/**
* @author Trung Phan
*
*/
public class InListExpression extends PredicateExpression {
private final static Pattern PATTERN = Pattern.compile("(\\?|:[a-zA-Z0-9_]+|[a-zA-Z0-9_.() ]+?) +(not )? *in +\\(([a-zA-Z0-9:_?, ]+)\\)");
private final SelectorExpression selectorExpression;
private final List<SelectorExpression> params;
public InListExpression(boolean negated, String expression, SelectorExpression selectorExpression, List<SelectorExpression> params) {
super(negated, expression);
this.selectorExpression = selectorExpression;
this.params = Immutables.$(params);
}
public static InListExpression parse(String expression, boolean negated) {
Matcher matcher = PATTERN.matcher(expression.trim());
if (matcher.matches()) {
SelectorExpression selectorExpr = SelectorExpression.parse(matcher.group(1));
AssertSyntax.notNull(selectorExpr, "Syntax is invalid for %s in expression %s.", matcher.group(1), expression);
if ("not ".equals(matcher.group(2))) {
negated = !negated;
}
List<SelectorExpression> params = new ArrayList<SelectorExpression>();
List<String> paramStrings = StringUtils.splitFunctionInput(matcher.group(3).trim());
for (String paramString : paramStrings) {
SelectorExpression param = SelectorExpression.parse(paramString);
AssertSyntax.notNull(param, "Syntax is invalid for %s in expression %s.", paramString, expression);
params.add(param);
}
return new InListExpression(negated, expression, selectorExpr, params);
}
return null;
}
public SelectorExpression getSelectorExpression() {
return selectorExpression;
}
public List<SelectorExpression> getParams() {
return params;
}
@Override
public String toString() {
return "";
}
@Override
public Iterator<? extends Expression> expressionIterator() {
return new ExpressionIterator(this, new ListBuilder<Expression>().add(selectorExpression).add(params).toList());
}
} |
s=(raw_input("")).split(" ")
n=int(s[0])
l=int(s[1])
co=(raw_input("").split(" "))
for i in range(len(co)):
co[i]=int(co[i])
co=sorted(co)
dif=[]
if n!=1:
for i in range(1,len(co)):
dif.append((co[i]-co[i-1]))
print max([float(max(dif))/2,float(co[0]),float(l-co[len(co)-1])])
else:
print float(max([co[0],l-co[0]])) |
Traveling Wave Undulators for FELs and Synchrotron Radiation Sources
We study the use of a traveling wave waveguide as an undulator for short wavelength free-electron lasers (FELs) and synchrotron radiation sources. This type of undulator, which we will call TWU, can be useful when a short electron oscillation period and a large aperture for the propagation of the beam are needed. The availability of high power X-band microwave sources, developed for the electron-positron linear collider, makes it possible today to build TWUs of practical interest to produce short wavelength radiation from a beam of reduced energy with respect to the case of more conventional undulators. In this paper we discuss the characteristics of the TWU, the systems that can be used to control the effects of RF power losses in the waveguide walls, and how to optimize a TWU and the associated electron transport system for use in a synchrotron radiation source or FEL. Microwave undulators have been considered before in a standing wave configuration, and measurements of the spontaneous undulator radiation obtained by propagating a beam with an energy of 143 or 220 MeV through such an undulator have also been reported. A discussion of the parameters of radio frequency undulators in various configurations has been made recently, and the use of a radio frequency wiggler operating at 30 GHz as a damping device in a damping ring for a linear collider has also been considered. In this paper we discuss a traveling wave, radio frequency undulator operating at 12 GHz. The reason for this choice of frequency is that high power RF sources operating near it, together with pulse compression techniques, have been developed as part of the international linear collider R&D. Using these sources and pulse compression, power levels as high as 450 MW are now available, with pulse durations of hundreds of nanoseconds. This power level makes it possible to consider a traveling wave, radio frequency undulator (TWU) as a practical and interesting part of linac-based synchrotron radiation sources and free-electron lasers. As we will discuss in the following, undulator parameter values of the order of 0.4 for an undulator period of 1.45 cm are possible. The corresponding waveguide transverse size is about 1.8 cm. Some of the advantages of a TWU are: short undulator period; the possibility of |
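For orientation, the numbers quoted in the abstract above can be checked against the standard on-axis resonance condition for undulator radiation (a textbook relation, not stated in the abstract itself):

$$\lambda = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right)$$

With the quoted $\lambda_u = 1.45\,\mathrm{cm}$ and $K = 0.4$, reaching an illustrative target of $\lambda = 1\,\mathrm{nm}$ would require $\gamma^2 = \lambda_u(1 + K^2/2)/(2\lambda) \approx 7.8 \times 10^6$, i.e. a beam energy of roughly $1.4\,\mathrm{GeV}$. The 1 nm figure is our own example, chosen only to illustrate how the short period reduces the required beam energy.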
Netflix has released its fourth-quarter 2014 financial results along with a letter to shareholders, posting both to its website. These documents show plans to increase its amount of original content, rapidly expand internationally, continue its DVD-by-mail service, and more.
First, some numbers. Netflix picked up 13 million new members in 2014, an increase from 11 million the year before. The video on demand provider now boasts 57.4 million subscribers, and it expects to end the first quarter of the new year with 61.4 million.
Earlier in 2014, Netflix raised its price from $7.99 to $8.99, a move it believed cost it subscribers throughout the months that followed. The company is now changing its tune, finding that while US growth slowed in 2014, it was still highest among lower-income Americans, folks who reasonably would be most sensitive to price hikes. Netflix will obviously continue to study the US market, as it believes virtually all entertainment video will be streamed over the Internet in the future, and it sees itself continuing to lead the charge domestically and around the world.
Speaking of which, Netflix saw steady growth internationally, including in Austria, Belgium, France, Germany, Luxembourg, and Switzerland; all expansions that arrived in the third quarter of last year. The service is available in 50 countries, and it sees its reach growing to 200 over the next two years thanks to the rapid proliferation of smartphones, tablets, and smart TVs.
Netflix says increasing its size is good for everyone, because the increased revenue gives it more money to create and license content for its viewers. Yet surprisingly, the company found that its original productions were more cost-effective than licensed shows. To clarify, it cost Netflix less money per view to produce its own content than to buy the rights to stream well-known shows and movies from big studios. As a result, the company intends to produce more of its own creations in the years ahead.
In the immediate future, season three of House of Cards will premiere February 27th, joined by The Unbreakable Kimmy Schmidt (a comedy by Tina Fey and Robert Carlock starring Ellie Kemper due out March 6th) and Bloodline (due on March 20th from the creators of Damages). Crouching Tiger, Hidden Dragon II: The Green Destiny, the company's first original feature film, is due out on August 26th in all markets. Throughout the course of 2015, Netflix plans to release 320 hours of original series, films, documentaries, and stand-up comedy, three times the amount it produced last year.
And don't forget the DVD-by-mail service that made Netflix famous in the first place. 5.8 million members still take advantage of it, bringing the company $89 million in the fourth quarter. The company has big plans, and for now, mailing discs out to people remains a part of them.
Sources: Netflix [1], [2] |
import emoji from "node-emoji";
import AST from "../ast";
import Decorator from "../highlight/decorator";
import Highlight from "../highlight/highlight";
import {
ArgumentNodeAST,
AssignmentNodeAST,
ConsoleAnswers,
ExplainCommandResponse,
OperatorNodeAST,
OptionNodeAST,
PipeNodeAST,
ProgramNodeAST,
StickyOptionNodeAST,
SubcommandNodeAST,
SudoNodeAST,
} from "../interfaces";
import Console from "./console";
const explanationEmoji = emoji.get("bulb");
class ExplainConsole extends Console {
private questions: Object[] = [
{
message: "Explain a command:",
name: "query",
prefix: `${explanationEmoji}`,
type: "input",
},
];
constructor() {
super();
}
public makeHelp(
leafNodes: Array<
Array<
| OptionNodeAST
| ProgramNodeAST
| AssignmentNodeAST
| SubcommandNodeAST
| OperatorNodeAST
| PipeNodeAST
| StickyOptionNodeAST
>
>,
): string {
let help = "";
for (const unit of leafNodes) {
for (const node of unit) {
if (AST.isProgram(node)) {
          const programNode = node as ProgramNodeAST;
const { summary, name } = programNode.schema;
const decoratedProgramName = Decorator.decorate(name, programNode);
help += ` ${decoratedProgramName}: ${summary}\n`;
}
if (AST.isOption(node)) {
const optionNode = node as OptionNodeAST;
const { summary, long, short } = optionNode.optionSchema;
const decoratedOptions = [];
if (short && short.length >= 1) {
decoratedOptions.push(Decorator.decorate(short.join(", "), optionNode));
}
if (long && long.length >= 1) {
decoratedOptions.push(Decorator.decorate(long.join(", "), optionNode));
}
help += ` ${decoratedOptions.join(", ")}: ${summary}\n`;
}
if (AST.isSubcommand(node)) {
          const subcommandNode = node as SubcommandNodeAST;
const { name, summary } = subcommandNode.schema;
const decoratedSubcommandName = Decorator.decorate(name, subcommandNode);
help += ` ${decoratedSubcommandName}: ${summary}\n`;
}
if (AST.isAssignment(node)) {
const assignmentNode = node as AssignmentNodeAST;
const { word } = assignmentNode;
const decoratedAssignment = Decorator.decorate(word, assignmentNode);
help += ` ${decoratedAssignment}: A variable passed to the program process\n`;
}
if (AST.isOperator(node)) {
          const operatorNode = node as OperatorNodeAST;
const { op } = operatorNode;
const decoratedOperator = Decorator.decorate(op, operatorNode);
help += `${decoratedOperator} - `;
if (op === "&&") {
help += ` command2 is executed if, and only if, command1 returns an exit status of zero\n`;
} else if (op === "||") {
help += ` command2 is executed if and only if command1 returns a non-zero exit status\n`;
}
}
if (AST.isSudo(node)) {
const sudoNode = node as SudoNodeAST;
const { summary } = sudoNode.schema;
const decoratedNode = Decorator.decorate("sudo", sudoNode);
        help += ` ${decoratedNode}: ${summary}\n`;
}
if (AST.isArgument(node)) {
const argNode = node as ArgumentNodeAST;
const { word } = argNode;
const decoratedNode = Decorator.decorate(word, argNode);
help += ` ${decoratedNode}: an argument\n`;
}
if (AST.isPipe(node)) {
const pipeNode = node as PipeNodeAST;
const { pipe } = pipeNode;
const decoratedNode = Decorator.decorate(pipe, pipeNode);
help += ` ${decoratedNode}: A pipe connects the STDOUT of the first process to the STDIN of the second\n`;
}
}
}
return help;
}
public async prompt(): Promise<ConsoleAnswers> {
return super.prompt(this.questions);
}
public makeSamples() {
//
}
public error(msg: string) {
super.error(msg);
}
public render(data: ExplainCommandResponse) {
const { query, leafNodes } = data.explainCommand;
this.print();
if (leafNodes) {
const highlight = new Highlight();
const decoratedQuery = highlight.decorate(query, leafNodes);
// const boxedContent = this.box(decoratedQuery);
// add a new line
this.print(` ${decoratedQuery}`);
const help = this.makeHelp(leafNodes);
this.print();
this.print(help);
} else {
this.error("No result");
}
this.print();
}
}
export default ExplainConsole;
|
import string  # needed for string.capwords below

def curify_type(self, concept_type_str):
    """Convert a concept type to CURIE form, e.g. 'named_thing' -> 'biolink:NamedThing'."""
    if concept_type_str.startswith('biolink:'):
        return concept_type_str
    # Capitalize each underscore-separated word, then join the words into CamelCase.
    return 'biolink:' + string.capwords(concept_type_str.replace('_', ' '), ' ').replace(' ', '') |
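As a quick usage check of the conversion above, here is a minimal standalone sketch (hypothetical: the original is a method on a larger class, so it is copied out of context here):

import string

def curify_type(concept_type_str):
    # Standalone copy of the method above, for demonstration only.
    if concept_type_str.startswith('biolink:'):
        return concept_type_str
    return 'biolink:' + string.capwords(concept_type_str.replace('_', ' '), ' ').replace(' ', '')

assert curify_type('named_thing') == 'biolink:NamedThing'
assert curify_type('biolink:Gene') == 'biolink:Gene'  # already a CURIE, returned unchanged
|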
/**
* Disable/Enable voicemail. Available only if the line has fax capabilities
*
* REST: POST /freefax/{serviceName}/voicemail/changeRouting
* @param routing [required] Activate or Desactivate voicemail on the line
* @param serviceName [required] Freefax number
*/
public void serviceName_voicemail_changeRouting_POST(String serviceName, OvhVoicefaxRoutingEnum routing) throws IOException {
String qPath = "/freefax/{serviceName}/voicemail/changeRouting";
StringBuilder sb = path(qPath, serviceName);
HashMap<String, Object>o = new HashMap<String, Object>();
addBody(o, "routing", routing);
exec(qPath, "POST", sb.toString(), o);
} |
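For orientation, here is the same call sketched with Python's requests library (a hypothetical sketch: the base URL is assumed, and OVH's request signing, which the real client performs, is omitted):

# Hypothetical sketch of the REST call documented above. The OVH API
# normally requires signed authentication headers, omitted here.
import requests

def change_voicemail_routing(service_name, routing):
    # Assumed EU endpoint; the region-specific base URL is an assumption.
    url = f"https://eu.api.ovh.com/1.0/freefax/{service_name}/voicemail/changeRouting"
    resp = requests.post(url, json={"routing": routing})
    resp.raise_for_status()
|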
// Parse parses the given GraphQL source into a Document.
func Parse(source *token.Source, options ...ParseOption) (ast.Document, error) {
var opts parseOptions
for _, applyOption := range options {
applyOption(&opts)
}
parser, err := newParser(source, &opts)
if err != nil {
return ast.Document{}, err
}
return parser.parseDocument()
} |
import { Product } from '../../utils/cart';
export const OPEN_CART: string = 'OPEN_CART';
export const CLOSE_CART: string = 'CLOSE_CART';
export const UPDATE_CART: string = 'UPDATE_CART';
export const OPEN_MOBILE_MENU: string = 'OPEN_MOBILE_MENU';
export const CLOSE_MOBILE_MENU: string = 'CLOSE_MOBILE_MENU';
export interface MobileMenuState {
menuStatus: boolean;
}
export interface CartState {
cartStatus: boolean;
cartQuantity: Array<Product>;
}
export interface ActionType {
type: string;
payload: any;
}
|
import { ApolloError } from "apollo-server-fastify";
import { GetUserByIdUseCase } from "user/usecase/GetUserByIdUseCase";
import { logger } from "infrastructure/logger/loggerConfig";
export const userResolver = {
Query: {
user: async (root: any, params: any, context: any) => {
try {
const { id }: { id: string } = params;
logger().info(`Getting user with id: ${id}`);
const userByIdUseCase: GetUserByIdUseCase = context.getUserByIdUseCase;
return userByIdUseCase.execute(id);
} catch (error) {
throw new ApolloError(error);
}
},
},
};
|
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package persistentvolume
import (
"k8s.io/kubernetes/pkg/api"
)
func getClaimRefNamespace(pv *api.PersistentVolume) string {
if pv.Spec.ClaimRef != nil {
return pv.Spec.ClaimRef.Namespace
}
return ""
}
// VisitPVSecretNames invokes the visitor function with the name of every secret
// referenced by the PV spec. If visitor returns false, visiting is short-circuited.
// Returns true if visiting completed, false if visiting was short-circuited.
func VisitPVSecretNames(pv *api.PersistentVolume, visitor func(string, string) bool) bool {
source := &pv.Spec.PersistentVolumeSource
switch {
case source.AzureFile != nil:
if len(source.AzureFile.SecretName) > 0 && !visitor(getClaimRefNamespace(pv), source.AzureFile.SecretName) {
return false
}
return true
case source.CephFS != nil:
if source.CephFS.SecretRef != nil && !visitor(getClaimRefNamespace(pv), source.CephFS.SecretRef.Name) {
return false
}
case source.FlexVolume != nil:
if source.FlexVolume.SecretRef != nil && !visitor(getClaimRefNamespace(pv), source.FlexVolume.SecretRef.Name) {
return false
}
case source.RBD != nil:
if source.RBD.SecretRef != nil && !visitor(getClaimRefNamespace(pv), source.RBD.SecretRef.Name) {
return false
}
case source.ScaleIO != nil:
if source.ScaleIO.SecretRef != nil && !visitor(getClaimRefNamespace(pv), source.ScaleIO.SecretRef.Name) {
return false
}
case source.ISCSI != nil:
if source.ISCSI.SecretRef != nil && !visitor(getClaimRefNamespace(pv), source.ISCSI.SecretRef.Name) {
return false
}
case source.StorageOS != nil:
if source.StorageOS.SecretRef != nil && !visitor(source.StorageOS.SecretRef.Namespace, source.StorageOS.SecretRef.Name) {
return false
}
}
return true
}
|
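The visitor contract described above, sketched minimally in Python for clarity (illustrative only, not part of the Kubernetes codebase):

# Minimal sketch of the visitor contract: visiting stops as soon as the
# visitor returns False, and the caller learns whether it ran to completion.
def visit_secret_names(refs, visitor):
    for namespace, name in refs:
        if not visitor(namespace, name):
            return False  # short-circuited by the visitor
    return True  # visited every reference

names = []
def collect_first(namespace, name):
    names.append(name)
    return False  # stop after the first secret
assert visit_secret_names([("ns1", "s1"), ("ns2", "s2")], collect_first) is False
assert names == ["s1"]
|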
#[macro_use]
extern crate log;
use std::io::Read;
use std::{env, io};
use actix_web::middleware::Logger;
use actix_web::{web, App, HttpServer, Responder};
use curl::easy::{Easy, List};
use dotenv::dotenv;
use todoist_habit_tracker::todoist_habit_api::manager::*;
async fn index() -> impl Responder {
"Hello you! Have a great day ❤️"
}
async fn webhook_complete_item(event: web::Json<Event>) -> io::Result<String> {
let mut serialized = String::new();
if event.has_day_string() {
let content = event.add_one_day_to_content();
        // Data to send
        let event_update = EventUpdate { content };
serialized = serde_json::to_string(&event_update)?;
// Initialize Curl
let mut easy = Easy::new();
easy.url(
format!(
"https://api.todoist.com/rest/v1/tasks/{}",
event.event_data.id
)
.as_str(),
)?;
easy.post(true)?;
easy.post_field_size(serialized.len() as u64)?;
// Header
let mut list = List::new();
        let bearer = env::var("TODOIST_TOKEN").expect("TODOIST_TOKEN must be set");
list.append(format!("Authorization: Bearer {}", bearer).as_str())?;
list.append("Content-Type: application/json")?;
easy.http_headers(list)?;
// Send
let mut transfer = easy.transfer();
transfer.read_function(|buf| Ok(serialized.as_bytes().read(buf).unwrap_or(0)))?;
transfer.perform()?;
}
let log_string = format!(
"Event trigger name : {} - id : {} - content : {} - content sent : {}",
event.event_name, event.event_data.id, event.event_data.content, serialized
);
info!("{}", log_string);
Ok(log_string)
}
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
dotenv().ok();
env_logger::init();
let address = format!(
"127.0.0.1:{}",
env::var("SERVER_PORT").expect("SERVER_PORT must be set")
);
HttpServer::new(|| {
App::new()
.wrap(Logger::default())
.route("/", web::get().to(index))
.route(
"/api/webhook/complete-item",
web::post().to(webhook_complete_item),
)
})
.bind(address)?
.run()
.await
}
|
/**
* Creates a new thread for the frame processor and enables the processor.
*/
public CameraSource startFrameProcessor() {
mProcessingThread = new Thread(mFrameProcessor);
mFrameProcessor.setActive(true);
mProcessingThread.start();
return this;
} |
16 Gb/s PAM4 UWOC system based on 488-nm LD with light injection and optoelectronic feedback techniques.
A 16 Gb/s four-level pulse amplitude modulation (PAM4) underwater wireless optical communication (UWOC) system based on a 488-nm laser diode (LD) with light injection and optoelectronic feedback techniques is proposed and successfully demonstrated. Experimental results show that such a 1.8-GHz 488-nm blue-light LD with light injection and optoelectronic feedback is sufficiently powerful for a 16 Gb/s PAM4 underwater link. To the authors' knowledge, this study is the first to successfully adopt a 488-nm LD transmitter with light injection and optoelectronic feedback techniques in a PAM4 UWOC system. By adopting such a transmitter, good bit error rate performance (processed offline in Matlab) and clear eye diagrams (measured in real time) are achieved over a 10-m underwater link. The proposed system has the potential to play a vital role in future UWOC infrastructure by effectively providing a high transmission rate (16 Gb/s) over a long underwater transmission distance (10 m). |
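PAM4 doubles the bit rate for a given symbol rate by mapping two bits onto one of four amplitude levels, which is how a bandwidth-limited laser diode can carry 16 Gb/s at an 8 GBd symbol rate. The sketch below is a minimal, hypothetical illustration of Gray-coded PAM4 mapping, not a description of the paper's actual transmitter chain:

# Minimal sketch of Gray-coded PAM4 symbol mapping (illustrative only).
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def bits_to_pam4(bits):
    """Map an even-length bit sequence to PAM4 levels, two bits per symbol."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Two bits per symbol: 16 Gb/s of data needs only 8 GBd of symbols.
print(bits_to_pam4([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
|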
// Copyright (c) 2015-2021, NVIDIA CORPORATION.
// SPDX-License-Identifier: Apache-2.0
package main
import (
"bytes"
"crypto/rand"
"fmt"
"log"
"math"
"math/big"
"os"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/NVIDIA/proxyfs/conf"
)
var (
displayUpdateInterval time.Duration
filePathPrefix string
fileSize uint64
maxExtentSize uint64
minExtentSize uint64
numExtentsToWriteInTotal uint64
numExtentsToWritePerFile uint32
numExtentWritesPerFlush uint32
numExtentWritesPerValidate uint32
numExtentsWrittenInTotal uint64
numFiles uint16
waitGroup sync.WaitGroup
)
func main() {
var (
args []string
confMap conf.ConfMap
displayUpdateIntervalAsString string
displayUpdateIntervalPad string
err error
filePathPrefixPad string
fileSizeAsString string
fileSizePad string
maxExtentSizeAsString string
maxExtentSizePad string
maxParameterStringLen int
minExtentSizeAsString string
minExtentSizePad string
numExtentsToWritePerFileAsString string
numExtentsToWritePerFilePad string
numExtentWritesPerFlushAsString string
numExtentWritesPerFlushPad string
numExtentWritesPerValidateAsString string
numExtentWritesPerValidatePad string
numFilesAsString string
numFilesPad string
progressPercentage uint64
stresserIndex uint16
)
// Parse arguments
args = os.Args[1:]
// Read in the program's os.Arg[1]-specified (and required) .conf file
if 0 == len(args) {
log.Fatalf("no .conf file specified")
}
confMap, err = conf.MakeConfMapFromFile(args[0])
if nil != err {
log.Fatalf("failed to load config: %v", err)
}
// Update confMap with any extra os.Args supplied
err = confMap.UpdateFromStrings(args[1:])
if nil != err {
log.Fatalf("failed to load config overrides: %v", err)
}
// Process resultant confMap
filePathPrefix, err = confMap.FetchOptionValueString("StressParameters", "FilePathPrefix")
if nil != err {
log.Fatal(err)
}
numFiles, err = confMap.FetchOptionValueUint16("StressParameters", "NumFiles")
if nil != err {
log.Fatal(err)
}
if 0 == numFiles {
log.Fatalf("NumFiles must be > 0")
}
fileSize, err = confMap.FetchOptionValueUint64("StressParameters", "FileSize")
if nil != err {
log.Fatal(err)
}
if 0 == fileSize {
log.Fatalf("FileSize must be > 0")
}
if fileSize > math.MaxInt64 {
log.Fatalf("FileSize(%v) must be <= math.MaxInt64(%v)", fileSize, uint64(math.MaxInt64))
}
minExtentSize, err = confMap.FetchOptionValueUint64("StressParameters", "MinExtentSize")
if nil != err {
log.Fatal(err)
}
if 0 == minExtentSize {
log.Fatalf("MinExtentSize must be > 0")
}
if minExtentSize > fileSize {
log.Fatalf("MinExtentSize(%v) must be <= FileSize(%v)", minExtentSize, fileSize)
}
maxExtentSize, err = confMap.FetchOptionValueUint64("StressParameters", "MaxExtentSize")
if nil != err {
log.Fatal(err)
}
if maxExtentSize < minExtentSize {
log.Fatalf("MaxExtentSize(%v) must be >= MinExtentSize(%v)", maxExtentSize, minExtentSize)
}
if maxExtentSize > fileSize {
log.Fatalf("MaxExtentSize(%v) must be <= FileSize(%v)", maxExtentSize, fileSize)
}
numExtentsToWritePerFile, err = confMap.FetchOptionValueUint32("StressParameters", "NumExtentsToWritePerFile")
if nil != err {
log.Fatal(err)
}
if 0 == numExtentsToWritePerFile {
log.Fatalf("NumExtentsToWritePerFile must be > 0")
}
numExtentWritesPerFlush, err = confMap.FetchOptionValueUint32("StressParameters", "NumExtentWritesPerFlush")
if nil != err {
log.Fatal(err)
}
if 0 == numExtentWritesPerFlush {
numExtentWritesPerFlush = numExtentsToWritePerFile
}
numExtentWritesPerValidate, err = confMap.FetchOptionValueUint32("StressParameters", "NumExtentWritesPerValidate")
if nil != err {
log.Fatal(err)
}
if 0 == numExtentWritesPerValidate {
numExtentWritesPerValidate = numExtentsToWritePerFile
}
displayUpdateInterval, err = confMap.FetchOptionValueDuration("StressParameters", "DisplayUpdateInterval")
if nil != err {
log.Fatal(err)
}
// Display parameters
numFilesAsString = fmt.Sprintf("%d", numFiles)
fileSizeAsString = fmt.Sprintf("%d", fileSize)
minExtentSizeAsString = fmt.Sprintf("%d", minExtentSize)
maxExtentSizeAsString = fmt.Sprintf("%d", maxExtentSize)
numExtentsToWritePerFileAsString = fmt.Sprintf("%d", numExtentsToWritePerFile)
numExtentWritesPerFlushAsString = fmt.Sprintf("%d", numExtentWritesPerFlush)
numExtentWritesPerValidateAsString = fmt.Sprintf("%d", numExtentWritesPerValidate)
displayUpdateIntervalAsString = fmt.Sprintf("%s", displayUpdateInterval)
maxParameterStringLen = len(filePathPrefix)
if len(numFilesAsString) > maxParameterStringLen {
maxParameterStringLen = len(numFilesAsString)
}
if len(fileSizeAsString) > maxParameterStringLen {
maxParameterStringLen = len(fileSizeAsString)
}
if len(minExtentSizeAsString) > maxParameterStringLen {
maxParameterStringLen = len(minExtentSizeAsString)
}
if len(maxExtentSizeAsString) > maxParameterStringLen {
maxParameterStringLen = len(maxExtentSizeAsString)
}
if len(numExtentsToWritePerFileAsString) > maxParameterStringLen {
maxParameterStringLen = len(numExtentsToWritePerFileAsString)
}
if len(numExtentWritesPerFlushAsString) > maxParameterStringLen {
maxParameterStringLen = len(numExtentWritesPerFlushAsString)
}
if len(numExtentWritesPerValidateAsString) > maxParameterStringLen {
maxParameterStringLen = len(numExtentWritesPerValidateAsString)
}
if len(displayUpdateIntervalAsString) > maxParameterStringLen {
maxParameterStringLen = len(displayUpdateIntervalAsString)
}
filePathPrefixPad = strings.Repeat(" ", maxParameterStringLen-len(filePathPrefix))
numFilesPad = strings.Repeat(" ", maxParameterStringLen-len(numFilesAsString))
fileSizePad = strings.Repeat(" ", maxParameterStringLen-len(fileSizeAsString))
minExtentSizePad = strings.Repeat(" ", maxParameterStringLen-len(minExtentSizeAsString))
maxExtentSizePad = strings.Repeat(" ", maxParameterStringLen-len(maxExtentSizeAsString))
numExtentsToWritePerFilePad = strings.Repeat(" ", maxParameterStringLen-len(numExtentsToWritePerFileAsString))
numExtentWritesPerFlushPad = strings.Repeat(" ", maxParameterStringLen-len(numExtentWritesPerFlushAsString))
numExtentWritesPerValidatePad = strings.Repeat(" ", maxParameterStringLen-len(numExtentWritesPerValidateAsString))
displayUpdateIntervalPad = strings.Repeat(" ", maxParameterStringLen-len(displayUpdateIntervalAsString))
fmt.Println("[StressParameters]")
fmt.Printf("FilePathPrefix: %s%s\n", filePathPrefixPad, filePathPrefix)
fmt.Printf("NumFiles: %s%s\n", numFilesPad, numFilesAsString)
fmt.Printf("FileSize: %s%s\n", fileSizePad, fileSizeAsString)
fmt.Printf("MinExtentSize: %s%s\n", minExtentSizePad, minExtentSizeAsString)
fmt.Printf("MaxExtentSize: %s%s\n", maxExtentSizePad, maxExtentSizeAsString)
fmt.Printf("NumExtentsToWritePerFile: %s%s\n", numExtentsToWritePerFilePad, numExtentsToWritePerFileAsString)
fmt.Printf("NumExtentWritesPerFlush: %s%s\n", numExtentWritesPerFlushPad, numExtentWritesPerFlushAsString)
fmt.Printf("NumExtentWritesPerValidate: %s%s\n", numExtentWritesPerValidatePad, numExtentWritesPerValidateAsString)
fmt.Printf("DisplayUpdateInterval: %s%s\n", displayUpdateIntervalPad, displayUpdateIntervalAsString)
// Setup monitoring parameters
numExtentsToWriteInTotal = uint64(numFiles) * uint64(numExtentsToWritePerFile)
numExtentsWrittenInTotal = uint64(0)
// Launch fileStresser goroutines
waitGroup.Add(int(numFiles))
for stresserIndex = 0; stresserIndex < numFiles; stresserIndex++ {
go fileStresser(stresserIndex)
}
// Monitor fileStresser goroutines
for {
time.Sleep(displayUpdateInterval)
progressPercentage = 100 * atomic.LoadUint64(&numExtentsWrittenInTotal) / numExtentsToWriteInTotal
fmt.Printf("\rProgress: %3d%%", progressPercentage)
if 100 == progressPercentage {
break
}
}
waitGroup.Wait()
fmt.Println("... done!")
}
type fileStresserContext struct {
filePath string
file *os.File
written []byte
}
func fileStresser(stresserIndex uint16) {
var (
b uint8
err error
extentIndex uint32
fSC *fileStresserContext
l int64
mustBeLessThanBigIntPtr *big.Int
numExtentWritesSinceLastFlush uint32
numExtentWritesSinceLastValidate uint32
off int64
u64BigIntPtr *big.Int
)
// Construct this instance's fileStresserContext
fSC = &fileStresserContext{
filePath: fmt.Sprintf("%s%04X", filePathPrefix, stresserIndex),
written: make([]byte, fileSize),
}
fSC.file, err = os.OpenFile(fSC.filePath, os.O_CREATE|os.O_EXCL|os.O_RDWR, 0600)
if nil != err {
log.Fatal(err)
}
fSC.fileStresserWriteAt(int64(0), int64(fileSize), 0x00)
fSC.fileStresserValidate()
// Perform extent writes
b = 0x00
numExtentWritesSinceLastFlush = 0
numExtentWritesSinceLastValidate = 0
for extentIndex = 0; extentIndex < numExtentsToWritePerFile; extentIndex++ {
// Pick an l value such that minExtentSize <= l <= maxExtentSize
mustBeLessThanBigIntPtr = big.NewInt(int64(maxExtentSize - minExtentSize + 1))
u64BigIntPtr, err = rand.Int(rand.Reader, mustBeLessThanBigIntPtr)
if nil != err {
log.Fatal(err)
}
l = int64(minExtentSize) + u64BigIntPtr.Int64()
		// Pick an off value such that 0 <= off <= (fileSize - l)
		// (+1 makes the upper bound inclusive and avoids a rand.Int panic when l == fileSize)
		mustBeLessThanBigIntPtr = big.NewInt(int64(fileSize) - l + 1)
u64BigIntPtr, err = rand.Int(rand.Reader, mustBeLessThanBigIntPtr)
if nil != err {
log.Fatal(err)
}
off = u64BigIntPtr.Int64()
// Pick next b value (skipping 0x00 for as-yet-un-over-written bytes)
b++
if 0x00 == b {
b = 0x01
}
fSC.fileStresserWriteAt(off, l, b)
numExtentWritesSinceLastFlush++
if numExtentWritesPerFlush == numExtentWritesSinceLastFlush {
fSC.fileStresserFlush()
numExtentWritesSinceLastFlush = 0
}
numExtentWritesSinceLastValidate++
if numExtentWritesPerValidate == numExtentWritesSinceLastValidate {
fSC.fileStresserValidate()
numExtentWritesSinceLastValidate = 0
}
atomic.AddUint64(&numExtentsWrittenInTotal, uint64(1))
}
// Do one final fileStresserFlush() call if necessary to flush final writes
if 0 < numExtentWritesSinceLastFlush {
fSC.fileStresserFlush()
}
// Do one final fileStresserValidate() call if necessary to validate final writes
if 0 < numExtentWritesSinceLastValidate {
fSC.fileStresserValidate()
}
// Clean up and exit
err = fSC.file.Close()
if nil != err {
log.Fatal(err)
}
err = os.Remove(fSC.filePath)
if nil != err {
log.Fatal(err)
}
waitGroup.Done()
}
func (fSC *fileStresserContext) fileStresserWriteAt(off int64, l int64, b byte) {
buf := make([]byte, l)
for i := int64(0); i < l; i++ {
buf[i] = b
fSC.written[off+i] = b
}
_, err := fSC.file.WriteAt(buf, off)
if nil != err {
log.Fatal(err)
}
}
func (fSC *fileStresserContext) fileStresserFlush() {
err := fSC.file.Sync()
if nil != err {
log.Fatal(err)
}
}
func (fSC *fileStresserContext) fileStresserValidate() {
buf := make([]byte, fileSize)
_, err := fSC.file.ReadAt(buf, int64(0))
if nil != err {
log.Fatal(err)
}
if 0 != bytes.Compare(fSC.written, buf) {
log.Fatalf("Miscompare in filePath %s\n", fSC.filePath)
}
}
|
/**
* This is a helper method that reads a JSON token using a JsonParser
* instance, and throws an exception if the next token is not END_OBJECT.
*
* @param jsonParser The JsonParser instance to be used
* @param parentFieldName The name of the field
* @throws IOException
*/
public static void readEndObjectToken(JsonParser jsonParser,
String parentFieldName)
throws IOException {
readToken(jsonParser, parentFieldName, JsonToken.END_OBJECT);
} |
async def connect(self, self_mute: bool = False, self_deaf: bool = True) -> None:
await self.guild.shard.voice_connect(self.guild_id, self.id, self_mute, self_deaf) |
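A hypothetical usage sketch of the coroutine above (the surrounding library's object model, such as how a channel is obtained, is an assumption and not shown in the source):

# Hypothetical usage: join a guild's first voice channel, self-deafened.
# `guild.voice_channels` is an assumption about the surrounding library.
async def join_first_voice_channel(guild):
    channel = guild.voice_channels[0]
    await channel.connect(self_mute=False, self_deaf=True)
|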
/* IsPartNested: Test whether smaller partition is nested in larger partition */
int IsPartNested (BitsLong *smaller, BitsLong *larger, int length)
{
int i;
for (i=0; i<length; i++)
if ((smaller[i] | larger[i]) != larger[i])
break;
if (i == length)
return YES;
else
return NO;
} |
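The same subset test, sketched in Python for clarity: a partition `smaller` is nested in `larger` exactly when OR-ing the two bit masks leaves `larger` unchanged (illustrative sketch, not part of the original codebase):

# `smaller` is nested in `larger` iff every bit set in smaller is also set
# in larger, i.e. the bitwise OR changes nothing.
def is_part_nested(smaller, larger):
    return all((s | l) == l for s, l in zip(smaller, larger))

assert is_part_nested([0b0011], [0b0111]) is True
assert is_part_nested([0b1001], [0b0111]) is False
|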
import { ReactElement, SVGProps } from "react";
export function TwitterShare(props: SVGProps<SVGSVGElement>): ReactElement {
return (
<svg
viewBox="0 0 112.197 112.197"
focusable="false"
aria-hidden="true"
{...props}
>
<circle cx="56.099" cy="56.098" r="56.098" fill="#55acee"></circle>
<path
fill="#f1f2f2"
d="M90.461 40.316a26.753 26.753 0 01-7.702 2.109 13.445 13.445 0 005.897-7.417 26.843 26.843 0 01-8.515 3.253 13.396 13.396 0 00-9.79-4.233c-7.404 0-13.409 6.005-13.409 13.409 0 1.051.119 2.074.349 3.056-11.144-.559-21.025-5.897-27.639-14.012a13.351 13.351 0 00-1.816 6.742c0 4.651 2.369 8.757 5.965 11.161a13.314 13.314 0 01-6.073-1.679l-.001.17c0 6.497 4.624 11.916 10.757 13.147a13.362 13.362 0 01-3.532.471c-.866 0-1.705-.083-2.523-.239 1.706 5.326 6.657 9.203 12.526 9.312a26.904 26.904 0 01-16.655 5.74c-1.08 0-2.15-.063-3.197-.188a37.929 37.929 0 0020.553 6.025c24.664 0 38.152-20.432 38.152-38.153 0-.581-.013-1.16-.039-1.734a27.192 27.192 0 006.692-6.94z"
></path>
</svg>
);
}
|
/**
* Writes out the text representation of an integer using base 10 to an
* OutputStream in UTF-8 encoding.
* <p/>
* Note: division by a constant (like 10) is much faster than division by a
* variable. That's one of the reasons that we don't make radix a parameter
* here.
*
* @param out the outputstream to write to
 * @param i a long to write out
* @throws java.io.IOException
*/
public static void writeUTF8(OutputStream out, long i) throws IOException {
if (i == 0) {
out.write('0');
return;
}
    boolean negative = i < 0;
    if (negative) {
      out.write('-');
    } else {
      // Work in negative space: every positive long has a representable
      // negative counterpart, but -Long.MIN_VALUE would overflow.
      i = -i;
    }
long start = 1000000000000000000L;
while (i / start == 0) {
start /= 10;
}
while (start > 0) {
out.write('0' - (int) ((i / start) % 10));
start /= 10;
}
} |
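Why the method negates positive inputs rather than taking the absolute value of negative ones: in two's-complement 64-bit arithmetic, -Long.MIN_VALUE is not representable, while every positive long has a representable negative counterpart. A minimal Python check of that edge case (Python integers are unbounded, so 64-bit wrapping is modeled explicitly):

# Model Java's wrapping 64-bit arithmetic and show that negating LONG_MIN
# wraps back to LONG_MIN itself, which is the overflow the Java code avoids.
def to_int64(x):
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

LONG_MIN = -(1 << 63)
assert to_int64(-LONG_MIN) == LONG_MIN
|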
/**
* Created by Andreas "denDAY" Stensig on 20-Sep-16.
*/
public class Lottery {
private Object id;
private long created; /* In Unix time. */
private double pricePerLotteryNum;
private int lotteryNumLowerBound;
private int lotteryNumUpperBound;
private String name;
private boolean ticketMultiWinEnabled;
private TreeMap<Object, Prize> prizes;
private TreeMap<Object, Ticket> tickets;
public Lottery() {
tickets = new TreeMap<>();
prizes = new TreeMap<>();
}
public Object getId() {
return id;
}
public void setId(Object id) {
this.id = id;
}
public long getCreated() {
return created;
}
public void setCreated(long created) {
this.created = created;
}
public double getPricePerLotteryNum() {
return pricePerLotteryNum;
}
public void setPricePerLotteryNum(double pricePerLotteryNum) {
this.pricePerLotteryNum = pricePerLotteryNum;
}
public int getLotteryNumLowerBound() {
return lotteryNumLowerBound;
}
public void setLotteryNumLowerBound(int lotteryNumLowerBound) {
if (lotteryNumLowerBound < 1)
            throw new InvalidParameterException("Lower number boundary must be at least 1.");
this.lotteryNumLowerBound = lotteryNumLowerBound;
}
public int getLotteryNumUpperBound() {
return lotteryNumUpperBound;
}
public void setLotteryNumUpperBound(int lotteryNumUpperBound) {
this.lotteryNumUpperBound = lotteryNumUpperBound;
}
public TreeMap<Object, Prize> getPrizes() {
return prizes;
}
public void setPrizes(TreeMap<Object, Prize> prizes) {
this.prizes = prizes;
}
public void addPrize(Prize prize) {
if (prize == null) {
return;
}
if (prize.getId() == null) {
throw new IllegalArgumentException("Prize ID is null.");
}
prizes.put(prize.getId(), prize);
}
public void removePrize(String prizeId) {
if (prizeId == null) {
return;
}
Prize prize = prizes.get(prizeId);
if (prize == null) {
return;
}
// In case the prize has been assigned to a lottery number, find the number and remove its reference to the prize
if (prize.getNumberId() != null) {
for (Object ticketId : tickets.keySet()) {
Ticket ticket = tickets.get(ticketId);
for (LotteryNumber number : ticket.getLotteryNumbers()) {
if (number.getId().equals(prize.getNumberId())) {
number.setWinningPrize(null);
prizes.remove(prizeId);
ApplicationDomain.getInstance().broadcastChange(DataAccessEvent.LOTTERY_NUMBER_UPDATE);
return;
}
}
}
} else {
prizes.remove(prizeId);
}
}
public void changePrize(Prize newPrize) {
if (newPrize == null) {
return;
}
if (newPrize.getId() == null) {
throw new IllegalArgumentException("New-Prize ID is null.");
}
if (prizes.containsKey(newPrize.getId())) {
prizes.get(newPrize.getId()).copy(newPrize);
} else {
prizes.put(newPrize.getId(), newPrize);
}
}
public TreeMap<Object, Ticket> getTickets() {
return tickets;
}
public void setTickets(TreeMap<Object, Ticket> tickets) {
this.tickets = tickets;
}
public void addTicket(Ticket ticket) {
if (ticket == null) {
return;
}
if (ticket.getId() == null) {
throw new IllegalArgumentException("Ticket ID is null.");
}
tickets.put(ticket.getId(), ticket);
}
    public void removeTicket(String ticketId) {
        if (ticketId == null) {
            return;
        }
        tickets.remove(ticketId);
    }
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append(String.format("Lottery ID %s", (String)id));
sb.append(String.format(", created %d", created));
sb.append(String.format(", price pr. num %.2f", pricePerLotteryNum));
sb.append(String.format(", num bounds {low, high} {%d, %d}", lotteryNumLowerBound, lotteryNumUpperBound));
if (tickets != null) {
sb.append(String.format(", #tickets %d", tickets.size()));
int lotteryNumberCount = 0;
for (Object key : tickets.keySet()) {
Ticket ticket = tickets.get(key);
lotteryNumberCount += ticket.getLotteryNumbers().size();
}
sb.append(String.format(", #lotteryNumbers %d" , lotteryNumberCount));
} else {
sb.append(", #tickets 0, #lotteryNumbers 0");
}
sb.append(String.format(", #prizes %d", prizes != null ? prizes.size() : 0));
return sb.toString();
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public boolean isTicketMultiWinEnabled() {
return ticketMultiWinEnabled;
}
public void setTicketMultiWinEnabled(boolean ticketMultiWinEnabled) {
this.ticketMultiWinEnabled = ticketMultiWinEnabled;
}
} |
Heavy Tank T34: Commanding Respect
The Heavy Tank T34 is another tank with the designation 34, but now this title conceals not a nimble Soviet medium tank, but a real American “heavyweight.” It features a massive gun, thick armor and almost an entire division of American tank crewmen on board. Get ready for a new unit that requires almost no introduction: the American Heavy Tank T34!
The Heavy Tank T34 was a later attempt by the US Ordnance Department to enhance the firepower of the Project T29 tanks. In the spring of 1945, two prototype T30 tanks (a T29 variant with a 155mm gun and a more powerful engine) were rearmed with 120mm guns and received the designation T34. The T53 120mm gun was essentially a version of the M1 anti-aircraft gun adapted for installation in a tank, and it was the most powerful of the three weapons mounted on the Project T29 tanks. Because of problems with high levels of propellant fumes in the crew compartment after firing, and the risk of flashback when the breech was opened, a bore evacuator was eventually fitted to the barrel of the T53 cannon to expel lingering gases.
After the war ended, the pressing need for such heavy vehicles declined, and in 1950 the entire Project T29 was permanently scrapped.
Many players have already acquainted themselves with the outstanding combat capabilities of the American T29, but until recently it simply had no decent alternative in the main tech tree. Now there is one: the Heavy Tank T34, a tank that announces itself with the peal of thunderous gunfire, the thunder of war.
With a more powerful 120mm gun, a new 810-horsepower engine, and an additional 100 millimeters of armored plating welded to the rear of the turret, this tank is actually slightly superior to the deadly T29 in terms of its combat capabilities.
This massive steel giant will soon be available at rank IV of the American heavy tank tech tree, and it will undoubtedly become one of the most sought-after new units in patch 1.67.
|
/**
* \brief Calibrate for too-slow or too-fast oscillator.
*
* When used, the RTC will compensate for an inaccurate oscillator. The
* RTC module will add or subtract cycles from the RTC prescaler to adjust the
* frequency in approximately 1 PPM steps. The provided correction value should
* be between -127 and 127, allowing for a maximum 127 PPM correction in either
* direction.
*
* If no correction is needed, set value to zero.
*
* \note Can only be used when the RTC is operated at 1Hz.
*
* \param[in, out] module Pointer to the software instance struct
* \param[in] value Between -127 and 127 used for the correction.
*
* \return Status of the calibration procedure.
* \retval STATUS_OK If calibration was done correctly.
* \retval STATUS_ERR_INVALID_ARG If invalid argument(s) were provided.
*/
enum status_code rtc_calendar_frequency_correction(
struct rtc_module *const module,
const int8_t value)
{
Assert(module);
Assert(module->hw);
Rtc *const rtc_module = module->hw;
if (abs(value) > 0x7F) {
return STATUS_ERR_INVALID_ARG;
}
uint32_t new_correction_value;
new_correction_value = abs(value);
if (value < 0) {
new_correction_value |= RTC_FREQCORR_SIGN;
}
while (rtc_calendar_is_syncing(module)) {
}
rtc_module->MODE2.FREQCORR.reg = new_correction_value;
return STATUS_OK;
} |
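A minimal sketch of the sign-and-magnitude encoding the function writes into FREQCORR (the sign flag is assumed to live at bit 7, mirroring RTC_FREQCORR_SIGN; consult the device datasheet for the authoritative register layout):

# Sign-and-magnitude encoding used above: 7-bit magnitude plus a sign flag.
# The bit position of the sign flag is an assumption for illustration.
FREQCORR_SIGN = 0x80

def freqcorr_encode(ppm_correction):
    """Encode a correction in [-127, 127] as sign + 7-bit magnitude."""
    if abs(ppm_correction) > 0x7F:
        raise ValueError("correction out of range")
    return abs(ppm_correction) | (FREQCORR_SIGN if ppm_correction < 0 else 0)

assert freqcorr_encode(10) == 0x0A
assert freqcorr_encode(-10) == 0x8A
|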
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { apiUrl } from '../../data/apiURL';
@Injectable()
export class SavingsProvider {
apiUrl = `${apiUrl}wallets/savings`;
authHeader = {
headers: {
'Authorization': `Token ${localStorage.token}`
}
}
constructor(public http: HttpClient) {
console.log('Hello SavingsProvider Provider');
}
getE() {
var name = 'emergency fund';
return new Promise(resolve => {
this.http.get(`${this.apiUrl}/user/${localStorage.userId}?name=${name}`, this.authHeader)
.subscribe(res => {
resolve(res);
});
});
}
getWallets() {
return new Promise(resolve => {
this.http.get(`${this.apiUrl}/user/${localStorage.userId}`, this.authHeader)
.subscribe(res => {
resolve(res);
});
});
}
addWallet(data) {
return new Promise((resolve, reject) => {
this.http.post(this.apiUrl, data, this.authHeader)
.subscribe(res => {
resolve(res);
}, err => {
reject(err);
});
});
}
updateWallet(data) {
console.log(data.id);
return new Promise((resolve, reject) => {
this.http.put(`${this.apiUrl}/${data.id}`, data, this.authHeader)
.subscribe(res => {
resolve(res);
}, err => {
reject(err);
});
});
}
deleteWallet(id) {
return new Promise((resolve, reject) => {
this.http.delete(`${this.apiUrl}/${id}`, this.authHeader)
.subscribe(res => {
resolve(res);
}, err => {
reject(err);
});
});
}
}
|
/**
* Uniquely identifies a particular cargo. Automatically generated by the application.
*/
@Entity
@Table(name = "TRACKINGID")
public class TrackingID extends AbstractDomainObject {
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "")
private String id;
public TrackingID() {
}
public String getId() {
return id;
}
/**
* The id is not intended to be changed or assigned manually, but
* for test purpose it is allowed to assign the id.
*/
protected void setId(String id) {
if ((this.id != null) && !this.id.equals(id)) {
throw new IllegalArgumentException("Not allowed to change the id property.");
}
this.id = id;
}
/**
* This method is used by equals and hashCode.
* @return {@link #getId}
*/
public Object getKey() {
return getId();
}
} |
Cantillon Fou’Foune is one of the more sought-after beers from the Brussels brewery, which is quite the feat when you consider that Cantillon is one of the most sought-after Lambic breweries in the world. Only 3,000 liters are brewed every year, made from 1,200 kg of apricots. To my knowledge, Fou’Foune is the only apricot Lambic being made.
As the story goes, Jean Van Roy was visiting a winemaker friend in the French Rhône and sharing dinner and wine with several friends, including an apricot grower affectionately nicknamed “Foufoune.” During the dinner, the grower lovingly rhapsodized about his Bergeron apricots. Jean made an offhand comment about brewing him a beer, which was quickly forgotten over the course of the evening.
François “Foufoune” Daronnat, however, didn’t forget. One day not long after that fortuitous dinner, Van Roy arrived at the brewery to find 300 kg of ripe Bergeron apricots waiting. And what do you know, the beer turned out fantastic. Every year Cantillon now uses 1,200 kg of Bergeron apricots to create 3,000 liters (about 792 gallons) of beer. The apricots are hand-stoned and then placed into two-year-old Lambic, where the wild yeast eagerly chews through the new sugar source. After two more months, the beer is bottled. The majority of the production is sent back to the region the apricots come from.
On a side note, our French apricot-growing friend has an interesting nickname. According to reliable sources, Foufoune is French slang for part of the female anatomy. Look it up on Google Translate…
Appearance: Cloudy blonde with peachy highlights, off white head, great retention.
Aroma: Funky, lemon, peaches, apricots, sweaty notes, grapefruit, almonds.
Taste: Lemon peel, stone pits, grapefruit rind, grapefruit juice.
Overall Impression: Nicely tart but with a creamy mouthfeel, Fou’Foune is another exceptional Lambic from Cantillon. As a fan of apricots, I really liked this beer. They provide a nice complement and contrast to the Cantillon Lambic. Balanced and flavorful, this is an outstanding beer.
Availability: Extremely limited and hunted. Imported by Shelton Brothers.
5% ABV
Note: Special Thanks to Kevin of Belgian Beer Geek for this bottle!
|
// SqlDb is a middlware for creating a unique session for each incomming request
func SqlDb(databaseDriver string, dsn string) Middleware {
return func(h http.Handler) http.Handler {
return http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
db, err := Open(databaseDriver, dsn)
if err != nil {
r = r.WithContext(context.WithValue(r.Context(), SqlDbContextKey, &DatabaseError{err.Error()}))
} else {
defer Close(db)
r = r.WithContext(context.WithValue(r.Context(), SqlDbContextKey, db))
}
h.ServeHTTP(rw, r)
})
}
} |
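The same per-request resource pattern, sketched as a WSGI middleware in Python (a hypothetical translation: the Go original stores either the handle or a database error in the request context, and closes the handle after the wrapped handler runs):

# Hypothetical Python rendering of the middleware above: open a connection
# per request, expose it to the handler, and always close it afterwards.
import sqlite3

def sql_db_middleware(app, dsn):
    def wrapped(environ, start_response):
        conn = sqlite3.connect(dsn)
        environ["sql.db"] = conn  # hypothetical context key
        try:
            return app(environ, start_response)
        finally:
            conn.close()
    return wrapped
|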