Big Picture Update: Birthday Edition
Greetings Contestants!
Earlier this month we reached a big milestone: The Culling has been in Early Access for a full year! It’s amazing to see how far we’ve come in the last twelve months. With endless hard work from our talented team and unwavering dedication from the community, The Culling has really grown up. Mom would be so proud! Even after 365 days of murder and mayhem, our passion for The Culling hasn’t dulled in the slightest, and we’re thrilled to share a few of our plans for year two.
Without further Mountain Dew...
Xbox One
Perhaps the biggest item on the list for 2017 is the Xbox One launch.
Bringing The Culling to a new platform means a whole new audience will get their hands on the game, and we couldn’t be more excited to see people bring BR into the LR. That’s Battle Royale into the living room.
We’ve dabbed. We’re hip.
Although UE4 makes it relatively easy for us to add a platform, it’s true that some of our attention will be divided moving forward. With that said, we remain dedicated to the PC and continuing to make smart development decisions that benefit the game’s global audience, ensuring that the Xaviant team can continue lavishing the community with updates for a long time to come across both platforms.
At this moment we’re still weighing whether to support cross-platform play due to the implications of competing with a controller vs. keyboard/mouse, but full controller support is a feature we know many PC players will be happy to receive. As you can imagine, making the game feel good on a controller is more than just mapping buttons and we think we have a handle on how to let players enjoy both ranged and melee combat using a gamepad.
Invert-able thumbsticks? Check. Dual-zone look sensitivity? Check. Aiming sensitivity and rotation assist? Check. Selectable control scheme presets? Check.
Combat
We have a few important changes coming to combat - and they’re coming with the next patch.
The first change addresses the weakness/feeble wounds. After observing the latest combat iteration, we’ve found that skilled players are able to consistently avoid taking damage after receiving a weakness wound, which means blocking still doesn’t offer a significant advantage in combat. We’re upping the intensity of the micro-stagger that accompanies weakness, so that blocking a player’s attack creates a true opportunity to capitalize.
The second change is slightly more intense (and still subject to change based on internal play testing before we put it out in the wild).
At the upper levels of competition, players are taking advantage of the fact that being in a fully charged attack state is tremendously powerful, leading to stalemate scenarios (i.e. one player holds charged attack while the other player holds block) that aren’t fun to play or spectate.
TL;DR: Good players do this weird dance combat that’s as awkward as watching your dad try to both whip AND nae nae.
To address this, we’re no longer allowing players to hold a fully charged attack. Once a melee attack reaches full charge, it will execute automatically. Further, players will no longer be able to cancel an attack by blocking. While this technique can create some interesting moments, it doesn’t fit our overall vision of combat. Charged attacks should be very powerful for capitalizing on an opportunity (i.e. against a staggered or unaware opponent), however, they should also be risky under normal combat circumstances due to a likelihood of being blocked.
Finally, we are removing the non-obvious mini-stagger that occurs if you are struck by an attack while you are shoving. This was implemented some time ago to discourage players spamming shove, but we don’t think it has a meaningful place in the current combat system.
Balance
There are a few more balance changes that will be finding their way into the next patch:
Dig Deep will restore 10 HP (instead of 20) and now properly plays its SFX when it procs
Leg Day is being removed
Speedy Spear now improves movement speed by 5% (previously 7%)
Duration of the expose wound (imparted by Axes) is now 4 seconds (previously 6 seconds)
Economy and Progression
We have quite a few changes planned for the economy and match progression.
Early match airdrops are being removed (this will be included in the next patch)
Airdrop purchasing will be based on FUNC accumulated, not current FUNC balance, meaning you can spend FUNC on crafting and healing without impacting your ability to call your Airdrop
The Airdrop list will be revised heavily to include fun themed options as well as vanilla choices similar to the ones that currently exist
Mid- and Late-match events and airdrops will be triggered primarily by player attrition. Mid-match begins after 4 contestants die, Late-match begins after 8 contestants die. Timers will still be used as backups for slow-moving matches.
Tier 2 weapons no longer spawn in lockers; they can only be found in green crates.
We believe these changes will make Airdrops more accessible and desirable (regardless of the pace of the match) and also ensure that each match has two match events to spice things up.
Under the Hood
Although we’re continuing to work on shiny new features and balance work, there’s a lot going on that won’t be visible to the naked eye. We’re upgrading to the latest version of the Unreal Engine (4.15), which brings with it a host of optimizations that should improve load times, memory footprint, frame rate performance, and networking efficiency for many users.
We’ve also done quite a bit of targeted optimization of our client-side network behavior, resulting in a major reduction in the number and size of messages being sent to and from the game servers. This has allowed us to increase our server tick rate substantially, and should provide a better online experience for everyone.
Have you ever found yourself stuck in a block or taking an action you didn’t intend? We’ve rewritten our input system to be more robust and eliminate the types of bugs that occur when you switch weapons or change states during combat. Hopefully you won’t feel much of a difference, but you’ll find the controls to be consistently responsive and reliable, just as you’d expect.
New Features
The Big House update brought with it lots of new features, and throughout 2017 we hope to introduce many more. Two of the most critical are matchmaker filtering (allowing you to specify teams vs. solo and classic vs. lightning if you choose) and seasonal ranked play.
Matchmaker filtering will be coming with the next patch and ranked play will follow soon after the Xbox One launch. We are also excited to expand our private match feature set to allow more customization and better support of community tournaments.
We’re also excited to experiment with new game modes. We’re even working on a fast-paced offline horde survival mode that’s intended to let novice players hone their combat skills against waves of mandroids and provide veteran players with a challenging diversion.
As we knock out the big features on our list, we’re looking forward to engaging the community about what you’re hoping to see next.
Leaving Early Access
The Culling will be exiting the Early Access program in 2017. We don’t have a date to share just yet, but our goal has always been to officially launch the game when it’s ready. For us, Early Access isn’t a permanent label, it’s a phase of our project that we’re using to release The Culling in the best possible form.
That means that the coming months will see lots of polishing and bug fixing as we strive to attain the level of fit and finish we think the game deserves. To that end, we’re working to build a small in-house QA team to help us discover issues, vet community bug reports, and verify our fixes to provide our future updates with a higher level of quality and stability.
Once we exit Early Access we’ll continue to patch and update the game, but those updates will be geared towards adding content, features, and variety, rather than fixing bugs or iterating core features.
The community has been invaluable so far on our Early Access journey and we will continue to rely on your feedback and support as we move forward.
Live Streams
In the year that we’ve been in Early Access, we’ve seen the cullmunity create some incredible content. On platforms such as Twitch and YouTube, players have put together montages, hosted tournaments and have even held regular roundtable discussions centered entirely around the game.
Now it’s our turn to return the favor.
We’re excited to announce that starting next week we will be launching weekly official live streams. Viewers can expect not only to learn more about The Culling as a game, but also to get to know the developers behind it and the community that supports it.
We will have more information on this very shortly, so make sure you’re following us on Twitter, Facebook and Twitch for the latest!
Update Schedule
So when are all of these changes coming?
The next patch will include matchmaker filtering, combat changes, removal of early airdrops, and a few other balance changes. It should arrive next week (March 22nd). Look for official patch notes as that date nears.
There will likely be one more patch after that before the Xbox One launch. That patch will bring the live PC version up to date with all of the under the hood (Unreal Engine 4.15) and economy / Airdrop / match progression changes detailed above. We’re not ready to estimate a date for that yet.
We will patch the PC again to coincide with the Xbox One launch, and from there we expect ongoing updates every 4-6 weeks, with smaller hotfixes as necessary.
Conclusion
The Culling had an amazing first year. Our little team never imagined the response we would get when the game first launched, nor did we fully grasp the magnitude of the task that still lay ahead. This journey has been a wild ride, an unparalleled learning experience, and an absolute joy. You, the cullmunity, have made it possible. We thank you for your passion, your feedback, your kind words, and your uncanny ability to smash, shoot, poison, and slash each other to death.
We’ll see you on the island!
import logging
import torch
import numpy as np
logger = logging.getLogger(__name__)
def recover_metric_depth(pred, gt):
    if type(pred).__module__ == torch.__name__:
        pred = pred.cpu().numpy()
    if type(gt).__module__ == torch.__name__:
        gt = gt.cpu().numpy()
    gt_mean = np.mean(gt)
    gt_var = np.var(gt)
    pred_mean = np.mean(pred)
    pred_var = np.var(pred)
    # pred_metric = ((pred - pred_mean) / pred_var) * gt_var + gt_mean
    pred_metric = pred * (gt_mean / pred_mean)
    return pred_metric
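# Illustrative sketch (not part of the original module) of the mean-ratio scale
# alignment performed by recover_metric_depth above: a prediction that differs
# from the ground truth only by a global scale factor is mapped exactly onto it.
# The data below is made up for demonstration.
def _demo_recover_metric_depth():
    import numpy as np
    gt = np.array([1.0, 2.0, 3.0, 4.0])
    pred = 0.5 * gt  # same structure as gt, but at half the metric scale
    aligned = pred * (np.mean(gt) / np.mean(pred))
    return np.allclose(aligned, gt)  # only the global scale differed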
def validate_rel_depth_err(pred, gt, smoothed_criteria, mask=None, scale=10.):
    if type(pred).__module__ == torch.__name__:
        pred = pred.cpu().numpy()
    if type(gt).__module__ == torch.__name__:
        gt = gt.cpu().numpy()
    gt = np.squeeze(gt)
    pred = np.squeeze(pred)
    if mask is not None:
        gt = gt[mask[0]:mask[1], mask[2]:mask[3]]
        pred = pred[mask[0]:mask[1], mask[2]:mask[3]]
    if pred.shape != gt.shape:
        logger.info('The shapes of pred and gt are not the same!')
        return -1
    mask2 = gt > 0
    gt = gt[mask2]
    pred = pred[mask2]
    # Scale matching
    pred = recover_metric_depth(pred, gt)
    n_pxl = gt.size
    gt_scale = gt * scale
    pred_scale = pred * scale
    # Mean Absolute Relative Error
    rel = np.abs(gt_scale - pred_scale) / gt_scale  # compute errors
    abs_rel_sum = np.sum(rel)
    smoothed_criteria['err_absRel'].AddValue(np.float64(abs_rel_sum), n_pxl)
    # WHDR error
    whdr_err_sum, eval_num = weighted_human_disagreement_rate(gt_scale, pred_scale)
    smoothed_criteria['err_whdr'].AddValue(np.float64(whdr_err_sum), eval_num)
    return smoothed_criteria
def validate_err(pred, gt, smoothed_criteria, mask=None, scale=10.):
    if type(pred).__module__ == torch.__name__:
        pred = pred.cpu().numpy()
    if type(gt).__module__ == torch.__name__:
        gt = gt.cpu().numpy()
    gt = np.squeeze(gt)
    pred = np.squeeze(pred)
    if mask is not None:
        gt = gt[mask[0]:mask[1], mask[2]:mask[3]]
        pred = pred[mask[0]:mask[1], mask[2]:mask[3]]
    if pred.shape != gt.shape:
        logger.info('The shapes of pred and gt are not the same!')
        return -1
    mask2 = gt > 0
    gt = gt[mask2]
    pred = pred[mask2]
    n_pxl = gt.size
    gt_scale = gt * scale
    pred_scale = pred * scale
    # Mean Absolute Relative Error
    rel = np.abs(gt_scale - pred_scale) / gt_scale  # compute errors
    abs_rel_sum = np.sum(rel)
    smoothed_criteria['err_absRel'].AddValue(np.float64(abs_rel_sum), n_pxl)
    return smoothed_criteria
def validate_err_kitti(pred, gt, smoothed_criteria, mask=None, scale=256. * 80.):
    if type(pred).__module__ == torch.__name__:
        pred = pred.cpu().numpy()
    if type(gt).__module__ == torch.__name__:
        gt = gt.cpu().numpy()
    gt = np.squeeze(gt)
    pred = np.squeeze(pred)
    if mask is not None:
        gt = gt[mask[0]:mask[1], mask[2]:mask[3]]
        pred = pred[mask[0]:mask[1], mask[2]:mask[3]]
    if pred.shape != gt.shape:
        logger.info('The shapes of pred and gt are not the same!')
        return -1
    mask2 = gt > 0
    gt = gt[mask2]
    pred = pred[mask2]
    n_pxl = gt.size
    gt_scale = gt * scale
    pred_scale = pred * scale
    # Mean Absolute Relative Error
    rel = np.abs(gt_scale - pred_scale) / gt_scale  # compute errors
    abs_rel_sum = np.sum(rel)
    smoothed_criteria['err_absRel'].AddValue(np.float64(abs_rel_sum), n_pxl)
    # Scale invariant error; silog is an evaluation metric of the KITTI benchmark
    diff_log = np.log(pred_scale) - np.log(gt_scale)
    diff_log_sum = np.sum(diff_log)
    smoothed_criteria['err_silog'].AddValue(np.float64(diff_log_sum), n_pxl)
    diff_log_2 = diff_log ** 2
    diff_log_2_sum = np.sum(diff_log_2)
    smoothed_criteria['err_silog2'].AddValue(np.float64(diff_log_2_sum), n_pxl)
    return smoothed_criteria
def evaluate_err(pred, gt, smoothed_criteria, mask=None, scale=10.0):
    if type(pred).__module__ != np.__name__:
        pred = pred.cpu().numpy()
    if type(gt).__module__ != np.__name__:
        gt = gt.cpu().numpy()
    pred = np.squeeze(pred)
    gt = np.squeeze(gt)
    if mask is not None:
        gt = gt[mask[0]:mask[1], mask[2]:mask[3]]
        pred = pred[mask[0]:mask[1], mask[2]:mask[3]]
    if pred.shape != gt.shape:
        logger.info('The shapes of pred and gt are not the same!')
        return -1
    mask2 = gt > 0
    gt = gt[mask2]
    pred = pred[mask2]
    n_pxl = gt.size
    gt_scale = gt * scale
    pred_scale = pred * scale
    # Mean Absolute Relative Error
    rel = np.abs(gt - pred) / gt  # compute errors
    abs_rel_sum = np.sum(rel)
    smoothed_criteria['err_absRel'].AddValue(np.float64(abs_rel_sum), n_pxl)
    # Square Mean Relative Error
    s_rel = ((gt_scale - pred_scale) * (gt_scale - pred_scale)) / (gt_scale * gt_scale)  # compute errors
    squa_rel_sum = np.sum(s_rel)
    smoothed_criteria['err_squaRel'].AddValue(np.float64(squa_rel_sum), n_pxl)
    # Root Mean Square error
    square = (gt_scale - pred_scale) ** 2
    rms_squa_sum = np.sum(square)
    smoothed_criteria['err_rms'].AddValue(np.float64(rms_squa_sum), n_pxl)
    # Log Root Mean Square error
    log_square = (np.log(gt_scale) - np.log(pred_scale)) ** 2
    log_rms_sum = np.sum(log_square)
    smoothed_criteria['err_logRms'].AddValue(np.float64(log_rms_sum), n_pxl)
    # Scale invariant error
    diff_log = np.log(pred_scale) - np.log(gt_scale)
    diff_log_sum = np.sum(diff_log)
    smoothed_criteria['err_silog'].AddValue(np.float64(diff_log_sum), n_pxl)
    diff_log_2 = diff_log ** 2
    diff_log_2_sum = np.sum(diff_log_2)
    smoothed_criteria['err_silog2'].AddValue(np.float64(diff_log_2_sum), n_pxl)
    # Mean log10 error
    log10_sum = np.sum(np.abs(np.log10(gt) - np.log10(pred)))
    smoothed_criteria['err_log10'].AddValue(np.float64(log10_sum), n_pxl)
    # Delta thresholds
    gt_pred = gt_scale / pred_scale
    pred_gt = pred_scale / gt_scale
    gt_pred = np.reshape(gt_pred, (1, -1))
    pred_gt = np.reshape(pred_gt, (1, -1))
    gt_pred_gt = np.concatenate((gt_pred, pred_gt), axis=0)
    ratio_max = np.amax(gt_pred_gt, axis=0)
    delta_1_sum = np.sum(ratio_max < 1.25)
    smoothed_criteria['err_delta1'].AddValue(np.float64(delta_1_sum), n_pxl)
    delta_2_sum = np.sum(ratio_max < 1.25 ** 2)
    smoothed_criteria['err_delta2'].AddValue(np.float64(delta_2_sum), n_pxl)
    delta_3_sum = np.sum(ratio_max < 1.25 ** 3)
    smoothed_criteria['err_delta3'].AddValue(np.float64(delta_3_sum), n_pxl)
    # WHDR error
    whdr_err_sum, eval_num = weighted_human_disagreement_rate(gt_scale, pred_scale)
    smoothed_criteria['err_whdr'].AddValue(np.float64(whdr_err_sum), eval_num)
    return smoothed_criteria
def evaluate_rel_err(pred, gt, smoothed_criteria, mask=None, scale=10.0):
    if type(pred).__module__ != np.__name__:
        pred = pred.cpu().numpy()
    if type(gt).__module__ != np.__name__:
        gt = gt.cpu().numpy()
    pred = np.squeeze(pred)
    gt = np.squeeze(gt)
    if mask is not None:
        gt = gt[mask[0]:mask[1], mask[2]:mask[3]]
        pred = pred[mask[0]:mask[1], mask[2]:mask[3]]
    if pred.shape != gt.shape:
        logger.info('The shapes of pred and gt are not the same!')
        return -1
    mask2 = gt > 0
    gt = gt[mask2]
    pred = pred[mask2]
    n_pxl = gt.size
    gt_scale = gt * scale
    pred_scale = pred * scale
    # Mean Absolute Relative Error
    rel = np.abs(gt - pred) / gt  # compute errors
    abs_rel_sum = np.sum(rel)
    smoothed_criteria['err_absRel'].AddValue(np.float64(abs_rel_sum), n_pxl)
    # Square Mean Relative Error
    s_rel = ((gt_scale - pred_scale) * (gt_scale - pred_scale)) / (gt_scale * gt_scale)  # compute errors
    squa_rel_sum = np.sum(s_rel)
    smoothed_criteria['err_squaRel'].AddValue(np.float64(squa_rel_sum), n_pxl)
    # Root Mean Square error
    square = (gt_scale - pred_scale) ** 2
    rms_squa_sum = np.sum(square)
    smoothed_criteria['err_rms'].AddValue(np.float64(rms_squa_sum), n_pxl)
    # Log Root Mean Square error
    log_square = (np.log(gt_scale) - np.log(pred_scale)) ** 2
    log_rms_sum = np.sum(log_square)
    smoothed_criteria['err_logRms'].AddValue(np.float64(log_rms_sum), n_pxl)
    # Scale invariant error
    diff_log = np.log(pred_scale) - np.log(gt_scale)
    diff_log_sum = np.sum(diff_log)
    smoothed_criteria['err_silog'].AddValue(np.float64(diff_log_sum), n_pxl)
    diff_log_2 = diff_log ** 2
    diff_log_2_sum = np.sum(diff_log_2)
    smoothed_criteria['err_silog2'].AddValue(np.float64(diff_log_2_sum), n_pxl)
    # Mean log10 error
    log10_sum = np.sum(np.abs(np.log10(gt) - np.log10(pred)))
    smoothed_criteria['err_log10'].AddValue(np.float64(log10_sum), n_pxl)
    # Delta thresholds
    gt_pred = gt_scale / pred_scale
    pred_gt = pred_scale / gt_scale
    gt_pred = np.reshape(gt_pred, (1, -1))
    pred_gt = np.reshape(pred_gt, (1, -1))
    gt_pred_gt = np.concatenate((gt_pred, pred_gt), axis=0)
    ratio_max = np.amax(gt_pred_gt, axis=0)
    delta_1_sum = np.sum(ratio_max < 1.25)
    smoothed_criteria['err_delta1'].AddValue(np.float64(delta_1_sum), n_pxl)
    delta_2_sum = np.sum(ratio_max < 1.25 ** 2)
    smoothed_criteria['err_delta2'].AddValue(np.float64(delta_2_sum), n_pxl)
    delta_3_sum = np.sum(ratio_max < 1.25 ** 3)
    smoothed_criteria['err_delta3'].AddValue(np.float64(delta_3_sum), n_pxl)
    # WHDR error
    whdr_err_sum, eval_num = weighted_human_disagreement_rate(gt_scale, pred_scale)
    smoothed_criteria['err_whdr'].AddValue(np.float64(whdr_err_sum), eval_num)
    return smoothed_criteria
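# Illustrative sketch (not part of the original module) of the delta-threshold
# accuracy computed above: delta1 is the fraction of pixels whose prediction is
# within a factor of 1.25 of the ground truth, in either direction. The arrays
# below are made up for demonstration.
def _demo_delta_accuracy():
    import numpy as np
    gt = np.array([1.0, 2.0, 4.0])
    pred = np.array([1.1, 2.0, 8.0])
    ratio_max = np.maximum(gt / pred, pred / gt)  # per-pixel max of the two ratios
    return float(np.mean(ratio_max < 1.25))  # the third pixel is off by 2x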
def weighted_human_disagreement_rate(gt, pred):
    p12_index = select_index(gt.size)
    gt_reshape = np.reshape(gt, gt.size)
    pred_reshape = np.reshape(pred, pred.size)
    gt_p1 = gt_reshape[p12_index['p1']]
    gt_p2 = gt_reshape[p12_index['p2']]
    pred_p1 = pred_reshape[p12_index['p1']]
    pred_p2 = pred_reshape[p12_index['p2']]
    # Avoid division by zero in the depth ratios.
    gt_p2[gt_p2 == 0.] = 0.00001
    pred_p2[pred_p2 == 0.] = 0.00001
    gt_p12 = gt_p1 / gt_p2
    pred_p12 = pred_p1 / pred_p2
    l12_gt = np.zeros_like(gt_p12)
    l12_gt[gt_p12 > 1.02] = 1
    l12_gt[gt_p12 < 0.98] = -1
    l12_pred = np.zeros_like(pred_p12)
    l12_pred[pred_p12 > 1.02] = 1
    l12_pred[pred_p12 < 0.98] = -1
    err = np.sum(l12_gt != l12_pred)
    valid_pixels = gt_p1.size
    return err, valid_pixels
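# Standalone sketch (not part of the original module) of the ordinal labelling
# used by weighted_human_disagreement_rate above: each sampled pixel pair is
# labelled +1, 0, or -1 according to whether its depth ratio lies above, inside,
# or below the 2% tolerance band. The ratios below are made up for demonstration.
def _demo_ordinal_labels():
    import numpy as np
    p12 = np.array([1.5, 1.0, 0.5])  # farther, roughly equal, closer
    labels = np.zeros_like(p12)
    labels[p12 > 1.02] = 1
    labels[p12 < 0.98] = -1
    return labels.tolist()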
def select_index(img_size):
    p1 = np.random.choice(img_size, int(img_size * 0.6), replace=False)
    np.random.shuffle(p1)
    p2 = np.random.choice(img_size, int(img_size * 0.6), replace=False)
    np.random.shuffle(p2)
    # Drop pairs that sampled the same pixel twice.
    mask = p1 != p2
    p1 = p1[mask]
    p2 = p2[mask]
    p12_index = {'p1': p1, 'p2': p2}
    return p12_index
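# The validators above expect each smoothed_criteria entry to expose
# AddValue(total, count). This is a minimal stand-in useful for quick local
# tests; the project's real accumulator class lives elsewhere, and the name
# _RunningTotal and the average() read-out method here are assumptions.
class _RunningTotal:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def AddValue(self, value, count=1):
        # Accumulate a pre-summed error and the number of pixels it covers.
        self.total += float(value)
        self.count += int(count)

    def average(self):
        return self.total / max(self.count, 1)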
package com.swak.mongo.codec;

import static org.bson.assertions.Assertions.notNull;

import org.bson.Transformer;
import org.bson.codecs.BsonTypeClassMap;
import org.bson.codecs.Codec;
import org.bson.codecs.configuration.CodecProvider;
import org.bson.codecs.configuration.CodecRegistry;

public class DocumentCodecxProvider implements CodecProvider {

    private final BsonTypeClassMap bsonTypeClassMap;
    private final Transformer valueTransformer;

    /**
     * Construct a new instance with a default {@code BsonTypeClassMap}.
     */
    public DocumentCodecxProvider() {
        this(new BsonTypeClassMap());
    }

    /**
     * Construct a new instance with a default {@code BsonTypeClassMap} and the
     * given {@code Transformer}. The transformer is used by the DocumentCodec as a
     * last step when decoding values.
     *
     * @param valueTransformer the value transformer for decoded values
     * @see org.bson.codecs.DocumentCodec#DocumentCodec(org.bson.codecs.configuration.CodecRegistry,
     *      BsonTypeClassMap, org.bson.Transformer)
     */
    public DocumentCodecxProvider(final Transformer valueTransformer) {
        this(new BsonTypeClassMap(), valueTransformer);
    }

    /**
     * Construct a new instance with the given instance of {@code BsonTypeClassMap}.
     *
     * @param bsonTypeClassMap the non-null {@code BsonTypeClassMap} with which to
     *      construct instances of {@code DocumentCodec} and {@code ListCodec}
     */
    public DocumentCodecxProvider(final BsonTypeClassMap bsonTypeClassMap) {
        this(bsonTypeClassMap, null);
    }

    /**
     * Construct a new instance with the given instance of {@code BsonTypeClassMap}.
     *
     * @param bsonTypeClassMap the non-null {@code BsonTypeClassMap} with which to
     *      construct instances of {@code DocumentCodec} and {@code ListCodec}
     * @param valueTransformer the value transformer for decoded values
     */
    public DocumentCodecxProvider(final BsonTypeClassMap bsonTypeClassMap, final Transformer valueTransformer) {
        this.bsonTypeClassMap = notNull("bsonTypeClassMap", bsonTypeClassMap);
        this.valueTransformer = valueTransformer;
    }

    @Override
    @SuppressWarnings("unchecked")
    public <T> Codec<T> get(final Class<T> clazz, final CodecRegistry registry) {
        return (Codec<T>) new DocumentCodecx(registry, bsonTypeClassMap, valueTransformer);
    }

    @Override
    public boolean equals(final Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        DocumentCodecxProvider that = (DocumentCodecxProvider) o;
        if (!bsonTypeClassMap.equals(that.bsonTypeClassMap)) {
            return false;
        }
        if (valueTransformer != null ? !valueTransformer.equals(that.valueTransformer)
                : that.valueTransformer != null) {
            return false;
        }
        return true;
    }

    @Override
    public int hashCode() {
        int result = bsonTypeClassMap.hashCode();
        result = 31 * result + (valueTransformer != null ? valueTransformer.hashCode() : 0);
        return result;
    }
}
// src/FactoryImplementationBase.cpp
#include <stddef.h>  // for NULL

#include "FactoryImplementationBase.h"
#include "FactorySymbolTable.h"  // for FactorySymbolTable
#include "ObjectStub.h"          // for ObjectStub
#include "Types.h"               // for Kind::STUB, Kind::ANY

FactoryImplementationBase::FactoryImplementationBase(FactoryImplementationBase* parent) {
    parentFactory = parent;
    mySymbolTable = new FactorySymbolTable(true);
}

FactoryImplementationBase::~FactoryImplementationBase() {
    mySymbolTable->deleteAllSymbols();
    delete mySymbolTable;
}

bool
FactoryImplementationBase::add(ObjectStub* stub) {
    return mySymbolTable->addSymbol((Factory*)stub);
}

bool
FactoryImplementationBase::add(FactoryImplementationBase* subFactory) {
    return mySymbolTable->addSymbol((Factory*)subFactory);
}

ObjectStub*
FactoryImplementationBase::isObjectPresent(const string& objectName) {
    string* prefix = getPrefix(objectName);
    if (prefix->empty()) {
        // empty string - return NULL
        delete prefix;
        return NULL;
    }
    Factory* searchResult = NULL;
    if ((searchResult = mySymbolTable->searchSymbol(*prefix)) == NULL) {
        delete prefix;
        return NULL;
    }
    if (searchResult->getKind() == STUB) {
        delete prefix;
        return (ObjectStub*)searchResult;
    }
    delete prefix;  // prefix is no longer needed on this path
    string* suffix = getSuffix(objectName);
    ObjectStub* stub =
        ((FactoryImplementationBase*)searchResult)->isObjectPresent(*suffix);
    delete suffix;
    return stub;
}

ObjectStub*
FactoryImplementationBase::isObjectPresent(const Factory* objectClass) {
    return isObjectPresent(objectClass->getName());
}

FactoryImplementationBase*
FactoryImplementationBase::getParentFactory() const {
    return parentFactory;
}

FactoryImplementationBase*
FactoryImplementationBase::getMainFactory() const {
    FactoryImplementationBase* parent = parentFactory;
    while (parent->getParentFactory() != NULL) {
        parent = parent->getParentFactory();
    }
    return parent;
}

string
FactoryImplementationBase::listAll(const string& prefix) {
    Factory* base;
    string newPrefix = prefix + getName();
    string list;
    base = mySymbolTable->iterativeSearch(ANY);
    while (base != NULL) {
        if (base->getKind() == STUB) {
            list += newPrefix + "." + base->getName();
            list += " [" + ((ObjectStub*)base)->getInformation() + "]\n";
        } else {
            list += ((FactoryImplementationBase*)base)->listAll(newPrefix);
        }
        base = mySymbolTable->successor();
    }
    return list;
}

string*
FactoryImplementationBase::getPrefix(const string& objectName) {
    string* returnString = NULL;
    string::size_type position = objectName.find(".");
    if (position != string::npos) {
        // everything before the "." is the prefix
        returnString = new string(objectName.substr(0, position));
    } else {
        returnString = new string(objectName);
    }
    return returnString;
}

string*
FactoryImplementationBase::getSuffix(const string& objectName) {
    string* returnString = NULL;
    string::size_type position = objectName.find(".");
    if (position != string::npos) {
        // everything after the "." is the suffix
        returnString = new string(objectName.substr(position + 1));
    } else {
        returnString = new string(objectName);
    }
    return returnString;
}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE UnicodeSyntax #-}

module File.Util
  ( tagFile
  , rmLastTag
  ) where

import Data.Maybe (catMaybes)

import Debug (dbg)
import DB.Base
import DB.Read (tagEntityNamed)
import DB.Write (ensureFileTag, rmFileTag, rmFile)
import Parse

-- Move to DB.Write?
tagFile ∷ DB → FileId → [TagName] → IO ()
tagFile db fileId tagNames = do
  maybeTagEntities ← mapM (tagEntityNamed db) tagNames
  let tagIds = map tagId (catMaybes maybeTagEntities)
  -- 'ensureFileTag' because the file might already be tagged with some of these tags.
  mapM_ (ensureFileTag db fileId) tagIds

-- This is specifically for removeFile:
-- remove the FileTag for the last tagName of fromPath (if any);
-- if there are no tags, remove the file itself.
rmLastTag ∷ DB → FileId → FilePath → IO ()
rmLastTag db fileId filePath = do
  let (tagNames, _) = parseFilePath filePath
  if null tagNames
    then rmFile db fileId  -- removes all associated FileTags too
    else rmFileTag db fileId (last tagNames)
Zombie movies have been stuck in a bit of a rut for over a decade. Since 28 Days Later revived the genre back in 2002, there’s been a zombie outbreak in popular culture but not a whole lot of originality. If the early buzz is to be believed, The Girl with All the Gifts might be the fresh blood we’ve been looking for.
The story focuses on Melanie (Sennia Nanua), a 12-year-old girl being held in a mysterious warehouse with other children who are essentially part-zombies. In this case, they’re called “Hungrys.” Glenn Close plays a scientist who’s studying the Hungrys in hopes of finding a cure. When the facility is attacked by a mob of the infected, Close’s scientist and Melanie have to hit the road with a small team.
Almost across the board, critics are praising the film for its originality and craft. You can see a trailer above and the latest one embedded at the bottom. The movie hits DirecTV on January 26th and then rolls out in theaters on February 24th.
Here’s some of the early word:
Eric Kohn, Indiewire
A thrilling zombie movie with brains, “The Girl With All the Gifts” strengthens its traditional qualities with a greater element of surprise…
Chilean composer Cristobal Tapia de Veer’s pounding soundtrack creates a constant sense of uneasiness, while the film’s vibrant imagery pairs immersive master shots of the empty landscapes with telling closeups that hint at divided allegiances. In the spectacular finale, the movie takes on the haunting, expressionistic dread of a Bosch painting.
“The Girl With All the Gifts” really does offer up a fleshed-out world rich with eerie implications, saving the biggest one for the memorable finale. As Melanie grows more confident in her understanding of the threat around her, she begins to take control in ways her human peers can’t anticipate, and her defiance creates a complicated moral base for the story.
Helen O’Hara, Empire
It’s Nanua who proves the secret weapon. Her Melanie is bright and eager to learn, but more attentive and analytical than the adults sometimes realise. The combination of childlike delight in this brave new world and sometimes violent cunning is brilliantly balanced.
The best zombie-ish apocalypse in years. Sennia Nanua is a major discovery, but it’s the dense social commentary and moral dilemmas that will haunt you.
Joshua Rothkopf, Time Out
Easily the best thing to happen to the undead since 28 Days Later, Colm McCarthy’s The Girl with All the Gifts peps up its tired old zombie blood with fresh ideas, some unusually poetic imagery and a dark end-of-civilization aftertaste. Here the monsters are called “hungries”—jaw-snapping fast things that don’t feel the need to constantly moan or gasp. Most of the time, they’re standing completely still, as if asleep, waiting for the dinner call. It’s an eerie revision to the usual lumbering stroll.
Charles Gant, Screen Daily
One strong suit is the film’s richly imagined and complete universe, delivered on an expansive scale that continues to surprise – for example, when the camera is pulled skywards to offer a wide aerial overview of the first zombie attack. Combat sequences benefit from a bloody intensity that amplifies the dramatic stakes, while in an ambitiously realised final act, visual effects blend seamlessly with production design to suggest an abandoned London cityscape, long ago reclaimed by nature.
Tim Robey, The Telegraph
Propulsive, scary and intelligently bolted together, this inventive British science fiction film guides us into familiar genre territory from a perspective that changes everything.
Germain Lussier, io9
We’ve seen this overrun city before, but there are small tweaks here and there that place it in its own world. There’s a history that’s unique and well-developed. Plus, it’s delightfully off-putting how the dark interiors of this world are portrayed as safe havens, while the sunny, brightly lit exteriors are conversely menacing, partially because of the hundreds and hundreds of zombies walking around in the biggest action scenes.
The Girl With All the Gifts is just plain great. It’s captivating, entertaining, provocative, and most of all, it’s the freshest take on the zombie genre we’ve seen in years.
#pragma once
#ifndef INIREADER_H
#define INIREADER_H

class CIniReader
{
public:
    CIniReader(char* szFileName);
    int ReadInt(const char* szSection, const char* szKey, int iDefaultValue);
    float ReadFloat(const char* szSection, const char* szKey, float fltDefaultValue);
    bool ReadBool(const char* szSection, const char* szKey, bool bolDefaultValue);
    char* ReadString(const char* szSection, const char* szKey, const char* szDefaultValue);

private:
    char m_szFileName[255];
};

#endif // INIREADER_H
def ebd_env(self):
    d = {}
    for k in self._ebd_env_options:
        d[f"PKGCORE_{k.upper()}"] = str(getattr(self.options, k)).lower()
    d["PKGCORE_EAPI_INHERITS"] = ' '.join(x._magic for x in self.inherits)
    d["EAPI"] = self._magic
    return ImmutableDict(d)
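# Standalone sketch (not part of the original code) of the option-to-environment
# mapping performed by ebd_env above: each option becomes a lowercased string
# value under a PKGCORE_<NAME> key. The option names below are made up.
def _demo_ebd_env_mapping():
    options = {"strict": True, "userpriv": False}
    return {f"PKGCORE_{k.upper()}": str(v).lower() for k, v in options.items()}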
// from the omilia/omilia-channels-api-sample repository
package com.omilia.channels.commons.model;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.Getter;

@Getter
@JsonIgnoreProperties(ignoreUnknown = true)
public class Coordinates {

    // attributes
    @JsonProperty(value = "lat")
    private Double latitude;

    @JsonProperty(value = "long")
    private Double longitude;

    // public
    @Override
    public String toString() {
        return latitude + "," + longitude;
    }
}
///
/// Given a list of file paths, loads files and their progresses.
///
fn load_levels(
    &self,
    loader: &Loader,
    storage: &AssetStorage<GameLevel>,
    dir_list: Vec<PathBuf>,
) -> (Vec<Handle<GameLevel>>, Vec<ProgressCounter>) {
    let mut levels = Vec::new();
    let mut progresses = Vec::new();

    for path in dir_list {
        if let Some((level, progress)) = self.load_level(loader, storage, path) {
            levels.push(level);
            progresses.push(progress);
        }
    }

    (levels, progresses)
}
import { useContext, useEffect, useMemo, useState, useCallback } from 'react'
import { useRegisterActions, Action } from 'kbar'
import Select, { OnChangeValue, components } from 'react-select'
import { EthereumContext } from 'context/ethereumContext'
import { SettingsContext, Setting } from 'context/settingsContext'
import { CURRENT_FORK } from 'util/constants'
import { toKeyIndex } from 'util/string'
import { Icon, Label } from 'components/ui'
const ChainOption = (props: any) => {
const { data, children } = props
const isCurrent = data.value === CURRENT_FORK
return (
<components.Option {...props}>
{children}
{isCurrent && <Label>Live</Label>}
</components.Option>
)
}
const ChainSelector = () => {
const { settingsLoaded, getSetting, setSetting } = useContext(SettingsContext)
const { forks, selectedFork, onForkChange } = useContext(EthereumContext)
const [forkValue, setForkValue] = useState()
const [actions, setActions] = useState<Action[]>([])
const forkOptions = useMemo(
() => forks.map((fork) => ({ value: fork.name, label: fork.name })),
[forks],
)
const defaultForkOption = useMemo(
() => forkOptions.find((fork) => fork.value === selectedFork?.name),
[forkOptions, selectedFork],
)
const handleForkChange = useCallback(
(option: OnChangeValue<any, any>) => {
setForkValue(option)
onForkChange(option.value)
setSetting(Setting.VmFork, option)
},
[onForkChange, setSetting],
)
useEffect(() => {
if (defaultForkOption) {
handleForkChange(getSetting(Setting.VmFork) || defaultForkOption)
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [settingsLoaded, defaultForkOption])
useEffect(() => {
const forkIds: string[] = []
const forkActions = forkOptions.map(
(option: OnChangeValue<any, any>, index) => {
const keyId = toKeyIndex('fork', index)
forkIds.push(keyId)
return {
id: keyId,
name: option.label,
shortcut: [],
keywords: option.label,
section: '',
perform: () => handleForkChange(option),
parent: 'fork',
}
},
)
if (forkIds.length > 0) {
setActions([
...forkActions,
{
id: 'fork',
name: 'Select hardfork…',
shortcut: ['f'],
keywords: 'fork network evm',
section: 'Preferences',
children: forkIds,
},
])
}
}, [forkOptions, handleForkChange])
useRegisterActions(actions, [actions])
return (
<div className="flex justify-end items-center rounded">
{forks.length > 0 && (
<div className="flex items-center mr-2">
<Icon name="git-branch-line" className="text-indigo-500 mr-2" />
<Select
onChange={handleForkChange}
options={forkOptions}
value={forkValue}
isSearchable={false}
classNamePrefix="select"
menuPlacement="auto"
components={{ Option: ChainOption }}
/>
</div>
)}
</div>
)
}
export default ChainSelector
|
def order(self, key: str) -> list:
if torch.is_tensor(self.storage[key][0]):
ordered = torch.stack(self.storage[key][:self.rollout], -2)
shape = ordered.shape
ordered = ordered.view(*shape[:-3], self.rollout * self.workers, shape[-1])
return ordered
else:
ordered = []
for i in range(self.workers):
ordered += [self.storage[key][j][i] for j in range(self.rollout)]
return ordered |
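In the non-tensor branch above, `self.storage[key][j][i]` is rollout step `j` of worker `i`, and the result lists each worker's full rollout contiguously. A toy standalone version of just that reshuffle:

```python
def order_lists(storage, rollout, workers):
    # storage[j][i] -> step j of worker i; output is worker-major:
    # [w0 step0, w0 step1, ..., w1 step0, w1 step1, ...]
    ordered = []
    for i in range(workers):
        ordered += [storage[j][i] for j in range(rollout)]
    return ordered
```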
Filibusters aren’t just more numerous; they’re more mundane, too. Consider an earlier bill to extend unemployment benefits, passed in late 2009. It faced two filibusters — despite bipartisan backing and its eventual passage by a 98-0 margin. A bill that should have zipped through in a few days took four weeks, including seven days of floor debate. Or take the nomination of Judge Barbara Milano Keenan to the United States Court of Appeals for the Fourth Circuit: she, too, faced a filibuster, even though she was later confirmed 99 to 0.
Part of the problem lies with today’s partisan culture, in which blocking the other party takes priority over passing legislation or confirming candidates to key positions. And part of the problem lies with changes in Senate practices during the 1970s, which allowed the minority to filibuster a piece of legislation without holding up other items of business.
But the biggest factor is the nature of the filibuster itself. Senate rules put the onus on the majority for ending a debate, regardless of how frivolous the filibuster might be.
If the majority leader wants to end a debate, he or she first calls for unanimous consent for cloture, basically a voice vote from all the senators present in the chamber. But if even one member of the filibustering minority is present to object to the motion, the majority leader has to hold a roll call vote. If the majority leader can’t round up the necessary 60 votes, the debate continues.
Getting at least 60 senators on the floor several times a week is no mean feat given travel schedules, illnesses and campaign obligations. The most recent debate over extending unemployment benefits, for example, took so long in part because the death of Senator Robert Byrd, a Democrat from West Virginia, left the majority with only 59 votes for cloture. The filibuster was brought to an end only after West Virginia’s governor appointed a replacement.
True, the filibuster has its benefits: it gives the minority party the power to block hasty legislation and force a debate on what it considers matters of national significance. So how can the Senate reform the filibuster to preserve its usefulness but prevent its abuse?
For starters, the Senate could replace the majority’s responsibility to end debate with the minority’s responsibility to keep it going. It would work like this: for the first four weeks of debate, the Senate would operate under the old rules, in which the majority has to find enough senators to vote for cloture. Once that time has elapsed, the debate would automatically end unless the minority could assemble 40 senators to continue it.
An even better step would be to return to the old “Mr. Smith Goes to Washington” model — in which a filibuster means that the Senate has to stop everything and debate around the clock — by allowing a motion requiring 40 votes to continue debate every three hours while the chamber is in continuous session. That way it is the minority that has to grab cots and mattresses and be prepared to take to the floor night and day to keep their filibuster alive.
Under such a rule, a sufficiently passionate minority could still preserve the Senate’s traditions and force an extended debate on legislation. But frivolous and obstructionist misuse of the filibuster would be a thing of the past. |
#include <bits/stdc++.h>
using namespace std;
const int MAXN=5005,INF=0x3f3f3f3f;
// f[u][j]: min cost of buying j items in u's subtree with coupons allowed
// (an item's coupon is usable only if its parent's coupon is also used);
// g[u][j]: min cost of buying j items in u's subtree at full price, no coupons
int n,b,c[MAXN],d[MAXN],f[MAXN][MAXN],g[MAXN][MAXN];
vector <int> G[MAXN];
int dfs(int u)
{
int su=0;
f[u][0]=g[u][0]=0;
for (int i=0;i<G[u].size();i++)
{
int v=G[u][i];
int sv=dfs(v);
su+=sv;
for (int j=su;j>=1;j--)
for (int k=max(1,j-(su-sv));k<=sv&&k<=j;k++)
{
f[u][j]=min(f[u][j],f[u][j-k]+f[v][k]);
g[u][j]=min(g[u][j],g[u][j-k]+g[v][k]);
}
}
su++;
int t=c[u]-d[u];
for (int i=su;i>=1;i--)
{
g[u][i]=min(g[u][i],g[u][i-1]+c[u]);
f[u][i]=min(f[u][i-1]+t,g[u][i]);
}
return su;
}
int main()
{
//freopen("read.txt","r",stdin);
scanf("%d%d",&n,&b);
for (int i=1;i<=n;i++)
{
scanf("%d%d",&c[i],&d[i]);
if (i>1)
{
int x;
scanf("%d",&x);
G[x].push_back(i);
}
}
memset(f,0x3f,sizeof(f));
memset(g,0x3f,sizeof(g));
dfs(1);
// for (int i=0;i<=n;i++)
// printf("%d\n",f[1][i]);
for (int i=n;i>=0;i--)
if (f[1][i]<=b)
{
printf("%d\n",i);
break;
}
return 0;
}
|
/*
* Copyright (c) 2017-2018, NVIDIA CORPORATION. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef NVGPU_ATOMIC_H
#define NVGPU_ATOMIC_H
#ifdef __KERNEL__
#include <nvgpu/linux/atomic.h>
#elif defined(__NVGPU_POSIX__)
#include <nvgpu/posix/atomic.h>
#else
#include <nvgpu_rmos/include/atomic.h>
#endif
#define NVGPU_ATOMIC_INIT(i) __nvgpu_atomic_init(i)
#define NVGPU_ATOMIC64_INIT(i) __nvgpu_atomic64_init(i)
static inline void nvgpu_atomic_set(nvgpu_atomic_t *v, int i)
{
__nvgpu_atomic_set(v, i);
}
static inline int nvgpu_atomic_read(nvgpu_atomic_t *v)
{
return __nvgpu_atomic_read(v);
}
static inline void nvgpu_atomic_inc(nvgpu_atomic_t *v)
{
__nvgpu_atomic_inc(v);
}
static inline int nvgpu_atomic_inc_return(nvgpu_atomic_t *v)
{
return __nvgpu_atomic_inc_return(v);
}
static inline void nvgpu_atomic_dec(nvgpu_atomic_t *v)
{
__nvgpu_atomic_dec(v);
}
static inline int nvgpu_atomic_dec_return(nvgpu_atomic_t *v)
{
return __nvgpu_atomic_dec_return(v);
}
static inline int nvgpu_atomic_cmpxchg(nvgpu_atomic_t *v, int old, int new)
{
return __nvgpu_atomic_cmpxchg(v, old, new);
}
static inline int nvgpu_atomic_xchg(nvgpu_atomic_t *v, int new)
{
return __nvgpu_atomic_xchg(v, new);
}
static inline bool nvgpu_atomic_inc_and_test(nvgpu_atomic_t *v)
{
return __nvgpu_atomic_inc_and_test(v);
}
static inline bool nvgpu_atomic_dec_and_test(nvgpu_atomic_t *v)
{
return __nvgpu_atomic_dec_and_test(v);
}
static inline bool nvgpu_atomic_sub_and_test(int i, nvgpu_atomic_t *v)
{
return __nvgpu_atomic_sub_and_test(i, v);
}
static inline int nvgpu_atomic_add_return(int i, nvgpu_atomic_t *v)
{
return __nvgpu_atomic_add_return(i, v);
}
static inline int nvgpu_atomic_add_unless(nvgpu_atomic_t *v, int a, int u)
{
return __nvgpu_atomic_add_unless(v, a, u);
}
static inline void nvgpu_atomic64_set(nvgpu_atomic64_t *v, long i)
{
__nvgpu_atomic64_set(v, i);
}
static inline long nvgpu_atomic64_read(nvgpu_atomic64_t *v)
{
return __nvgpu_atomic64_read(v);
}
static inline void nvgpu_atomic64_add(long x, nvgpu_atomic64_t *v)
{
__nvgpu_atomic64_add(x, v);
}
static inline void nvgpu_atomic64_inc(nvgpu_atomic64_t *v)
{
__nvgpu_atomic64_inc(v);
}
static inline long nvgpu_atomic64_inc_return(nvgpu_atomic64_t *v)
{
return __nvgpu_atomic64_inc_return(v);
}
static inline void nvgpu_atomic64_dec(nvgpu_atomic64_t *v)
{
__nvgpu_atomic64_dec(v);
}
static inline long nvgpu_atomic64_dec_return(nvgpu_atomic64_t *v)
{
return __nvgpu_atomic64_dec_return(v);
}
static inline long nvgpu_atomic64_cmpxchg(nvgpu_atomic64_t *v, long old,
long new)
{
return __nvgpu_atomic64_cmpxchg(v, old, new);
}
static inline void nvgpu_atomic64_sub(long x, nvgpu_atomic64_t *v)
{
__nvgpu_atomic64_sub(x, v);
}
static inline long nvgpu_atomic64_sub_return(long x, nvgpu_atomic64_t *v)
{
return __nvgpu_atomic64_sub_return(x, v);
}
#endif /* NVGPU_ATOMIC_H */
|
#include<stdio.h>
int main()
{
//freopen("in.txt", "r", stdin);
long long i, m, n, sum;
scanf("%lld %lld", &m, &n);
int a[n];
for(i = 0; i<n; i++)
{
scanf("%d", &a[i]);
}
sum = a[0] - 1;
for(i = 1; i<n; i++)
{
if(a[i]>=a[i-1])
sum = sum + a[i] - a[i-1];
else
sum = sum + m - a[i-1] + a[i];
}
printf("%lld", sum);
return 0;
}
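The loop above walks a ring of `m` positions where movement is forward-only (wrapping from `m` back to `1`), starting at position 1; the same arithmetic as a quick Python sketch:

```python
def total_time(m, tasks):
    # Forward-only distance from position 1 through each task position in order.
    total = tasks[0] - 1  # walk from the starting position 1 to the first task
    for prev, cur in zip(tasks, tasks[1:]):
        total += cur - prev if cur >= prev else m - prev + cur
    return total
```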
|
// ConnSecurity returns a ConnOption which sets the security configuration.
func ConnSecurity(s security.Config) ConnOption {
return func(c *connConfig) {
c.secConf = s
}
} |
import {Generation} from '../data/interface';
import {toID} from '../util';
import {getItemBoostType} from '../items';
import {RawDesc} from '../desc';
import {Field} from '../field';
import {Move} from '../move';
import {Pokemon} from '../pokemon';
import {Result} from '../result';
import {computeFinalStats, getMoveEffectiveness, handleFixedDamageMoves} from './util';
export function calculateRBYGSC(
gen: Generation,
attacker: Pokemon,
defender: Pokemon,
move: Move,
field: Field
) {
if (gen.num === 1) {
computeFinalStats(gen, attacker, defender, field, 'atk', 'def', 'spc', 'spe');
} else {
computeFinalStats(gen, attacker, defender, field, 'atk', 'def', 'spa', 'spd', 'spe');
}
const desc: RawDesc = {
attackerName: attacker.name,
moveName: move.name,
defenderName: defender.name,
};
const result = new Result(gen, attacker, defender, move, field, 0, desc);
if (move.bp === 0) {
return result;
}
if (field.defenderSide.isProtected) {
desc.isProtected = true;
return result;
}
// Fixed damage moves (eg. Night Shade) ignore type effectiveness in Gen 1
if (gen.num === 1) {
const fixedDamage = handleFixedDamageMoves(attacker, move);
if (fixedDamage) {
result.damage = fixedDamage;
return result;
}
}
const type1Effectiveness =
getMoveEffectiveness(gen, move, defender.type1, field.defenderSide.isForesight);
const type2Effectiveness = defender.type2
? getMoveEffectiveness(gen, move, defender.type2, field.defenderSide.isForesight)
: 1;
const typeEffectiveness = type1Effectiveness * type2Effectiveness;
if (typeEffectiveness === 0) {
return result;
}
if (gen.num === 2) {
const fixedDamage = handleFixedDamageMoves(attacker, move);
if (fixedDamage) {
result.damage = fixedDamage;
return result;
}
}
if (move.hits > 1) {
desc.hits = move.hits;
}
// Flail and Reversal are variable BP and never crit
if (move.named('Flail', 'Reversal')) {
move.isCrit = false;
const p = Math.floor((48 * attacker.curHP()) / attacker.maxHP());
move.bp = p <= 1 ? 200 : p <= 4 ? 150 : p <= 9 ? 100 : p <= 16 ? 80 : p <= 32 ? 40 : 20;
desc.moveBP = move.bp;
}
const isPhysical = gen.types.get(toID(move.type))!.category === 'Physical';
const attackStat = isPhysical ? 'atk' : (gen.num === 1 ? 'spc' : 'spa');
const defenseStat = isPhysical ? 'def' : (gen.num === 1 ? 'spc' : 'spd');
let at = attacker.stats[attackStat]!;
let df = defender.stats[defenseStat]!;
// Whether we ignore Reflect, Light Screen, stat stages, and burns if attack is a crit differs
// by gen - in gen 2 we also need to check that the attacker does not have stat stage advantage
const ignoreMods = move.isCrit &&
(gen.num === 1 ||
(gen.num === 2 && attacker.boosts[attackStat]! <= defender.boosts[defenseStat]!));
let lv = attacker.level;
if (ignoreMods) {
at = attacker.rawStats[attackStat]!;
df = defender.rawStats[defenseStat]!;
if (gen.num === 1) {
lv *= 2;
desc.isCritical = true;
}
} else {
if (attacker.boosts[attackStat] !== 0) desc.attackBoost = attacker.boosts[attackStat];
if (defender.boosts[defenseStat] !== 0) desc.defenseBoost = defender.boosts[defenseStat];
if (isPhysical && attacker.hasStatus('brn')) {
at = Math.floor(at / 2);
desc.isBurned = true;
}
}
if (move.named('Explosion', 'Self-Destruct')) {
df = Math.floor(df / 2);
}
if (!ignoreMods) {
if (isPhysical && field.defenderSide.isReflect) {
df *= 2;
desc.isReflect = true;
} else if (!isPhysical && field.defenderSide.isLightScreen) {
df *= 2;
desc.isLightScreen = true;
}
}
if ((attacker.named('Pikachu') && attacker.hasItem('Light Ball') && !isPhysical) ||
(attacker.named('Cubone', 'Marowak') && attacker.hasItem('Thick Club') && isPhysical)) {
at *= 2;
desc.attackerItem = attacker.item;
}
if (at > 255 || df > 255) {
at = Math.floor(at / 4) % 256;
df = Math.floor(df / 4) % 256;
}
// Gen 2 Present has a glitched damage calculation using the secondary types of the Pokemon
// for the Attacker's Level and Defender's Defense.
if (move.named('Present')) {
const lookup: {[id: string]: number} = {
Normal: 0, Fighting: 1, Flying: 2, Poison: 3, Ground: 4, Rock: 5, Bug: 7,
Ghost: 8, Steel: 9, '???': 19, Fire: 20, Water: 21, Grass: 22, Electric: 23,
Psychic: 24, Ice: 25, Dragon: 26, Dark: 27,
};
at = 10;
df = Math.max(lookup[attacker.type2 ? attacker.type2 : attacker.type1], 1);
lv = Math.max(lookup[defender.type2 ? defender.type2 : defender.type1], 1);
}
if (defender.named('Ditto') && defender.hasItem('Metal Powder')) {
df = Math.floor(df * 1.5);
desc.defenderItem = defender.item;
}
let baseDamage = Math.floor(
Math.floor((Math.floor((2 * lv) / 5 + 2) * Math.max(1, at) * move.bp) / Math.max(1, df)) / 50
);
// Gen 1 handles move.isCrit above by doubling level
if (gen.num === 2 && move.isCrit) {
baseDamage *= 2;
desc.isCritical = true;
}
if (move.named('Pursuit') && field.defenderSide.isSwitching === 'out') {
baseDamage = Math.floor(baseDamage * 2);
desc.isSwitching = 'out';
}
// In Gen 2 and no other gens, Dragon Fang is a no-op and Dragon Scale erroneously has its effect
const itemBoostType =
attacker.hasItem('Dragon Fang')
? undefined
: getItemBoostType(attacker.hasItem('Dragon Scale') ? 'Dragon Fang' : attacker.item);
if (move.hasType(itemBoostType)) {
baseDamage = Math.floor(baseDamage * 1.1);
desc.attackerItem = attacker.item;
}
baseDamage = Math.min(997, baseDamage) + 2;
if ((field.hasWeather('Sun') && move.hasType('Fire')) ||
(field.hasWeather('Rain') && move.hasType('Water'))) {
baseDamage = Math.floor(baseDamage * 1.5);
desc.weather = field.weather;
} else if (
(field.hasWeather('Sun') && move.hasType('Water')) ||
(field.hasWeather('Rain') && (move.hasType('Fire') || move.named('Solar Beam')))
) {
baseDamage = Math.floor(baseDamage / 2);
desc.weather = field.weather;
}
if (move.hasType(attacker.type1, attacker.type2)) {
baseDamage = Math.floor(baseDamage * 1.5);
}
baseDamage = Math.floor(baseDamage * typeEffectiveness);
// Flail and Reversal don't use random factor
if (move.named('Flail', 'Reversal')) {
result.damage = baseDamage;
return result;
}
result.damage = [];
for (let i = 217; i <= 255; i++) {
result.damage[i - 217] = Math.floor((baseDamage * i) / 255);
}
return result;
}
|
from abc import abstractmethod
import os
from . import util
from .filepath import FilePath
def choice_track(track_num):
if track_num == 0:
return Track0()
if track_num == 1:
return Track1()
if track_num == 3:
return Track3()
raise ValueError(f"unsupported track number: {track_num}")
class Track:
def __init__(self, track_num):
self.fp = FilePath()
self.TRACK_NUM = track_num
self.TRACK_PATH = f"{self.fp.root}/track{track_num}"
@property
def train_modes(self):
raise NotImplementedError
@property
def subsets(self):
raise NotImplementedError
def get_databin_path(self, train_mode):
assert train_mode in self.train_modes
return f"{self.TRACK_PATH}/data-bin/{train_mode}"
def get_ckpt_dir(self, train_mode, model, lr=5e-4, dropout=0.3, seed=None, prev_model_dir=None):
def _get_ckpt_dir_basename(train_mode, model, lr, dropout, seed, prev_model_dir):
basenames = []
if prev_model_dir is not None:
prev_model_basename = util.get_basename(prev_model_dir, include_path=False, include_extension=False)
basenames.append(prev_model_basename)
basename = f"{train_mode}-{model}-lr{lr}-dr{dropout}"
if seed is not None:
basename += f"-s{seed}"
basenames.append(basename)
return "_".join(basenames)
ckpt_basename = _get_ckpt_dir_basename(train_mode, model, lr, dropout, seed, prev_model_dir)
return f"{self.TRACK_PATH}/ckpt/{ckpt_basename}"
def get_output_dir(self, ckpt):
def _get_output_dir_from_ckpt_dir(ckpt_dir):
dir_basename = util.get_basename(ckpt_dir, include_path=False)
return f"{self.TRACK_PATH}/outputs/{dir_basename}"
def _get_output_dir_from_ckpt_fpath(ckpt_fpath):
ckpts = ckpt_fpath.split(':')
# not ensemble
if len(ckpts) == 1:
ckpt_dir = os.path.dirname(ckpt_fpath)
return _get_output_dir_from_ckpt_dir(ckpt_dir)
# ensemble
else:
dirname_lst = []
for ckpt in ckpts:
ckpt_dir = os.path.dirname(ckpt)
ckpt_dir_basename = util.get_basename(ckpt_dir, include_path=False)
dirname_lst.append(ckpt_dir_basename)
return f"{self.TRACK_PATH}/outputs/" + ":".join(dirname_lst)
if os.path.isdir(ckpt):
return _get_output_dir_from_ckpt_dir(ckpt)
else:
return _get_output_dir_from_ckpt_fpath(ckpt)
@abstractmethod
def get_subset_datapath(self, subset):
raise NotImplementedError
@staticmethod
def get_model_config(model, lr, dropout, max_epoch, seed, reset=False):
assert model in ['base', 'copy', 't2t']
if model == 'base':
model_config = f"--arch transformer --share-all-embeddings " \
f"--optimizer adam --lr {lr} --label-smoothing 0.1 --dropout {dropout} " \
f"--max-tokens 4000 --min-lr '1e-09' --lr-scheduler inverse_sqrt " \
f"--weight-decay 0.0001 --criterion label_smoothed_cross_entropy " \
f"--max-epoch {max_epoch} --warmup-updates 4000 --warmup-init-lr '1e-07' --max-tokens 4000 " \
f"--adam-betas '(0.9, 0.98)' --save-interval-updates 5000 "
elif model == 'copy':
model_config = f"--ddp-backend=no_c10d --arch copy_augmented_transformer " \
f"--update-freq 8 --alpha-warmup 10000 --optimizer adam --lr {lr} " \
f"--dropout {dropout} --max-tokens 4000 --min-lr '1e-09' --save-interval-updates 5000 " \
f"--lr-scheduler inverse_sqrt --weight-decay 0.0001 --max-epoch {max_epoch} " \
f"--warmup-updates 4000 --warmup-init-lr '1e-07' --adam-betas '(0.9, 0.98)' "
else: # model == 't2t':
model_config = f"--arch transformer_wmt_en_de_big_t2t --share-all-embeddings " \
f"--criterion label_smoothed_cross_entropy --label-smoothing 0.1 " \
f"--optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 " \
f"--lr-scheduler inverse_sqrt --warmup-init-lr '1e-07' --max-epoch {max_epoch} " \
f"--warmup-updates 4000 --lr {lr} --min-lr '1e-09' --dropout {dropout} " \
f"--weight-decay 0.0 --max-tokens 4000 --save-interval-updates 5000 "
if seed is not None:
model_config += f"--seed {seed} "
if reset:
model_config += f"--reset-optimizer --reset-lr-scheduler "
return model_config
class Track0(Track):
def __init__(self):
super(Track0, self).__init__(0)
train_modes = ['pretrain', 'train', 'finetune']
subsets = ['valid', 'conll2014', 'jfleg']
def get_pref(self, train_mode):
assert train_mode in self.train_modes
if train_mode == 'pretrain':
trainpref = os.path.splitext(self.fp.DAE_ORI0)[0]
elif train_mode == 'train':
trainpref = os.path.splitext(self.fp.TRAIN_ORI0)[0]
else: # finetune
trainpref = os.path.splitext(self.fp.FINETUNE_ORI0)[0]
validpref = os.path.splitext(self.fp.VALID_ORI0)[0]
return trainpref, validpref
def get_subset_datapath(self, subset):
assert subset in self.subsets
if subset == 'valid':
gold_m2 = f"{self.fp.conll2013_m2}/official-preprocessed.m2"
ori_path = self.fp.CONLL2013_ORI
ori_bpe_path = None
gen_subset = "valid"
scorer_type = "m2scorer"
elif subset == 'conll2014':
gold_m2 = f"{self.fp.conll2014_m2}/official-2014.combined.m2"
ori_path = self.fp.CONLL2014_ORI
ori_bpe_path = self.fp.CONLL2014_TOK_ORI
gen_subset = None
scorer_type = "m2scorer"
else: # 'jfleg':
gold_m2 = None
ori_path = self.fp.JFLEG_ORI
ori_bpe_path = self.fp.JFLEG_TOK_ORI
gen_subset = None
scorer_type = "jfleg"
return gold_m2, ori_path, ori_bpe_path, gen_subset, scorer_type
class Track1(Track):
def __init__(self):
super(Track1, self).__init__(1)
train_modes = ['pretrain', 'train', 'finetune', 'dev']
subsets = ['valid', 'test', 'conll2014']
def get_pref(self, train_mode):
assert train_mode in self.train_modes
if train_mode == 'pretrain':
trainpref = os.path.splitext(self.fp.DAE_ORI1)[0]
elif train_mode == 'train':
trainpref = os.path.splitext(self.fp.TRAIN_ORI1)[0]
elif train_mode == 'finetune': # finetune
trainpref = os.path.splitext(self.fp.FINETUNE_ORI1)[0]
else:
trainpref = os.path.splitext(self.fp.VALID_ORI1)[0]
validpref = os.path.splitext(self.fp.VALID_ORI1)[0]
return trainpref, validpref
def get_subset_datapath(self, subset):
assert subset in self.subsets
if subset == 'valid':
gold_m2 = f"{self.fp.wi_m2}/ABCN.dev.gold.bea19.m2"
ori_path = self.fp.WI_DEV_ORI
ori_bpe_path = None
gen_subset = "valid"
scorer_type = 'errant'
elif subset == 'test':
gold_m2 = None
ori_path = self.fp.WI_TEST_ORI
ori_bpe_path = self.fp.WI_TEST_TOK_ORI
gen_subset = None
scorer_type = None
else: # 'conll2014':
gold_m2 = f"{self.fp.conll2014_m2}/official-2014.combined.m2"
ori_path = self.fp.CONLL2014_ORI
ori_bpe_path = self.fp.CONLL2014_TOK_ORI
gen_subset = None
scorer_type = 'm2scorer'
return gold_m2, ori_path, ori_bpe_path, gen_subset, scorer_type
class Track3(Track):
def __init__(self):
super(Track3, self).__init__(3)
train_modes = ['pretrain', 'finetune']
subsets = ['valid', 'test', 'conll2014']
def get_pref(self, train_mode):
assert train_mode in self.train_modes
if train_mode == 'pretrain':
trainpref = os.path.splitext(self.fp.DAE_ORI3)[0]
else:
trainpref = os.path.splitext(self.fp.FINETUNE_ORI3)[0]
validpref = os.path.splitext(self.fp.VALID_ORI3)[0]
return trainpref, validpref
def get_subset_datapath(self, subset):
assert subset in self.subsets
if subset == 'valid':
gold_m2 = f"{self.fp.wi_m2}/ABCN.dev.gold.bea19.1k.m2"
ori_path = self.fp.WI_DEV_1K_ORI
ori_bpe_path = None
gen_subset = "valid"
scorer_type = 'errant'
elif subset == 'test':
gold_m2 = None
ori_path = self.fp.WI_TEST_ORI
ori_bpe_path = self.fp.WI_TEST_TOK_ORI
gen_subset = None
scorer_type = None
else: # 'conll2014':
gold_m2 = f"{self.fp.conll2014_m2}/official-2014.combined.m2"
ori_path = self.fp.CONLL2014_ORI
ori_bpe_path = self.fp.CONLL2014_TOK_ORI
gen_subset = None
scorer_type = 'm2scorer'
return gold_m2, ori_path, ori_bpe_path, gen_subset, scorer_type
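The checkpoint-directory basenames produced by `get_ckpt_dir` can be reproduced without a `Track` instance; below is a minimal sketch of the inner helper, with the previous-model handling reduced to a plain basename string:

```python
def ckpt_dir_basename(train_mode, model, lr=5e-4, dropout=0.3, seed=None,
                      prev_model_basename=None):
    # "<prev>_" prefix when fine-tuning from an earlier model, then
    # "<mode>-<model>-lr<lr>-dr<dropout>", with "-s<seed>" when a seed is set.
    basenames = []
    if prev_model_basename is not None:
        basenames.append(prev_model_basename)
    basename = f"{train_mode}-{model}-lr{lr}-dr{dropout}"
    if seed is not None:
        basename += f"-s{seed}"
    basenames.append(basename)
    return "_".join(basenames)
```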
|
import static java.util.Objects.requireNonNull;

import javafx.fxml.FXML;
import javafx.stage.Stage;

/**
* Represents a Light Theme.
*/
public class LightTheme extends Theme {
private final String lightThemeResource = requireNonNull(getClass().getResource("/view/LightTheme.css"))
.toExternalForm();
private final String lightExtensionsResource = requireNonNull(getClass()
.getResource("/view/ExtensionsLight.css")).toExternalForm();
private final String lightAddTagWindow = requireNonNull(getClass()
.getResource("/view/AddTagWindowLight.css")).toExternalForm();
private final String lightAddProfileWindow = requireNonNull(getClass()
.getResource("/view/AddProfileWindowLight.css")).toExternalForm();
public LightTheme() {}
/**
* Switch to light theme.
*/
@Override
@FXML
public void applyTheme(Stage stage, AddTagWindow addTagWindow, AddProfileWindow addProfileWindow) {
stage.getScene().getStylesheets().clear();
stage.getScene().getStylesheets().add(this.lightThemeResource);
stage.getScene().getStylesheets().add(this.lightExtensionsResource);
addTagWindow.getSecondaryStage().getScene().getStylesheets().clear();
addTagWindow.getSecondaryStage().getScene().getStylesheets().add(this.lightAddTagWindow);
addProfileWindow.getSecondaryStage().getScene().getStylesheets().clear();
addProfileWindow.getSecondaryStage().getScene().getStylesheets().add(this.lightAddProfileWindow);
}
@Override
public String toString() {
return "Light Theme";
}
@Override
public boolean equals(Object other) {
return other == this
|| (other instanceof LightTheme
&& lightThemeResource.equals(((LightTheme) other).lightThemeResource)
&& lightExtensionsResource.equals(((LightTheme) other).lightExtensionsResource)
&& lightAddTagWindow.equals(((LightTheme) other).lightAddTagWindow)
&& lightAddProfileWindow.equals(((LightTheme) other).lightAddProfileWindow));
}
} |
The George Zimmerman not guilty verdict based on reasonable doubt has been blasted by lots of celebrities (as well as ordinary people) on social media, many using the hashtag #JusticeForTrayvon. Rapper Lupe Fiasco appears to be taking a different view of the Zimmerman trial outcome.
Fiasco sent out several controversial tweets after the verdict was announced that in turn prompted a heated response on Twitter.
As virtually everyone across the country (and the world perhaps) knows, George Zimmerman, a former neighborhood watch captain, was put on trial for fatally shooting Trayvon Martin, 17, on February 26, 2012, after confronting the teenager as he walked back to the house where he was staying in a gated community outside of Orlando, Florida. Zimmerman entered a plea of not guilty on self-defense grounds and went on trial in a Seminole County courtroom starting on June 24. After about 16 hours of deliberation, the six-person, all-female jury last night found him not guilty of both second degree murder and manslaughter.
The not-guilty verdict means that the prosecutors for the state of Florida failed to uphold the burden of proof that George Zimmerman was guilty beyond a reasonable doubt, i.e., to the exclusion of reasonable doubt. A not guilty verdict doesn’t necessarily mean innocent, more precisely it means not proven in the eyes of the law based on the evidence presented in the trial.
One of Lupe Fiasco’s tweets read “Nobody knows what really happened except Trayvon and Zimmerman. The justice system relies on reasonable doubt not our emotions.” Another tweet stated “Rub your face in it! Swallow down that hard pill! Black blood spills in the streets of America nightly at the hands other blacks.”
You can view Lupe Fiasco’s George Zimmerman-related tweets and the intense social media debate that it touched off at Twitchy.
As we reported previously, back in January Lupe Fiasco was reportedly thrown off stage and escorted off the premises at a presidential pre-inauguration concert after he went on an anti-war, anti-Obama rant, according to accounts from concertgoers.
Nobody knows what really happened except trayvon and Zimmerman. The justice system relies on reasonable doubt not our emotions. — Lupe Fiasco (@LupeFiasco) July 14, 2013
What do you think of the Lupe Fiasco Zimmerman trial tweets? |
Brandon Fibbs has a beautiful post at On Faith about how Carl Sagan took his faith away and replaced it with something so much better:
I did not abandon my faith because I was hurt or angry or disillusioned. I did not abandon my faith because I wanted to rebel, or live a life of sin, or refuse god’s authority. I left because I could no longer believe. I left because I felt there simply was no convincing evidence for my belief. I left because my faith insulted reason one too many times. I left because once I applied the same level of skepticism and incredulity to Christianity that I always had to all other faiths, it likewise imploded. Once I accepted that the Bible’s account of cosmic and human origins could not possibly be true, I began to realize that it was just the first in an interminably long line of things the Bible was wrong about. Science killed my faith. Not “science,” the perverse parody invented by some Christians — a nefarious, liberal, secular agenda whose sole purpose is to turn people from god — but rather science, an objective, methodological tool that uses reason and evidence to systematically study the world around us, and which is willing, unlike faith, to change direction with the accumulation of that evidence. Science is a humble and humbling exercise. Science is the impossibly dense core of curiosity — always asking, always seeking, always yearning to know more, never satisfied.
And, for Brandon, Sagan was the instigator of all of that. Even better: there are many more science popularizers and promoters of reason today than there were decades ago. If, say, Neil deGrasse Tyson doesn’t do it for you (blasphemy!), there’s no shortage of others you can latch onto and listen to and learn from.
(Image via Shutterstock) |
/**
* Created by emnity on 10/8/17.
*/
public class ImageCompress {
private static final String TAG = ImageCompress.class.getSimpleName();
    //max width and height values of the compressed image are taken as 612x816
private int maxWidth = 612;
private int maxHeight = 816;
private Bitmap.CompressFormat compressFormat = Bitmap.CompressFormat.JPEG;
private int quality = 80;
private String destinationDirectoryPath;
public ImageCompress(Context context) {
destinationDirectoryPath = context.getCacheDir().getPath() + File.separator + "images";
}
public ImageCompress setMaxWidth(int maxWidth) {
this.maxWidth = maxWidth;
return this;
}
public ImageCompress setMaxHeight(int maxHeight) {
this.maxHeight = maxHeight;
return this;
}
public ImageCompress setCompressFormat(Bitmap.CompressFormat compressFormat) {
this.compressFormat = compressFormat;
return this;
}
public ImageCompress setQuality(int quality) {
this.quality = quality;
return this;
}
public ImageCompress setDestinationDirectoryPath(String destinationDirectoryPath) {
this.destinationDirectoryPath = destinationDirectoryPath;
return this;
}
public File compressToFile(File imageFile, String path) throws IOException {
return compressToFile(imageFile, System.currentTimeMillis()+".jpg", path);
}
public File compressToFile(File imageFile, String compressedFileName, String path) throws IOException {
File file = new File(destinationDirectoryPath, path);
return ImageUtil.compressImage(imageFile, maxWidth, maxHeight, compressFormat, quality,
file.getAbsolutePath() + File.separator + compressedFileName);
// return ImageUtil.compressImageV2(imageFile, file.getAbsolutePath() + File.separator + compressedFileName, quality, maxWidth, maxHeight);
}
} |
// NewTag creates a new Tags instance.
func NewTag() *Tags {
return &Tags{
tags: []string{},
tagm: make(map[string]interface{}),
}
} |
// src/Graph.h
#ifndef _GRAPH_H_
#define _GRAPH_H_
#include <stdlib.h>
#include <assert.h>
#include "Seq.h"
typedef int NodeId;
class Graph
{
public:
int numNodes;
Seq<NodeId>* inEdges;
Seq<NodeId>* outEdges;
bool* present;
Graph(int numNodes);
~Graph();
void invert();
void addEdge(NodeId src, NodeId dst);
void delEdge(NodeId src, NodeId dst);
void delNode(NodeId node);
void undelNode(NodeId node);
void incoming(NodeId node, Seq<NodeId>* result);
void outgoing(NodeId node, Seq<NodeId>* result);
void roots(Seq<NodeId>* result);
bool topSort(Seq<NodeId>* result);
bool revTopSort(Seq<NodeId>* result);
int countEdges();
};
#endif
|
# bug reproduction script for bug #1796 of AFM
import sys
import time
import uiautomator2 as u2
def wait(seconds=2):
for i in range(0, seconds):
print("wait 1 second ..")
time.sleep(1)
if __name__ == '__main__':
avd_serial = sys.argv[1]
d = u2.connect(avd_serial)
d.app_start("com.amaze.filemanager")
wait()
    # Re-query the foreground app on every iteration; the original fetched it
    # once before the loop, so the condition could never change.
    while True:
        current_app = d.app_current()
        print(current_app)
        if current_app['package'] == "com.amaze.filemanager":
            break
        time.sleep(2)
wait()
out = d(className="android.widget.TextView", text="Alarms").click()
if not out:
print("Success: press Alarms")
wait()
out = d(className="android.widget.ImageButton", resourceId="com.amaze.filemanager:id/fab_expand_menu_button").click()
if not out:
print("Success: press plus")
wait()
out = d(className="android.widget.ImageButton", resourceId="com.amaze.filemanager:id/menu_new_folder").click()
if not out:
print("Success: press Folder")
wait()
out = d(className="android.widget.EditText", resourceId="com.amaze.filemanager:id/singleedittext_input").set_text(text="test")
if out:
print("Success: set folder name")
wait()
out = d(className="android.widget.TextView", text="CREATE").click()
if not out:
print("Success: press CREATE")
wait()
out = d(className="android.widget.TextView", text="test").long_click()
if out:
print("Success: long click test")
wait()
out = d(className="android.widget.TextView", resourceId="com.amaze.filemanager:id/cut").click()
if not out:
print("Success: press cut")
wait()
out = d(className="android.widget.TextView", text="test").click()
if not out:
print("Success: press test")
wait()
out = d(className="android.widget.TextView", resourceId="com.amaze.filemanager:id/paste").click()
if not out:
print("Success: press paste")
wait()
while True:
d.service("uiautomator").stop()
time.sleep(2)
out = d.service("uiautomator").running()
if not out:
print("DISCONNECT UIAUTOMATOR2 SUCCESS")
break
time.sleep(2)
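The disconnect loop above retries until the uiautomator service reports that it has stopped. The same retry-with-deadline pattern can be factored into a small standard-library helper (a sketch only — `wait_until` is not part of uiautomator2, and the timeout values are illustrative):

```python
import time

def wait_until(predicate, timeout=30, interval=2):
    """Poll `predicate` until it returns True or `timeout` seconds pass.

    Mirrors the stop-and-check loop in the script above, but with a hard
    deadline so a stuck service cannot hang the script forever.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

With this helper, the trailing loop becomes `wait_until(lambda: not d.service("uiautomator").running())`.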
|
/**
* @author Ignacio del Valle Alles [email protected]
*/
public class GenericMethodStartTest extends BcTraceTest {
@Test
public void test() throws Exception {
final StringBuilder steps = new StringBuilder();
Class clazz = getInstrumentClass(TestClass.class, new Hook[]{
new GenericMethodHook(
new AllFilter(),
new GenericMethodStartListener() {
@Override
public void onStart(int methodId, Class clazz, Object instance, Object[] args) {
assertEquals(clazz.getName(), TestClass.class.getName());
steps.append("1");
}
}
),
new GenericMethodHook(
new AllFilter(),
new GenericMethodStartListener() {
@Override
public void onStart(int methodId, Class clazz, Object instance, Object[] args) {
assertEquals(clazz.getName(), TestClass.class.getName());
steps.append("2");
}
}
)
});
clazz.getMethod("execVoid").invoke(null);
System.out.println(clazz.getClassLoader());
assertEquals("12", steps.toString());
}
@Test
public void testNoArguments() throws Exception {
final StringBuilder steps = new StringBuilder();
Class clazz = getInstrumentClass(TestClass.class, new Hook[]{
new GenericMethodHook(
new AllFilter(),
new GenericMethodStartListener() {
@Override
public boolean requiresArguments() {
return false;
}
@Override
public void onStart(int methodId, Class clazz,
Object instance, Object[] args) {
if (args == null) {
steps.append("1");
}
}
}
)
});
clazz.getMethod("execVoid").invoke(null);
assertEquals("1", steps.toString());
}
@Test
  public void testListenerUnexpectedException() throws Exception {
final StringBuilder steps = new StringBuilder();
Class clazz = getInstrumentClass(TestClass.class, new Hook[]{
new GenericMethodHook(
new AllFilter(),
new GenericMethodStartListener() {
@Override
public boolean requiresArguments() {
return false;
}
@Override
public void onStart(int methodId, Class clazz,
Object instance, Object[] args) {
steps.append("1");
throw new RuntimeException("Unexpected!");
}
}
)
});
clazz.getMethod("execVoid").invoke(null);
assertEquals("1", steps.toString());
}
@Test
  public void testListenerExpectedException() throws Exception {
final StringBuilder steps = new StringBuilder();
final RuntimeException re = new RuntimeException("Expected!");
Class clazz = getInstrumentClass(TestClass.class, new Hook[]{
new GenericMethodHook(
new AllFilter(),
new GenericMethodStartListener() {
@Override
public boolean requiresArguments() {
return false;
}
@Override
public void onStart(int methodId, Class clazz,
Object instance, Object[] args) {
steps.append("1");
throw new BctraceRuntimeException(re);
}
}
)
});
try {
      clazz.getMethod("getString", String.class).invoke(null, "hello");
} catch (InvocationTargetException ite) {
steps.append("2");
assertTrue(ite.getTargetException() == re);
}
assertEquals("12", steps.toString());
}
} |
package printer
import (
"astrid/board"
"astrid/configuration"
"astrid/wordcolumn"
"fmt"
"github.com/fatih/color"
"sort"
"strings"
"sync"
)
type byLength []string
func (s byLength) Len() int {
return len(s)
}
func (s byLength) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s byLength) Less(i, j int) bool {
return len(s[i]) > len(s[j])
}
type printColumn struct {
idx int
words []string
wordCount int
longestWordLen int
}
const spaceBetweenColumns int = 2
var printColumns []printColumn
var longestWordLen int
func makePrintColumns(board *board.Board, wordColumns []wordcolumn.WordColumn) {
printColumns = make([]printColumn, board.Size)
var wg sync.WaitGroup
wg.Add(board.Size)
for i, wc := range wordColumns {
go func(pc *printColumn, wc wordcolumn.WordColumn) {
defer wg.Done()
pc.words = make([]string, len(wc.Words))
pc.longestWordLen = wc.LongestWordLen
pc.idx = wc.RootIndex
pc.wordCount = wc.WordCount
j := 0
for k := range wc.Words {
pc.words[j] = k
j++
}
sort.Sort(byLength(pc.words))
}(&printColumns[i], wc)
}
wg.Wait()
}
func findLongestWord() {
for _, pc := range printColumns {
if pc.longestWordLen > longestWordLen {
longestWordLen = pc.longestWordLen
}
}
}
func pad(length int) int {
return (longestWordLen - length) + spaceBetweenColumns
}
func printColumnHeaders(start int, end int) {
c := color.New(color.FgCyan, color.Bold)
for i := start; i < end; i++ {
str := fmt.Sprintf("[%d]", i+1)
c.Printf("%s%*s", str, pad(len(str)), "")
}
fmt.Println()
}
func getLongestColumn(start int, end int) int {
var count int
for i := start; i < end; i++ {
if printColumns[i].wordCount > count {
count = printColumns[i].wordCount
}
}
return count
}
func printWord(word string, endColumn bool) {
var padding int
if !endColumn {
padding = pad(len(word))
}
str := fmt.Sprintf("%s%*s", word, padding, "")
if strings.ContainsAny(word, configuration.Config.HighlightLetters) {
c := color.New(color.FgRed, color.Bold)
		c.Print(str)
} else {
		fmt.Print(str)
}
}
//PrintWords ...
func PrintWords(board *board.Board, wordColumns []wordcolumn.WordColumn) {
makePrintColumns(board, wordColumns)
findLongestWord()
colsPerRow := configuration.Config.WordColumnsPerRow
maxWordsPerRow := configuration.Config.MaxWordsPerRow
colHeaderStart := 0
colHeaderEnd := colHeaderStart + colsPerRow
for i := 0; i < board.Size; i += colsPerRow {
printColumnHeaders(colHeaderStart, colHeaderEnd)
longestColumn := getLongestColumn(colHeaderStart, colHeaderEnd)
numPrintedRows := 0
for j := 0; j < longestColumn; j++ {
if numPrintedRows == maxWordsPerRow {
numPrintedRows = 0
fmt.Println()
printColumnHeaders(colHeaderStart, colHeaderEnd)
}
numPrintedRows++
for k := colHeaderStart; k < colHeaderEnd; k++ {
if printColumns[k].wordCount > j {
printWord(printColumns[k].words[j], k == (colHeaderEnd-1))
} else {
fmt.Printf("%*s", pad(0), "")
}
}
fmt.Println()
}
fmt.Println()
colHeaderStart += colsPerRow
colHeaderEnd = colHeaderStart + colsPerRow
if colHeaderEnd >= board.Size {
colHeaderEnd = board.Size
}
}
for i := 0; i < (longestWordLen+spaceBetweenColumns)*colsPerRow; i++ {
fmt.Print("+")
}
fmt.Printf("\n\n")
}
|
import os
import sqlite3

def connection_and_cursor(path_to_db):
if not os.path.exists(path_to_db):
p, db_name = os.path.split(path_to_db)
if p:
plist = p.split(os.path.sep)
for i in range(len(plist)):
dirpath = os.path.sep.join(plist[:i+1])
if dirpath and not os.path.exists( dirpath ):
os.mkdir(dirpath)
conn = sqlite3.connect(path_to_db, detect_types=sqlite3.PARSE_DECLTYPES)
cur = conn.cursor()
return conn,cur |
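The directory-creation loop in `connection_and_cursor` walks each path component by hand; on Python 3.2+ the whole dance collapses into a single `os.makedirs(..., exist_ok=True)` call. A minimal equivalent sketch (the function name is ours, not from the original):

```python
import os
import sqlite3

def connection_and_cursor_simple(path_to_db):
    """Open (and if needed create) an SQLite db, creating parent dirs.

    Equivalent to connection_and_cursor above, but os.makedirs creates
    every missing parent directory in one call.
    """
    parent = os.path.dirname(path_to_db)
    if parent:
        os.makedirs(parent, exist_ok=True)  # no-op if it already exists
    conn = sqlite3.connect(path_to_db, detect_types=sqlite3.PARSE_DECLTYPES)
    return conn, conn.cursor()
```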
import os
from pathlib import Path

import pandas as pd

def extract_labels(args):
gt = pd.read_csv(args.annotation_path, header=None)
annotation_path = Path(args.annotation_path)
dataset_dir = annotation_path.parent
labels_dir = os.path.join(str(dataset_dir), "xmls")
if not os.path.exists(labels_dir):
os.mkdir(labels_dir)
print("==================== Start Creating xml Files! ====================")
for frame_number in range(4501):
frame = gt[gt[1] == frame_number]
x_min = list(frame[8])
y_min = list(frame[9])
x_max = list(frame[10])
y_max = list(frame[11])
bboxes = [[xmin, ymin, xmax, ymax] for xmin, ymin, xmax, ymax in zip(x_min, y_min, x_max, y_max)]
bboxes = list(map(prepare_boxes, bboxes))
xml = convert_to_xml(bboxes, frame_number)
xml_file_name = os.path.join(labels_dir, str(frame_number) + ".xml")
xml.write(xml_file_name, pretty_print=True)
print("annotation number {0} prepared".format(frame_number), end="\r") |
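The core of `extract_labels` is filtering the annotation table to one frame and zipping the four corner columns (positions 8–11) into `[xmin, ymin, xmax, ymax]` boxes. The same extraction can be sketched on plain lists of rows, without pandas (the toy rows and `boxes_for_frame` name are ours, but the positional layout matches the function above):

```python
# Toy annotation rows: index 1 holds the frame number,
# indices 8-11 hold the box corners, other columns are padding.
rows = [
    [0, 0, 0, 0, 0, 0, 0, 0, 10, 20, 30, 40],
    [0, 0, 0, 0, 0, 0, 0, 0, 50, 60, 70, 80],
    [0, 1, 0, 0, 0, 0, 0, 0, 15, 25, 35, 45],
]

def boxes_for_frame(rows, frame_number):
    """Collect the [xmin, ymin, xmax, ymax] boxes for one frame."""
    frame = [r for r in rows if r[1] == frame_number]
    return [r[8:12] for r in frame]
```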
// api/src/services/session.ts
import { getStoreFromId, StoreRevenue } from '@honesty-store/store';
import { CardDetails, getCardDetails } from '@honesty-store/topup';
import { AUTO_REFUND_PERIOD, Transaction } from '@honesty-store/transaction';
import { userRegistered } from '@honesty-store/user';
import { StoreItem, storeItems } from '../services/store';
import { getExpandedTransactionsAndAccount } from '../services/transaction';
export interface UserSessionData {
balance: number;
transactions: Transaction[];
cardDetails: CardDetails;
features: any;
emailAddress?: string;
creditLimit: number;
id: string;
flags: any;
}
export interface UserRevenue {
startInclusive: number;
total: number;
}
export interface StoreSessionData {
code: string;
items: StoreItem[];
userRevenue: UserRevenue[];
}
export interface SessionData {
user: UserSessionData;
store: StoreSessionData;
refreshToken: string;
accessToken: string;
autoRefundPeriod: number;
}
const getUserSessionData = async (key, user): Promise<UserSessionData> => {
const { id, accountId, emailAddress, flags } = user;
const { balance = 0, transactions = [], creditLimit = 0 } = accountId
? await getExpandedTransactionsAndAccount({ key, accountID: accountId })
: {};
let cardDetails = null;
if (userRegistered(user)) {
try {
cardDetails = await getCardDetails(key, accountId);
} catch (e) {
if (e.code !== 'NoCardDetailsPresent') {
throw e;
}
/* User is registered but has no card details - this means they managed
* to sign up but weren't successful in their initial topup. */
}
}
return {
id,
emailAddress,
balance,
transactions,
cardDetails,
creditLimit,
features: {},
flags
};
};
const getRecentUserRevenue = (storeRevenue: StoreRevenue[], userId: string): UserRevenue[] => {
const existingUserRevenue: UserRevenue[] = storeRevenue
.map(({ startInclusive, seller }) => ({ startInclusive, total: seller[userId] || 0 }));
const today = new Date();
const expectedDates = [
Date.UTC(today.getUTCFullYear(), today.getUTCMonth(), 1),
Date.UTC(today.getUTCFullYear(), today.getUTCMonth() - 1, 1),
Date.UTC(today.getUTCFullYear(), today.getUTCMonth() - 2, 1),
Date.UTC(today.getUTCFullYear(), today.getUTCMonth() - 3, 1),
Date.UTC(today.getUTCFullYear(), today.getUTCMonth() - 4, 1),
Date.UTC(today.getUTCFullYear(), today.getUTCMonth() - 5, 1)
];
return expectedDates.map((timestamp) => {
return existingUserRevenue.find(({ startInclusive }) => startInclusive === timestamp) ||
({ startInclusive: timestamp, total: 0 });
});
};
const getStoreSessionData = async (key, user): Promise<StoreSessionData> => {
const { defaultStoreId, id: userId } = user;
const [{ code, revenue }, items] = await Promise.all([
getStoreFromId(key, defaultStoreId),
storeItems(key, defaultStoreId, userId)
]);
return {
items,
code,
userRevenue: getRecentUserRevenue(revenue, userId)
};
};
export const getSessionData = async (key, { user }): Promise<SessionData> => {
const { accessToken, refreshToken } = user;
const [userProfile, store] = await Promise.all([
getUserSessionData(key, user),
getStoreSessionData(key, user)
]);
return {
autoRefundPeriod: AUTO_REFUND_PERIOD,
user: userProfile,
store,
refreshToken,
accessToken
};
};
|
def rand_stim(self, stimshape=None, batch_size=None):
batch_size = batch_size or self.batch_size or 100
length, height = stimshape or self.stimshape
X = np.zeros((length*height, batch_size))
for i in range(batch_size):
which = np.random.randint(self.data.shape[-1])
nrows, ncols = self.data[:, :, which].shape
row = self.buffer + int(np.ceil((nrows-length-2*self.buffer)*np.random.rand()))
            col = self.buffer + int(np.ceil((ncols-height-2*self.buffer)*np.random.rand()))
animage = self.data[row:row+length,
col:col+height,
which]
animage = animage.reshape(self.stimsize)
if self.patchwisenorm:
animage -= animage.mean()
animage /= animage.std()
X[:, i] = animage
return X |
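The row/col sampling in `rand_stim` keeps a `buffer`-pixel margin on every side of the image (note the fix above: the column bound must use `ncols`, not `nrows`). The index arithmetic can be sketched with the standard library alone (function name and parameters are ours):

```python
import random

def random_patch_origin(nrows, ncols, length, height, buffer):
    """Pick the top-left corner of a length x height patch that stays at
    least `buffer` pixels away from every edge of an nrows x ncols image.
    Uses int() truncation rather than np.ceil, which keeps the corner
    strictly inside the valid range."""
    row = buffer + int((nrows - length - 2 * buffer) * random.random())
    col = buffer + int((ncols - height - 2 * buffer) * random.random())
    return row, col
```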
//package algorithm.div617;
import java.util.Scanner;
/**
* Created with: IntelliJ IDEA
* Function:
* User: [email protected]
* Date: 2020-02-07 15:44:30
*/
public class ArrayWithOddSum {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int n = Integer.parseInt(sc.nextLine());
while(n-- > 0){
int len = Integer.parseInt(sc.nextLine());
int[] tmp = new int[len];
String[] temp = sc.nextLine().split(" ");
for(int i = 0; i < len; i++){
tmp[i] = Integer.parseInt(temp[i]);
}
solve(tmp);
}
}
    public static void solve(int[] arr){
        int oddNum = 0;
        int evenNum = 0;
        for(int a : arr){
            if(a % 2 == 0){
                evenNum++;
            }else oddNum++;
        }
        if(oddNum == 0 && evenNum > 0) System.out.println("NO");
        else if(evenNum == 0 && oddNum % 2 == 0) System.out.println("NO");
        else System.out.println("YES");
    }
}
|
/*
* Copyright 2019 <NAME>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package me.masstrix.eternalnature;
import me.masstrix.eternalnature.command.EternalCommand;
import me.masstrix.eternalnature.command.HydrateCommand;
import me.masstrix.eternalnature.command.NatureCommand;
import me.masstrix.eternalnature.config.Configurable;
import me.masstrix.eternalnature.config.Configuration;
import me.masstrix.eternalnature.core.metric.Metrics;
import me.masstrix.eternalnature.core.temperature.TemperatureIcon;
import me.masstrix.eternalnature.external.PlaceholderSupport;
import me.masstrix.eternalnature.listeners.*;
import me.masstrix.eternalnature.log.DebugLogger;
import me.masstrix.eternalnature.trigger.TriggerManager;
import me.masstrix.eternalnature.util.BuildInfo;
import me.masstrix.eternalnature.util.StringUtil;
import me.masstrix.lang.langEngine.LanguageEngine;
import me.masstrix.version.MinecraftRelease;
import me.masstrix.version.MinecraftVersion;
import me.masstrix.version.checker.VersionCheckInfo;
import me.masstrix.version.checker.VersionChecker;
import org.bukkit.Bukkit;
import org.bukkit.command.ConsoleCommandSender;
import org.bukkit.command.PluginCommand;
import org.bukkit.event.Listener;
import org.bukkit.plugin.PluginManager;
import org.bukkit.plugin.java.JavaPlugin;
import java.io.*;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.logging.Level;
public class EternalNature extends JavaPlugin {
private static final MinecraftVersion REQUIRED_VER = new MinecraftVersion("1.17");
private static final MinecraftVersion LATEST_BUILD = new MinecraftVersion("1.17.1");
private static boolean serverVerUnTested;
private EternalEngine engine;
private LanguageEngine languageEngine;
private VersionCheckInfo versionCheckInfo = null;
private DebugLogger debugLogger;
private TriggerManager triggerManager;
private Configuration playerCfg;
private Configuration config;
public EternalEngine getEngine() {
return engine;
}
public VersionCheckInfo getVersionInfo() {
return versionCheckInfo;
}
/**
* @return the language engine.
*/
public LanguageEngine getLanguageEngine() {
return languageEngine;
}
public Configuration getPlayerConfig() {
return playerCfg;
}
public Configuration getRootConfig() {
return config;
}
public DebugLogger getDebugLogger() {
return debugLogger;
}
public TriggerManager getTriggerManager() {
return triggerManager;
}
    public static boolean isServerVerUnTested() {
return serverVerUnTested;
}
@Override
public void onLoad() {
BuildInfo.load(this);
debugLogger = new DebugLogger(this);
debugLogger.info("----------------------------------------");
debugLogger.info("Plugin Information");
debugLogger.info("Name: " + getDescription().getName());
debugLogger.info("Version: " + getDescription().getVersion());
debugLogger.info("Build Kind: " + BuildInfo.getBuildKind());
debugLogger.info("----------------------------------------");
        // This will make sure only 5 days' worth of logs are kept. Any log files that are
        // older than 5 days will be deleted.
        debugLogger.cleanOldLogs(5);
}
@Override
public void onEnable() {
BuildInfo.load(this);
debugLogger.info("Enabling Plugin");
MinecraftVersion serverVer = MinecraftRelease.getServerVersion();
// Make sure the server is new enough to run the plugin
if (serverVer.isBehind(REQUIRED_VER)) {
getLogger().warning("Unsupported version!"
+ " This version requires the server "
+ "to be running at least "
+ REQUIRED_VER.getName());
debugLogger.warning("Server is running a not supported version. Disabling plugin.");
debugLogger.info("--------------------------------------");
debugLogger.info("server version: " + serverVer);
debugLogger.info("required version: " + REQUIRED_VER);
debugLogger.info("--------------------------------------");
getPluginLoader().disablePlugin(this);
return;
}
// If the server version is ahead of the latest build version send a message saying
// that the plugin has not been tested on this version and it may not function as
// expected.
if (serverVer.isAhead(LATEST_BUILD)) {
getLogger().warning(
"This version of minecraft has not been tested yet for this " +
"plugin and so there might be issues. " +
"If you have issues running the plugin on your server support will not be given.");
serverVerUnTested = true;
}
config = new Configuration(this, "config").create(true);
playerCfg = new Configuration(this, "players").create(false);
// Init language engine
File langFolder = new File(getDataFolder(), "lang");
languageEngine = new LanguageEngine(langFolder, "en").setLogger(getLogger());
writeLangFiles(false);
// Load languages
languageEngine.loadLanguages();
languageEngine.setLanguage(getConfig().getString("general.language"));
TemperatureIcon.reloadLang(languageEngine);
engine = new EternalEngine(this);
registerCommands(new HydrateCommand(this), new NatureCommand(this));
registerListeners(new MoveListener(this), new ConnectionListener(this),
new ConsumeListener(this), new BlockListener(this),
new ItemListener(this), new DeathListener(this),
new InteractListener(this));
// Only check for updates if enabled.
if (getConfig().getBoolean("general.check-for-updates")) {
new VersionChecker(PluginData.RESOURCE_ID, getDescription().getVersion()).run(info -> {
if (info.isUnknown()) {
getLogger().log(Level.WARNING, "Failed to check plugin version. Are you running offline?");
debugLogger.warning("Failed to check version.");
}
else if (info.isDev()) {
getLogger().log(Level.WARNING, "You are running a development build. Expect extra bugs.");
}
else if (info.isLatest()) {
getLogger().log(Level.INFO, "Plugin is up to date.");
}
else if (info.isBehind()) {
ConsoleCommandSender sender = Bukkit.getConsoleSender();
sender.sendMessage(StringUtil.color(""));
sender.sendMessage(StringUtil.color("&e New update available for " + getDescription().getName()));
sender.sendMessage(StringUtil.color(" Current version: &e" + info.getCurrent().getName()));
sender.sendMessage(StringUtil.color(" Latest version: &e" + info.getLatest().getName()));
sender.sendMessage(StringUtil.color(""));
}
this.versionCheckInfo = info;
debugLogger.info("Finished checking plugin version.");
});
}
// Enable metrics
new Metrics(this);
// Register placeholders if plugin is installed
if (Bukkit.getPluginManager().getPlugin("PlaceholderAPI") != null) {
new PlaceholderSupport(this).register();
}
triggerManager = new TriggerManager(this);
triggerManager.load();
engine.start();
config.subscribe(TemperatureIcon.BURNING);
config.save();
config.reload();
}
/**
* Writes all internal .lang files to external lang folder to be edited.
*
* @param override should this override any .lang files that have
* already been created.
*/
public void writeLangFiles(boolean override) {
File langFolder = new File(getDataFolder(), "lang");
// Save internal resource.lang files externally
String[] langFiles = new String[] {"en", "es", "zh_CN"};
for (String s : langFiles) {
File destination = new File(langFolder, s + ".lang");
if (!override && destination.exists()) continue;
try {
URL url = getClass().getResource("/lang/" + s + ".lang");
debugLogger.info("Writing language file " + s + ".lang");
InputStreamReader streamReader = new InputStreamReader(url.openStream(), StandardCharsets.UTF_8);
BufferedReader reader = new BufferedReader(streamReader);
//List<String> lines = Files.readAllLines(path);
BufferedWriter writer = new BufferedWriter(new FileWriter(destination));
// Write file data
for (String line; (line = reader.readLine()) != null;) {
writer.write(line);
writer.newLine();
}
writer.flush();
writer.close();
} catch (IOException e) {
e.printStackTrace();
debugLogger.error("Failed to write file.", e);
}
}
}
@Override
public void onDisable() {
if (engine != null) engine.shutdown();
triggerManager.clear();
debugLogger.info("Shutdown Plugin");
debugLogger.close();
}
private void registerCommands(EternalCommand... commands) {
for (EternalCommand cmd : commands) {
PluginCommand pc = Bukkit.getPluginCommand(cmd.getName());
if (pc == null) continue;
pc.setExecutor(cmd);
pc.setTabCompleter(cmd);
}
}
protected void registerListeners(Listener... listeners) {
PluginManager manager = Bukkit.getPluginManager();
for (Listener l : listeners) {
manager.registerEvents(l, this);
if (Configurable.class.isAssignableFrom(l.getClass())) {
config.subscribe((Configurable) l);
}
}
}
}
|
def process(source, target, rdfsonly, base=None, logger=logging):
for link in source.match():
s, p, o = link[:3]
if s == (base or '') + '@docheader': continue
if p in RESOURCE_MAPPING: p = RESOURCE_MAPPING[p]
if o in RESOURCE_MAPPING: o = RESOURCE_MAPPING[o]
if p == VERSA_BASEIRI + 'refines':
tlinks = list(source.match(s, TYPE_REL))
if tlinks:
if tlinks[0][TARGET] == VERSA_BASEIRI + 'Resource':
p = I(RDFS_NAMESPACE + 'subClassOf')
elif tlinks[0][TARGET] == VERSA_BASEIRI + 'Property':
p = I(RDFS_NAMESPACE + 'subPropertyOf')
if p == VERSA_BASEIRI + 'properties':
suri = I(iri.absolutize(s, base)) if base else s
target.add((URIRef(o), URIRef(RDFS_NAMESPACE + 'domain'), URIRef(suri)))
continue
if p == VERSA_BASEIRI + 'value':
if o not in ['Literal', 'IRI']:
ouri = I(iri.absolutize(o, base)) if base else o
target.add((URIRef(s), URIRef(RDFS_NAMESPACE + 'range'), URIRef(ouri)))
continue
s = URIRef(s)
p = RDF.type if p == TYPE_REL else URIRef(p)
o = URIRef(o) if isinstance(o, I) else Literal(o)
if not rdfsonly or p.startswith(RDF_NAMESPACE) or p.startswith(RDFS_NAMESPACE):
target.add((s, p, o))
return |
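Two pieces of `process` are worth isolating: terms are first passed through `RESOURCE_MAPPING` (falling back to the term itself), and a `refines` link is translated to `subClassOf` or `subPropertyOf` depending on whether the subject is typed as a Resource or a Property. A dependency-free sketch of that branch (the `VERSA` base IRI here is a placeholder, not the real `VERSA_BASEIRI` value):

```python
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
VERSA = "http://example.org/versa/"  # placeholder for VERSA_BASEIRI

def remap(term, mapping):
    """Return the mapped form of a term if one exists, else the term itself,
    as process() does for both predicates and objects."""
    return mapping.get(term, term)

def refines_predicate(subject_type):
    """Pick the RDFS predicate that `refines` translates to: classes
    refine via subClassOf, properties via subPropertyOf."""
    if subject_type == VERSA + "Resource":
        return RDFS + "subClassOf"
    if subject_type == VERSA + "Property":
        return RDFS + "subPropertyOf"
    return None
```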
// besedin-twenty-one/src/app/services/catalog.service.ts
import { Injectable } from '@angular/core';
import {DatabaseService} from './database.service';
@Injectable({
providedIn: 'root'
})
export class CatalogService extends DatabaseService {
  data: any[];
lastActionIsDone: boolean;
private dataIsLoaded = false;
async getDevices() {
try {
this.data = await this.getData('devices');
this.dataIsLoaded = true;
} catch (err) {
alert('Unable to access database. Try again.');
}
}
async addDevice(data) {
this.lastActionIsDone = false;
if (this.dataIsLoaded) {
try {
await this.postData('devices', data);
await this.getDevices();
this.lastActionIsDone = true;
} catch (err) {
alert('Connection to database was lost. Trying to reconnect again.');
await this.getDevices();
}
} else {
await this.getDevices();
if (this.data !== undefined) {
await this.addDevice(data);
}
}
}
}
|
/**
* Returns user with given id.
*
* @param id Id of the needed object.
* @return Object with given id or <tt>null</tt> if object not found.
*/
@Override
public User findById(final int id) {
User result = null;
try (Connection connection = CONNECTION_POOL.getConnection();
PreparedStatement find = connection.prepareStatement(QUERIES.get("findUserById"))
) {
result = this.dbSelectUserById(find, id);
} catch (SQLException e) {
LOG.error(String.format("SQL exception: %s", e.getMessage()));
}
return result;
} |
import base64

import bcl

def key_base64() -> str:
return base64.standard_b64encode(bcl.symmetric.secret()).decode('utf-8') |
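Note that `.decode('utf-8')` makes the result a `str`, not `bytes` (base64 output is ASCII-safe, so the decode cannot fail). The encode step can be exercised without the third-party `bcl` library by substituting `os.urandom` for `bcl.symmetric.secret()` — a sketch under that assumption:

```python
import base64
import os

def key_base64_stub() -> str:
    """Same encode step as key_base64 above, with os.urandom standing in
    for bcl.symmetric.secret() (a 32-byte key) so the sketch has no
    third-party dependency."""
    raw = os.urandom(32)
    return base64.standard_b64encode(raw).decode('utf-8')
```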
def model_fn(features, labels, mode, params):
if 'initializer' in params:
initializer = configurable.Configurable.initialize(
params['initializer'])
tf.logging.info('Using %s initializer', params['initializer'])
tf.get_variable_scope().set_initializer(initializer)
else:
tf.logging.info(
'Not setting a global initializer. TF defaults to Xavier.')
model_instance = cls(mode=mode, config=params, dataset=dataset)
predictions = model_instance(features)
if labels:
loss = model_instance.loss(
predictions=predictions,
targets=labels,
multi_answer_loss=params['train_with_multi_answer_loss'])
else:
assert mode == MODE_KEYS.PREDICT
loss = None
optimizer = configurable.Configurable.load(params['optimizer'])
optimizer_instance = optimizer(config=params['optimizer'])
if mode == MODE_KEYS.TRAIN:
train_op = optimizer_instance(loss, train_steps)
else:
train_op = None
if params['init_checkpoint']:
latest_checkpoint = tf.train.latest_checkpoint(model_dir)
if not latest_checkpoint:
misc_util.init_from_checkpoint(
params['init_checkpoint'], params['fn'])
else:
tf.logging.info('Latest checkpoint %s exists. No init from %s.' % (
latest_checkpoint, params['init_checkpoint']))
tf.logging.info('mode: %s' % mode)
tf.logging.info('params: %s' % params)
if params['optimizer']['ema_decay'] != 1.0 and mode != MODE_KEYS.TRAIN:
ema = optimizer_instance.exponential_moving_average
trainable_vars, _, has_partition = misc_util.get_trainable_vars(
exclude_pattern=params['optimizer']['nograd_var'])
variable_map = ema.variables_to_restore(trainable_vars)
if has_partition:
_update_partition_info(
variable_map, params['optimizer']['nograd_var'])
saver = tf.train.Saver(variable_map)
scaffold = tf.train.Scaffold(saver=saver)
else:
scaffold = None
eval_metric_ops = None
if mode in [MODE_KEYS.TRAIN, MODE_KEYS.EVAL]:
eval_metric_ops = model_instance.metrics(
predictions=predictions, targets=labels)
if use_estimator:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops,
scaffold=scaffold)
else:
return tf.contrib.learn.ModelFnOps(
mode=mode,
predictions=predictions,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops,
scaffold=scaffold) |
// Copyright 2017-2019 <NAME>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// This file is a part of scnlib:
// https://github.com/eliaskosunen/scnlib
#ifndef SCN_DETAIL_RANGES_STREAM_H
#define SCN_DETAIL_RANGES_STREAM_H
#include "../context.h"
#include "../erased_stream.h"
#include "config.h"
namespace scn {
namespace ranges {
SCN_BEGIN_NAMESPACE
namespace detail {
template <typename CharT>
class erased_range_stream_base {
public:
using char_type = CharT;
erased_range_stream_base(const erased_range_stream_base&) =
delete;
erased_range_stream_base& operator=(
const erased_range_stream_base&) = delete;
erased_range_stream_base(erased_range_stream_base&&) = default;
erased_range_stream_base& operator=(
erased_range_stream_base&&) = default;
virtual ~erased_range_stream_base() = default;
virtual size_t chars_read() const = 0;
protected:
erased_range_stream_base() = default;
};
template <typename Stream>
class erased_range_stream_impl
: public erased_range_stream_base<typename Stream::char_type> {
using base = ::scn::detail::erased_stream_impl<Stream>;
public:
using char_type = typename base::char_type;
erased_range_stream_impl(Stream& s)
: m_stream(std::addressof(s))
{
}
size_t chars_read() const override
{
return m_stream->chars_read();
}
private:
Stream* m_stream;
};
} // namespace detail
template <typename CharT, bool Sized>
class basic_erased_range_stream
: public std::conditional_t<Sized,
erased_sized_stream<CharT>,
erased_stream<CharT>> {
using base = std::conditional_t<Sized,
erased_sized_stream<CharT>,
erased_stream<CharT>>;
public:
using char_type = CharT;
using is_sized_stream = std::integral_constant<bool, Sized>;
template <typename Stream>
basic_erased_range_stream(Stream s)
: base(std::move(s)),
m_stream(::scn::detail::make_unique<
detail::erased_range_stream_impl<Stream>>(
base::template get_as<Stream>().get()))
{
}
size_t chars_read() const
{
return m_stream->chars_read();
}
private:
::scn::detail::unique_ptr<detail::erased_range_stream_base<CharT>>
m_stream;
};
template <typename CharT>
using erased_range_stream = basic_erased_range_stream<CharT, false>;
template <typename CharT>
using erased_sized_range_stream =
basic_erased_range_stream<CharT, true>;
SCN_CLANG_PUSH
SCN_CLANG_IGNORE("-Wpadded")
template <typename Range>
class basic_bidirectional_range_stream : public stream_base {
public:
using range_type = Range;
using underlying_iterator = SCN_RANGES_NS::iterator_t<range_type>;
using underlying_sentinel = SCN_RANGES_NS::sentinel_t<range_type>;
using iterator = underlying_iterator;
using char_type = SCN_RANGES_NS::value_type_t<iterator>;
constexpr basic_bidirectional_range_stream(range_type& r) noexcept
: m_range(std::addressof(r)),
m_begin(SCN_RANGES_NS::begin(*m_range)),
m_next(m_begin)
{
}
constexpr expected<char_type> read_char() noexcept
{
if (m_next == SCN_RANGES_NS::end(*m_range)) {
return error(error::end_of_stream, "EOF");
}
auto ch = *m_next;
++m_next;
return ch;
}
constexpr error putback(char_type) noexcept
{
SCN_EXPECT(m_next != m_begin);
--m_next;
return {};
}
size_t chars_read() const noexcept
{
return static_cast<size_t>(
std::distance(SCN_RANGES_NS::begin(*m_range), m_next));
}
protected:
range_type* m_range;
iterator m_begin, m_next;
};
SCN_CLANG_POP
template <typename Range>
class basic_sized_bidirectional_range_stream
: public basic_bidirectional_range_stream<Range> {
using base = basic_bidirectional_range_stream<Range>;
public:
using char_type = typename base::char_type;
using is_sized_stream = std::true_type;
basic_sized_bidirectional_range_stream(Range& r) : base(r) {}
constexpr error read_sized(span<char_type> s) noexcept
{
if (chars_to_read() < s.size()) {
return error(error::end_of_stream,
"Cannot complete read_sized: EOF encountered");
}
const auto ssize = static_cast<std::ptrdiff_t>(s.size());
std::copy(base::m_next, base::m_next + ssize, s.begin());
base::m_next += ssize;
return {};
}
constexpr error putback_n(size_t n) noexcept
{
auto sn = static_cast<std::ptrdiff_t>(n);
if (std::distance(base::m_begin, base::m_next) < sn) {
return error(error::invalid_argument,
"Cannot putback more than chars read");
}
base::m_next -= sn;
return {};
}
constexpr error set_roll_back() noexcept
{
base::m_begin = base::m_next;
return {};
}
constexpr error roll_back() noexcept
{
base::m_next = base::m_begin;
return {};
}
constexpr size_t chars_to_read() const noexcept
{
return static_cast<size_t>(std::distance(
base::m_next, SCN_RANGES_NS::end(*base::m_range)));
}
constexpr error skip(size_t n) noexcept
{
if (chars_to_read() < n) {
base::m_next = SCN_RANGES_NS::end(*base::m_range);
return error(error::end_of_stream, "EOF");
}
base::m_next += static_cast<std::ptrdiff_t>(n);
return {};
}
constexpr error skip_all() noexcept
{
base::m_next = SCN_RANGES_NS::end(*base::m_range);
return {};
}
};
template <typename Range>
class basic_forward_range_stream : public stream_base {
public:
using range_type = Range;
using underlying_iterator = SCN_RANGES_NS::iterator_t<range_type>;
using underlying_sentinel = SCN_RANGES_NS::sentinel_t<range_type>;
using iterator = underlying_iterator;
using char_type = SCN_RANGES_NS::value_type_t<iterator>;
constexpr basic_forward_range_stream(range_type& r)
: m_range(std::addressof(r)),
m_begin(SCN_RANGES_NS::begin(*m_range)),
m_next(m_begin)
{
}
expected<char_type> read_char() noexcept
{
if (m_rollback.size() > 0) {
auto top = m_rollback.back();
m_rollback.pop_back();
return top;
}
if (m_begin == SCN_RANGES_NS::end(*m_range)) {
return error(error::end_of_stream, "EOF");
}
auto ch = *m_begin;
++m_begin;
return ch;
}
error putback(char_type ch)
{
m_rollback.push_back(ch);
return {};
}
size_t chars_read() const noexcept
{
return static_cast<size_t>(
std::distance(SCN_RANGES_NS::begin(*m_range), m_next));
}
protected:
range_type* m_range;
iterator m_begin, m_next;
::scn::detail::small_vector<char_type, 64> m_rollback{};
};
namespace detail {
CPP_template(typename R)(
requires SCN_RANGES_NS::BidirectionalRange<R> &&
!SCN_RANGES_NS::SizedRange<R>)
basic_bidirectional_range_stream<R> make_underlying_stream(R& r)
{
return {r};
}
CPP_template(typename R)(
requires SCN_RANGES_NS::BidirectionalRange<R>&&
SCN_RANGES_NS::SizedRange<R>)
basic_sized_bidirectional_range_stream<
R> make_underlying_stream(R& r)
{
return {r};
}
CPP_template(typename R)(requires SCN_RANGES_NS::ForwardRange<R> &&
!SCN_RANGES_NS::BidirectionalRange<R>)
basic_forward_range_stream<R> make_underlying_stream(R& r)
{
return {r};
}
template <typename R>
struct erased_stream_for;
template <typename R>
struct erased_stream_for<basic_bidirectional_range_stream<R>> {
template <typename CharT>
using type = erased_range_stream<CharT>;
};
template <typename R>
struct erased_stream_for<
basic_sized_bidirectional_range_stream<R>> {
template <typename CharT>
using type = erased_sized_range_stream<CharT>;
};
template <typename R>
struct erased_stream_for<basic_forward_range_stream<R>> {
template <typename CharT>
using type = erased_range_stream<CharT>;
};
} // namespace detail
template <typename R,
typename CharT =
SCN_RANGES_NS::value_type_t<SCN_RANGES_NS::iterator_t<R>>>
auto make_stream(R& r)
{
return detail::make_underlying_stream(r);
}
CPP_template(typename R)(
requires SCN_RANGES_NS::Range<R>) auto erase_stream(R& r)
{
using CharT =
SCN_RANGES_NS::value_type_t<SCN_RANGES_NS::iterator_t<R>>;
auto s = make_stream(r);
return typename detail::erased_stream_for<decltype(
s)>::template type<CharT>(std::move(s));
}
SCN_END_NAMESPACE
} // namespace ranges
} // namespace scn
#endif // SCN_DETAIL_RANGES_STREAM_H
|
/**
 This class implements a checkout mechanism that allows
 threads to take an exclusive "checkout" on a key value.
 Threads that attempt to check out a key that is already checked
 out will wait until the key is checked back in and available.
 Threads that wait beyond the timeout period will throw an exception.
*/
public final class AWCheckoutManager extends AWBaseObject
{
private static final int MaxWaitingThreadsPerKey = 5;
private static final long MaxThreadWaitMillis = 5 * 60 * 1000;
private final AWCountingHashtable _waitingThreadForKeyCount =
new AWCountingHashtable();
private final Map<Object, Thread> _checkedOutKeys = MapUtil.map();
private int _maxWaitingThreadsPerKey = MaxWaitingThreadsPerKey;
private long _maxThreadWaitMillis = MaxThreadWaitMillis;
private String _instanceName;
public AWCheckoutManager (String instanceName)
{
super();
_instanceName = instanceName;
}
///////////////
    // Thresholds
///////////////
public synchronized void setMaxWaitingThreads (int maxWaitingThreadsPerKey)
{
_maxWaitingThreadsPerKey = maxWaitingThreadsPerKey;
}
public int maxWaitingThreads ()
{
return _maxWaitingThreadsPerKey;
}
public synchronized void setMaxThreadWaitMillis (long maxThreadWaitMillis)
{
_maxThreadWaitMillis = maxThreadWaitMillis;
}
public long maxThreadWaitMillis ()
{
return _maxThreadWaitMillis;
}
///////////////
// Checkin/out
///////////////
public synchronized void checkin (Object key)
{
_checkedOutKeys.remove(key);
if (_waitingThreadForKeyCount.count(key) != 0) {
// only call notify if there are
// threads actually waiting.
this.notifyAll();
}
}
public synchronized void checkout (Object key)
{
if (_checkedOutKeys.get(key) != null) {
Assert.that(_checkedOutKeys.get(key) != Thread.currentThread(),
"Recursive call to AWCheckoutManager.checkout() detected.");
// We only enter this if a thread already has the key checked out.
if (_waitingThreadForKeyCount.count(key) >= _maxWaitingThreadsPerKey) {
Log.aribaweb.warning(9370, ThreadDebugState.makeString());
throw new AWMaxWaitingThreadException("instance " +
_instanceName + " key " + key);
}
_waitingThreadForKeyCount.add(key);
try {
long checkoutDeadline = System.currentTimeMillis() + _maxThreadWaitMillis;
Thread checkedOutThread = _checkedOutKeys.get(key);
while (checkedOutThread != null) {
waitForTimeout();
if (System.currentTimeMillis() > checkoutDeadline) {
throwThreadTimeoutException(key, checkedOutThread);
}
checkedOutThread = _checkedOutKeys.get(key);
}
}
finally {
_waitingThreadForKeyCount.remove(key);
}
}
_checkedOutKeys.put(key, Thread.currentThread());
}
private void throwThreadTimeoutException (Object key, Thread checkedOutThread)
{
String stackStr = "No Stack";
StackTraceElement[] stack = checkedOutThread.getStackTrace();
if (stack != null) {
FastStringBuffer sb = new FastStringBuffer();
for (StackTraceElement line: stack) {
sb.append("\n\t");
sb.append(line);
}
stackStr = sb.toString();
}
String message = Fmt.S("instance: %s, key: %s, checked out thread stack: %s",
_instanceName, key, stackStr);
throw new AWThreadTimeoutException(message);
}
public synchronized boolean isCheckedOut (Object key)
{
if (key == null) {
return false;
}
return _checkedOutKeys.get(key) != null;
}
private void waitForTimeout ()
{
// This will wait until _maxThreadWaitMillis or
// until notified from the checkin method
try {
this.wait(_maxThreadWaitMillis);
}
catch (InterruptedException exception) {
// swallow the exception
exception = null;
}
}
} |
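The checkin/checkout discipline above — a synchronized map of key to owning thread, a deadline-bounded `wait()`, and `notifyAll()` on checkin — can be illustrated in isolation. The sketch below is a hypothetical, stripped-down `SimpleCheckoutManager`, not the actual AWCheckoutManager API; it omits the per-key waiter counting, the waiting-thread cap, and the stack-trace reporting of the real class.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the checkout pattern: a single monitor guards the map,
// waiters block on it, and checkin wakes them all so each re-checks its key.
class SimpleCheckoutManager {
    private final Map<Object, Thread> checkedOut = new HashMap<>();
    private final long maxWaitMillis;

    SimpleCheckoutManager(long maxWaitMillis) {
        this.maxWaitMillis = maxWaitMillis;
    }

    synchronized void checkout(Object key) {
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (checkedOut.get(key) != null) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                throw new IllegalStateException("Timed out waiting for key: " + key);
            }
            try {
                wait(remaining); // monitor is released while waiting
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        checkedOut.put(key, Thread.currentThread());
    }

    synchronized void checkin(Object key) {
        checkedOut.remove(key);
        notifyAll(); // wake all waiters; each re-checks whether its key is free
    }

    synchronized boolean isCheckedOut(Object key) {
        return checkedOut.get(key) != null;
    }
}
```

As in AWCheckoutManager, the `while` loop (rather than an `if`) matters: `notifyAll()` wakes every waiter, and each must re-verify that its own key is actually free before proceeding.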
package routes
import (
"net/http"
"github.com/gin-gonic/gin"
)
// Say handles requests to /say route
func Say(c *gin.Context) {
something := c.Param("something")
c.String(http.StatusOK, "Hello from Go! %s", something)
}
|
// src/province/province.interface.ts
import { BaseInterface } from 'src/share/interfaces/base.interface'
import { Unit } from 'src/unit/unit.interface'
/**
* Province Interface
* @author KhoaVD
*/
export interface Province extends BaseInterface, District {
district?: District[]
}
interface District {
code: number
name: string
unit?: Unit
}
|
Thiamine Deficiency in a Nondrinker and Secondary Pulmonary Edema after Thiamine Replenishment
A 48-year-old man was brought to our emergency room with acute abdominal pain and systemic edema, indicating acute circulatory failure with lactic acidosis. Furosemide treatment paradoxically worsened the systemic edema and induced confusion. He had no drinking history but hardly ate legumes or meats containing thiamine. Administration of fursultiamine dramatically improved the symptoms and subsequently caused pulmonary edema. Thiamine deficiency may occur in nondrinkers with an unbalanced diet. In this condition, diuretic therapy can worsen the symptoms before thiamine supplementation by promoting the flushing of water-soluble vitamins but is needed for the management of secondary pulmonary edema after thiamine replenishment.
Introduction
Thiamine (vitamin B1) is an essential cofactor of the key enzymes in aerobic glucose metabolism. Its deficiency damages neurons, which depend on aerobic metabolism, and causes loss of tendon reflexes and mental confusion (1). Autonomic dysfunction leads to high-output heart failure by impairing vasoconstriction, and anaerobic metabolism causes the accumulation of lactate in tissues (2). Since thiamine deficiency can be life-threatening due to central nervous system disorders, heart failure and/or lactic acidosis, its early diagnosis and treatment with thiamine injection are mandatory (3). Thiamine deficiency has been considered to occur in heavy drinkers (4), although it can develop in nondrinkers with an unbalanced diet. Medical awareness of the clinical presentations and treatment strategies of this life-threatening condition is needed for it to be considered in the differential diagnosis and for the early treatment of affected patients.
Case Report
A 48-year-old man was brought to our emergency room by ambulance complaining of acute abdominal pain and systemic edema. He had been aware of systemic edema and weight gain for several months. He had neither a remarkable medical history nor any drinking history. He also denied tobacco and illicit drug use.
On arrival, his vital signs were remarkable for a low blood pressure of 76/27 mmHg, sinus tachycardia of 110 beats per minute, a respiratory rate of 24 per minute and a 94% oxygen saturation on room air. His body temperature was 35.9°C with cold sweat. A physical examination revealed diffuse abdominal distension with tenderness generalized throughout the abdomen. Pitting edema was found in his limbs. Cardiac auscultation did not reveal accessory heart sounds or murmurs, and the lung fields were clear. The results of a complete blood cell count were normal, and the C-reactive protein level was within the normal range. The following myogenic enzymes were elevated: aspartate aminotransferase, 336 IU/L (normal: <40 IU/L); alanine aminotransferase, 153 IU/L (normal: <40 IU/L); lactate dehydrogenase, 744 IU/L (normal: <250 IU/L); creatine kinase, 1,930 IU/L (normal: <270 IU/L). Both serum creatinine and urea nitrogen levels were also elevated to 2.18 mg/dL (normal: <1.10 mg/dL) and 68.3 mg/dL (normal: <21.0 mg/dL), respectively, whereas the results of a urinalysis were normal. The plasma brain natriuretic peptide level was remarkably increased to 3,100 pg/mL (normal: <18 pg/mL). An arterial blood gas analysis revealed lactic acidosis. The cardiac wall motion, ejection fraction (EF: 60%) and ratio of the early to late ventricular filling velocities (E/A: 1.0) were normal, although the echocardiographic cardiac output (CO: 8.3 L/min) and tricuspid regurgitation peak gradient (TRPG: 40 mmHg) were slightly elevated. Chest X-ray showed an enlarged cardiac silhouette, especially the right atrium (Fig. 1A). An electrocardiogram showed right axis deviation and flat T-waves (Fig. 2A). Contrast-enhanced computed tomography displayed no apparent causes for the acute abdominal pain.
Dopamine and rehydration were started to normalize his blood pressure, and continuous hemodiafiltration (CHDF) was performed to correct his lactic acidosis. The abdominal pain disappeared along with the correction of lactic acidosis, and systemic edema decreased through CHDF. However, after stopping CHDF, systemic edema worsened again despite intravenous furosemide 20 mg/day. He presented signs of confusion on day 9 of hospitalization. Neurological examinations revealed loss of the patellar and Achilles tendon reflexes.
Thiamine deficiency was suspected based on the combination of the following clinical symptoms: neurological disorder, high-output heart failure and lactic acidosis. Intravenous injection of fursultiamine 300 mg immediately improved his confusion and dramatically decreased the systemic edema. However, the patient developed dyspnea on day 12, requiring treatment with noninvasive positive-pressure ventilation. Chest X-ray showed pulmonary edema (Fig. 1B). Echocardiographic CO was normalized (4.7 L/min), whereas the EF decreased to 40%. E/A and TRPG were elevated to 1.7 and 72 mmHg, respectively. Diuretics (furosemide 20 mg/day and tolvaptan 7.5 mg/day) given concomitantly with thiamine ameliorated his dyspnea and improved his chest X-ray (Fig. 1C), electrocardiogram (Fig. 2B) and echocardiography findings (Fig. 3). The patient was discharged from the hospital on day 28 and has been maintained on an oral thiamine supplement.
Discussion
Thiamine deficiency, despite being a relatively rare condition, is easily misdiagnosed in critically ill patients. It can present with high-output heart failure, an unusual condition that remains poorly recognized (2), characterized by an increase in CO to compensate for the decreased systemic vascular resistance, as well as mild to moderate pulmonary hypertension, reflecting hyperhemodynamics (5-7). A lack of thiamine increases lactate production by impairing aerobic metabolism. Lactic acidosis often presents with gastrointestinal symptoms, such as abdominal pain, nausea and vomiting (8). Clinicians should therefore suspect thiamine deficiency in critically ill patients with unexplained systemic edema and gastrointestinal symptoms.
Thiamine is a vitamin found in whole grains, legumes and some meats, and deficiency is common among Asian people who regularly eat refined rice. Since relatively little thiamine is stored in the body, and given its short half-life, thiamine must be ingested daily. Drinking alcohol interferes with thiamine absorption, and diuretics promote the flushing of this water-soluble vitamin (4). However, thiamine deficiency can occur even in nondrinkers with inadequate nutrition or in habitual users of diuretics (9). While the present patient was a nondrinker, he subsisted on refined rice and hardly ate legumes or meat for many years owing to poor living conditions stemming from his lack of regular employment. The administration of furosemide before supplementation of thiamine in this patient paradoxically aggravated the systemic edema and resulted in the development of confusion.
Thiamine deficiency is diagnosed according to the unique clinical symptoms and the dramatic improvement of the symptoms after the administration of thiamine (10). The blood thiamine level could not be measured in the present patient. However, thiamine deficiency cannot be excluded based simply on blood thiamine levels, as thiamine is widely distributed in tissues (11). Furthermore, blood thiamine levels cannot be determined instantly in the emergency room. Thiamine should thus be empirically administered to all patients suspected of thiamine deficiency. High-dose thiamine administered at a dose of 100-300 mg/day is needed to cure patients with critical thiamine deficiency due to their impaired thiamine utilization (12). The effect of thiamine administration is quick, generally manifesting within 24 hours of dosing (13).
In the present patient, pulmonary edema developed after the replenishment of thiamine, following a dramatic improvement in the systemic edema. This occurred because the normalization of vasoconstriction rapidly increased the systemic vascular resistance (cardiac afterload) and venous return (cardiac preload), ultimately leading to congestive heart failure (14). We should be alert for the potential development of secondary pulmonary edema during the treatment of patients with thiamine deficiency (15). In our case, the pulmonary edema was successfully treated with diuretics given concomitantly with thiamine. In patients with thiamine deficiency, diuretics can worsen the symptoms before the supplementation of thiamine; however, they should still be used concomitantly with thiamine in order to prevent secondary pulmonary edema after thiamine replenishment.
The authors state that they have no Conflict of Interest (COI). |
use rocket::response::{Content, Stream};
use {
crate::{
structs::{File, State},
utils,
},
rocket::{get, http::ContentType},
std::{
io::{self, prelude::*, BufReader, Read, Write},
time::Duration,
},
};
const BUF_SIZE: usize = 52428800;
impl Read for File {
fn read(&mut self, mut buf: &mut [u8]) -> io::Result<usize> {
match self.state {
State::Flush => {
self.state = State::Sleep;
return Err(io::ErrorKind::WouldBlock.into());
}
State::Sleep => std::thread::sleep(Duration::from_secs(self.delay)),
State::Write => {}
}
self.state = State::Flush;
self.data.clear();
self.data += &format!("data: Watching file \"{}\"\ndata: \n", self.name);
for line in utils::return_bufreader(&self.name).lines() {
self.data += &format!("data: {}\n", line.expect("Error"))
}
self.data += "\n\n";
buf.write_all(self.data.as_bytes())?;
Ok(self.data.len())
}
}
type CounterStream = Stream<BufReader<File>>;
#[get("/updates")]
pub fn updates(file_data: rocket::State<File>) -> Content<CounterStream> {
let reader = BufReader::with_capacity(
BUF_SIZE,
File {
data: file_data.data.clone(),
state: file_data.state,
name: file_data.name.clone(),
delay: file_data.delay,
},
);
let ct = ContentType::with_params("text", "event-stream", ("charset", "utf-8"));
Content(ct, Stream::from(reader))
}
|
import styled from 'styled-components/native';
import { RectButton } from 'react-native-gesture-handler';
interface IButton {
primary: boolean;
};
export const InputWrapper = styled.TextInput`
width: 100%;
border-radius: 8px;
border-width: 1px;
border-color: #5D0DE0;
height: 50px;
padding: 10px;
margin-bottom: 20px;
`;
export const Container = styled.View`
width: 100%;
flex: 1;
padding: 30px;
padding-top: 180px;
background-color: #F2F6F9;
position: relative;
`;
export const Title = styled.Text`
color: #fff;
font-size: 25px;
`;
// React Native's built-in Button does not accept a style prop; use a styleable touchable instead
export const ButtonStyle = styled.TouchableOpacity<IButton>`
  width: 60px;
  height: 50px;
  padding: 5px;
  color: #fff;
  background-color: #5D0DE0;
`;
export const ButtonWrapper = styled(RectButton)`
width: 30px;
height: 30px;
`; |
Recent Changes > Android Studio 1.1 Release Candidate 1 Available

We've just released Android Studio 1.1 RC 1 to the canary channel. This release contains only a small set of bug fixes on top of the beta 4 release from last week.

Installation: This will download and install a small patch rather than a full IDE image. If you are using an older version, you'll need to download a full install from the downloads page. You can manually check for updates via Help > Check for Update... (on OSX, look in the Android Studio menu).

NOTE: 1.1 RC 1 is currently only available in the canary channel, and Android Studio 1.0 will by default look in the stable channel, so if you want to update, open the preferences dialog, go to the Updates category and change the channel setting.

Problems? If you run into problems, be sure to check the Known Issues page, which we'll update as necessary.
import {
PluginInstanceDSL,
MaterialsPlugin,
PluginParams,
PluginUniversalEventTrigger,
GlobalMeta,
PageRouter,
} from '@vize/types';
import { cancelCustomEvent, emitCustomEvent, onCustomEvent } from './customEvents';
import { getMaterialsPlugin } from './materialsMap';
import { generatePluginEventHandlers } from '../utils/eventHandlers';
export interface ExecutePluginsParams {
pluginInstances: PluginInstanceDSL[];
meta: GlobalMeta;
globalData: object;
globalStyle: object;
pageData: object;
pageStyle: object;
router: PageRouter;
win?: Window;
}
export function executePlugins({
pluginInstances,
meta,
globalData,
globalStyle,
pageData,
pageStyle,
router,
win = window,
}: ExecutePluginsParams) {
return pluginInstances.forEach(async instance => {
const { key, plugin, data, events } = instance;
const handlers = generatePluginEventHandlers(events, router);
if (handlers[PluginUniversalEventTrigger.BEFORE_EXEC]) {
await handlers[PluginUniversalEventTrigger.BEFORE_EXEC]!(null, {
globalData,
globalStyle,
pageData,
pageStyle,
meta,
});
}
const dataParams = { globalData, globalStyle, pageData, pageStyle, meta, router };
const pluginFunction: MaterialsPlugin = getMaterialsPlugin(plugin)!;
const params: PluginParams = {
pluginKey: key,
data,
...dataParams,
on: (eventName, callback) => {
onCustomEvent('plugin', eventName, callback, key);
},
cancel: (eventName, callback) => {
cancelCustomEvent('plugin', eventName, callback, key);
},
emit: eventName => {
emitCustomEvent({
events,
eventName,
...dataParams,
});
},
};
try {
await pluginFunction.bind(win)(params);
} catch (e) {
console.error(`Exec Plugin(key=${key}) throw error: `, e);
}
if (handlers[PluginUniversalEventTrigger.AFTER_EXEC]) {
await handlers[PluginUniversalEventTrigger.AFTER_EXEC]!(null, dataParams);
}
});
}
|
/*
* ASCLITE
* Author: <NAME>, <NAME>, <NAME>, <NAME>
*
* This software was developed at the National Institute of Standards and Technology by
* employees of the Federal Government in the course of their official duties. Pursuant
* to title 17 Section 105 of the United States Code this software is not subject to
* copyright protection and is in the public domain. ASCLITE is an experimental system.
* NIST assumes no responsibility whatsoever for its use by other parties, and makes no
* guarantees, expressed or implied, about its quality, reliability, or any other
* characteristic. We would appreciate acknowledgement if the software is used.
*
* THIS SOFTWARE IS PROVIDED "AS IS." With regard to this software, NIST MAKES NO EXPRESS
* OR IMPLIED WARRANTY AS TO ANY MATTER WHATSOEVER, INCLUDING MERCHANTABILITY,
* OR FITNESS FOR A PARTICULAR PURPOSE.
*/
/**
* Represent the Levenshtein Distance Matrix with compression
*/
#include "compressedlevenshteinmatrix.h"
Logger* CompressedLevenshteinMatrix::m_pLogger = Logger::getLogger();
CompressedLevenshteinMatrix::CompressedLevenshteinMatrix(const size_t& _NbrDimensions, size_t* _TabDimensionDeep)
{
m_MaxMemoryKBProp = static_cast<size_t>(ceil(1024*1024*atof(Properties::GetProperty("recording.maxnbofgb").c_str())));
m_BlockSizeKB = static_cast<uint>(atoi(Properties::GetProperty("align.memorycompressionblock").c_str()));
if(m_BlockSizeKB > 1048576)
m_BlockSizeKB = 1048575;
/* LZMA Properties ; see lzma/LzmaLib.h */
m_lzmaLevel = 4;
m_lzmaDictionarySize = 1 << 24;
m_lzmaLc = 3;
m_lzmaLp = 0;
m_lzmaPb = 2;
m_lzmaFb = 32;
m_lzmaNumberThreads = 2;
m_lzmaPropertiesSize = LZMA_PROPS_SIZE;
m_NbrDimensions = _NbrDimensions;
m_TabDimensionDeep = new size_t[m_NbrDimensions];
m_MultiplicatorDimension = new ullint[m_NbrDimensions];
m_TabBlockDivider = new size_t[m_NbrDimensions];
m_TabBlockDimensionDeep = new size_t[m_NbrDimensions];
m_MultiplicatorDimension[0] = 1;
m_TabDimensionDeep[0] = _TabDimensionDeep[0] - 1;
m_MaxSize = m_TabDimensionDeep[0];
for(size_t i=1; i<m_NbrDimensions; ++i)
{
m_TabDimensionDeep[i] = _TabDimensionDeep[i] - 1;
m_MultiplicatorDimension[i] = m_MultiplicatorDimension[i-1]*m_TabDimensionDeep[i-1];
m_MaxSize = m_MaxSize * m_TabDimensionDeep[i];
}
BlockComputation(0);
if(m_BaseLengthIn < 0.2*m_BlockSizeKB*1024)
BlockComputation(1);
/* To guarantee that the compressed data will fit in its buffer, allocate
an output buffer of size 2% larger than the uncompressed data, plus extra
size for the compression properties. */
m_BaseLengthOut = m_BaseLengthIn + m_BaseLengthIn / 50 + m_lzmaPropertiesSize;
m_MultiplicatorBlockDimension = new size_t[m_NbrDimensions];
m_MultiplicatorDivider = new size_t[m_NbrDimensions];
m_MultiplicatorBlockDimension[0] = 1;
m_MultiplicatorDivider[0] = 1;
for(size_t i=1; i<m_NbrDimensions; ++i)
{
m_MultiplicatorBlockDimension[i] = m_MultiplicatorBlockDimension[i-1]*m_TabBlockDimensionDeep[i-1];
m_MultiplicatorDivider[i] = m_MultiplicatorDivider[i-1]*m_TabBlockDivider[i-1];
}
m_TabStartByte = new int * [m_NbrCompressedTabs];
m_TabStartByteCompressed = new int * [m_NbrCompressedTabs];
m_TabSizes = new uint[m_NbrCompressedTabs];
m_TabbIsCompressed = new bool[m_NbrCompressedTabs];
m_TabHitsTimer = new ulint[m_NbrCompressedTabs];
m_TabIsCreated = new bool[m_NbrCompressedTabs];
m_CurrentMemorySize = 0;
m_Decompressions = 0;
m_Compressions = 0;
m_NbrCompressedBlocks = 0;
m_NbrDecompressedBlocks = 0;
m_Accesses = 0;
for(size_t i=0; i<m_NbrCompressedTabs; ++i)
{
m_TabIsCreated[i] = false;
m_TabSizes[i] = 0;
m_TabStartByte[i] = NULL;
m_TabStartByteCompressed[i] = NULL;
m_CurrentMemorySize += 0;
}
m_SizeOfArray = 0;
m_NbrCreatedBlocks = 0;
m_UsableMemoryKB = 0.98*((double) m_MaxMemoryKBProp);
m_PercentageMemoryTriggerStart = 0.01;
m_PercentageMemoryTriggerStop = 0.2;
LOG_DEBUG(m_pLogger, "Allocation done!");
char buffer[BUFFER_SIZE];
sprintf(buffer, "Compressed Levenshtein Matrix: %lu blocks of %.1fKB, Usable: %.0fKB, StartGC: %.0fKB, StopGC: %.0fKB",
(ulint) m_NbrCompressedTabs, ((double)(m_BaseLengthIn))/1024.0, m_UsableMemoryKB, m_UsableMemoryKB*(1.0-m_PercentageMemoryTriggerStart), m_UsableMemoryKB*(1.0-m_PercentageMemoryTriggerStop));
LOG_DEBUG(m_pLogger, buffer);
}
CompressedLevenshteinMatrix::~CompressedLevenshteinMatrix()
{
char buffer[BUFFER_SIZE];
sprintf(buffer, "Compressed Levenshtein Matrix: TotalNbrCells: %llu, CalculatedCells: %llu, RatioCells: %.1f%%, TheoryBlocks: %lu, CreatedBlocks: %lu, RatioBlocks: %.1f%%, ActualSize: %.1fKB, ExpendedSize: %.1fKB", (ullint) m_MaxSize, (ullint) m_SizeOfArray, 100.0*((double)m_SizeOfArray)/((double)m_MaxSize), (ulint) m_NbrCompressedTabs, (ulint) m_NbrCreatedBlocks, 100.0*((double)m_NbrCreatedBlocks)/((double)m_NbrCompressedTabs), ((double) m_CurrentMemorySize)/1024.0, ((double) m_NbrCreatedBlocks)*((double)(m_BaseLengthIn))/1024.0);
LOG_DEBUG(m_pLogger, buffer);
for(size_t i=0; i<m_NbrCompressedTabs; ++i)
{
if(isBlockCreated(i))
{
if(m_TabbIsCompressed[i])
{
if(m_TabStartByteCompressed[i])
free(m_TabStartByteCompressed[i]);
}
else
{
if(m_TabStartByte[i])
free(m_TabStartByte[i]);
}
}
}
delete [] m_TabStartByte;
delete [] m_TabStartByteCompressed;
delete [] m_TabSizes;
delete [] m_TabbIsCompressed;
delete [] m_TabHitsTimer;
delete [] m_TabIsCreated;
delete [] m_TabBlockDimensionDeep;
delete [] m_MultiplicatorBlockDimension;
delete [] m_TabBlockDivider;
delete [] m_TabDimensionDeep;
delete [] m_MultiplicatorDivider;
delete [] m_MultiplicatorDimension;
}
void CompressedLevenshteinMatrix::CreateBlock(const size_t& block_index)
{
if(! isBlockCreated(block_index))
{
uint decomp_lengh = m_BaseLengthIn;
m_TabStartByte[block_index] = (int*) malloc(m_BaseLengthIn);
memset(m_TabStartByte[block_index], C_UNCALCULATED, decomp_lengh);
m_TabSizes[block_index] = decomp_lengh;
m_CurrentMemorySize += decomp_lengh;
m_TabbIsCompressed[block_index] = false;
++m_NbrDecompressedBlocks;
m_TabIsCreated[block_index] = true;
TouchBlock(block_index);
++m_NbrCreatedBlocks;
GarbageCollection();
}
}
void CompressedLevenshteinMatrix::CompressBlock(const size_t& block_index)
{
CreateBlock(block_index);
if(!m_TabbIsCompressed[block_index])
{
// Block is not compressed, then compress it;
size_t decomp_lengh = m_TabSizes[block_index];
size_t comp_lengh = m_BaseLengthOut;
m_TabStartByteCompressed[block_index] = (int*) malloc(m_BaseLengthOut);
size_t outPropsize = m_lzmaPropertiesSize;
if( LzmaCompress( (unsigned char*) m_TabStartByteCompressed[block_index] + m_lzmaPropertiesSize, &comp_lengh,
(unsigned char*) m_TabStartByte[block_index], decomp_lengh,
(unsigned char*) m_TabStartByteCompressed[block_index], &outPropsize,
m_lzmaLevel,
m_lzmaDictionarySize,
m_lzmaLc,
m_lzmaLp,
m_lzmaPb,
m_lzmaFb,
m_lzmaNumberThreads ) != SZ_OK)
{
LOG_FATAL(m_pLogger, "Compression: 'LzmaCompress()' failed!");
exit(EXIT_FAILURE);
}
if( (comp_lengh + m_lzmaPropertiesSize >= decomp_lengh) || (outPropsize > m_lzmaPropertiesSize) )
{
//Incompressible data
LOG_DEBUG(m_pLogger, "Compression: Incompressible block ignoring compression!");
free(m_TabStartByteCompressed[block_index]);
m_TabStartByteCompressed[block_index] = NULL;
}
else
{
free(m_TabStartByte[block_index]);
m_TabStartByte[block_index] = NULL;
m_TabSizes[block_index] = comp_lengh + m_lzmaPropertiesSize;
m_TabbIsCompressed[block_index] = true;
m_CurrentMemorySize += comp_lengh + m_lzmaPropertiesSize - decomp_lengh;
++m_Compressions;
++m_NbrCompressedBlocks;
--m_NbrDecompressedBlocks;
}
}
}
bool CompressedLevenshteinMatrix::DecompressBlock(const size_t& block_index)
{
CreateBlock(block_index);
bool decomp = false;
if((decomp = m_TabbIsCompressed[block_index]))
{
// Block is compressed, then decompress it;
size_t comp_lengh = m_TabSizes[block_index] - m_lzmaPropertiesSize;
size_t decomp_lengh = m_BaseLengthIn;
m_TabStartByte[block_index] = (int*) malloc(m_BaseLengthIn);
if(LzmaUncompress( (unsigned char*) m_TabStartByte[block_index], &decomp_lengh,
(unsigned char*) m_TabStartByteCompressed[block_index] + m_lzmaPropertiesSize, &comp_lengh,
(unsigned char*) m_TabStartByteCompressed[block_index], m_lzmaPropertiesSize) != SZ_OK)
{
LOG_FATAL(m_pLogger, "Decompression: 'LzmaUncompress()' failed!");
exit(EXIT_FAILURE);
}
free(m_TabStartByteCompressed[block_index]);
m_TabStartByteCompressed[block_index] = NULL;
m_TabSizes[block_index] = decomp_lengh;
m_TabbIsCompressed[block_index] = false;
m_CurrentMemorySize += decomp_lengh - comp_lengh;
++m_Decompressions;
--m_NbrCompressedBlocks;
++m_NbrDecompressedBlocks;
}
TouchBlock(block_index);
return decomp;
}
void CompressedLevenshteinMatrix::GarbageCollection()
{
char buffer[BUFFER_SIZE];
sprintf(buffer, "Compressed Levenshtein Matrix: Current: %lu KB, Limit: %lu KB, CompressedBlocks: %lu, UncompressedBlocks: %lu", m_CurrentMemorySize/1024,
static_cast<size_t>(m_UsableMemoryKB),
m_NbrCompressedBlocks,
m_NbrDecompressedBlocks);
LOG_DEBUG(m_pLogger, buffer);
if(isCallGarbageCollector())
{
bool found = false;
ulint count = 0;
do
{
if((found = ForcedGarbageCollection()))
++count;
}
while(found && !isStopGarbageCollector());
char buffer[BUFFER_SIZE];
sprintf(buffer, "Garbage collection: %lu blocks compressed", count);
LOG_DEBUG(m_pLogger, buffer);
}
}
bool CompressedLevenshteinMatrix::ForcedGarbageCollection()
{
ulint mintouch = ULONG_MAX;
size_t min_index = 0;
// Do the ugly Java GC
bool found = false;
for(size_t i=0; i<m_NbrCompressedTabs; ++i)
{
if(isBlockCreated(i))
{
if(!m_TabbIsCompressed[i])
{
// not compressed
if(m_TabHitsTimer[i] < mintouch)
{
mintouch = m_TabHitsTimer[i];
min_index = i;
found = true;
}
}
}
}
if(found)
CompressBlock(min_index);
return found;
}
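For reference, the least-recently-used compression policy implemented by `ForcedGarbageCollection` above can be sketched in Python; the list-based state here is an illustrative stand-in for the member arrays, not the project's actual API:

```python
def pick_block_to_compress(touch_timer, is_compressed, is_created):
    """Return the index of the least-recently-touched uncompressed block,
    or None when every created block is already compressed.

    Mirrors ForcedGarbageCollection above: a linear scan for the minimum
    touch counter among created, uncompressed blocks.
    """
    best_index, best_touch = None, None
    for i, created in enumerate(is_created):
        if created and not is_compressed[i]:
            if best_touch is None or touch_timer[i] < best_touch:
                best_index, best_touch = i, touch_timer[i]
    return best_index
```

Repeating this selection until memory use drops below the limit gives the do/while loop in `GarbageCollection`.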
string CompressedLevenshteinMatrix::ToString()
{
return string("");
}
void CompressedLevenshteinMatrix::CoordinatesToBlockOffset(size_t* coordinates, size_t& blockNum, size_t& blockOffset)
{
blockNum = 0;
blockOffset = 0;
for(size_t i=0; i<m_NbrDimensions; ++i)
{
blockNum += (coordinates[i]/m_TabBlockDimensionDeep[i])*m_MultiplicatorDivider[i];
blockOffset += (coordinates[i]%m_TabBlockDimensionDeep[i])*m_MultiplicatorBlockDimension[i];
}
}
int CompressedLevenshteinMatrix::GetCostFor(size_t* coordinates)
{
size_t coord_x;
size_t coord_y;
CoordinatesToBlockOffset(coordinates, coord_x, coord_y);
bool decomp = DecompressBlock(coord_x);
int out = m_TabStartByte[coord_x][coord_y];
if(decomp)
GarbageCollection();
return (out);
}
void CompressedLevenshteinMatrix::SetCostFor(size_t* coordinates, const int& cost)
{
size_t coord_x;
size_t coord_y;
CoordinatesToBlockOffset(coordinates, coord_x, coord_y);
bool decomp = DecompressBlock(coord_x);
if(m_TabStartByte[coord_x][coord_y] == C_UNCALCULATED)
++m_SizeOfArray;
m_TabStartByte[coord_x][coord_y] = cost;
if(decomp)
GarbageCollection();
}
void CompressedLevenshteinMatrix::BlockComputation(const size_t& levelopt)
{
// Declaration Vars
size_t* Cursor = new size_t[m_NbrDimensions];
vector <size_t>* PrimeDiv = new vector <size_t>[m_NbrDimensions];
size_t* tmpDivider = new size_t[m_NbrDimensions];
size_t* tmpBlockDimensions = new size_t[m_NbrDimensions];
size_t blocksize = m_BlockSizeKB*256; // target elements per block: KB * 1024 / sizeof(int), assuming a 4-byte int
// Computation
// Initialization
for(size_t i=0; i<m_NbrDimensions; ++i)
{
if(m_TabDimensionDeep[i] == 1)
PrimeDiv[i].push_back(1);
for(size_t j=2; j<=m_TabDimensionDeep[i]; ++j)
if( (m_TabDimensionDeep[i] % j == 0) ||
( (levelopt >= 1) && ((m_TabDimensionDeep[i]+1) % 2 == 0) && ((m_TabDimensionDeep[i]+1) % j == 0) ) ||
( (levelopt >= 2) && ((m_TabDimensionDeep[i]+levelopt) % j == 0) )
)
PrimeDiv[i].push_back(j);
Cursor[i] = 0;
}
// End Initialization
// Main research
bool finished = false;
size_t closestsize = ULONG_MAX;
do
{
if(Cursor[0] == PrimeDiv[0].size())
{
finished = true;
}
else
{
size_t size = 1;
for(size_t i=0; i<m_NbrDimensions; ++i)
{
tmpDivider[i] = PrimeDiv[i][Cursor[i]];
tmpBlockDimensions[i] = m_TabDimensionDeep[i]/tmpDivider[i];
if(m_TabDimensionDeep[i] % tmpDivider[i] != 0)
++(tmpBlockDimensions[i]);
size *= tmpBlockDimensions[i];
}
const size_t closer = (blocksize > size) ? blocksize - size : size - blocksize;
if(closer < closestsize)
{
closestsize = closer;
for(size_t i=0; i<m_NbrDimensions; ++i)
{
m_TabBlockDivider[i] = tmpDivider[i];
m_TabBlockDimensionDeep[i] = tmpBlockDimensions[i];
}
}
// Next
size_t currdim = m_NbrDimensions - 1;
++(Cursor[currdim]);
while( (currdim > 0) && (Cursor[currdim] == PrimeDiv[currdim].size()) )
{
Cursor[currdim] = 0;
--currdim;
++(Cursor[currdim]);
}
}
}
while(!finished);
// Main research
m_BlockSizeElts = 1;
m_NbrCompressedTabs = 1;
for(size_t i=0; i<m_NbrDimensions; ++i)
{
m_BlockSizeElts *= m_TabBlockDimensionDeep[i];
m_NbrCompressedTabs *= m_TabBlockDivider[i];
}
if(m_BlockSizeElts*sizeof(int) < 16)
m_BlockSizeElts = 16/sizeof(int);
m_BaseLengthIn = m_BlockSizeElts * sizeof(int);
// End Computation
// Destruction Vars
delete [] Cursor;
for(size_t i=0; i<m_NbrDimensions; ++i)
PrimeDiv[i].clear();
delete [] PrimeDiv;
delete [] tmpBlockDimensions;
delete [] tmpDivider;
//return isIncreasable;
}
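As a cross-check of `CoordinatesToBlockOffset`, the same mapping can be written compactly in Python. The block shapes and divider counts passed in are illustrative assumptions, since the real values are derived by `BlockComputation`:

```python
def coordinates_to_block_offset(coords, block_dims, block_divider):
    """Map N-dimensional matrix coordinates to (block_number, offset_in_block).

    Each dimension contributes a block index (coord // block depth) and an
    intra-block index (coord % block depth); both are flattened with
    row-major-style multipliers, as in CoordinatesToBlockOffset above.
    """
    grid_mult, cell_mult = [], []
    g = c = 1
    for depth, div in zip(block_dims, block_divider):
        grid_mult.append(g)
        cell_mult.append(c)
        g *= div    # blocks along this dimension
        c *= depth  # cells along this dimension inside one block
    block_num = sum((x // d) * m for x, d, m in zip(coords, block_dims, grid_mult))
    offset = sum((x % d) * m for x, d, m in zip(coords, block_dims, cell_mult))
    return block_num, offset
```

For example, a 12x12 matrix split into a 3x3 grid of 4x4 blocks maps coordinate (5, 6) into block 4 at offset 9.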
At a protest in downtown Denver, on April 29, 2015, a police officer stole Jessica Benn’s smartphone.
Benn had been filming her husband, Jesse, from the safety of the sidewalk as police arrested him. That was enough for her to be targeted and to have her property illegally seized.
“An officer just stepped up to me and grabbed it right out of my hand,” she told Truthout. “Right behind him was an officer in SWAT gear who then took me and pushed me up against a bus with a baton across my neck and held me there.”
Benn grew increasingly alarmed as the officer ignored her questions.
“It was very chaotic, people were yelling and getting arrested all around us, and the nature of the arrests were very violent. So at that point I was concerned about my safety and I told this officer that I was pregnant and could he please not hurt my stomach.”
The officer shoved her back on the sidewalk and released her. “He said ‘well, then get out of here,’ kind of like I shouldn’t have been there at all if I wasn’t expecting to be physically assaulted.”
Video footage shot by other activists later helped identify Antonio Lopez, a Denver Police Department (DPD) district commander, as the officer who seized her phone and ordered another cop to “grab her.”
Before Benn hired a lawyer and sued the city, she tried every other recourse available to her.
“Immediately that afternoon I called the possessions department at the DPD to see if my phone had been turned in,” she said.
She called multiple departments, but the police never returned her phone — or even admitted to knowing where it was.
“I also tried internal affairs after that, which was unfruitful,” she said. “And then I went to the independent monitor, trying all my channels of potential agency; none of them were fruitful in getting any kind of accountability from DPD.”
Now she’s one of two Denver residents engaged in civil rights lawsuits that could set an important precedent in Colorado, reaffirming the constitutional right to film the police.
“There’s no published opinion concerning this issue coming out of Denver or Colorado,” Elizabeth Wang, Benn’s legal representative, told Truthout.
“Absolutely Protected by the First Amendment”
Unfortunately, even a win in court may not be enough to stay the hand of the next officer tempted to seize or even arrest a bystander who is filming their actions, judging by their brethren in other cities around the United States.
“The fact is that photography is power,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union (ACLU) and creator of the organization’s guide to photographers’ rights. “People are loath to give up power, including police officers.”
He acknowledged that police often ignore these court decisions, and that the problem is an ongoing one. In 2013, Stanley wrote an ACLU blog post about the issue in which he lamented, “Why is it so hard for police officers to learn the law?”
However, officers’ reluctance to relinquish this particular power runs contrary to federal law, according to Wang, who is a partner in the Chicago-based civil rights law firm Loevy & Loevy.
“What Jessica was doing was no different from what any mainstream news journalist does every day — gathering news — and no one questions whether such news gathering by mainstream journalists is protected by the First Amendment,” Wang said. “Jessica’s recording of the police that day was absolutely protected by the First Amendment and the federal Privacy Protection Act.”
Wang is also representing Levi Frasier. Denver police took Frasier’s tablet and beat him when they saw him filming an August 2014 arrest.
If a host of other cases from elsewhere in the country provides any clues, the Colorado District Court is likely to reaffirm the First Amendment right to film the police.
In March, the Pennsylvania chapter of the ACLU appealed a decision denying the right to film police. “Otherwise, the court’s been pretty clear” in many other cases, Stanley noted.
In response to incidents like the pair that Wang is taking to court, the Denver Police Department has already altered its operations manual to specifically affirm the right of individuals to film:
Members of the public, including but not limited to media representatives and bystanders, have a First Amendment right to observe and record officers in public places, as long as their actions do not interfere with the officer’s duties or the safety of officers or others. Officers should assume that they are being recorded at all times when on-duty in a public space.
Merely changing the manual isn’t enough to always ensure police accountability, however, Stanley warned.
“Some of it falls to police management for not properly training their forces on just what the law says and how clear it is. And also for not, in some cases, properly disciplining officers that do violate people’s constitutional rights.”
But Police “Don’t Care” About Copwatchers’ First Amendment Rights
Joshua Pineda’s arrest on April 10, just after midnight, was his fourth for filming police in Austin, Texas. A lead organizer from the Peaceful Streets Project, a copwatching collective that educates the public about its right to film, Pineda regularly records police action with a group that patrols 6th Street, the city’s downtown club district.
“An incident broke out where they had shoved this African-American guy,” Pineda told Truthout.
As the Peaceful Streets Project team, with Pineda in the lead, approached the incident, which involved an officer named Cameron Staff, most of the other police involved returned to their posts. But Austin Police Cpl. Richard Mears approached the group, ordering another member to “get back.”
Like Denver, the policy manual for the Austin Police Department (APD) specifically forbids interfering with people who film officers’ duties. The manual reads, in part:
Officers are reminded that photography, including videotaping, of places, buildings, structures and events are common and normally lawful activities…. In areas open to the public, officers shall allow bystanders the same access for photography as is given to members of the news. Officers shall be aware that … [a] bystander has the same right to take photographs or make recordings as a member of the media, as long as the bystander has a legal right to be present where he or she is located.
Pineda told Truthout his group was at least 10 feet from the incident (with other club-going bystanders much closer to the police) and he knew the manual forbids interfering with filming, so he half-kneeled on the sidewalk and continued recording.
“He started screaming at me to get back, and I had no interest in it, so he grabbed me and immediately arrested me,” Pineda said.
The APD not only confiscated the device he was using to film and another camera they found in his backpack, but also one of his most prized possessions: a collar he constantly wears, which had deep personal meaning for Pineda.
“My collars have changed with each era of my life. It’s always been a big, kind of notorious part of my life, especially with APD. They’ve mentioned it several times when they’ve taunted us” during interactions at cop watch events, he said. “I know they kept it as a trophy.”
The Peaceful Streets Project formed after its founder, Antonio Buehler, was arrested for filming a violent drunk driving arrest on New Year’s Day in 2012. Despite winning multiple times in court, the group’s activists are still harassed, insulted and arrested.
“While I was being carried off,” Pineda recalled, “I told them, ‘you lost in court, you lost this case to Antonio, it’s an unlawful order,’ and they went, ‘We don’t care.’ They don’t care. They’re not held accountable.”
Police have repeatedly tried to dismiss freedom of speech arguments by arguing that Peaceful Streets Project members are guilty of “interference with police duties” by getting too close. Pineda dismissed their claims.
“Their inability to pay attention should result in them being fired. If you can’t pay attention because someone is filming you, we shouldn’t be paying you,” he said.
“Before We’ll Negotiate, You Have to Stop Violating People’s Rights”
In March 2016, Peaceful Streets created a viral video of an APD officer, Cameron Caldwell, using pepper spray on a handcuffed man in a police van. Tensions with police escalated after that, and after an incident in which the group harshly criticized the APD in the wake of the shooting of a police officer during a burglary.
Pineda acknowledged that Peaceful Streets’ tactics are more confrontational, and more openly anti-police than many copwatching groups, but he countered this by pointing out that copwatchers are harassed even when they treat police with respect, such as during the Denver incidents with Benn and Frasier.
“Within the span of four years … we’ve tried working with the police in every which way we possibly can,” Pineda added. “We negotiated how to film, and everything has failed…. The only thing they want is control and domination of us.”
The group quickly grew fed up, especially as violent police incidents continued in Austin, such as the fatal shooting, in February 2016, of David Joseph, a naked and unarmed Black man. “Before we’ll negotiate, you have to stop shooting people; you have to stop violating people’s rights,” Pineda said.
“You put a magic badge and a uniform on them and people think you have to treat them with respect, but it’s just ridiculous.”
Despite the risks, Pineda stressed the potential positive impact of filming police. “It has to get uploaded,” he said.
That’s one reason Pineda said working with a larger organization is so important, and he urged activists to contact Peaceful Streets for advice. The group offers training to new copwatching groups that helps them form vital networks for mutual aid.
“We’ll help them set that up,” he added, “we’ll help them get some sort of network set up and get the resources and knowledge into their community to be able to film and to be able to defend themselves while filming.”
Pineda believes that only a mass movement for transparency can create police accountability. Peaceful Streets recently began working with WeCopwatch, a national organization that films the police in places like Ferguson, Missouri, and Detroit.
“If they have 50, 100 people every single day stopping to film them, that kind of pressure is going to start mounting up,” Pineda said.
“If 1,000 people filmed in a day, that would outweigh what I’ve done in four years.”
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { UsersController } from './infra/http/UsersController';
import { User } from './infra/typeorm/entities/UserEntity';
import providers from './providers';
import ShowUserByEmailService from './services/ShowUserByEmailService';
import UpdateUserService from './services/UpdateUserService';
import ShowUserService from './services/ShowUserService';
import CreateUserService from './services/CreateUserService';
import ListUserService from './services/ListUserService';
import DeleteUserService from './services/DeleteUserService';
@Module({
imports: [TypeOrmModule.forFeature([User])],
    providers: [
        ShowUserService,
        CreateUserService,
        UpdateUserService,
        ListUserService,
        DeleteUserService,
        ShowUserByEmailService,
        ...providers,
    ],
    controllers: [UsersController],
    exports: [ShowUserByEmailService, ...providers],
})
export class UsersModule {}
def handle_callback_query(bot, update, session, user):
query = update.callback_query
data = query.data
[callback_type, payload, action] = data.split(':')
callback_type = int(callback_type)
action = int(action)
chat = session.query(Chat).get(query.message.chat.id)
tg_chat = query.message.chat
    # Resolve the enum name once instead of re-evaluating it in every branch.
    callback_name = CallbackType(callback_type).name
    if callback_name == 'report_ban':
        handle_report_ban(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'report_nsfw':
        handle_report_nsfw(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'report_furry':
        handle_report_furry(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'report_next':
        handle_report_next(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'check_user_tags':
        handle_check_user(session, bot, action, query, payload, chat, tg_chat)
    elif callback_name == 'ban_set':
        handle_ban_set(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'nsfw_set':
        handle_nsfw_set(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'fur_set':
        handle_fur_set(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'change_set_language':
        handle_change_set_language(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'deluxe_set':
        handle_deluxe_set(session, action, query, payload, chat, tg_chat)
    elif callback_name == 'newsfeed_next_set':
        handle_next_newsfeed_set(session, bot, action, query, payload, chat, tg_chat, user)
    elif callback_name == 'next':
        handle_tag_next(session, bot, user, query, chat, tg_chat)
    elif callback_name == 'cancel':
        handle_cancel_tagging(session, bot, user, query, chat, tg_chat)
    elif callback_name == 'edit_sticker':
        handle_fix_sticker_tags(session, payload, user, chat, tg_chat)
    elif callback_name == 'tag_set':
        initialize_set_tagging(bot, tg_chat, session, payload, chat, user)
    elif callback_name == 'continue_tagging':
        handle_continue_tagging_set(session, bot, payload, user, chat, tg_chat)
    elif callback_name == 'deluxe_set_user_chat':
        handle_deluxe_set_user_chat(session, bot, action, query, payload, user)
    return
/**
* <p>Java class for customer-wishlist-itemType complex type.
*
* <p>The following schema fragment specifies the expected content contained within this class.
*
* <pre>
* <complexType name="customer-wishlist-itemType">
* <complexContent>
* <restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
* <sequence>
* <element name="tags" type="{}tagsType"/>
* <element name="quantity" type="{http://www.w3.org/2001/XMLSchema}decimal"/>
* <element name="price" type="{}customer-wishlist-item-priceType"/>
* </sequence>
* <attribute name="id" type="{http://www.w3.org/2001/XMLSchema}long" />
* <attribute name="guid" type="{http://www.w3.org/2001/XMLSchema}string" />
* <attribute name="wishlist-type" use="required" type="{http://www.w3.org/2001/XMLSchema}string" />
* <attribute name="visibility" use="required" type="{http://www.w3.org/2001/XMLSchema}string" />
* <attribute name="sku-code" use="required" type="{http://www.w3.org/2001/XMLSchema}string" />
* <attribute name="fulfilment-centre-code" use="required" type="{http://www.w3.org/2001/XMLSchema}string" />
* <attribute name="import-mode" type="{}entityImportModeType" />
* </restriction>
* </complexContent>
* </complexType>
* </pre>
*
*
*/
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "customer-wishlist-itemType", propOrder = {
"tags",
"quantity",
"price"
})
public class CustomerWishlistItemType {
@XmlElement(required = true)
protected TagsType tags;
@XmlElement(required = true)
protected BigDecimal quantity;
@XmlElement(required = true)
protected CustomerWishlistItemPriceType price;
@XmlAttribute(name = "id")
protected Long id;
@XmlAttribute(name = "guid")
protected String guid;
@XmlAttribute(name = "wishlist-type", required = true)
protected String wishlistType;
@XmlAttribute(name = "visibility", required = true)
protected String visibility;
@XmlAttribute(name = "sku-code", required = true)
protected String skuCode;
@XmlAttribute(name = "fulfilment-centre-code", required = true)
protected String fulfilmentCentreCode;
@XmlAttribute(name = "import-mode")
protected EntityImportModeType importMode;
/**
* Gets the value of the tags property.
*
* @return
* possible object is
* {@link TagsType }
*
*/
public TagsType getTags() {
return tags;
}
/**
* Sets the value of the tags property.
*
* @param value
* allowed object is
* {@link TagsType }
*
*/
public void setTags(TagsType value) {
this.tags = value;
}
/**
* Gets the value of the quantity property.
*
* @return
* possible object is
* {@link BigDecimal }
*
*/
public BigDecimal getQuantity() {
return quantity;
}
/**
* Sets the value of the quantity property.
*
* @param value
* allowed object is
* {@link BigDecimal }
*
*/
public void setQuantity(BigDecimal value) {
this.quantity = value;
}
/**
* Gets the value of the price property.
*
* @return
* possible object is
* {@link CustomerWishlistItemPriceType }
*
*/
public CustomerWishlistItemPriceType getPrice() {
return price;
}
/**
* Sets the value of the price property.
*
* @param value
* allowed object is
* {@link CustomerWishlistItemPriceType }
*
*/
public void setPrice(CustomerWishlistItemPriceType value) {
this.price = value;
}
/**
* Gets the value of the id property.
*
* @return
* possible object is
* {@link Long }
*
*/
public Long getId() {
return id;
}
/**
* Sets the value of the id property.
*
* @param value
* allowed object is
* {@link Long }
*
*/
public void setId(Long value) {
this.id = value;
}
/**
* Gets the value of the guid property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getGuid() {
return guid;
}
/**
* Sets the value of the guid property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setGuid(String value) {
this.guid = value;
}
/**
* Gets the value of the wishlistType property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getWishlistType() {
return wishlistType;
}
/**
* Sets the value of the wishlistType property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setWishlistType(String value) {
this.wishlistType = value;
}
/**
* Gets the value of the visibility property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getVisibility() {
return visibility;
}
/**
* Sets the value of the visibility property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setVisibility(String value) {
this.visibility = value;
}
/**
* Gets the value of the skuCode property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getSkuCode() {
return skuCode;
}
/**
* Sets the value of the skuCode property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setSkuCode(String value) {
this.skuCode = value;
}
/**
* Gets the value of the fulfilmentCentreCode property.
*
* @return
* possible object is
* {@link String }
*
*/
public String getFulfilmentCentreCode() {
return fulfilmentCentreCode;
}
/**
* Sets the value of the fulfilmentCentreCode property.
*
* @param value
* allowed object is
* {@link String }
*
*/
public void setFulfilmentCentreCode(String value) {
this.fulfilmentCentreCode = value;
}
/**
* Gets the value of the importMode property.
*
* @return
* possible object is
* {@link EntityImportModeType }
*
*/
public EntityImportModeType getImportMode() {
return importMode;
}
/**
* Sets the value of the importMode property.
*
* @param value
* allowed object is
* {@link EntityImportModeType }
*
*/
public void setImportMode(EntityImportModeType value) {
this.importMode = value;
}
}
import matplotlib.pyplot as plt
import datetime

# Read the raw unemployment-rate CSV into a list of [DATE, VALUE] rows.
with open('UNRATE.csv', 'r') as f:
    data = f.read()
rows = data.split('\n')
unrate = []
for row in rows:
    fields = row.split(',')
    unrate.append(fields)
print(unrate)

# Plot a separate hand-entered monthly series against parsed dates.
xaxis = ["2017/1/1", "2017/2/1", "2017/3/1", "2017/4/1", "2017/5/1", "2017/6/1",
         "2017/7/1", "2017/8/1", "2017/9/1", "2017/10/1", "2017/11/1", "2017/12/1"]
data = [1, 3, 5, 7, 3, 1, 2, 5, 10, 9, 11, 62]
date_time = []
for xax in xaxis:
    date = datetime.datetime.strptime(xax, '%Y/%m/%d')
    date_time.append(date)
plt.plot(date_time, data)
plt.xticks(rotation=90)
plt.show()
from rest_framework import status
from django.contrib.auth import get_user_model
from rest_framework.reverse import reverse
from django.utils.encoding import force_text
from django.utils.http import urlsafe_base64_encode
from django.contrib.auth.tokens import default_token_generator
from .basetest import BaseTestCase
User = get_user_model()
class UserApiTestCase(BaseTestCase):
def test_login_with_unverified_user(self):
self.create_user = User(
username="jude", email="<EMAIL>")
self.create_user.set_password("<PASSWORD>")
self.create_user.save()
self.login_data_unverified = {
"user": {
"email": "<EMAIL>",
"password": "<PASSWORD>",
}
}
url = reverse("authentication:login")
response = self.client.post(
url, self.login_data_unverified, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn("Your account is not verified, Please check your email to verify your account",
str(response.data))
def test_invalid_verification_link(self):
uid = "c3VsYUBzdWxhLnN1bGE"
activation_token = "<PASSWORD>"
url = reverse("authentication:activate_account",
args=(uid, activation_token,))
response = self.client.get(url, format="json")
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_correct_verification_link(self):
user = User.objects.get(email=self.login_data["user"]["email"])
uid = force_text(urlsafe_base64_encode(user.email.encode("utf8")))
activation_token = default_token_generator.make_token(user)
url = reverse("authentication:activate_account",
args=(uid, activation_token,))
response = self.client.get(url, format="json")
self.assertEqual(response.status_code, status.HTTP_200_OK)
def test_verified_user(self):
user = User.objects.get(email=self.login_data["user"]["email"])
uid = force_text(urlsafe_base64_encode(user.email.encode("utf8")))
activation_token = default_token_generator.make_token(user)
url = reverse("authentication:activate_account",
args=(uid, activation_token,))
self.client.get(url, format="json")
response = self.client.get(url, format="json")
response_message = {
"message": 'Your account is already verified, Please login.'}
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data, response_message)
#include <bits/stdc++.h>
using namespace std;
typedef long long int LLI;
LLI gcd(LLI a, LLI b){ return (b == 0 ? a : gcd(b, a%b)); }
LLI lcm(LLI a, LLI b){ return a / gcd(a,b) * b; } // divide before multiplying to reduce overflow risk
int main(void)
{
LLI a,b,c,d;
cin >> a >> b >> c >> d;
// reduce fractions
LLI d1 = gcd(a,b);
a /= d1;
b /= d1;
LLI d2 = gcd(c,d);
c /= d2; d /= d2;
LLI b_a = a * d, b_c = c * b; // 1 is longer than 2 horizontally (proportionally)
// we stretch horizontally
LLI arr1 = a * b * c * c;
LLI arr2 = c * d * a * a;
// we stretch vertically
LLI tarr1 = a * b * d * d;
LLI tarr2 = c * d * b * b;
if(!(b_a >= b_c)){
LLI dr = gcd(arr1 - arr2, arr1);
//cout << "stretching vertically" << endl;
printf("%lld/%lld", (arr1 - arr2) / dr , arr1/dr);
}
else {
//cout << "stretching horizontally" << endl;
LLI dr = gcd(tarr1 - tarr2, tarr1);
printf("%lld/%lld", (tarr1 - tarr2) / dr , tarr1/dr);
}
}
-- | No allowlisting.
--
-- This module is to be imported qualified.
module Lorentz.Contracts.NoAllowlist where
import Lorentz
type Allowlist = ()
type Entrypoints = Never
# Generated by Django 2.2.13 on 2020-06-25 13:55
from django.db import migrations
class Migration(migrations.Migration):
atomic = False
dependencies = [
('cms', '0053_auto_20200625_1453'),
]
operations = [
migrations.RenameModel(
old_name='BlogGuestAuthor',
new_name='BlogAuthor',
),
]
The run was as hard to believe as the game it won.
Six different Saints touched Marshawn Lynch on the play when he never went down, a 67-yard touchdown run. If ever there was an exclamation point, Lynch delivered it with a one-armed shove that knocked Saints cornerback Tracy Porter 5 yards backward.
And once Lynch reached the end zone, it was clear that these Saints, the defending Super Bowl champions, had been KO’d by a punch no one saw coming.
The final score: Seattle 41, New Orleans 36.
Lynch came back to the sidelines after his game-clinching TD, ball in his left hand, right arm raised as he flexed his biceps, and if there was any doubt, Lynch showed that yes, these Seahawks were stronger than anyone expected. The largest home underdog in NFL playoff history had won.
Doubted all week and down by 10 points to New Orleans, the first NFL team with a losing record ever to win a division went and earned its way to a fifth consecutive home playoff victory and one more game. At least.
Seattle will play at either Atlanta or Chicago next week, depending on the outcome of Sunday’s game between Green Bay and Philadelphia. If the Packers win, Seattle plays at Chicago. If the Eagles win, Seattle will play at the Falcons.
So when did your jaw hit the floor?
Was it with 1:15 left in the second quarter, when Matt Hasselbeck threw a 45-yard touchdown pass to Brandon Stokley to give Seattle its first lead over New Orleans?
Or was it when Seattle faced third-and-2 on its first possession of the second half, and Hasselbeck lobbed a 38-yard touchdown pass to Mike Williams, to put Seattle ahead by 11?
Or maybe it wasn’t until the fourth quarter, after the defending Super Bowl champions had cut Seattle’s lead to four points and were playing to get the ball back, only to have Lynch run 67 yards for a touchdown as 66,336 fans had Qwest Field echoing with TNT-grade decibels.
The Saints scored a touchdown on their next possession, but with 1:30 left, they attempted an onside kick that was recovered by tight end John Carlson. Seattle ran Marshawn Lynch twice, took a knee and began to celebrate.
Matt Hasselbeck passed for four touchdowns, a Seahawks playoff record.
Hasselbeck – who had his third pass of the game picked off – came back to throw three touchdown passes in the first two quarters. Two of them were to tight end John Carlson, whose only touchdown of the regular season came all the way back in Week 3.
This Seattle offense that scored more than 24 points in only four games all season had that many points at halftime. The defense that gave up five consecutive touchdown drives to the Saints back in November forced two punts in the first half and a turnover that set up Seattle’s game-tying field goal.
That field goal was sandwiched between two touchdown passes by Hasselbeck.
Seattle scored 17 consecutive points and took a 24-17 lead on Stokley’s 45-yard touchdown catch.
The Saints came back to kick a field goal on the final play of the first half, and trailed 24-20 at halftime. Seattle got the ball and scored on Williams’ touchdown reception.
Seattle led by 14 points when the third quarter ended, yet the Seahawks appeared on the verge of losing the game after New Orleans scored 10 points in the first 6 minutes of the fourth quarter, first on Julius Jones’ 4-yard touchdown run and then on Garrett Hartley’s 21-yard field goal.
With 5 minutes, 36 seconds left, Seattle held a four-point lead and New Orleans had the ball at its own 6-yard line.
The first play was an 11-yard pass to Julius Jones. He was tackled by linebacker Lofa Tatupu, a collision so fierce both players left the game wobbly. Then came a false-start penalty against New Orleans, the Saints’ third of the game.
An incompletion followed by a 7-yard pass to fullback Heath Evans brought up third-and-8, and Brees threw incomplete on a play in which the Saints were penalized for holding. The Seahawks declined the penalty and got the ball back with 4:20 remaining in the game.
Two plays later Lynch ran roughshod over the Saints, sealing the victory. |
package main
import (
"flag"
"os"
"webscraper"
)
func main() {
url := flag.String("url", "", "example https://en.wikipedia.org/wiki/Slope_One")
depth := flag.Int("depth", 1, "integer value e.g. 1")
pattern := flag.String("pattern", "", "regex pattern used to extract links")
rate := flag.Int("rate", 1, "requests per second allowed; default is 1, max is 5")
flag.Parse()
if *url == "" || *depth < 1 {
flag.PrintDefaults()
os.Exit(1)
}
if *rate < 1 || *rate > 5 {
flag.PrintDefaults()
os.Exit(1)
}
webscraper.Run(webscraper.HttpFetcher{}, *url, *depth, *pattern, *rate)
}
|
// Keys returns the keys of the set in the order of insertion.
func (s *OrderedHashSet) Keys() []interface{} {
keys := make([]interface{}, s.order.Len())
i := 0
for element := s.order.Front(); element != nil; element = element.Next() {
keys[i] = element.Value
i++
}
return keys
} |
{-# LANGUAGE OverloadedStrings #-}
-------------------------------------------------------------------------------
-- |
-- Module : WatchIt
-- Copyright : (c) 2014 <NAME>
-- License : BSD3
--
-- Maintainer : <NAME> <<EMAIL>>
--
-------------------------------------------------------------------------------
module WatchIt
( defaultMain
, watchIt
) where
-------------------------------------------------------------------------------
import WatchIt.Options
import WatchIt.Types
import Control.Concurrent (threadDelay)
import Control.Monad (forever, void, when)
import Data.Pool (Pool (..), createPool, tryWithResource)
import Data.Streaming.Process (Inherited (..), shell, streamingProcess,
waitForStreamingProcess)
import qualified Data.Text as Text
import qualified Filesystem.Path.CurrentOS as FS
import Options.Applicative (execParser)
import System.FSNotify (eventPath, watchDir, watchTree,
withManager)
-------------------------------------------------------------------------------
defaultMain :: IO ()
defaultMain = do
options <- execParser infoOptions
watchIt $ parseConfig options
-------------------------------------------------------------------------------
parseConfig :: Options -> Config
parseConfig options = Config
{ configPath = withDef configPath optionsPath FS.decodeString
, configFilter = withDef configFilter optionsExt
(flip FS.hasExtension . Text.pack)
, configAction = withDef configAction optionsCmd (const . run)
, configForce = withDef configForce optionsForce id
, configNumJobs = withDef configNumJobs optionsNumJobs id
, configRecur = withDef configRecur optionsNotRec not
}
where
withDef conf opt f = maybe (conf defaultConfig) f (opt options)
watchIt :: Config -> IO ()
watchIt config = do
-- Set up Config
let path = configPath config
let filterEvent = configFilter config . eventPath
let numJobs = configNumJobs config
pool <- createWorkerPool numJobs
let action = configAction config
let handleEvent = withPool pool action . eventPath
let forced = configForce config
let watch = if configRecur config then watchTree else watchDir
let longDelay = 12 * 3600 * 10000 -- a long sleep; the 'forever' loop below keeps the watcher alive
-- Watch it
putStrLn "watchit started..."
withManager $ \man -> do
when forced $ action FS.empty
void $ watch man path
filterEvent
handleEvent
forever $ threadDelay longDelay
--------------------------------------------------------------------------------
createWorkerPool :: Int -> IO (Pool ())
createWorkerPool stripes =
createPool
(return ())
(const $ return ())
stripes timeLeftOpen numPerStripe
where
timeLeftOpen = 1
numPerStripe = 1
withPool :: Pool a -> (FS.FilePath -> IO ()) -> FS.FilePath -> IO ()
withPool pool f file =
void $ tryWithResource pool $ const $ f file
--------------------------------------------------------------------------------
run :: String -> IO ()
run cmd = do
putStrLn $ replicate 72 '-'
(Inherited, Inherited, Inherited, handle) <-
streamingProcess (shell cmd)
waitForStreamingProcess handle >>= print
|
/**
* Created by Jayson on 8/7/2017.
* <p>
 * This Fragment handles the UI for the Practice game mode
*/
public class PracticeGameFragment
extends GameFragment
implements PracticeGameContract.View {
public final String LOG_TAG = this.getClass().getSimpleName();
// Reference to our presenter
PracticeGameContract.UserActionsListener mPracticeActionsListener;
// UI Views
Button mDebugRefreshView;
FImageButton mPauseButton;
FButton mHintButton;
long mElapsedMillis = 0; // Maintain timer progress between lifecycle changes
boolean mIsPaused = false; // Maintain whether or not we're already paused
boolean mIsGameOver = false; // Maintain whether we finished the current game
// Default constructor
public PracticeGameFragment() {
}
public static PracticeGameFragment newInstance() {
return new PracticeGameFragment();
}
/*
* Each game mode needs to set up its own onCreateView and assign the following
* member variables of the superclass
* mActionsListener a reference to our presenter
* mRecyclerGridView a reference to the RecyclerView for the game
* As well as restore the previous state if applicable
*/
@Nullable
@Override
public View onCreateView(LayoutInflater inflater,
@Nullable ViewGroup container,
@Nullable Bundle savedInstanceState) {
// Call our superclass to handle basic game state restoration
super.onCreateView(inflater, container, savedInstanceState);
View root;
root = inflater.inflate(
R.layout.fragment_game_practice, container, false);
// Load game mode specific data from a saved state
if (savedInstanceState != null) {
mElapsedMillis = savedInstanceState
.getLong(getString(R.string.bundle_key_elapsed_millis));
}
// Instance the presenter our fragment uses and grab a reference
mPracticeActionsListener = new PracticeGamePresenter(this);
// Have the superclass use the PracticeGamePresenter as its GamePresenter
mActionsListener = mPracticeActionsListener;
// Set up the RecyclerView and assign it to the superclass
mRecyclerGridView = root.findViewById(R.id.game_recycler_grid);
// Grab references to our views
mPauseButton =
root.findViewById(R.id.button_pause);
mHintButton =
root.findViewById(R.id.button_hint);
// Hook up click listeners
mPauseButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
mPracticeActionsListener.onPauseClicked();
}
});
mHintButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
mPracticeActionsListener.onHintClicked();
}
});
// Initialize a game
mPracticeActionsListener.initGame(mExistingGame);
return root;
}
/*
* Save our game state to be restored on rotation or fragment recreation
*/
@Override
public void onSaveInstanceState(Bundle outState) {
super.onSaveInstanceState(outState);
// Persist timer progress so onCreateView can restore mElapsedMillis after recreation
outState.putLong(getString(R.string.bundle_key_elapsed_millis), mElapsedMillis);
}
@Override
public void onPause() {
// Pause the game if we aren't already
if (!mIsPaused && !mIsGameOver) {
mPracticeActionsListener.onPauseClicked();
}
super.onPause();
}
@Override
public void onResume() {
// Workaround to capture 'back' button presses from the fragment
// since we want 'back' to pause the game
// https://stackoverflow.com/a/29166971/7009268
if(getView() == null){
return;
}
getView().setFocusableInTouchMode(true);
getView().requestFocus();
getView().setOnKeyListener(new View.OnKeyListener() {
@Override
public boolean onKey(View v, int keyCode, KeyEvent event) {
if (event.getAction() == KeyEvent.ACTION_UP
&& keyCode == KeyEvent.KEYCODE_BACK){
if (!mIsPaused) pauseGame();
return true;
}
return false;
}
});
super.onResume();
}
/**
* Cleanup resources
*/
@Override
public void onStop() {
super.onStop();
}
@Override
public void onSetSuccess() {
// Nothing to do here, handled by superclass
}
@Override
public void onSetFailure() {
// Nothing to do here, handled by superclass
}
@Override
public void showGameOver() {
// Nothing to do here, no game over in practice mode
}
/**
* Pauses the game and opens the PauseFragment as a dialog for result.
*/
@Override
public void pauseGame() {
if (!mIsPaused) {
mIsPaused = true;
// Set up PauseFragment
android.support.v4.app.DialogFragment pauseFragment = new PauseFragment();
pauseFragment.setCancelable(false);
pauseFragment.setTargetFragment(this, 1);
pauseFragment.setStyle(STYLE_NORMAL, R.style.PauseDialogStyle);
// Show fragment
pauseFragment.show(getFragmentManager(), "dialog");
}
}
/*
* Retrieve the results from the pause pop up menu and react accordingly.
*/
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
if (resultCode == PauseContract.RESULT_RESUME) {
mPracticeActionsListener.onPauseResultResume();
} else if (resultCode == PauseContract.RESULT_RESTART) {
mPracticeActionsListener.onPauseResultRestart();
} else if (resultCode == PauseContract.RESULT_MAIN_MENU) {
mPracticeActionsListener.onPauseResultMainMenu();
} else {
Log.d(LOG_TAG, "The pause menu returned an unexpected code.");
}
}
/**
 * Un-pause the current game. This is called when resuming from the pause
 * menu and when coming back from being minimized.
 */
@Override
public void resumeGame() {
mIsPaused = false;
}
/**
* Start a new game. Ends the current game.
*/
@Override
public void restartGame() {
// Swap in a fresh PracticeGameFragment to restart the game
FragmentManager fragmentManager = getFragmentManager();
FragmentTransaction transaction = fragmentManager.beginTransaction();
transaction.replace(R.id.content_frame, PracticeGameFragment.newInstance());
transaction.commit();
}
/**
* Clear the task stack and open the main menu
*/
@Override
public void openMainMenu() {
getActivity().finishAfterTransition();
}
} |
/*
* Copyright 2012-2015 <NAME>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package net.nharyes.drivecopy.biz.wfm;
import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeFlow;
import com.google.api.client.googleapis.auth.oauth2.GoogleTokenResponse;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.services.drive.DriveScopes;
import com.google.inject.Inject;
import com.google.inject.Singleton;
import net.nharyes.drivecopy.biz.bo.TokenBO;
import net.nharyes.drivecopy.biz.exc.WorkflowManagerException;
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Collections;
@Singleton
public class TokenWorkflowManagerImpl extends BaseWorkflowManager<TokenBO> implements TokenWorkflowManager {
/*
* Constants
*/
private static final String CLIENT_ID_KEY = "clientId";
private static final String CLIENT_SECRET_KEY = "clientSecret";
private static final String ACCESS_TOKEN_KEY = "accessToken";
private static final String REFRESH_TOKEN_KEY = "refreshToken";
private static final String REDIRECT_URI = "urn:ietf:wg:oauth:2.0:oob";
// configuration
private PropertiesConfiguration config;
// HTTP transport
private HttpTransport httpTransport;
// JSON factory
private JsonFactory jsonFactory;
@Inject
public TokenWorkflowManagerImpl(PropertiesConfiguration config, HttpTransport httpTransport, JsonFactory jsonFactory) {
this.config = config;
this.httpTransport = httpTransport;
this.jsonFactory = jsonFactory;
}
public TokenBO handleWorkflow(TokenBO businessObject, int action) throws WorkflowManagerException {
switch (action) {
case ACTION_GET:
return get(businessObject);
default:
throw new WorkflowManagerException("Action not found");
}
}
private TokenBO get(TokenBO token) throws WorkflowManagerException {
try {
// check client ID and client secret configuration existence
if (!config.containsKey(CLIENT_ID_KEY) || !config.containsKey(CLIENT_SECRET_KEY)) {
// request client data to user
System.out.println("Configuration file not found; generating a new one...");
System.out.println("(see https://github.com/Gherynos/DriveCopy/wiki/Setup for help)");
System.out.println();
System.out.println("Please insert CLIENT ID:");
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
String clientId = br.readLine();
System.out.println("Please insert CLIENT SECRET:");
String clientSecret = br.readLine();
// store client data
config.setProperty(CLIENT_ID_KEY, clientId);
config.setProperty(CLIENT_SECRET_KEY, clientSecret);
config.save();
}
// check tokens configuration existence
if (!config.containsKey(ACCESS_TOKEN_KEY) || !config.containsKey(REFRESH_TOKEN_KEY)) {
// request authorization to user
GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(httpTransport, jsonFactory, config.getString(CLIENT_ID_KEY), config.getString(CLIENT_SECRET_KEY), Collections.singletonList(DriveScopes.DRIVE)).build();
String url = flow.newAuthorizationUrl().setRedirectUri(REDIRECT_URI).build();
System.out.println("Please open the following URL in your browser then type the authorization code:");
System.out.println(" " + url);
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
String code = br.readLine();
// process response
GoogleTokenResponse response = flow.newTokenRequest(code).setRedirectUri(REDIRECT_URI).execute();
Credential credential = flow.createAndStoreCredential(response, null);
// store tokens
config.setProperty(ACCESS_TOKEN_KEY, credential.getAccessToken());
config.setProperty(REFRESH_TOKEN_KEY, credential.getRefreshToken());
config.save();
}
// return token
return new TokenBO(config.getString(CLIENT_ID_KEY), config.getString(CLIENT_SECRET_KEY), config.getString(ACCESS_TOKEN_KEY), config.getString(REFRESH_TOKEN_KEY));
} catch (IOException | ConfigurationException ex) {
// re-throw exception
throw new WorkflowManagerException(ex.getMessage(), ex);
}
}
}
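The manager above follows a check-then-prompt-then-persist pattern for each missing setting. A rough stand-alone sketch of that shape, using the JDK's `Properties` as a stand-in for `PropertiesConfiguration` (the key name and prompt text here are illustrative, not taken from the real API):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ConfigSketch {

    // Mirrors the manager's pattern: reuse a stored value, otherwise prompt once and persist.
    static String getOrPrompt(Properties config, String key, BufferedReader in) throws IOException {
        String value = config.getProperty(key);
        if (value == null) {
            System.out.println("Please insert " + key + ":");
            value = in.readLine();
            config.setProperty(key, value);
            // (the real manager calls config.save() here to write the file to disk)
        }
        return value;
    }

    public static void main(String[] args) throws IOException {
        Properties config = new Properties();
        BufferedReader in = new BufferedReader(new StringReader("my-client-id\n"));
        // First call prompts and reads input; second call reuses the stored value.
        System.out.println(getOrPrompt(config, "clientId", in));
        System.out.println(getOrPrompt(config, "clientId", in));
    }
}
```

This is the same reason the real manager only walks the OAuth authorization flow when the tokens are absent from the configuration file.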
|
package de.maximilianheidenreich.jnet.net;
import de.maximilianheidenreich.jeventloop.utils.ExceptionUtils;
import de.maximilianheidenreich.jnet.events.RecvPacketEvent;
import de.maximilianheidenreich.jnet.packets.AbstractPacket;
import de.maximilianheidenreich.jnet.packets.core.NameChangePacket;
import lombok.Getter;
import lombok.extern.log4j.Log4j;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
/**
* A connection which can represent a client to server / server to client connection.
*/
@Log4j
@Getter
public class Connection implements Runnable {
// ====================== VARS
/**
* A name for the connection.
*/
private String name;
/**
* The parent AbstractPacketManager which will handle any packets.
*/
private final AbstractPacketManager packetManager;
/**
* Reference to the {@link Socket} instance.
*/
private final Socket socket;
/**
* Wrapper around the sockets {@link java.io.OutputStream}.
*/
private final ObjectOutputStream outputStream;
/**
* Wrapper around the sockets {@link java.io.InputStream}.
*/
private final ObjectInputStream inputStream;
// ====================== CONSTRUCTOR
public Connection(AbstractPacketManager packetManager, Socket socket, String name) throws IOException {
this.name = name;
this.packetManager = packetManager;
this.socket = socket;
this.outputStream = new ObjectOutputStream(getSocket().getOutputStream());
this.inputStream = new ObjectInputStream(getSocket().getInputStream());
}
public Connection(AbstractPacketManager packetManager, Socket socket) throws IOException {
this(packetManager, socket, UUID.randomUUID().toString());
}
// ====================== BUSINESS LOGIC
@Override
public void run() {
log.debug(String.format("[JNet] Started new ConnectionThread for %s", getSocket().getRemoteSocketAddress().toString()));
Thread.currentThread().setName(
String.format(
"ConnectionThread for %s |%s",
getSocket().getRemoteSocketAddress().toString(),
Thread.currentThread().getName()
)
);
while (!Thread.currentThread().isInterrupted() && getPacketManager().getEventLoop().isRunning()) {
try {
AbstractPacket packet = recv();
log.trace("[JNet] SOCK (" + getName() + ") Read " + packet);
getPacketManager().getEventLoop().dispatch(new RecvPacketEvent(packet, this));
}
catch (IOException | ClassNotFoundException e) {
log.error("[JNet] SOCK (" + getName() + ") Received invalid packet in " + Thread.currentThread() + "!");
log.error(ExceptionUtils.getStackTraceAsString(e));
}
}
}
/**
* Receives an {@link Object} and returns it as a {@link AbstractPacket}.
*
* @return The returned packet
* @throws IOException
* @throws ClassNotFoundException
*/
private AbstractPacket recv() throws IOException, ClassNotFoundException {
return (AbstractPacket) getInputStream().readObject();
}
// ====================== SENDING PACKETS
/**
* Sends a {@link AbstractPacket} over the socket connection.
*
* @param packet
* The packet to send
* @param flush
* Whether to flush the channel afterwards
* @throws IOException
*/
public void sendRaw(AbstractPacket packet, boolean flush) throws IOException {
getOutputStream().writeObject(packet);
if (flush)
getOutputStream().flush();
log.trace(String.format("[JNet] SOCK (%s) Writing %s, flush: %s", getName(), packet.toString(), flush));
}
/**
* Wrapper around {@link #sendRaw(AbstractPacket, boolean)} with flush defaulting to {@code false}.
*
* @param packet
* The packet to send
* @throws IOException
*/
public void sendRaw(AbstractPacket packet) throws IOException {
sendRaw(packet, false);
}
/**
* Sends the packet via {@link #sendRaw(AbstractPacket, boolean)} and returns a callback that is completed
* when a packet with a matching id is received.
*
* @param packet
* The packet to send
* @return
* The callback
* @throws IOException
*/
public CompletableFuture<AbstractPacket> send(AbstractPacket packet, boolean flush) throws IOException {
CompletableFuture<AbstractPacket> future = new CompletableFuture<>();
getPacketManager().addCallback(packet, future);
sendRaw(packet, flush);
return future;
}
/**
* Wrapper around {@link #send(AbstractPacket, boolean)} with flush defaulting to {@code false}.
*
* @param packet
* The packet to send
* @return
* The callback
* @throws IOException
*/
public CompletableFuture<AbstractPacket> send(AbstractPacket packet) throws IOException {
return send(packet, false);
}
// ====================== HELPERS
/**
* Updates the name locally.
*
* @param name
* The new name to use
*/
public void setName(String name) {
this.name = name;
}
/**
* Updates the name locally and on connected peers.
*
* @param name
* The new name to use
* @throws IOException
*/
public void setNameRemote(String name) throws IOException {
String oldName = getName();
this.name = name;
send(new NameChangePacket(oldName, name))
.exceptionally(err -> {
err.printStackTrace(); // TODO: fix nicer
return null;
});
}
}
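The `send`/callback pairing above amounts to a request-response bridge built on futures: register a future keyed by packet id before writing, and complete it when a reply with the same id arrives. A minimal stand-alone sketch of that idea using only the JDK (the registry and method names here are illustrative stand-ins, not the actual `AbstractPacketManager` API):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class CallbackSketch {

    // Stand-in for the packet manager's callback registry: packet id -> pending future.
    static final Map<UUID, CompletableFuture<String>> callbacks = new ConcurrentHashMap<>();

    // Mirrors Connection.send(): register a callback, then write the packet.
    static CompletableFuture<String> send(UUID packetId, String payload) {
        CompletableFuture<String> future = new CompletableFuture<>();
        callbacks.put(packetId, future);
        // (the real code would serialize the packet to the socket here)
        return future;
    }

    // Mirrors the receive loop: a reply with a matching id completes the pending future.
    static void onReceive(UUID packetId, String reply) {
        CompletableFuture<String> future = callbacks.remove(packetId);
        if (future != null) {
            future.complete(reply);
        }
    }

    public static void main(String[] args) throws Exception {
        UUID id = UUID.randomUUID();
        CompletableFuture<String> reply = send(id, "ping");
        onReceive(id, "pong");           // simulate the remote peer answering
        System.out.println(reply.get()); // prints "pong"
    }
}
```

The caller never blocks on the socket directly; it awaits the future, which decouples the request site from the single-threaded receive loop in `run()`.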
|
Complaining about your boss and pay is a favorite pastime of everyone from assembly-line workers to corporate cubicle neighbors. But what if you work from home? Or drive an Uber? If you don’t even know who your fellow employees are, how can you commiserate with them?
If you’re not an employee, you can’t join a union. But you can join a Facebook group.
A new kind of corporate culture is being created by independent contractors who yearn for the camaraderie of marching to a communal beat. Rural doctors, property appraisers, freelance writers, and ride-share drivers are forging their own water-cooler communities by connecting online with “colleagues” they’ve never met.
Freelance employees are searching for a digital platform to give them a collective voice. Joining a union includes sharing information and bringing common interests together, which are endeavors perfectly suited for social media. On a digital platform, the longstanding union membership barriers of cost, venue, and visibility are also minimized.
Traditional American union membership is declining. Employees have historically joined labor unions for all kinds of protection, from low wages to threats to professional dignity. These reasons are especially relevant to independent contractors in the new economy. Facebook groups can replicate the efforts of unions with private posts that are only visible to group members, far from the intrusive eyes of company management. In this way, membership in Facebook groups is “mirroring social groups in the physical world like churches, governments, and unions.”
And they’re working. To strike for higher pay, delivery employees at Instacart organized a “no-delivery day” through their Facebook group. And even where employees already have a union, a Facebook group can fill the gaps: “USPS Non-Career Employees” has a mission statement that reads “Do you feel that management is taking advantage of you? Do you feel your union could intervene more?…We are not an official union, but there are strength in numbers…[we] need to stick together so we can protect our careers.”
Among Facebook’s 2 billion users, groups have amassed over 100 million members. Work-focused groups have proven to be compelling in their ability to blend elements of labor unions, company-sponsored employee-resources groups, internal communication platforms, and gossipy anonymous messaging sites like Blind. The groups run the gamut from social to professional, and many aim for the holy grail of traditional unions at the industry level. For example, flight attendants in “Airline FA Contract Compare & Share” “discuss the details of each airlines’ contracts between flight attendants and their companies,” and “I am a Real Estate Appraiser” strives to “take control of our industry.”
Any Facebook user can create a group on any theme and serve as its administrator, and membership usually must be approved before another user can join and view the group posts. Facebook group administrators hold the power to delete posts that do not comply with the group guidelines, and block members. Some are stricter than others.
Some groups follow the first rule of Fight Club: you do not talk about Fight Club. The Facebook group “Magicians Only” asks “Please DO NOT discuss methods;” while the 21,000 members of “Real Estate Rockstar Agents” stipulate “no commercial postings, listings, property ads,” and 23,000 “Yoga Teachers” preach no “self-promotion.”
Wannabees creeping for industry insight are also discouraged. “Worldwide Working Freelance Makeup Artists” is specifically for those already working in the industry, describing that “this playground is only for the big kids”; the 6,800-people-strong air-conditioner group “HVAC Technician” warns “if you do not work in this trade and we find out, you will be deleted from the group. No homeowner DIYs”; the “Uber and Lyft Drivers Breakroom” tells participants to “be prepared to show proof that you are a driver”; and the even larger “Speech Pathologists at Large” forum stipulates that “your public (Facebook) profile **must** indicate your professional or student status.”
Digital union efforts also come in the form of hashtags. For example, Zara employees came together to demand better pay, more hours, and equal opportunity for growth—not in the lunchroom or at happy hour, but by tagging their social media posts #ChangeZara. Fashion photographers use #NoFreePhotos to assign credit, and #NameTheTranslator calls out book reviews that leave the translator invisible. #ChangeZara was a success: By sharing campaign graphics and “solidarity selfies” on social media, Zara employees in New York amassed 1,400 signatures on a petition to management using the digital platform coworker.org. They ultimately received pay raises and increased advancement opportunities in response.
Employers often discourage unionizing, largely to preserve their ability to hire and fire “at-will.” As a result, it can be just as risky for workers to unionize today as it was on the heels of the industrial revolution, when workers were locked out of factories, replaced, and threatened with violence. For example, a leading New York-based news site recently closed down after the staff voted to join the Writers Guild, and WalMart discouraged workers from joining unions by closing the meat counter and outsourcing packaged products after its butchers at a Texas store unionized. Some of the largest industries are actually the hardest to unionize. For example, the franchise business model splinters employees of large corporations into small groups, which has allowed McDonald’s to operate free of union pressure.
You also can’t assume that your Facebook group is a safe haven for griping about the boss. Just like in real life on and off the factory floor, if a Facebook post does not fall under what the National Labor Relations Board classifies as “protected concerted activity,” it can be grounds for lawful termination. Protected activity includes discussing work conditions and taking action for mutual benefit on employment terms and conditions, but excludes personal attacks on colleagues or posts that disrupt work. But privacy rights in the internet era are universally valued enough to have earned bipartisan support at the state level, and many states now restrict employer access to employees’ social-media accounts.
Protecting the common interest of workers has long been the domain of labor unions—but they are notorious as “latecomers to digital organizing.” Now that workers are competing for jobs in this new gig-based economy, workers are turning online to vent, collaborate, and survive. Where traditional unions have failed to deliver, social media has stepped up.
Union membership may not be declining at all—just losing territory to the competition.
This article is part of Quartz Ideas, our home for bold arguments and big thinkers. |
/**
* The agent database stores necessary data about connected agents. The data is then used to call
* the agents signing service and to generate the ssh authentication request.
*/
public class AgentDatabase extends RobustSQLiteOpenHelper {
public static final String TAG = "TB.AgentDatabase";
public static final String DB_NAME = "agents";
public static final int DB_VERSION = 1;
public static final String TABLE_AGENTS = "agents";
public static final String FIELD_AGENT_KEY_IDENTIFIER = "keyidentifier";
public static final String FIELD_AGENT_KEY_TYPE = "keytype";
public static final String FIELD_AGENT_DESCRIPTION = "description";
public static final String FIELD_AGENT_PACKAGE_NAME = "packagename";
public static final String FIELD_AGENT_PUBLIC_KEY = "publickey";
static {
addTableName(TABLE_AGENTS);
}
private static final Object sInstanceLock = new Object();
private static AgentDatabase sInstance;
private final SQLiteDatabase mDataBase;
public static AgentDatabase get(Context context) {
synchronized (sInstanceLock) {
if (sInstance != null) {
return sInstance;
}
Context appContext = context.getApplicationContext();
sInstance = new AgentDatabase(appContext);
return sInstance;
}
}
private AgentDatabase(Context context) {
super(context, DB_NAME, null, DB_VERSION);
mDataBase = getWritableDatabase();
}
@Override
public void onCreate(SQLiteDatabase db) {
super.onCreate(db);
createTables(db);
}
private void createTables(SQLiteDatabase db) {
db.execSQL("CREATE TABLE " + TABLE_AGENTS + " ( " +
"_id INTEGER PRIMARY KEY,"
+ FIELD_AGENT_KEY_IDENTIFIER + " TEXT, "
+ FIELD_AGENT_KEY_TYPE + " TEXT, "
+ FIELD_AGENT_DESCRIPTION + " TEXT, "
+ FIELD_AGENT_PACKAGE_NAME + " TEXT, "
+ FIELD_AGENT_PUBLIC_KEY + " BLOB)"
);
}
@Override
public void onRobustUpgrade(SQLiteDatabase db, int oldVersion, int newVersion)
throws SQLiteException {
}
/**
* Gets an agent by its id.
*
* @param agentId The agent id.
* @return The agent if it is present. Null otherwise.
*/
public AgentBean findAgentById(long agentId) {
Cursor c = mDataBase.query(TABLE_AGENTS, null,
"_id = ?", new String[] {String.valueOf(agentId)},
null, null, null);
return getFirstAgentBean(c);
}
/**
* Inserts or updates the given agent into the agent database.
*
* @param agent The agent to insert or update
* @return The agent. The field id is now synced with the id property.
*/
public AgentBean saveAgent(AgentBean agent) {
long id = agent.getId();
mDataBase.beginTransaction();
try {
if (id == -1) {
id = mDataBase.insert(TABLE_AGENTS, null, agent.getValues());
} else {
mDataBase.update(TABLE_AGENTS, agent.getValues(), "_id = ?",
new String[] {String.valueOf(id)});
}
mDataBase.setTransactionSuccessful();
} finally {
mDataBase.endTransaction();
}
agent.setId(id);
return agent;
}
/**
* Deletes the agent with the supplied id from the database
*
* @param agentId Id of agent that should be deleted
*/
public void deleteAgentById(long agentId) {
if (agentId == HostDatabase.AGENTID_NONE) {
return;
}
mDataBase.beginTransaction();
try {
mDataBase.delete(TABLE_AGENTS, "_id = ?",
new String[] {Long.toString(agentId)});
mDataBase.setTransactionSuccessful();
} finally {
mDataBase.endTransaction();
}
}
/**
* Creates a list of agent beans from a database cursor.
*
* @param c cursor to read from
*/
private List<AgentBean> createAgentBeans(Cursor c) {
List<AgentBean> agents = new ArrayList<>();
final int COL_ID = c.getColumnIndexOrThrow("_id"),
COL_KEY_IDENTIFIER = c.getColumnIndexOrThrow(FIELD_AGENT_KEY_IDENTIFIER),
COL_KEY_TYPE = c.getColumnIndexOrThrow(FIELD_AGENT_KEY_TYPE),
COL_DESCRIPTION = c.getColumnIndexOrThrow(FIELD_AGENT_DESCRIPTION),
COL_PACKAGE_NAME = c.getColumnIndexOrThrow(FIELD_AGENT_PACKAGE_NAME),
COL_PUBLIC_KEY = c.getColumnIndexOrThrow(FIELD_AGENT_PUBLIC_KEY);
while (c.moveToNext()) {
AgentBean agent = new AgentBean();
agent.setId(c.getLong(COL_ID));
agent.setKeyIdentifier(c.getString(COL_KEY_IDENTIFIER));
agent.setKeyType(c.getString(COL_KEY_TYPE));
agent.setDescription(c.getString(COL_DESCRIPTION));
agent.setPackageName(c.getString(COL_PACKAGE_NAME));
agent.setPublicKey(c.getBlob(COL_PUBLIC_KEY));
agents.add(agent);
}
return agents;
}
private AgentBean getFirstAgentBean(Cursor c) {
AgentBean agent = null;
List<AgentBean> agents = createAgentBeans(c);
if (agents.size() > 0) {
agent = agents.get(0);
}
c.close();
return agent;
}
} |
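The `get(Context)` accessor above is a lock-guarded lazy singleton: the instance is created on first use, and the shared lock object makes concurrent first calls safe. A minimal stand-alone illustration of the same pattern without any Android dependencies (class names here are hypothetical):

```java
public class LazySingleton {

    // Dedicated lock object, mirroring sInstanceLock in AgentDatabase.
    private static final Object LOCK = new Object();
    private static LazySingleton instance;

    private LazySingleton() {
        // private constructor prevents direct instantiation
    }

    public static LazySingleton get() {
        synchronized (LOCK) {
            if (instance == null) {
                instance = new LazySingleton();
            }
            return instance;
        }
    }

    public static void main(String[] args) {
        // Every caller observes the same instance.
        System.out.println(get() == get()); // prints "true"
    }
}
```

Holding the lock for the whole accessor keeps the check and the assignment atomic; `AgentDatabase` additionally narrows to the application context so the singleton never pins an `Activity` in memory.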
/**
* @author Quentin Boileau (Geomatys)
*/
public class ParameterDescriptorJSONSerializer extends JsonSerializer<GeneralParameterDescriptor> {
@Override
public void serialize(GeneralParameterDescriptor generalParameterDescriptor, JsonGenerator writer, SerializerProvider serializerProvider) throws IOException, JsonProcessingException {
writeGeneralDesc(generalParameterDescriptor, writer);
}
private void writeGeneralDesc(GeneralParameterDescriptor generalDesc, JsonGenerator writer) throws IOException {
writer.writeStartObject();
final String name = generalDesc.getName().getCode();
writer.writeStringField("name", name);
writer.writeNumberField("minOccurs", generalDesc.getMinimumOccurs());
writer.writeNumberField("maxOccurs", generalDesc.getMaximumOccurs());
if (generalDesc.getDescription() != null) {
writer.writeStringField("description", generalDesc.getDescription().toString());
}
if (generalDesc instanceof ParameterDescriptor) {
ParameterDescriptor descParam = (ParameterDescriptor) generalDesc;
writeParamDesc(descParam, writer);
} else if (generalDesc instanceof ParameterDescriptorGroup) {
ParameterDescriptorGroup descGroup = (ParameterDescriptorGroup) generalDesc;
writeGroupDesc(descGroup, writer);
}
writer.writeEndObject();
}
private void writeGroupDesc(ParameterDescriptorGroup descGroup, JsonGenerator writer) throws IOException {
List<GeneralParameterDescriptor> descriptors = descGroup.descriptors();
writer.writeArrayFieldStart("descriptors");
for (GeneralParameterDescriptor descriptor : descriptors) {
writeGeneralDesc(descriptor, writer);
}
writer.writeEndArray();
}
private void writeParamDesc(ParameterDescriptor descParam, JsonGenerator writer) throws IOException {
writer.writeStringField("class", descParam.getValueClass().getCanonicalName());
if (descParam.getUnit() != null) {
String unit = ObjectConverters.convert(descParam.getUnit(), String.class);
writer.writeStringField("unit", unit);
}
final Object defaultValue = descParam.getDefaultValue();
if (defaultValue != null) {
writer.writeFieldName("defaultValue");
JsonUtils.writeValue(defaultValue, writer);
}
final Set validValues = descParam.getValidValues();
final Comparable minValue = descParam.getMinimumValue();
final Comparable maxValue = descParam.getMaximumValue();
if (validValues != null || minValue != null || maxValue != null) {
writer.writeObjectFieldStart("restriction");
if (minValue != null) {
writer.writeFieldName("minValue");
JsonUtils.writeValue(minValue, writer);
}
if (maxValue != null) {
writer.writeFieldName("maxValue");
JsonUtils.writeValue(maxValue, writer);
}
if (validValues != null) {
writer.writeArrayFieldStart("validValues");
for (Object validValue : validValues) {
JsonUtils.writeValue(validValue, writer);
}
writer.writeEndArray();
}
writer.writeEndObject();
}
// write user map entries if exist
if (descParam instanceof ExtendedParameterDescriptor) {
ExtendedParameterDescriptor extDesc = (ExtendedParameterDescriptor) descParam;
Map<String, Object> userObject = extDesc.getUserObject();
if (userObject != null) {
writer.writeObjectFieldStart("ext");
for (Map.Entry<String, Object> entry : userObject.entrySet()) {
writer.writeFieldName(entry.getKey());
JsonUtils.writeValue(entry.getValue(), writer);
}
writer.writeEndObject();
}
}
}
} |
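writeGeneralDesc recurses: a ParameterDescriptorGroup writes a "descriptors" array whose entries are serialized by the same method. A minimal sketch of that recursion over a toy descriptor model (Desc and write below are hypothetical stand-ins; the real code serializes GeoAPI descriptors through Jackson's JsonGenerator):

```java
import java.util.List;

public class DescriptorJsonSketch {
    // Toy model: a leaf parameter, or a named group of child descriptors.
    static class Desc {
        final String name;
        final List<Desc> children; // null for a leaf parameter
        Desc(String name) { this(name, null); }
        Desc(String name, List<Desc> children) { this.name = name; this.children = children; }
    }

    // Mirrors writeGeneralDesc/writeGroupDesc: one object per descriptor;
    // groups carry a "descriptors" array of recursively written entries.
    static String write(Desc d) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"name\":\"").append(d.name).append('"');
        if (d.children != null) {
            sb.append(",\"descriptors\":[");
            for (int i = 0; i < d.children.size(); i++) {
                if (i > 0) sb.append(',');
                sb.append(write(d.children.get(i)));
            }
            sb.append(']');
        }
        sb.append('}');
        return sb.toString();
    }
}
```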
// removeChannel removes the given channel with the index from the given
// channels slice in an unordered fashion.
func removeChannel(channels []*discord.Channel, i int) []*discord.Channel {
channels[i] = channels[len(channels)-1]
channels[len(channels)-1] = nil
channels = channels[:len(channels)-1]
return channels
} |
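removeChannel uses the O(1) swap-remove idiom: overwrite slot i with the last element, nil the vacated tail slot so it can be garbage-collected, then shrink the slice. The same idiom sketched in Java over an ArrayList (a generic illustration, not from either source file):

```java
import java.util.List;

public class SwapRemoveSketch {
    // O(1) unordered removal: move the last element into slot i,
    // then drop the now-duplicate last slot. Order is not preserved.
    static <T> void swapRemove(List<T> list, int i) {
        int last = list.size() - 1;
        list.set(i, list.get(last));
        list.remove(last);
    }
}
```

Removing the last index avoids the O(n) element shift that List.remove(int) would otherwise perform.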
Tuesday night’s celebrity-studded gala to raise money for victims of Harvey and Irma certainly didn’t have the draw that previous fundraisers have had. Beyoncé, who provided a recorded message, did manage to make some viewers’ ears perk up, though, at around the 1:25 mark. That’s when she got into the effects of climate change being seen around the world.
Watch: Beyoncé's heartfelt message on natural disasters & climate change. #HandInHand pic.twitter.com/B0McFh21Dt — BEYONCÉ LEGION (@Bey_Legion) September 13, 2017
Um …
Beyoncé's list of climate change disasters seem to include the Mexico 8.1 Earthquake. pic.twitter.com/LrAEZNIt7L — Ryan Maue (@RyanMaue) September 13, 2017
Did Beyonce just blame the Mexican earthquake on climate change? pic.twitter.com/QwPX1gSLbG — Alan Robertson (@warobertson) September 13, 2017
Beyoncé just claimed that an earthquake was an example of climate change. So there's that. — Christopher Brown (@sleepycatchris) September 13, 2017
Someone tell Beyoncé earthquakes and climate change aren't related. Wildfires & stronger hurricanes? Yes. Earthquake, no. You're not helping — Alan Woodson (@WoodyAlanW) September 13, 2017
Yes, Beyoncé did list the recent earthquake in Mexico among the visible signs of climate change. That didn’t stop Mother Jones, though, from “fact-checking” her video and concluding that she knows more science than the entirety of the GOP.
Beyoncé’s Hurricane Video Shows She Knows More About Science Than the Entire GOP https://t.co/oBNqowY2Ob pic.twitter.com/zkPF5KrgHt — Mother Jones (@MotherJones) September 13, 2017
I shoe-horned Beyoncé into @MotherJones's climate coverage & found she knows more than, like, any Republican leader. https://t.co/waercTBeML — James West (@jameswest2010) September 13, 2017
Mother Jones fudged a bit, but eventually did admit that the link between earthquakes and climate change is “not so clear” — as in, there is no link.
Earthquake thing aside, Beyoncé still knows more about the science than most of Congress and the White House. EPA chief Scott Pruitt told CNN ahead of Irma that it was “insensitive” to talk about climate change while the storm hit. President Trump has previously called it a hoax.
“Earthquake thing aside” is a pretty generous way of grading a science test; kind of like promoting late-term abortion while putting that whole “viable human being” thing aside, or arguing that no one knows exactly how many genders there are, putting that whole “does it have male or female reproductive organs” thing aside, or comparing the body count of terror attacks in the U.S. while leaving that whole “9/11” thing aside.
But … Beyoncé said it was true, and she knows more about science than maybe even Bill Nye the failed improv comedian guy.
* * *
Related: |
//! Stock class entries registered with PHP, primarily exceptions.
#![allow(clippy::unwrap_used)]
use crate::ffi::{
zend_ce_argument_count_error, zend_ce_arithmetic_error, zend_ce_compile_error,
zend_ce_division_by_zero_error, zend_ce_error_exception, zend_ce_exception,
zend_ce_parse_error, zend_ce_throwable, zend_ce_type_error, zend_ce_unhandled_match_error,
zend_ce_value_error, zend_standard_class_def,
};
use super::ClassEntry;
/// Returns the base `stdClass` class.
pub fn stdclass() -> &'static ClassEntry {
unsafe { zend_standard_class_def.as_ref() }.unwrap()
}
/// Returns the base `Throwable` class.
pub fn throwable() -> &'static ClassEntry {
unsafe { zend_ce_throwable.as_ref() }.unwrap()
}
/// Returns the base `Exception` class.
pub fn exception() -> &'static ClassEntry {
unsafe { zend_ce_exception.as_ref() }.unwrap()
}
/// Returns the base `ErrorException` class.
pub fn error_exception() -> &'static ClassEntry {
unsafe { zend_ce_error_exception.as_ref() }.unwrap()
}
/// Returns the base `CompileError` class.
pub fn compile_error() -> &'static ClassEntry {
unsafe { zend_ce_compile_error.as_ref() }.unwrap()
}
/// Returns the base `ParseError` class.
pub fn parse_error() -> &'static ClassEntry {
unsafe { zend_ce_parse_error.as_ref() }.unwrap()
}
/// Returns the base `TypeError` class.
pub fn type_error() -> &'static ClassEntry {
unsafe { zend_ce_type_error.as_ref() }.unwrap()
}
/// Returns the base `ArgumentCountError` class.
pub fn argument_count_error() -> &'static ClassEntry {
unsafe { zend_ce_argument_count_error.as_ref() }.unwrap()
}
/// Returns the base `ValueError` class.
pub fn value_error() -> &'static ClassEntry {
unsafe { zend_ce_value_error.as_ref() }.unwrap()
}
/// Returns the base `ArithmeticError` class.
pub fn arithmetic_error() -> &'static ClassEntry {
unsafe { zend_ce_arithmetic_error.as_ref() }.unwrap()
}
/// Returns the base `DivisionByZeroError` class.
pub fn division_by_zero_error() -> &'static ClassEntry {
unsafe { zend_ce_division_by_zero_error.as_ref() }.unwrap()
}
/// Returns the base `UnhandledMatchError` class.
pub fn unhandled_match_error() -> &'static ClassEntry {
unsafe { zend_ce_unhandled_match_error.as_ref() }.unwrap()
}
|
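Each accessor above turns a raw global pointer, populated by PHP at startup, into a non-null reference and panics if the class was never registered. A sketch of the equivalent fail-fast accessor pattern in Java (GlobalRegistry is a hypothetical stand-in, not an ext-php-rs API):

```java
import java.util.Objects;

public class GlobalRegistry {
    // Populated once at startup, analogous to zend_ce_exception etc.
    static String exceptionClass; // null until registered

    // Mirrors `unsafe { ptr.as_ref() }.unwrap()`: fail fast if the
    // global was never initialized rather than hand back null.
    static String exception() {
        return Objects.requireNonNull(exceptionClass, "class not registered");
    }
}
```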
//! userlevel=Advanced
//: /dev/audio IO base class.
class SphereBaseC
: public AudioIOBaseC
{
public:
SphereBaseC();
SphereBaseC(const StringC &fn,int channel,bool forInput,const type_info &dtype);
~SphereBaseC();
bool IOpen(const StringC &fn,int channel,const type_info &dtype);
bool OOpen(const StringC &fn,int channel,const type_info &dtype);
virtual bool SetSampleBits(IntT bits);
virtual bool SetSampleRate(RealT rate);
virtual bool GetSampleBits(IntT &bits);
virtual bool GetSampleRate(RealT &rate);
bool SetupChannels(const type_info &dtype);
bool Read(void *buf,IntT &len);
bool Write(const void *buf,IntT len);
bool IsOpen() const
{ return bis.Stream() && !endOfFile; }
bool Seek(UIntT off);
UIntT Tell() const;
UIntT Size() const;
protected:
HashC<StringC,StringC> attribs;
RealT sampleRate;
IntT bits;
UInt16T dataOffset;
BinIStreamC bis;
bool endOfFile;
}; |
package loon.utils.reflect;
import loon.gwtref.client.Type;
@SuppressWarnings("rawtypes")
public final class Field {
private final loon.gwtref.client.Field field;
Field (loon.gwtref.client.Field field) {
this.field = field;
}
public String getName () {
return field.getName();
}
public Class getType () {
return field.getType().getClassOfType();
}
public Class getDeclaringClass () {
return field.getEnclosingType().getClassOfType();
}
public boolean isAccessible () {
return field.isPublic();
}
public void setAccessible (boolean accessible) {
}
public boolean isDefaultAccess () {
return !isPrivate() && !isProtected() && !isPublic();
}
public boolean isFinal () {
return field.isFinal();
}
public boolean isPrivate () {
return field.isPrivate();
}
public boolean isProtected () {
return field.isProtected();
}
public boolean isPublic () {
return field.isPublic();
}
public boolean isStatic () {
return field.isStatic();
}
public boolean isTransient () {
return field.isTransient();
}
public boolean isVolatile () {
return field.isVolatile();
}
public boolean isSynthetic () {
return field.isSynthetic();
}
public Class getElementType (int index) {
Type elementType = field.getElementType(index);
return elementType != null ? elementType.getClassOfType() : null;
}
public boolean isAnnotationPresent (Class<? extends java.lang.annotation.Annotation> annotationType) {
java.lang.annotation.Annotation[] annotations = field.getDeclaredAnnotations();
for (java.lang.annotation.Annotation annotation : annotations) {
if (annotation.annotationType().equals(annotationType)) {
return true;
}
}
return false;
}
public Annotation[] getDeclaredAnnotations () {
java.lang.annotation.Annotation[] annotations = field.getDeclaredAnnotations();
Annotation[] result = new Annotation[annotations.length];
for (int i = 0; i < annotations.length; i++) {
result[i] = new Annotation(annotations[i]);
}
return result;
}
public Annotation getDeclaredAnnotation (Class<? extends java.lang.annotation.Annotation> annotationType) {
java.lang.annotation.Annotation[] annotations = field.getDeclaredAnnotations();
for (java.lang.annotation.Annotation annotation : annotations) {
if (annotation.annotationType().equals(annotationType)) {
return new Annotation(annotation);
}
}
return null;
}
public Object get (Object obj) throws ReflectionException {
try {
return field.get(obj);
} catch (IllegalArgumentException e) {
throw new ReflectionException("Could not get " + getDeclaringClass() + "#" + getName() + ": " + e.getMessage(), e);
} catch (IllegalAccessException e) {
throw new ReflectionException("Illegal access to field " + getName() + ": " + e.getMessage(), e);
}
}
public void set (Object obj, Object value) throws ReflectionException {
try {
field.set(obj, value);
} catch (IllegalArgumentException e) {
throw new ReflectionException("Could not set " + getDeclaringClass() + "#" + getName() + ": " + e.getMessage(), e);
} catch (IllegalAccessException e) {
throw new ReflectionException("Illegal access to field " + getName() + ": " + e.getMessage(), e);
}
}
}
|
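isAnnotationPresent above does a linear scan of the declared annotations for a type match. The same check written directly against java.lang.reflect (a self-contained illustration, not from the LGame sources):

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

public class AnnotationScanSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Marker {}

    static class Sample {
        @Marker int tagged;
        int plain;
    }

    // Same linear scan as isAnnotationPresent in the wrapper above,
    // but against java.lang.reflect.Field directly.
    static boolean hasAnnotation(Field f, Class<? extends Annotation> type) {
        for (Annotation a : f.getDeclaredAnnotations()) {
            if (a.annotationType().equals(type)) {
                return true;
            }
        }
        return false;
    }

    // Convenience wrapper so callers need not handle NoSuchFieldException.
    static boolean sampleFieldHasMarker(String fieldName) {
        try {
            return hasAnnotation(Sample.class.getDeclaredField(fieldName), Marker.class);
        } catch (NoSuchFieldException e) {
            throw new IllegalArgumentException(fieldName, e);
        }
    }
}
```

Note the annotation must carry RUNTIME retention, or getDeclaredAnnotations will not see it.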
package execution_agent.execution;
import execution_agent.objects.RecoveryAction;
/**
* Created by alex on 13.09.17.
*/
public interface ActionExecutionInterface {
void executeAction(RecoveryAction action);
}
|
#ifndef ROSE_BinaryAnalysis_String_H
#define ROSE_BinaryAnalysis_String_H
#include <MemoryMap.h>
#include <Sawyer/CommandLine.h>
#include <Sawyer/Optional.h>
namespace rose {
namespace BinaryAnalysis {
/** Support for finding strings in memory.
*
* This namespace provides support for various kinds of strings in specimen memory, including an @ref StringFinder "analysis"
* that searches for strings in specimen memory. A string is a sequence of characters encoded in one of a variety of ways in
* memory. For instance, NUL-terminated ASCII is a common encoding from C compilers. The characters within the string must
* all satisfy some valid-character predicate. The terms used in this analysis are based on the Unicode standard, and are
* defined here in terms of string encoding (translation of a string as printed to a sequence of octets). Although this
* analysis can encode strings, its main purpose is decoding strings from an octet stream into a sequence of code points.
*
* Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a modern, unified
* character encoding. Rather than mapping characters directly to octets (bytes), they separately define what characters are
* available, their numbering, how those numbers are encoded as a series of "code units" (limited-size numbers), and finally
* how those units are encoded as a stream of octets. The idea behind this decomposition is to establish a universal set of
* characters that can be encoded in a variety of ways. To describe this model correctly one needs more precise terms than
* "character set" and "character encoding." The terms used in the modern model follow:
*
* A character repertoire is the full set of abstract characters that a system supports. The repertoire may be closed, i.e. no
* additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series), or it
* may be open, allowing additions (as is the case with Unicode and to a limited extent the Windows code pages). The
* characters in a given repertoire reflect decisions that have been made about how to divide writing systems into basic
* information units. The basic variants of the Latin, Greek and Cyrillic alphabets can be broken down into letters, digits,
* punctuation, and a few special characters such as the space, which can all be arranged in simple linear sequences that are
* displayed in the same order they are read. Even with these alphabets, however, diacritics pose a complication: they can be
* regarded either as part of a single character containing a letter and diacritic (known as a precomposed character), or as
* separate characters. The former allows a far simpler text handling system but the latter allows any letter/diacritic
* combination to be used in text. Ligatures pose similar problems. Other writing systems, such as Arabic and Hebrew, are
* represented with more complex character repertoires due to the need to accommodate things like bidirectional text and
* glyphs that are joined together in different ways for different situations.
*
* A coded character set (CCS) specifies how to represent a repertoire of characters using a number of (typically
* non-negative) integer values called code points. For example, in a given repertoire, a character representing the capital
* letter "A" in the Latin alphabet might be assigned to the integer 65, the character for "B" to 66, and so on. A complete
* set of characters and corresponding integers is a coded character set. Multiple coded character sets may share the same
* repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to
* different codes. In a coded character set, each code point only represents one character, i.e., a coded character set is a
* function.
*
* A character encoding form (CEF) specifies the conversion of a coded character set's integer codes into a set of
* limited-size integer code values that facilitate storage in a system that represents numbers in binary form using a fixed
* number of bits (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit
* units would only be able to directly represent integers from 0 to 65,535 in each unit, but larger integers could be
* represented if more than one 16-bit unit could be used. This is what a CEF accommodates: it defines a way of mapping a
* single code point from a range of, say, 0 to 1.4 million, to a series of one or more code values from a range of, say, 0 to
* 65,535.
*
* The simplest CEF system is simply to choose large enough units that the values from the coded character set can be encoded
* directly (one code point to one code value). This works well for coded character sets that fit in 8 bits (as most legacy
* non-CJK encodings do) and reasonably well for coded character sets that fit in 16 bits (such as early versions of
* Unicode). However, as the size of the coded character set increases (e.g. modern Unicode requires at least 21
* bits/character), this becomes less and less efficient, and it is difficult to adapt existing systems to use larger code
* values. Therefore, most systems working with later versions of Unicode use either UTF-8, which maps Unicode code points to
* variable-length sequences of octets, or UTF-16, which maps Unicode code points to variable-length sequences of 16-bit
* words.
*
* Next, a character encoding scheme (CES) specifies how the fixed-size integer code values should be mapped into an octet
* sequence suitable for saving on an octet-based file system or transmitting over an octet-based network. With Unicode, a
* simple character encoding scheme is used in most cases, simply specifying whether the bytes for each integer should be in
* big-endian or little-endian order (even this isn't needed with UTF-8). However, there are also compound character encoding
* schemes, which use escape sequences to switch between several simple schemes (such as ISO/IEC 2022), and compressing
* schemes, which try to minimise the number of bytes used per code unit (such as SCSU, BOCU, and Punycode).
*
* Once the code points of a string are encoded as octets, the string as a whole needs some description to demarcate it from
* surrounding data. ROSE currently supports two styles of demarcation: length-encoded strings and terminated strings. A
* length-encoded string's code point octets are preceded by octets that encode the string length, usually in terms of the
* number of code points. Decoding such a string consists of decoding the length and then decoding code points until the
* required number of code points have been obtained. On the other hand, terminated strings are demarcated from surrounding
* data by a special code point such as the NUL character for ASCII strings. Decoding a terminated string consists of decoding
* code points until a terminator is found, then discarding the terminator.
*
* @section ex1 Example 1
*
* This example shows how to find all strings in memory that is readable but not writable using a list of common encodings
* such as C-style NUL-terminated printable ASCII, zero terminated UTF-16 little-endian, 2-byte little-endian length encoded
* ASCII, etc.
*
* @code
* #include <rose/BinaryString.h> // binary analysis string support
* using namespace rose::BinaryAnalysis::Strings;
* MemoryMap map = ...; // initialized elsewhere
*
* StringFinder finder; // holds settings
* finder.settings().minLength = 5; // no strings shorter than 5 characters
* finder.settings().maxLength = 65536; // ignore very long strings
* finder.insertCommonEncoders(); // how to match strings
* finder.find(map.require(MemoryMap::READABLE).prohibit(MemoryMap::WRITABLE));
*
* BOOST_FOREACH (const EncodedString &string, finder.strings()) {
* std::cout <<"string at " <<string.address() <<" for " <<string.size() <<" bytes\n";
* std::cout <<"encoding: " <<string.encoder()->name() <<"\n";
* std::cout <<"narrow value: \"" <<StringUtility::cEscape(string.narrow()) <<"\"\n"; // std::string
* std::cout <<"wide value: " <<string.wide() <<"\n"; // std::wstring
* }
*
* // This works too if you're not picky about the output format
* std::cout <<finder;
* @endcode
*
* @section ex2 Example 2
*
* The @ref StringFinder analysis is tuned for searching for strings at unknown locations while trying to decode multiple
* encodings simultaneously. If all you want to do is read a single string from a known location having a known encoding then
 * you're probably better off reading it directly from the @ref MemoryMap. The @ref StringFinder analysis can be used for
* that, but it's probably overkill. In any case, here's the overkill version to find a 2-byte little endian length-encoded
* UTF-8 string:
*
* @code
* #include <rose/BinaryString.h>
* using namespace rose::BinaryAnalysis::Strings;
* MemoryMap map = ...; // initialized elsewhere
* rose_addr_t stringVa = ...; // starting address of string
*
* StringFinder finder; // holds settings
 * finder.settings().minLength = 0; // accept any length for this single known string
* finder.settings().maxLength = 65536; // ignore very long strings
* finder.encoder(lengthEncodedString(basicLengthEncoder(2, ByteOrder::ORDER_LSB), // 2-byte little-endian length
* utf8CharacterEncodingForm(), // UTF-8 encoding
* basicCharacterEncodingScheme(1), // 1:1 mapping to octets
 * anyCodePoint())); // allow any characters
* std::wstring s;
* BOOST_FOREACH (const EncodedString &string, finder.find(map.at(stringVa)).strings()) {
* s = string.wide();
* break;
* }
* @endcode
*
* @section ex3 Example 3
*
* The encoders can also be used to decode directly from a stream of octets. For instance, lets say you have a vector of
* octets that map 1:1 to code values, and then you want to decode the code values as a UTF-8 stream to get some code
* points. All decoders are implemented as state machines to make it efficient to send the same octets to many decoders
* without having to rescan/reread from a memory map. The UTF-8 decoder decodes one octet at a time and when it enters the
* FINAL_STATE or COMPLETED_STATE then a decoded code value can be consumed.
*
* @code
* #include <rose/BinaryString.h>
* using namespace rose::BinaryAnalysis::Strings;
* std::vector<Octet> octets = ...; // initialized elsewhere
*
* // Instantiate the encoder/decoder. These things are all reference
* // counted so there's no need to explicitly free them.
* Utf8CharacterEncodingForm::Ptr utf8 = utf8CharacterEncodingForm();
*
* CodePoints codePoints;
* BOOST_FOREACH (Octet octet, octets) {
* CodeValue codeValue = octet; // 1:1 translation
* if (isDone(utf8->decode(codeValue))) {
* codePoints.push_back(utf8->consume());
* } else if (utf8->state() == ERROR_STATE) {
* utf8->reset(); // skip this code value
* }
* }
* @endcode */
namespace Strings {
/** Diagnostics specific to string analysis. */
extern Sawyer::Message::Facility mlog;
typedef uint8_t Octet; /**< One byte in a sequence that encodes a code value. */
typedef std::vector<Octet> Octets; /**< A sequence of octets. */
typedef unsigned CodeValue; /**< One value in a sequence that encodes a code point. */
typedef std::vector<CodeValue> CodeValues; /**< A sequence of code values. */
typedef unsigned CodePoint; /**< One character in a coded character set. */
typedef std::vector<CodePoint> CodePoints; /**< A sequence of code points, i.e., a string. */
/** Errors for string analysis. */
class Exception: public std::runtime_error {
public:
Exception(const std::string &s): std::runtime_error(s) {}
};
/** Decoder state. Negative values are reserved.
*
* A decoder must follow these rules when transitioning from one state to another:
*
* @li A decoder is in the INITIAL_STATE when it is constructed and after calling @c reset.
*
* @li If the decoder is in an ERROR_STATE then @c decode does not change the state.
*
* @li If the decoder is in the FINAL_STATE then @c decode transitions to ERROR_STATE.
*
 * @li If the decoder is in FINAL_STATE or COMPLETED_STATE then @c consume transitions to INITIAL_STATE.
*
* All other transitions are user defined. */
enum State {
FINAL_STATE = -1, /**< Final state where nothing more can be decoded. */
COMPLETED_STATE = -2, /**< Completed state, but not a final state. */
INITIAL_STATE = -3, /**< Initial state just after a reset. */
ERROR_STATE = -4, /**< Decoder is in an error condition. */
USER_DEFINED_0 = 0, /**< First user-defined value. */
USER_DEFINED_1 = 1, /**< Second user-defined value. */
USER_DEFINED_2 = 2, /**< Third user-defined value. */
USER_DEFINED_MAX = 128 /**< Maximum user-defined value. */
};
/** Returns true for COMPLETED_STATE or FINAL_STATE. */
bool isDone(State st);
/** Initialize the diagnostics facility. This is called by @ref rose::Diagnostics::initialize. */
void initDiagnostics();
/** Defines mapping between code points and code values.
*
* A code point represents one character of a coded character set, such as one character of approximately 1.4 million
* distinct Unicode characters. The CharacterEncodingForm (CEF) is responsible for converting a code point to a sequence
* of one or more code values, or vice versa. Each code value, which may be multiple bytes, is eventually encoded into a
* sequence of octets by the @ref CharacterEncodingScheme (CES). */
class ROSE_DLL_API CharacterEncodingForm: public Sawyer::SharedObject {
protected:
State state_;
public:
CharacterEncodingForm(): state_(INITIAL_STATE) {}
virtual ~CharacterEncodingForm() {}
/** Shared ownership pointer to a @ref CharacterEncodingForm. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<CharacterEncodingForm> Ptr;
/** Create a new encoder from this one. */
virtual Ptr clone() const = 0;
/** Name of encoder. */
virtual std::string name() const = 0;
/** Encode a code point into a sequence of one or more code values.
*
 * For instance, an encoder for UTF-16 will encode a code point into one or more values in the range 0 through (2^16)-1. */
virtual CodeValues encode(CodePoint) = 0;
/** Decoder state. */
State state() const { return state_; }
/** Decode one code value.
*
* Processes a single code value and updates the decoder state machine. Returns the decoder's new state. See documentation
* for @ref State for restrictions on state transitions. */
virtual State decode(CodeValue) = 0;
/** Consume a decoded code point.
*
* The decoder must be in the FINAL_STATE or COMPLETED_STATE, and upon return will be in the INITIAL_STATE. */
virtual CodePoint consume() = 0;
/** Reset the decoder state machine. */
virtual void reset() = 0;
};
/** A no-op character encoding form.
*
* Encodes code points to code values and vice versa such that code points are equal to code values. */
class ROSE_DLL_API NoopCharacterEncodingForm: public CharacterEncodingForm {
CodePoint cp_;
protected:
NoopCharacterEncodingForm(): cp_(0) {}
public:
/** Shared-ownership pointer to a @ref NoopCharacterEncodingForm. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<NoopCharacterEncodingForm> Ptr;
static Ptr instance() { return Ptr(new NoopCharacterEncodingForm); }
virtual CharacterEncodingForm::Ptr clone() const ROSE_OVERRIDE { return Ptr(new NoopCharacterEncodingForm(*this)); }
virtual std::string name() const ROSE_OVERRIDE { return "no-op"; }
virtual CodeValues encode(CodePoint cp) ROSE_OVERRIDE;
virtual State decode(CodeValue) ROSE_OVERRIDE;
virtual CodePoint consume() ROSE_OVERRIDE;
virtual void reset() ROSE_OVERRIDE;
};
/** Returns a new no-op character encoding form. */
NoopCharacterEncodingForm::Ptr noopCharacterEncodingForm();
/** UTF-8 character encoding form.
*
* Encodes each code point as one to six 8-bit code values. */
class ROSE_DLL_API Utf8CharacterEncodingForm: public CharacterEncodingForm {
CodePoint cp_;
protected:
Utf8CharacterEncodingForm(): cp_(0) {}
public:
/** Shared-ownership pointer to a @ref Utf8CharacterEncodingForm. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<Utf8CharacterEncodingForm> Ptr;
static Ptr instance() { return Ptr(new Utf8CharacterEncodingForm); }
virtual CharacterEncodingForm::Ptr clone() const ROSE_OVERRIDE { return Ptr(new Utf8CharacterEncodingForm(*this)); }
virtual std::string name() const ROSE_OVERRIDE { return "UTF-8"; }
virtual CodeValues encode(CodePoint cp) ROSE_OVERRIDE;
virtual State decode(CodeValue) ROSE_OVERRIDE;
virtual CodePoint consume() ROSE_OVERRIDE;
virtual void reset() ROSE_OVERRIDE;
};
/** Returns a new UTF-8 character encoding form. */
Utf8CharacterEncodingForm::Ptr utf8CharacterEncodingForm();
/** UTF-16 character encoding form.
*
* Encodes each code point as one or two 16-bit code values. */
class ROSE_DLL_API Utf16CharacterEncodingForm: public CharacterEncodingForm {
CodePoint cp_;
protected:
Utf16CharacterEncodingForm(): cp_(0) {}
public:
/** Shared-ownership pointer to a @ref Utf16CharacterEncodingForm. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<Utf16CharacterEncodingForm> Ptr;
static Ptr instance() { return Ptr(new Utf16CharacterEncodingForm); }
virtual CharacterEncodingForm::Ptr clone() const ROSE_OVERRIDE { return Ptr(new Utf16CharacterEncodingForm(*this)); }
virtual std::string name() const ROSE_OVERRIDE { return "UTF-16"; }
virtual CodeValues encode(CodePoint cp) ROSE_OVERRIDE;
virtual State decode(CodeValue) ROSE_OVERRIDE;
virtual CodePoint consume() ROSE_OVERRIDE;
virtual void reset() ROSE_OVERRIDE;
};
/** Returns a new UTF-16 character encoding form. */
Utf16CharacterEncodingForm::Ptr utf16CharacterEncodingForm();
/** Defines the mapping between code values and octets.
*
* A code value (one or more of which compose a code point, or a single character in a coded character set), is encoded as
* one or more octets. For instance, a UTF-16 code value will be converted to two octets in big or little endian order
* depending on the character encoding scheme. */
class ROSE_DLL_API CharacterEncodingScheme: public Sawyer::SharedObject {
protected:
State state_;
public:
CharacterEncodingScheme(): state_(INITIAL_STATE) {}
virtual ~CharacterEncodingScheme() {}
/** Shared ownership pointer to a @ref CharacterEncodingScheme. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<CharacterEncodingScheme> Ptr;
/** Create a new copy of this encoder. */
virtual Ptr clone() const = 0;
/** Name of encoder. */
virtual std::string name() const = 0;
/** Encode a code value into a sequence of octets. For instance, an encoder for UTF-16 will encode a code value into
* two octets. */
virtual Octets encode(CodeValue) = 0;
/** Decoder state. */
State state() const { return state_; }
/** Decode one octet.
*
* Processes a single octet and updates the decoder state machine. Returns the decoder's new state. See documentation for
* @ref State for restrictions on state transitions. */
virtual State decode(Octet) = 0;
/** Consume a decoded code value.
*
* The decoder must be in the FINAL_STATE or COMPLETED_STATE and upon return will be in the INITIAL_STATE. */
virtual CodeValue consume() = 0;
/** Reset the decoder state machine. */
virtual void reset() = 0;
};
/** Basic character encoding scheme.
*
* This character encoding scheme converts code value to a sequence of octets in big- or little-endian order, and vice
 * versa. It needs to know the number of octets per code value, and the byte order of those octets if more than one
 * octet is used per code value. */
class ROSE_DLL_API BasicCharacterEncodingScheme: public CharacterEncodingScheme {
size_t octetsPerValue_;
ByteOrder::Endianness sex_;
CodeValue cv_;
protected:
BasicCharacterEncodingScheme(size_t octetsPerValue, ByteOrder::Endianness sex)
: octetsPerValue_(octetsPerValue), sex_(sex), cv_(0) {
ASSERT_require(1==octetsPerValue || sex!=ByteOrder::ORDER_UNSPECIFIED);
ASSERT_require(octetsPerValue <= sizeof(CodeValue));
}
public:
static Ptr instance(size_t octetsPerValue, ByteOrder::Endianness sex = ByteOrder::ORDER_UNSPECIFIED) {
return Ptr(new BasicCharacterEncodingScheme(octetsPerValue, sex));
}
virtual Ptr clone() const ROSE_OVERRIDE {
return Ptr(new BasicCharacterEncodingScheme(*this));
}
virtual std::string name() const ROSE_OVERRIDE;
virtual Octets encode(CodeValue) ROSE_OVERRIDE;
virtual State decode(Octet) ROSE_OVERRIDE;
virtual CodeValue consume() ROSE_OVERRIDE;
virtual void reset() ROSE_OVERRIDE;
};
/** Returns a new basic character encoding scheme. */
BasicCharacterEncodingScheme::Ptr basicCharacterEncodingScheme(size_t octetsPerValue,
ByteOrder::Endianness sex = ByteOrder::ORDER_UNSPECIFIED);
/** Encoding for the length of a string.
*
* Strings that are length-encoded must specify a length encoding scheme that gives the length of the string measured in
* code points. */
class ROSE_DLL_API LengthEncodingScheme: public Sawyer::SharedObject {
protected:
State state_;
public:
LengthEncodingScheme(): state_(INITIAL_STATE) {}
virtual ~LengthEncodingScheme() {}
/** Shared ownership pointer to a @ref LengthEncodingScheme. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<LengthEncodingScheme> Ptr;
/** Create a new copy of this encoder. */
virtual Ptr clone() const = 0;
/** Name of encoder. */
virtual std::string name() const = 0;
/** Encode a length into a sequence of octets. */
virtual Octets encode(size_t) = 0;
/** Decoder state. */
State state() const { return state_; }
/** Decode one octet.
*
* Processes a single octet and updates the decoder state machine. Returns the decoder's new state. See documentation for
* @ref State for restrictions on state transitions. */
virtual State decode(Octet) = 0;
/** Consume a decoded length.
*
 * The decoder must be in the FINAL_STATE or COMPLETED_STATE, and upon return will be in the INITIAL_STATE. */
virtual size_t consume() = 0;
/** Reset the decoder state machine. */
virtual void reset() = 0;
};
/** Basic length encoding scheme.
*
* This length encoding scheme converts a length to a sequence of octets in big- or little-endian order, and vice
 * versa. It needs to know the number of octets per length value, and the byte order of those octets if the number of
 * octets per length value is greater than one. */
class ROSE_DLL_API BasicLengthEncodingScheme: public LengthEncodingScheme {
size_t octetsPerValue_;
ByteOrder::Endianness sex_;
size_t length_;
protected:
BasicLengthEncodingScheme(size_t octetsPerValue, ByteOrder::Endianness sex)
: octetsPerValue_(octetsPerValue), sex_(sex), length_(0) {
ASSERT_require(1==octetsPerValue || sex!=ByteOrder::ORDER_UNSPECIFIED);
ASSERT_require(octetsPerValue <= sizeof(size_t));
}
public:
static Ptr instance(size_t octetsPerValue, ByteOrder::Endianness sex = ByteOrder::ORDER_UNSPECIFIED) {
return Ptr(new BasicLengthEncodingScheme(octetsPerValue, sex));
}
virtual Ptr clone() const ROSE_OVERRIDE {
return Ptr(new BasicLengthEncodingScheme(*this));
}
virtual std::string name() const ROSE_OVERRIDE;
virtual Octets encode(size_t) ROSE_OVERRIDE;
virtual State decode(Octet) ROSE_OVERRIDE;
virtual size_t consume() ROSE_OVERRIDE;
virtual void reset() ROSE_OVERRIDE;
};
/** Returns a new basic length encoding scheme. */
BasicLengthEncodingScheme::Ptr basicLengthEncodingScheme(size_t octetsPerValue,
ByteOrder::Endianness sex = ByteOrder::ORDER_UNSPECIFIED);
/** Valid code point predicate.
*
* This predicate tests that the specified code point is valid for a string. */
class ROSE_DLL_API CodePointPredicate: public Sawyer::SharedObject {
public:
virtual ~CodePointPredicate() {}
/** Shared ownership pointer to a @ref CodePointPredicate. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<CodePointPredicate> Ptr;
/** Name of predicate. */
virtual std::string name() const = 0;
/** Predicate. */
virtual bool isValid(CodePoint) = 0;
};
/** ASCII valid code points.
*
* Returns true if the code point is a printable US-ASCII character. Printable characters are seven-bit code points for
* which C's @c isprint predicate returns true (anything but control characters). */
class ROSE_DLL_API PrintableAscii: public CodePointPredicate {
protected:
PrintableAscii() {}
public:
static Ptr instance() {
return Ptr(new PrintableAscii);
}
virtual std::string name() const ROSE_OVERRIDE { return "printable ASCII"; }
virtual bool isValid(CodePoint) ROSE_OVERRIDE;
};
/** Returns a new printable ASCII predicate. */
PrintableAscii::Ptr printableAscii();
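The test that @ref PrintableAscii performs reduces to a one-liner over C's @c isprint. A sketch as a hypothetical free function (not part of ROSE):

```cpp
#include <cctype>

// True for seven-bit code points that C's isprint() accepts, i.e. any
// printable US-ASCII character including space, but no control characters.
bool isPrintableAscii(unsigned long codePoint) {
    return codePoint < 128 && std::isprint(static_cast<int>(codePoint)) != 0;
}
```

Code points at or above 128 are rejected outright, which also keeps the `isprint` argument within the range where its behavior is defined.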
/** Matches any code point.
*
* Returns true for all code points. */
class ROSE_DLL_API AnyCodePoint: public CodePointPredicate {
protected:
AnyCodePoint() {}
public:
static Ptr instance() { return Ptr(new AnyCodePoint); }
virtual std::string name() const ROSE_OVERRIDE { return "any code point"; }
virtual bool isValid(CodePoint) ROSE_OVERRIDE { return true; }
};
/** Returns a new predicate that matches all code points. */
AnyCodePoint::Ptr anyCodePoint();
/** String encoding scheme.
*
* A string encoding scheme indicates how a string (sequence of code points) is encoded as a sequence of octets and vice
* versa. */
class ROSE_DLL_API StringEncodingScheme: public Sawyer::SharedObject {
protected:
State state_; // decoding state
CodePoints codePoints_; // unconsumed code points
size_t nCodePoints_; // number of code points decoded since reset
CharacterEncodingForm::Ptr cef_;
CharacterEncodingScheme::Ptr ces_;
CodePointPredicate::Ptr cpp_;
protected:
StringEncodingScheme(): state_(INITIAL_STATE), nCodePoints_(0) {}
StringEncodingScheme(const CharacterEncodingForm::Ptr &cef, const CharacterEncodingScheme::Ptr &ces,
const CodePointPredicate::Ptr &cpp)
: cef_(cef), ces_(ces), cpp_(cpp) {}
public:
virtual ~StringEncodingScheme() {}
/** Shared ownership pointer to a @ref StringEncodingScheme. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<StringEncodingScheme> Ptr;
/** Name of encoding. */
virtual std::string name() const = 0;
/** Create a new copy of this encoder. */
virtual Ptr clone() const = 0;
/** Encode a string into a sequence of octets. */
virtual Octets encode(const CodePoints&) = 0;
/** Decoder state. */
State state() const { return state_; }
/** Decode one octet.
*
* Processes a single octet and updates the decoder state machine. Returns the new state. See documentation for @ref
* State for restrictions on state transitions. */
virtual State decode(Octet) = 0;
/** Consume pending decoded code points.
*
 * Returns code points that haven't been consumed yet, and then removes them from the decoder. This can be called from any
* state because we want the caller to be able to consume code points as they're decoded, which is a little bit different
* than how @c consume methods operate in the decoders that return scalar values. A @ref reset will discard pending code
* points. */
CodePoints consume();
/** Return pending decoded code points without consuming them. */
const CodePoints& codePoints() const { return codePoints_; }
/** Number of code points decoded since reset. */
size_t length() const { return nCodePoints_; }
/** Reset the state machine to an initial state. */
virtual void reset();
/** Property: Character encoding format.
*
* The character encoding format is responsible for converting each code point to a sequence of code values. For instance,
 * a UTF-16 encoding will convert each code point (a number between zero and 0x10FFFF, roughly 1.1 million) into a sequence of
* 16-bit code values. Each code value will eventually be converted to a pair of octets by the character encoding
* scheme.
*
* @{ */
CharacterEncodingForm::Ptr characterEncodingForm() const { return cef_; }
void characterEncodingForm(const CharacterEncodingForm::Ptr &cef) { cef_ = cef; }
/** @} */
/** Property: Character encoding scheme.
*
* The character encoding scheme is responsible for converting each code value to a sequence of one or more octets. The
* code value is part of a sequence of code values generated by the character encoding format for a single code point. For
 * instance, a character encoding scheme for UTF-16 will need to know whether the octets are stored in big- or
* little-endian order.
*
* @{ */
CharacterEncodingScheme::Ptr characterEncodingScheme() const { return ces_; }
void characterEncodingScheme(const CharacterEncodingScheme::Ptr &ces) { ces_ = ces; }
/** @} */
/** Property: Code point predicate.
*
* The code point predicate tests whether a specific code point is allowed as part of a string. For instance, when
* decoding NUL-terminated ASCII strings one might want to consider only those strings that contain printable characters
* and white space in order to limit the number of false positives when searching for strings in memory.
*
* @{ */
CodePointPredicate::Ptr codePointPredicate() const { return cpp_; }
void codePointPredicate(const CodePointPredicate::Ptr &cpp) { cpp_ = cpp; }
/** @} */
};
/** Length-prefixed string encoding scheme.
*
* A string encoding where the octets for the characters are prefixed with an encoded length. */
class ROSE_DLL_API LengthEncodedString: public StringEncodingScheme {
LengthEncodingScheme::Ptr les_;
Sawyer::Optional<size_t> declaredLength_; // decoded length
protected:
LengthEncodedString(const LengthEncodingScheme::Ptr &les, const CharacterEncodingForm::Ptr &cef,
const CharacterEncodingScheme::Ptr &ces, const CodePointPredicate::Ptr &cpp)
: StringEncodingScheme(cef, ces, cpp), les_(les) {}
public:
/** Shared ownership pointer to a @ref LengthEncodedString. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<LengthEncodedString> Ptr;
static Ptr instance(const LengthEncodingScheme::Ptr &les, const CharacterEncodingForm::Ptr &cef,
const CharacterEncodingScheme::Ptr &ces, const CodePointPredicate::Ptr &cpp) {
return Ptr(new LengthEncodedString(les, cef, ces, cpp));
}
virtual StringEncodingScheme::Ptr clone() const ROSE_OVERRIDE {
LengthEncodingScheme::Ptr les = les_->clone();
CharacterEncodingForm::Ptr cef = cef_->clone();
CharacterEncodingScheme::Ptr ces = ces_->clone();
CodePointPredicate::Ptr cpp = cpp_; // not cloned since they have no state
LengthEncodedString *inst = new LengthEncodedString(les, cef, ces, cpp);
inst->state_ = state_;
inst->codePoints_ = codePoints_;
inst->nCodePoints_ = nCodePoints_;
inst->declaredLength_ = declaredLength_;
return Ptr(inst);
}
virtual std::string name() const ROSE_OVERRIDE;
virtual Octets encode(const CodePoints&) ROSE_OVERRIDE;
virtual State decode(Octet) ROSE_OVERRIDE;
virtual void reset() ROSE_OVERRIDE;
/** Returns the declared length, if any.
*
* The declared length is the value of the decoded length prefix, not necessarily the number of code points that have been
 * decoded. This can be called from any state, although it will always return nothing in the INITIAL_STATE. Therefore, this
* method should be called prior to the @ref consume call. */
Sawyer::Optional<size_t> declaredLength() const { return declaredLength_; }
/** Property: Length encoding scheme.
*
* The length encoding scheme is responsible for encoding the string length as a sequence of octets.
*
* @{ */
LengthEncodingScheme::Ptr lengthEncodingScheme() const { return les_; }
void lengthEncodingScheme(const LengthEncodingScheme::Ptr &les) { les_ = les; }
/** @} */
};
/** Returns a new length-prefixed string encoder. */
LengthEncodedString::Ptr lengthEncodedString(const LengthEncodingScheme::Ptr &les, const CharacterEncodingForm::Ptr &cef,
const CharacterEncodingScheme::Ptr &ces, const CodePointPredicate::Ptr &cpp);
/** Returns a new encoder for length-encoded printable ASCII strings. A byte order must be specified for length encodings
* larger than a single byte. */
LengthEncodedString::Ptr lengthEncodedPrintableAscii(size_t lengthSize,
ByteOrder::Endianness order = ByteOrder::ORDER_UNSPECIFIED);
/** Returns a new encoder for multi-byte length-encoded printable ASCII strings. */
LengthEncodedString::Ptr lengthEncodedPrintableAsciiWide(size_t lengthSize, ByteOrder::Endianness order, size_t charSize);
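As a concrete illustration of the octet layout a length-prefixed encoder such as `lengthEncodedPrintableAscii(2, little-endian)` would produce, here is a hypothetical standalone sketch (not the ROSE implementation, which is a stateful class): a two-byte little-endian length prefix followed by one octet per character.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Encode a string as a 2-byte little-endian length followed by one
// octet per character (byte-encoded ASCII).
std::vector<uint8_t> encodeLengthPrefixed(const std::string &s) {
    std::vector<uint8_t> out;
    uint16_t n = static_cast<uint16_t>(s.size());          // length in code points
    out.push_back(static_cast<uint8_t>(n & 0xff));         // low byte first
    out.push_back(static_cast<uint8_t>((n >> 8) & 0xff));  // then high byte
    for (char c : s)
        out.push_back(static_cast<uint8_t>(c));
    return out;
}
```

For instance, `"hi"` encodes as the four octets `{0x02, 0x00, 'h', 'i'}`.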
/** Terminated string encoding scheme.
*
* A string whose character octets are followed by octets for a special code point that marks the end of the string but is
* not included as part of the string's characters. An example is C-style NUL-terminated ASCII. */
class ROSE_DLL_API TerminatedString: public StringEncodingScheme {
CodePoints terminators_;
Sawyer::Optional<CodePoint> terminated_; // decoded termination
protected:
TerminatedString(const CharacterEncodingForm::Ptr &cef, const CharacterEncodingScheme::Ptr &ces,
const CodePointPredicate::Ptr &cpp, const CodePoints &terminators)
: StringEncodingScheme(cef, ces, cpp), terminators_(terminators) {}
public:
/** Shared ownership pointer to a @ref TerminatedString. See @ref heap_object_shared_ownership. */
typedef Sawyer::SharedPointer<TerminatedString> Ptr;
static Ptr instance(const CharacterEncodingForm::Ptr &cef, const CharacterEncodingScheme::Ptr &ces,
const CodePointPredicate::Ptr &cpp, const CodePoints &terminators) {
return Ptr(new TerminatedString(cef, ces, cpp, terminators));
}
static Ptr instance(const CharacterEncodingForm::Ptr &cef, const CharacterEncodingScheme::Ptr &ces,
const CodePointPredicate::Ptr &cpp, CodePoint terminator = 0) {
return Ptr(new TerminatedString(cef, ces, cpp, CodePoints(1, terminator)));
}
virtual StringEncodingScheme::Ptr clone() const ROSE_OVERRIDE {
CharacterEncodingForm::Ptr cef = cef_->clone();
CharacterEncodingScheme::Ptr ces = ces_->clone();
CodePointPredicate::Ptr cpp = cpp_; // not cloned since they have no state
TerminatedString *inst = new TerminatedString(cef, ces, cpp, terminators_);
inst->state_ = state_;
inst->codePoints_ = codePoints_;
inst->nCodePoints_ = nCodePoints_;
inst->terminated_ = terminated_;
return Ptr(inst);
}
virtual std::string name() const ROSE_OVERRIDE;
virtual Octets encode(const CodePoints&) ROSE_OVERRIDE;
virtual State decode(Octet) ROSE_OVERRIDE;
virtual void reset() ROSE_OVERRIDE;
/** Returns the decoded termination character, if any.
*
 * This can be called from any state, although it will always return nothing in the INITIAL_STATE. Therefore, this method
* should be called prior to the @ref consume call. */
Sawyer::Optional<CodePoint> terminated() const { return terminated_; }
/** Property: string termination code points.
*
* A list of code points (characters) that cause a string to be terminated. When decoding a string, if a terminating code
* point is encountered then the string ends at the previous code point even if the terminating code point also satisfies
* the code point predicate.
*
* @{ */
const CodePoints& terminators() const { return terminators_; }
CodePoints& terminators() { return terminators_; }
/** @} */
};
/** Returns a new encoder for NUL-terminated printable ASCII strings. */
TerminatedString::Ptr nulTerminatedPrintableAscii();
/** Returns a new encoder for multi-byte NUL-terminated printable ASCII strings. */
TerminatedString::Ptr nulTerminatedPrintableAsciiWide(size_t charSize, ByteOrder::Endianness order);
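The decoding state machine that @ref TerminatedString describes can be sketched for the NUL-terminated printable-ASCII case as follows. State names and the struct are illustrative, not ROSE's @c State enum:

```cpp
#include <cctype>
#include <string>

// Illustrative decoder states: accumulating, terminated, or failed.
enum class DecodeState { Initial, User, Final, Error };

struct NulAsciiDecoder {
    DecodeState state = DecodeState::Initial;
    std::string chars;                       // code points decoded so far

    DecodeState decode(unsigned char octet) {
        if (octet == '\0') {
            state = DecodeState::Final;      // terminator ends the string
        } else if (octet < 128 && std::isprint(octet)) {
            chars.push_back(static_cast<char>(octet));
            state = DecodeState::User;       // still accumulating characters
        } else {
            state = DecodeState::Error;      // no valid string at this address
        }
        return state;
    }
};
```

Feeding the octets `'h'`, `'i'`, `'\0'` leaves the decoder in the final state with `"hi"` accumulated; any non-printable, non-terminator octet moves it to the error state.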
/** An encoder plus interval.
*
* Represents a string by specifying the encoding and an interval of virtual addresses where the encoded octets are
* stored. */
class ROSE_DLL_API EncodedString {
StringEncodingScheme::Ptr encoder_; // how string is encoded
AddressInterval where_; // where encoded string is located
public:
EncodedString() {}
EncodedString(const StringEncodingScheme::Ptr &encoder, const AddressInterval &where)
: encoder_(encoder), where_(where) {}
/** Information about the string encoding. */
StringEncodingScheme::Ptr encoder() const { return encoder_; }
/** Where the string is located in memory. */
const AddressInterval& where() const { return where_; }
/** Starting address of string in memory. */
rose_addr_t address() const { return where_.least(); }
/** Size of encoded string in bytes. */
size_t size() const { return where_.size(); }
/** Length of encoded string in code points. */
size_t length() const { return encoder_->length(); }
/** Code points associated with the string.
*
* If code points have been consumed then they may be partly or fully absent from the decoder. */
const CodePoints& codePoints() const { return encoder_->codePoints(); }
/** Return code points as a C++ std::string.
*
* This truncates each code point to eight bits. */
std::string narrow() const;
/** Return code points as a C++ std::wstring. */
std::wstring wide() const;
/** Decodes the string from memory.
*
* A string need not store its code points, in which case this method can decode them from memory. The memory should be
* the same as when the string was originally found, otherwise an std::runtime_error might be thrown. */
void decode(const MemoryMap&);
};
/** %Analysis to find encoded strings.
*
* This analysis searches user-specified parts of a binary specimen's memory space to find strings encoded in various formats
 * specified by the user.
*
* See the @ref rose::BinaryAnalysis::Strings "Strings" namespace for details. */
class ROSE_DLL_API StringFinder {
public:
/** Settings and properties.
*
* These properties can be set directly or by the command-line parser. */
struct Settings {
/** Minimum length of matched strings.
*
* Strings having fewer than this many code points are discarded. If @ref minLength is larger than @ref maxLength then
* no strings will be matched. */
size_t minLength;
/** Maximum length of matched strings.
*
* Strings having more than this many code points are discarded. If @ref maxLength is smaller than @ref minLength then
* no strings will be matched. */
size_t maxLength;
/** Maximum number of overlapping strings.
*
* The number of strings that can overlap at a single address per encoder. For instance, for C-style NUL-terminated
* ASCII strings encoded as bytes, if memory contains the consecutive values 'a', 'n', 'i', 'm', 'a', 'l', '\0' then
* up to seven strings are possible: "animal", "nimal", "imal", "mal", "al", "l", and "". If the maximum overlap is
* set to three then only "animal", "nimal", and "imal" are found. Setting the maximum overlap to zero has the same
* effect as setting it to one: no overlapping is allowed. The overlap limits are applied before results are pruned
 * based on length, so if the minimum length is four, the "imal" and shorter strings won't be found even though they
* are decoded under the covers.
*
* A maximum overlap of at least two is recommended if two-byte-per-character encoding is used when detecting
* NUL-terminated ASCII strings. The reason is that one decoder will be active at one address while another decoder is
* desired for the next address; then if the first address proves to not be part of a string, the second address can
* still be detected as a string. Similarly, a maximum overlap of at least four is recommended for
* four-byte-per-character encodings. Length-encoded strings will have similar issues. */
size_t maxOverlap;
/** Whether to keep only longest non-overlapping strings.
*
* If set, then only the longest detected strings are kept. The algorithm sorts all detected strings by decreasing
* length, then removes any string whose memory addresses overlap with any prior string in the list. */
bool keepingOnlyLongest;
Settings(): minLength(5), maxLength(-1), maxOverlap(8), keepingOnlyLongest(true) {}
};
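The @c keepingOnlyLongest pruning described above (sort by decreasing length, drop anything that overlaps a string already kept) can be sketched over simple half-open address intervals. This is illustrative, not the ROSE implementation:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// A candidate string's address range as a half-open interval [begin, end).
using Interval = std::pair<size_t, size_t>;

std::vector<Interval> keepLongest(std::vector<Interval> found) {
    // Sort candidates by decreasing length.
    std::sort(found.begin(), found.end(), [](const Interval &a, const Interval &b) {
        return (a.second - a.first) > (b.second - b.first);
    });
    std::vector<Interval> kept;
    for (const Interval &s : found) {
        // Keep s only if it overlaps nothing already kept.
        bool overlaps = false;
        for (const Interval &k : kept)
            overlaps = overlaps || (s.first < k.second && k.first < s.second);
        if (!overlaps)
            kept.push_back(s);
    }
    return kept;
}
```

Given candidates `[0,6)`, `[1,6)`, and `[10,12)`, the longest string `[0,6)` is kept, the overlapping `[1,6)` is dropped, and the disjoint `[10,12)` survives.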
private:
Settings settings_; // command-line settings for this analysis
bool discardingCodePoints_; // whether to store decoded code points
std::vector<StringEncodingScheme::Ptr> encoders_; // encodings to use when searching
std::vector<EncodedString> strings_; // strings that have been found
public:
/** Constructor.
*
* Initializes the analysis with default settings but no encoders. Encoders will need to be added before this analysis can
* be used to find any strings. */
StringFinder(): discardingCodePoints_(false) {}
/** Property: %Analysis settings often set from a command-line.
*
* @{ */
const Settings& settings() const { return settings_; }
Settings& settings() { return settings_; }
/** @} */
/** Property: Whether to discard code points.
*
* If this property is set, then the process of decoding strings does not actually store the code points (characters)
* of the string. This is useful when searching for lots of strings to reduce the amount of memory required. A string
* can be decoded again later if the code points are needed.
*
* @{ */
bool discardingCodePoints() const { return discardingCodePoints_; }
StringFinder& discardingCodePoints(bool b) { discardingCodePoints_=b; return *this; }
/** @} */
/** Property: List of string encodings.
*
* When searching for strings, this analysis must know what kinds of strings to look for, and does that with a vector of
 * pointers to encoders. The default is an empty vector, in which case no strings will be found.
*
* @{ */
const std::vector<StringEncodingScheme::Ptr>& encoders() const { return encoders_; }
std::vector<StringEncodingScheme::Ptr>& encoders() { return encoders_; }
/** @} */
/** Command-line parser for analysis settings.
*
* Returns the switch group that describes the command-line switches for this analysis. The caller can provide a @ref
* Settings object that will be adjusted when the command-line is parsed and applied; if no argument is supplied then the
* settings of this analysis are affected. In either case, the settings or analysis object must still be allocated when
* the command-line is parsed.
*
* @{ */
static Sawyer::CommandLine::SwitchGroup commandLineSwitches(Settings&);
Sawyer::CommandLine::SwitchGroup commandLineSwitches();
/** @} */
/** Inserts common encodings.
*
* Inserts the following string encodings into the analysis:
*
* @li NUL-terminated, byte-encoded, printable ASCII characters.
* @li NUL-terminated, 16-bit encoded, printable ASCII characters.
* @li NUL-terminated, 32-bit encoded, printable ASCII characters.
* @li 2-byte length-prefixed, byte encoded, printable ASCII characters.
* @li 4-byte length-prefixed, byte encoded, printable ASCII characters.
* @li 2-byte length-prefixed, 16-bit encoded, printable ASCII characters.
* @li 4-byte length-prefixed, 16-bit encoded, printable ASCII characters.
* @li 4-byte length-prefixed, 32-bit encoded, printable ASCII characters.
*
* The specified endianness is used for all multi-byte values. */
StringFinder& insertCommonEncoders(ByteOrder::Endianness);
/** Inserts less common encodings.
*
* Inserts the following string encodings into the analyses:
*
* @li Printable ASCII terminated by other code points or non-readable memory. */
StringFinder& insertUncommonEncoders(ByteOrder::Endianness);
/** Reset analysis results.
*
* Clears analysis results but does not change settings or properties. */
StringFinder& reset() { strings_.clear(); return *this; }
/** Finds strings by searching memory.
*
* Clears previous analysis results (e.g., @ref reset) and then searches for new strings. The resulting strings can be
* obtained from the @ref strings method.
*
* The memory constraints indicate where to search for strings, and the properties of this StringFinder class determine
* how to find strings. Specifically, this class must have at least one encoding registered in order to find anything (see
* @ref encoders).
*
* The search progresses by looking at each possible starting address using each registered encoding. The algorithm reads
* each byte from memory only one time, simultaneously attempting all encoders. If the MemoryMap constraint contains an
* anchor point (e.g., @ref MemoryMap::at) then only strings starting at the specified address are returned.
*
 * Example 1: Find all C-style, NUL-terminated, ASCII strings containing only printable characters (no control characters)
* and containing at least five characters but not more than 31 (not counting the NUL terminator). Make sure that the
* string is in memory that is readable but not writable, and don't allow strings to overlap one another (i.e., "foobar"
* and "bar" cannot share their last for bytes):
*
* @code
 * using namespace rose::BinaryAnalysis::Strings;
* MemoryMap map = ...;
* StringFinder sf;
 * sf.encoders().push_back(nulTerminatedPrintableAscii());
 * sf.settings().minLength = 5;
 * sf.settings().maxLength = 31;
 * sf.settings().maxOverlap = 1;
* std::vector<EncodedString> strings = sf.find(map.require(MemoryMap::READABLE).prohibit(MemoryMap::WRITABLE)).strings();
* @endcode */
StringFinder& find(const MemoryMap::ConstConstraints&, Sawyer::Container::MatchFlags flags=0);
/** Obtain strings that were found.
*
* @{ */
const std::vector<EncodedString>& strings() const { return strings_; }
std::vector<EncodedString>& strings() { return strings_; }
/** @} */
/** Print results.
*
* Print information about each string, one string per line. Strings are displayed with C/C++ string syntax. */
std::ostream& print(std::ostream&) const;
};
std::ostream& operator<<(std::ostream&, const StringFinder&);
} // namespace
} // namespace
} // namespace
#endif
/**
* Basic hash bin node, used for most entries. (See below for
* TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
*/
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
V value;
Node<K,V> next;
Node(int hash, K key, V value, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return value; }
public final String toString() { return key + "=" + value; }
public final int hashCode() {
return objectHashCode(key) ^ objectHashCode(value);
}
public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
}
public final boolean equals(Object o) {
if (o == this)
return true;
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>)o;
if (objectEquals(key, e.getKey()) &&
objectEquals(value, e.getValue()))
return true;
}
return false;
}
}
/**
 * Removes the component with the specified class from the entity.
 *
 * @return the removed component, or null if no component could be removed
 */
@Override
public <T extends Component> Component remove(long entityId, Class<T> componentClass) {
ReentrantLock lock = locks[selectLock(entityId)];
lock.lock();
try {
TLongObjectMap<Component> entityMap = store.get(componentClass);
if (entityMap != null) {
Component removed = entityMap.remove(entityId);
if (removed != null) {
int remainingComps = numComponents.adjustOrPutValue(entityId, -1, 0);
if (remainingComps == 0) {
numComponents.remove(entityId);
revisions.remove(entityId);
} else {
revisions.increment(entityId);
}
}
return removed;
}
return null;
} finally {
lock.unlock();
}
}
/**
* Expands a node in the specified tree.
*
 * @param tree a tree whose nodes should be expanded
* @param visitor a visitor that controls expanding of tree nodes
* @param consumer a path consumer called on EDT if path is found and expanded
*/
public static void expand(@NotNull JTree tree, @NotNull TreeVisitor visitor, @NotNull Consumer<? super TreePath> consumer) {
promiseMakeVisibleOne(tree, visitor, path -> {
expandPathWithDebug(tree, path);
consumer.accept(path);
});
}
/**
* An application event published when OS account instances are added to the
* Sleuth Kit data model for a case.
*/
public final class OsAcctInstancesAddedEvent extends TskDataModelChangedEvent<OsAccountInstance, OsAccountInstance> {
private static final long serialVersionUID = 1L;
/**
* Constructs an application event published when OS account instances are
* added to the Sleuth Kit data model for a case.
*
* @param osAcctInstances The OS account instances that were added.
*/
public OsAcctInstancesAddedEvent(List<OsAccountInstance> osAcctInstances) {
super(OS_ACCT_INSTANCES_ADDED.toString(), null, null, osAcctInstances, OsAccountInstance::getInstanceId);
}
/**
* Gets the OS account instances that have been added.
*
* @return The OS account instances.
*/
public List<OsAccountInstance> getOsAccountInstances() {
return getNewValue();
}
@Override
protected List<OsAccountInstance> getNewValueObjects(SleuthkitCase caseDb, List<Long> ids) throws TskCoreException {
return caseDb.getOsAccountManager().getOsAccountInstances(ids);
}
}
// src/main/java/mchorse/metamorph/client/gui/editor/GuiAnimation.java
package mchorse.metamorph.client.gui.editor;
import mchorse.mclib.client.gui.framework.elements.GuiElement;
import mchorse.mclib.client.gui.framework.elements.buttons.GuiButtonElement;
import mchorse.mclib.client.gui.framework.elements.buttons.GuiToggleElement;
import mchorse.mclib.client.gui.framework.elements.input.GuiTrackpadElement;
import mchorse.mclib.client.gui.framework.elements.list.GuiInterpolationList;
import mchorse.mclib.client.gui.framework.elements.list.GuiListElement;
import mchorse.mclib.client.gui.utils.keys.IKey;
import mchorse.mclib.utils.Interpolation;
import mchorse.metamorph.api.morphs.utils.Animation;
import net.minecraft.client.Minecraft;
import net.minecraftforge.fml.relauncher.Side;
import net.minecraftforge.fml.relauncher.SideOnly;
@SideOnly(Side.CLIENT)
public class GuiAnimation extends GuiElement
{
/* Animated poses */
public GuiToggleElement animates;
public GuiToggleElement ignored;
public GuiTrackpadElement animationDuration;
public GuiButtonElement pickInterpolation;
public GuiListElement<Interpolation> interpolations;
public Animation animation;
public GuiAnimation(Minecraft mc)
{
this(mc, false);
}
public GuiAnimation(Minecraft mc, boolean addIgnore)
{
super(mc);
/* Animated poses */
this.animates = new GuiToggleElement(mc, IKey.lang("metamorph.gui.animation.animates"), false, (b) ->
{
this.animation.animates = this.animates.isToggled();
this.animation.reset();
});
this.ignored = new GuiToggleElement(mc, IKey.lang("metamorph.gui.animation.ignored"), false, (b) ->
{
this.animation.ignored = this.ignored.isToggled();
});
this.animationDuration = new GuiTrackpadElement(mc, (value) ->
{
this.animation.duration = value.intValue();
this.animation.reset();
});
this.animationDuration.tooltip(IKey.lang("metamorph.gui.animation.animation_duration"));
this.animationDuration.limit(0).integer();
this.pickInterpolation = new GuiButtonElement(mc, IKey.lang("metamorph.gui.animation.pick_interpolation"), (b) ->
{
this.interpolations.toggleVisible();
});
this.interpolations = new GuiInterpolationList(mc, (interp) ->
{
this.animation.interp = interp.get(0);
});
this.interpolations.markIgnored().flex().relative(this.pickInterpolation).y(1F).w(1F).h(96);
this.flex().column(5).vertical().stretch().height(20).padding(10);
this.add(this.animates, this.animationDuration, this.pickInterpolation);
if (addIgnore)
{
this.addAfter(this.animationDuration, this.ignored);
}
this.add(this.interpolations);
}
public void fill(Animation animation)
{
this.animation = animation;
this.animation.reset();
this.animates.toggled(animation.animates);
this.ignored.toggled(animation.ignored);
this.animationDuration.setValue(animation.duration);
this.interpolations.setCurrent(animation.interp);
this.interpolations.setVisible(false);
}
}
/**
* @brief Allocates memory block and initializes it with random data
*
* This is to avoid any page faults or copy-on-write exceptions later on
* when measuring cycles.
*
* For simplicity, malloc() is used to allocate memory. Ideally special
* allocator should be used that allocates physically contiguous memory block.
*
* @param sz size of memory block in bytes
*
* @return Pointer to allocated memory block
*/
static void *
init_memory(const size_t sz)
{
char *p = NULL;
size_t i;
        if (sz == 0)
return NULL;
p = (char *)malloc(sz);
if (p == NULL)
return NULL;
for (i = 0; i < sz; i += 32)
p[i] = (char)rand();
return (void *)p;
}
The next phone in Huawei's high-end P Series, which according to the numerical naming system of the range will be the P8, is set for a London launch on 15 April.
Huawei will not be unveiling any new phones at Mobile World Congress -- just three wearable devices -- and will instead hold a separate event in April, confirmed Jerry Huang, the company's director of communications.

Huawei previously launched the P6 in London in 2013 and last year launched the P7 in Paris. "[P Series launches] will always be in a European country," said Jie Jinjin, Huawei's vice president of handsets, at a briefing attended by WIRED.co.uk in Shanghai. Choosing the country is tricky, he added, and Huawei must take a number of factors into account, but the company already has a well-established handset business in Europe. "This is why we didn't choose the US. We chose Europe because Europe is the market where we have very good sales of high-end smartphones." Europe is also very important to the "brand strategy" of the P Series, he said.
Huawei is highly successful in China, but is keen to be seen as an "international company". Whereas in China, the much larger Ascend Mate 7 is very popular, Huawei's smaller, sleeker P8 is the flagship phone that the company will push to European consumers.
This will be particularly important over the next year, as the company attempts to build better brand awareness in the UK and other European countries, where it is still not that well known -- especially in comparison to its biggest high-end competitors.
Whereas previous P Series phones have been the slimmest handsets in the world at the time of their release, it is unlikely that the P8 will be able to boast this. Jinjin maintained that the sleek industrial design of the phone would continue to be an important feature in the next iteration of the P Series, but said that slimness "should not mitigate other functions", including battery life.
Another feature he said would be extremely important is the camera. Unlike other major Android manufacturers, such as Sony and Samsung, Huawei has never had an imaging business of its own. It has, however, been working hard on its camera technology and partnering with companies that it believes can bring best-in-class camera tech to its smartphones.
While little more is known at the moment about the P8, Huawei will undoubtedly build in its homegrown LTE technology and continue to use premium products in its design. As for the screen, it won't be 4K. The company's president of handsets told WIRED.co.uk in an earlier briefing he doesn't believe in sacrificing battery life for such high resolution on a small screen. Instead it will be LCD, but the size and exact resolution are as yet unclear.
WIRED.co.uk will bring you hands-on previews of all Huawei's latest products from both the P8's London launch and Mobile World Congress in Barcelona.
/* Output bytecodes for GNU C-compiler.
Copyright (C) 1993, 1994 Free Software Foundation, Inc.
This file is part of GNU CC.
GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING. If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA. */
#include "config.h"
#ifdef __STDC__
#include <stdarg.h>
#else
#include <varargs.h>
#endif
#include "machmode.h"
#include "rtl.h"
#include "real.h"
#include "obstack.h"
#include "bytecode.h"
#ifdef __GNUC__
#include "bytetypes.h"
#endif
#include "bc-emit.h"
#include "bc-opcode.h"
#include "bc-typecd.h"
#include "bi-run.h"
#include <stdio.h>
extern char *xmalloc (), *xrealloc ();
extern void free ();
extern struct obstack *rtl_obstack;
/* Indexed by mode class, gives the narrowest mode for each class. */
extern enum machine_mode class_narrowest_mode[(int) MAX_MODE_CLASS];
/* Commonly used modes. */
/* Mode whose width is BITS_PER_UNIT */
extern enum machine_mode byte_mode;
/* Mode whose width is BITS_PER_WORD */
extern enum machine_mode word_mode;
/* Vector indexed by opcode giving info about the args for each opcode. */
static struct arityvec arityvec[] = {
#include "bc-arity.h"
};
/* How to print a symbol name for the assembler. */
static void
prsym (file, s)
FILE *file;
char *s;
{
if (*s == '*')
fprintf (file, "%s", s + 1);
else
#ifdef NAMES_HAVE_UNDERSCORES
fprintf (file, "_%s", s);
#else
fprintf (file, "%s", s);
#endif
}
/* Maintain a bucket hash table for symbol names. */
#define HASH_BITS 32
#define HASH_SIZE 509
static struct bc_sym *hashtab[HASH_SIZE];
static unsigned int
hash (name)
char *name;
{
unsigned int hash = 0;
while (*name)
{
hash = (hash << 3) | (hash >> (HASH_BITS - 3));
hash += *name++;
}
return hash % HASH_SIZE;
}
/* Look up the named symbol, creating it if it doesn't exist. */
struct bc_sym *
sym_lookup (name)
char *name;
{
int i;
struct bc_sym *s;
i = hash (name);
for (s = hashtab[i]; s; s = s->next)
if (!strcmp (s->name, name))
return s;
s = (struct bc_sym *) xmalloc (sizeof (struct bc_sym));
s->name = xmalloc (strlen (name) + 1);
strcpy (s->name, name);
s->defined = s->global = s->common = 0;
s->val = 0;
s->next = hashtab[i];
hashtab[i] = s;
return s;
}
/* Write out .globl and common symbols to the named file. */
static void
bc_sym_write (file)
FILE *file;
{
int i;
struct bc_sym *s;
for (i = 0; i < HASH_SIZE; ++i)
for (s = hashtab[i]; s; s = s->next)
{
if (s->global)
{
fprintf (file, "\n\t.globl ");
prsym (file, s->name);
putc ('\n', file);
if (s->common)
{
fprintf (file, "\n\t.comm ");
prsym (file, s->name);
fprintf (file, ", %lu\n", s->val);
}
}
else if (s->common)
{
fprintf (file, "\n\t.lcomm ");
prsym (file, s->name);
fprintf (file, ", %lu\n", s->val);
}
}
}
/* Create and initialize a new segment. */
static struct bc_seg *
seg_create ()
{
struct bc_seg *result;
result = (struct bc_seg *) xmalloc (sizeof (struct bc_seg));
result->alloc = 256;
result->data = xmalloc (result->alloc);
result->size = 0;
result->syms = 0;
result->relocs = 0;
return result;
}
/* Advance the segment index to the next alignment boundary. */
static void
seg_align (seg, log)
struct bc_seg *seg;
int log;
{
unsigned int oldsize = seg->size;
seg->size = (seg->size + (1 << log) - 1) & ~((1 << log) - 1);
if (seg->size > seg->alloc)
{
while (seg->size > seg->alloc)
seg->alloc *= 2;
seg->data = xrealloc (seg->data, seg->alloc);
}
bzero (seg->data + oldsize, seg->size - oldsize);
}
/* Append the given data to the given segment. */
static void
seg_data (seg, data, size)
struct bc_seg *seg;
char *data;
unsigned int size;
{
if (seg->size + size > seg->alloc)
{
while (seg->size + size > seg->alloc)
seg->alloc *= 2;
seg->data = xrealloc (seg->data, seg->alloc);
}
bcopy (data, seg->data + seg->size, size);
seg->size += size;
}
/* Append a zero-filled skip to the given segment. */
static void
seg_skip (seg, size)
struct bc_seg *seg;
unsigned int size;
{
if (seg->size + size > seg->alloc)
{
while (seg->size + size > seg->alloc)
seg->alloc *= 2;
seg->data = xrealloc (seg->data, seg->alloc);
}
memset (seg->data + seg->size, 0, size);
seg->size += size;
}
/* Define the given name as the current offset in the given segment. It
is an error if the name is already defined. Return 0 or 1 indicating
failure or success respectively. */
static int
seg_defsym (seg, name)
struct bc_seg *seg;
char *name;
{
struct bc_sym *sym;
struct bc_segsym *segsym;
sym = sym_lookup (name);
if (sym->defined)
return 0;
sym->defined = 1;
sym->val = seg->size;
segsym = (struct bc_segsym *) xmalloc (sizeof (struct bc_segsym));
segsym->sym = sym;
segsym->next = seg->syms;
seg->syms = segsym;
return 1;
}
/* Generate in seg's data a reference to the given sym, adjusted by
the given offset. */
static void
seg_refsym (seg, name, offset)
struct bc_seg *seg;
char *name;
int offset;
{
struct bc_sym *sym;
struct bc_segreloc *segreloc;
sym = sym_lookup (name);
segreloc = (struct bc_segreloc *) xmalloc (sizeof (struct bc_segreloc));
segreloc->offset = seg->size;
segreloc->sym = sym;
segreloc->next = seg->relocs;
seg->relocs = segreloc;
seg_data (seg, (char *) &offset, sizeof offset);
}
/* Concatenate the contents of given segments into the first argument. */
static void
seg_concat (result, seg)
struct bc_seg *result, *seg;
{
unsigned int fix;
struct bc_segsym *segsym;
struct bc_segreloc *segreloc;
seg_align (result, MACHINE_SEG_ALIGN);
fix = result->size;
seg_data (result, seg->data, seg->size);
free (seg->data);
/* Go through the symbols and relocs of SEG, adjusting their offsets
for their new location in RESULT. */
if (seg->syms)
{
segsym = seg->syms;
do
segsym->sym->val += fix;
while (segsym->next && (segsym = segsym->next));
segsym->next = result->syms;
result->syms = seg->syms;
}
if (seg->relocs)
{
segreloc = seg->relocs;
do
segreloc->offset += fix;
while (segreloc->next && (segreloc = segreloc->next));
segreloc->next = result->relocs;
result->relocs = seg->relocs;
}
free ((char *) seg);
}
/* Write a segment to a file. */
static void
bc_seg_write (seg, file)
struct bc_seg *seg;
FILE *file;
{
struct bc_segsym *segsym, *nsegsym, *psegsym;
struct bc_segreloc *segreloc, *nsegreloc, *psegreloc;
int i, offset, flag = 0;
/* Reverse the list of symbols. */
for (psegsym = 0, segsym = seg->syms; segsym; segsym = nsegsym)
{
nsegsym = segsym->next;
segsym->next = psegsym;
psegsym = segsym;
}
seg->syms = psegsym;
/* Reverse the list of relocs. */
for (psegreloc = 0, segreloc = seg->relocs; segreloc; segreloc = nsegreloc)
{
nsegreloc = segreloc->next;
segreloc->next = psegreloc;
psegreloc = segreloc;
}
seg->relocs = psegreloc;
/* Output each byte of the segment. */
for (i = 0, segsym = seg->syms, segreloc = seg->relocs; i < seg->size; ++i)
{
while (segsym && segsym->sym->val == i)
{
if (i % 8 != 0)
putc ('\n', file);
BC_WRITE_SEGSYM (segsym, file);
segsym = segsym->next;
flag = 1;
}
if (segreloc && segreloc->offset == i)
{
if (i % 8 != 0)
putc ('\n', file);
bcopy (seg->data + i, (char *) &offset, sizeof (int));
i += sizeof (int) - 1;
BC_WRITE_RELOC_ENTRY (segreloc, file, offset);
segreloc = segreloc->next;
flag = 1;
}
else
{
if (i % 8 == 0 || flag)
BC_START_BYTECODE_LINE (file);
BC_WRITE_BYTECODE (i % 8 == 0 || flag ? ' ' : ',',
seg->data[i] & 0xFF,
file);
flag = 0;
if (i % 8 == 7)
putc ('\n', file);
}
}
/* Paranoia check--we should have visited all syms and relocs during
the output pass. */
if (segsym || segreloc)
abort ();
}
/* Text and data segments of the object file in making. */
static struct bc_seg *bc_text_seg;
static struct bc_seg *bc_data_seg;
/* Called before anything else in this module. */
void
bc_initialize ()
{
int min_class_size[(int) MAX_MODE_CLASS];
enum machine_mode mode;
int i;
bc_init_mode_to_code_map ();
bc_text_seg = seg_create ();
bc_data_seg = seg_create ();
dconst0 = REAL_VALUE_ATOF ("0", DFmode);
dconst1 = REAL_VALUE_ATOF ("1", DFmode);
dconst2 = REAL_VALUE_ATOF ("2", DFmode);
dconstm1 = REAL_VALUE_ATOF ("-1", DFmode);
/* Find the narrowest mode for each class and compute the word and byte
modes. */
for (i = 0; i < (int) MAX_MODE_CLASS; i++)
min_class_size[i] = 1000;
for (mode = VOIDmode; (int) mode < (int) MAX_MACHINE_MODE;
mode = (enum machine_mode) ((int) mode + 1))
{
if (GET_MODE_SIZE (mode) < min_class_size[(int) GET_MODE_CLASS (mode)])
{
class_narrowest_mode[(int) GET_MODE_CLASS (mode)] = mode;
min_class_size[(int) GET_MODE_CLASS (mode)] = GET_MODE_SIZE (mode);
}
if (GET_MODE_CLASS (mode) == MODE_INT
&& GET_MODE_BITSIZE (mode) == BITS_PER_UNIT)
byte_mode = mode;
if (GET_MODE_CLASS (mode) == MODE_INT
&& GET_MODE_BITSIZE (mode) == BITS_PER_WORD)
word_mode = mode;
}
}
/* External addresses referenced in a function. Rather than trying to
work relocatable address directly into bytecoded functions (which would
require us to provide hairy location info and possibly obey alignment
rules imposed by the architecture) we build an auxiliary table of
pointer constants, and encode just offsets into this table into the
actual bytecode. */
static struct bc_seg *ptrconsts;
/* Trampoline code for the function entry. */
struct bc_seg *trampoline;
/* Actual byte code of the function. */
struct bc_seg *bytecode;
/* List of labels defined in the function. */
struct bc_label *labels;
/* List of label references in the function. */
struct bc_labelref *labelrefs;
/* Add symbol to pointer table. Return offset into table where
pointer was stored. The offset usually goes into the bytecode
stream as a constP literal. */
int
bc_define_pointer (p)
char *p;
{
int offset = ptrconsts->size;
seg_refsym (ptrconsts, p, 0);
return offset;
}
/* Begin a bytecoded function. */
int
bc_begin_function (name)
char *name;
{
ptrconsts = seg_create ();
trampoline = seg_create ();
bytecode = seg_create ();
return seg_defsym (trampoline, name);
}
/* Force alignment in inline bytecode. */
void
bc_align_bytecode (align)
int align;
{
seg_align (bytecode, align);
}
/* Emit data inline into bytecode. */
void
bc_emit_bytecode_const (data, size)
char *data;
unsigned int size;
{
if (bytecode)
seg_data (bytecode, data, size);
}
/* Create a new "bytecode label", to have its value defined later.
Bytecode labels have nothing to do with the object file symbol table,
and are purely local to a given bytecoded function. */
struct bc_label *
bc_get_bytecode_label ()
{
struct bc_label *result;
result = (struct bc_label *) xmalloc (sizeof (struct bc_label));
result->defined = 0;
result->next = labels;
result->uid = 0;
labels = result;
return result;
}
/* Define the given label with the current location counter. */
int
bc_emit_bytecode_labeldef (label)
struct bc_label *label;
{
extern int bc_new_uid ();
if (!label || label->defined)
return 0;
label->offset = bytecode->size;
label->defined = 1;
label->uid = bc_new_uid ();
#ifdef DEBUG_PRINT_CODE
fprintf (stderr, "$%lx:\n", (unsigned long) label);
#endif
return 1;
}
/* Generate a location-relative reference to the given bytecode label.
It need not be defined yet; label references will be backpatched later. */
void
bc_emit_bytecode_labelref (label)
struct bc_label *label;
{
struct bc_labelref *labelref;
static int zero;
labelref = (struct bc_labelref *) xmalloc (sizeof (struct bc_labelref));
labelref->label = label;
labelref->offset = bytecode->size;
labelref->next = labelrefs;
labelrefs = labelref;
#ifdef DEBUG_PRINT_CODE
fprintf (stderr, " $%lx", (unsigned long) label);
#endif
seg_data (bytecode, (char *) &zero, sizeof zero);
}
/* Emit a reference to an external address; generate the reference in the
ptrconst area, and emit an offset in the bytecode. */
void
bc_emit_code_labelref (name, offset)
char *name;
int offset;
{
int ptroff;
ptroff = ptrconsts->size / sizeof (char *);
seg_data (bytecode, (char *) &ptroff, sizeof ptroff);
seg_refsym (ptrconsts, name, offset);
#ifdef DEBUG_PRINT_CODE
fprintf (stderr, " [external <%x> %s]", ptroff, name);
#endif
}
/* Backpatch label references in the byte code, and concatenate the bytecode
and pointer constant segments to the cumulative text for the object file.
Return a label name for the pointer constants region. */
char *
bc_end_function ()
{
int addr;
struct bc_label *label, *next;
struct bc_labelref *ref, *nextref;
char ptrconsts_label[20];
static int nlab;
/* Backpatch bytecode label references. */
for (ref = labelrefs; ref; ref = ref->next)
if (ref->label->defined)
{
addr = ref->label->offset;
bcopy ((char *) &addr, bytecode->data + ref->offset, sizeof addr);
}
/* Free the chains of labelrefs and labeldefs. */
for (ref = labelrefs; ref; ref = nextref)
{
nextref = ref->next;
free ((char *) ref);
}
for (label = labels; label; label = next)
{
next = label->next;
free ((char *) label);
}
seg_concat (trampoline, bytecode);
seg_align (trampoline, MACHINE_SEG_ALIGN);
sprintf (ptrconsts_label, "*LP%d", nlab++);
seg_defsym (trampoline, ptrconsts_label);
seg_concat (trampoline, ptrconsts);
seg_concat (bc_text_seg, trampoline);
labels = 0;
labelrefs = 0;
trampoline = 0;
bytecode = 0;
ptrconsts = 0;
return sym_lookup (ptrconsts_label)->name;
}
/* Force alignment in const data. */
void
bc_align_const (align)
int align;
{
seg_align (bc_text_seg, align);
}
/* Emit const data. */
void
bc_emit_const (data, size)
char *data;
unsigned int size;
{
seg_data (bc_text_seg, data, size);
}
/* Emit a zero-filled constant skip. */
void
bc_emit_const_skip (size)
unsigned int size;
{
seg_skip (bc_text_seg, size);
}
/* Emit a label definition in const data. */
int
bc_emit_const_labeldef (name)
char *name;
{
return seg_defsym (bc_text_seg, name);
}
/* Emit a label reference in const data. */
void
bc_emit_const_labelref (name, offset)
char *name;
int offset;
{
seg_refsym (bc_text_seg, name, offset);
}
/* Force alignment in data. */
void
bc_align_data (align)
int align;
{
seg_align (bc_data_seg, align);
}
/* Emit data. */
void
bc_emit_data (data, size)
char *data;
unsigned int size;
{
seg_data (bc_data_seg, data, size);
}
/* Emit a zero-filled data skip. */
void
bc_emit_data_skip (size)
unsigned int size;
{
seg_skip (bc_data_seg, size);
}
/* Emit label definition in data. */
int
bc_emit_data_labeldef (name)
char *name;
{
return seg_defsym (bc_data_seg, name);
}
/* Emit label reference in data. */
void
bc_emit_data_labelref (name, offset)
char *name;
int offset;
{
seg_refsym (bc_data_seg, name, offset);
}
/* Emit a common block of the given name and size. Note that
when the .o file is actually written non-global "common"
blocks will have to be turned into space in the data section. */
int
bc_emit_common (name, size)
char *name;
unsigned int size;
{
struct bc_sym *sym;
sym = sym_lookup (name);
if (sym->defined)
return 0;
sym->defined = 1;
sym->common = 1;
sym->val = size;
return 1;
}
/* Globalize the given label. */
void
bc_globalize_label (name)
char *name;
{
struct bc_sym *sym;
sym = sym_lookup (name);
sym->global = 1;
}
static enum { in_text, in_data } section = in_text;
void
bc_text ()
{
section = in_text;
}
void
bc_data ()
{
section = in_data;
}
void
bc_align (align)
int align;
{
if (section == in_text)
bc_align_const (align);
else
bc_align_data (align);
}
void
bc_emit (data, size)
char *data;
unsigned int size;
{
if (section == in_text)
bc_emit_const (data, size);
else
bc_emit_data (data, size);
}
void
bc_emit_skip (size)
unsigned int size;
{
if (section == in_text)
bc_emit_const_skip (size);
else
bc_emit_data_skip (size);
}
int
bc_emit_labeldef (name)
char *name;
{
if (section == in_text)
return bc_emit_const_labeldef (name);
else
return bc_emit_data_labeldef (name);
}
void
bc_emit_labelref (name, offset)
char *name;
int offset;
{
if (section == in_text)
bc_emit_const_labelref (name, offset);
else
bc_emit_data_labelref (name, offset);
}
void
bc_write_file (file)
FILE *file;
{
BC_WRITE_FILE (file);
}
/* Allocate a new bytecode rtx.
If you supply a null BC_LABEL, we generate one. */
rtx
bc_gen_rtx (label, offset, bc_label)
char *label;
int offset;
struct bc_label *bc_label;
{
rtx r;
if (bc_label == 0)
bc_label = (struct bc_label *) xmalloc (sizeof (struct bc_label));
r = gen_rtx (CODE_LABEL, VOIDmode, label, bc_label);
bc_label->offset = offset;
return r;
}
/* Print bytecode rtx */
void
bc_print_rtl (fp, r)
FILE *fp;
rtx r;
{
#if 0 /* This needs to get fixed to really work again. */
/* BC_WRITE_RTL has a definition
that doesn't even make sense for this use. */
BC_WRITE_RTL (r, fp);
#endif
}
/* Emit a bytecode, keeping a running tally of the stack depth. */
void
bc_emit_bytecode (bytecode)
enum bytecode_opcode bytecode;
{
char byte;
static int prev_lineno = -1;
byte = (char) bytecode;
#ifdef BCDEBUG_PRINT_CODE
if (lineno != prev_lineno)
{
fprintf (stderr, "<line %d>\n", lineno);
prev_lineno = lineno;
}
fputs (opcode_name[(unsigned int) bytecode], stderr);
#endif
/* Due to errors we are often requested to output bytecodes that
will cause an interpreter stack underflow when executed. Instead of
dumping core on such occasions, we omit the bytecode. Erroneous code
should not be executed, regardless. This makes life much easier, since
we don't have to deceive ourselves about the known stack depth. */
bc_emit_bytecode_const (&byte, 1);
if ((stack_depth -= arityvec[(int) bytecode].ninputs) >= 0)
{
if ((stack_depth += arityvec[(int) bytecode].noutputs) > max_stack_depth)
max_stack_depth = stack_depth;
}
#ifdef VALIDATE_STACK_FOR_BC
VALIDATE_STACK_FOR_BC ();
#endif
}
#ifdef BCDEBUG_PRINT_CODE
#define PRLIT(TYPE, PTR) fprintf (stderr, " [%x]", *(TYPE *) PTR)
#else
#define PRLIT(X,Y)
#endif
/* Emit a complete bytecode instruction, expecting the correct number
of literal values in the call. First argument is the instruction, the
remaining arguments are literals of size HOST_WIDE_INT or smaller. */
void
bc_emit_instruction VPROTO((enum bytecode_opcode opcode, ...))
{
#ifndef __STDC__
enum bytecode_opcode opcode;
#endif
va_list arguments;
int nliteral, instruction;
VA_START (arguments, opcode);
#ifndef __STDC__
opcode = va_arg (arguments, enum bytecode_opcode);
#endif
/* Emit instruction bytecode */
bc_emit_bytecode (opcode);
instruction = (int) opcode;
/* Loop literals and emit as bytecode constants */
for (nliteral = 0; nliteral < arityvec[instruction].nliterals; nliteral++)
{
switch (arityvec[instruction].literals[nliteral])
{
/* This conditional is a kludge, but it's necessary
because TYPE might be long long. */
#ifdef __GNUC__
/* Expand definitions into case statements */
#define DEFTYPECODE(CODE, NAME, MODE, TYPE) \
case CODE: \
{ \
TYPE temp = va_arg (arguments, TYPE); \
bc_emit_bytecode_const ((void *) &temp, sizeof temp); \
PRLIT (TYPE, &temp); } \
break;
#include "bc-typecd.def"
#undef DEFTYPECODE
#endif /* __GNUC__ */
default:
abort ();
}
}
#ifdef BCDEBUG_PRINT_CODE
fputc ('\n', stderr);
#endif
}
/* Emit the machine-code interface trampoline at the beginning of a byte
coded function. The argument is a label name of the interpreter
bytecode callinfo structure; the return value is a label name for
the beginning of the actual bytecode. */
char *
bc_emit_trampoline (callinfo)
char *callinfo;
{
char mylab[20];
static int n;
sprintf (mylab, "*LB%d", n++);
BC_EMIT_TRAMPOLINE (trampoline, callinfo);
seg_defsym (bytecode, mylab);
return sym_lookup (mylab)->name;
}
/* Simple strdup */
char *
bc_xstrdup (str)
char *str;
{
char *tmp = xmalloc (strlen (str) + 1);
strcpy (tmp, str);
return tmp;
}
#include "stdafx.h"
void world::init(int size)
{
this->size = size;
size *= size;
chunks = new chunk[size]();
//printf("%d,%d,%d",chunks[0].size(),chunks[1].size(),chunks[2].size());
earch = new surface();
characters = new std::vector<character*>();
set_bg(0,0,0);
set_view_distance(size / 5,size / 5);
toUpdate = NULL;
toRemove = NULL;
}
world::world(void)
{
init(256);
}
world::world(int size)
{
init(size);
}
world::~world(void)
{
delete[] chunks;
delete earch;
delete characters;
}
void world::set_sun(int frames_per_cycle,GLenum light_index,GLfloat* ambient,GLfloat* diffuse,GLfloat* specular)
{
glLightfv(light_index,GL_AMBIENT,ambient);
glLightfv(light_index,GL_DIFFUSE,diffuse);
glLightfv(light_index,GL_SPECULAR,specular);
glEnable(light_index);
sun = new looper(-90.0, 90.0, frames_per_cycle, 0.0, LOOPER_TRUN_BACK, LOOPER_AUTO_ENABLE);
}
void world::set_bg(float r,float g,float b)
{
bg = new float[3];
bg[0] = r;
bg[1] = g;
bg[2] = b;
}
void world::set_view_distance(int distance,int foging_range)
{
this->distance = distance;
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, bg);
glFogf(GL_FOG_END, distance);
glFogf(GL_FOG_START, distance - foging_range);
}
chunk* world::get_chunk(int z,int x)
{
return chunks + size * normalize_position(z) + normalize_position(x);
}
chunk* world::get_chunk(float z,float x)
{
return chunks + size * normalize_position(z) + normalize_position(x);
}
void world::reshape(int w, int h)
{
glViewport (0, 0, (GLsizei) w, (GLsizei) h);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective(75.0, (GLfloat) w / (GLfloat) h, 0.1, distance);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void world::init_frame()
{
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(bg[0],bg[1],bg[2],1.0);
}
void world::draw(character* user)
{
GLfloat static light_pos_ori[] = { 0.0, 0.0, 0.0, 1.0 };
if(toUpdate != NULL){ toUpdate->update(); toUpdate = NULL; }
if(toRemove != NULL){ hurtables.erase(toRemove); delete toRemove; toRemove = NULL; }
if(sun != NULL)
{
glPushMatrix();
glRotatef(*sun, 0.0, 0.0, 1.0);
glTranslatef(0.0,20.0,0.0);
glLightfv(GL_LIGHT0, GL_POSITION, light_pos_ori);
set_mtl(MTL_fire);
glutSolidSphere(0.75, 30, 30);
pop_mtl();
glPopMatrix();
}
#define U(X) ((int) user->get##X())
int user_r = user->get_r() + 45;
user_r %= 360;
int d_t = distance / 2;
int d_tx = d_t;
int d_tz = d_t;
int x_d = 0;
int z_d = 0;
if(user_r > 270){
d_tz *= 2;
x_d = -d_t;
}
else if(user_r > 180){
d_tx *= 2;
z_d = d_t;
}
else if(user_r > 90){
d_tz *= 2;
x_d = d_t;
}
else{
d_tx *= 2;
z_d = -d_t;
}
d_tx += 10;
d_tz += 10;
earch->draw(
U(z) - d_tz + z_d,
U(z) + d_tz + z_d,
U(x) - d_tx + x_d,
U(x) + d_tx + x_d
);
spot* tar = user->lockTarget();
if(tar != NULL){
set_mtl(MTL_hurtable_fire);
glPushMatrix();
glTranslatef(tar->getx(),tar->gety(),tar->getz());
glutSolidSphere(0.06, 30, 30);
glPopMatrix();
pop_mtl();
if(tar->isTmp()) delete tar;
}
}
void world::add(hurtable* h)
{
if(h != NULL)
this->hurtables.insert(h);
}
void world::add(character* c)
{
if(c != NULL){
characters->insert(characters->begin(),c);
}
}
void world::add(wall* w)
{
if(w != NULL){
this->walls.insert(w);
}
}
int world::get_size()
{
return size;
}
int world::normalize_position(float i)
{
return normalize_position((int) floor(i));
}
int world::normalize_position(int i)
{
while(i < 0 || i >= size){
if(i < 0) i += size;
if(i >= size) i -= size;
}
return i;
}
//void world::add_triangles(triangle* t)
//{
// this->earch->add(t);
//}
//
//void world::add_spot(spot* s)
//{
// this->earch->add(s);
//}
#include <bits/stdc++.h>
using namespace std;
#define SWAP(x,y,z) {if ( y > z ) swap(y,z); if ( x > y ) swap(x,y); if (y > z) swap(y,z);}
typedef pair<int, int> pii;
typedef long long ll;
typedef unsigned long long ull;
double const eps = 1e-6;
double const pi = 3.1415926535;
ll const mod = 1e9+7;
int main()
{
int a,b,c;
cin >> a >> b;
c = a + b;
ostringstream a1,b1,c1;
a1 << a;
b1 << b;
c1 << c;
// Build zero-stripped copies of a, b and their sum.
string d1 = "", d2 = "", d3 = "";
for ( size_t i = 0; i < a1.str().size(); i++ ) if (a1.str()[i]!='0') d1 += a1.str()[i];
for ( size_t i = 0; i < b1.str().size(); i++ ) if (b1.str()[i]!='0') d2 += b1.str()[i];
for ( size_t i = 0; i < c1.str().size(); i++ ) if (c1.str()[i]!='0') d3 += c1.str()[i];
// atoll avoids int overflow; the equation must still hold with zeros removed.
if ( atoll(d3.c_str()) == atoll(d1.c_str())+atoll(d2.c_str()) ) cout << "YES" << endl;
else cout << "NO" << endl;
return 0;
}
package main
import "C"
import (
"errors"
"fmt"
"log"
"net"
"os"
"sync"
"github.com/BurntSushi/toml"
"github.com/midoblgsm/ubiquity-csi/controller"
csi_utils "github.com/midoblgsm/ubiquity-csi/utils"
"golang.org/x/net/context"
"google.golang.org/grpc"
"flag"
"path"
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/midoblgsm/ubiquity/resources"
"github.com/midoblgsm/ubiquity/utils"
"github.com/midoblgsm/ubiquity/utils/logs"
)
////////////////////////////////////////////////////////////////////////////////
// CLI //
////////////////////////////////////////////////////////////////////////////////
var configFile = flag.String(
"config",
"ubiquity-client.conf",
"config file with ubiquity client configuration params",
)
func main() {
l, err := csi_utils.GetCSIEndpointListener()
if err != nil {
fmt.Fprintf(os.Stderr, "error: failed to listen: %v\n", err)
os.Exit(1)
}
ctx := context.Background()
//init the controller
flag.Parse()
var config resources.UbiquityPluginConfig
fmt.Printf("Starting ubiquity plugin with %s config file\n", *configFile)
if _, err := toml.DecodeFile(*configFile, &config); err != nil {
fmt.Println(err)
return
}
defer logs.InitFileLogger(logs.DEBUG, path.Join(config.LogPath, "ubiquity-csi.log"))()
logger, logFile := utils.SetupLogger(config.LogPath, "ubiquity-csi")
defer utils.CloseLogs(logFile)
storageAPIURL := fmt.Sprintf("http://%s:%d/ubiquity_storage", config.UbiquityServer.Address, config.UbiquityServer.Port)
controller, err := controller.NewController(logger, "ubiquity", storageAPIURL, config)
if err != nil {
logger.Printf("error-creating-controller %#v\n", err)
panic(fmt.Sprintf("error-creating-controller %v", err))
}
s := &sp{controller: controller}
if err := s.Serve(ctx, l); err != nil {
fmt.Fprintf(os.Stderr, "error: grpc failed: %v\n", err)
os.Exit(1)
}
}
////////////////////////////////////////////////////////////////////////////////
// Go Plug-in //
////////////////////////////////////////////////////////////////////////////////
const name = "ubiquity"
var (
errServerStarted = errors.New("gocsi: the server has been started")
errServerStopped = errors.New("gocsi: the server has been stopped")
)
// ServiceProviders is an exported symbol that provides a host program
// with a map of the service provider names and constructors.
var ServiceProviders = map[string]func() interface{}{
name: func() interface{} { return &sp{name: name} },
}
type sp struct {
sync.Mutex
name string
server *grpc.Server
closed bool
controller *controller.Controller
}
// ServiceProvider.Serve
func (s *sp) Serve(ctx context.Context, li net.Listener) error {
log.Println(name + ".Serve")
if err := func() error {
s.Lock()
defer s.Unlock()
if s.closed {
return errServerStopped
}
if s.server != nil {
return errServerStarted
}
s.server = grpc.NewServer()
return nil
}(); err != nil {
return errServerStarted
}
csi.RegisterControllerServer(s.server, s)
csi.RegisterIdentityServer(s.server, s)
csi.RegisterNodeServer(s.server, s)
// start the grpc server
if err := s.server.Serve(li); err != grpc.ErrServerStopped {
return err
}
return errServerStopped
}
// ServiceProvider.Stop
func (s *sp) Stop(ctx context.Context) {
log.Println(name + ".Stop")
s.Lock()
defer s.Unlock()
if s.closed || s.server == nil {
return
}
s.server.Stop()
s.server = nil
s.closed = true
}
// ServiceProvider.GracefulStop
func (s *sp) GracefulStop(ctx context.Context) {
log.Println(name + ".GracefulStop")
s.Lock()
defer s.Unlock()
if s.closed || s.server == nil {
return
}
s.server.GracefulStop()
s.server = nil
s.closed = true
}
////////////////////////////////////////////////////////////////////////////////
// Controller Service //
////////////////////////////////////////////////////////////////////////////////
func (s *sp) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
s.Lock()
defer s.Unlock()
createVolumeResponse, err := s.controller.CreateVolume(*req)
if err != nil {
return nil, err
}
return &createVolumeResponse, nil
}
func (s *sp) DeleteVolume(
ctx context.Context,
req *csi.DeleteVolumeRequest) (
*csi.DeleteVolumeResponse, error) {
s.Lock()
defer s.Unlock()
response, err := s.controller.DeleteVolume(*req)
if err != nil {
// UNDEFINED
return nil, err
}
return &response, nil
}
func (s *sp) ControllerPublishVolume(
ctx context.Context,
req *csi.ControllerPublishVolumeRequest) (
*csi.ControllerPublishVolumeResponse, error) {
s.Lock()
defer s.Unlock()
response, err := s.controller.Attach(*req)
if err != nil {
// UNDEFINED
return nil, err
}
return &response, nil
}
func (s *sp) ControllerUnpublishVolume(
ctx context.Context,
req *csi.ControllerUnpublishVolumeRequest) (
*csi.ControllerUnpublishVolumeResponse, error) {
s.Lock()
defer s.Unlock()
detachResponse, err := s.controller.Detach(*req)
if err != nil {
// UNDEFINED
return nil, err
}
return &detachResponse, nil
}
func (s *sp) ValidateVolumeCapabilities(
ctx context.Context,
req *csi.ValidateVolumeCapabilitiesRequest) (
*csi.ValidateVolumeCapabilitiesResponse, error) {
resp, err := s.controller.ValidateCapabilities(*req)
if err != nil {
return nil, err
}
return &resp, nil
}
func (s *sp) ListVolumes(
ctx context.Context,
req *csi.ListVolumesRequest) (
*csi.ListVolumesResponse, error) {
s.Lock()
defer s.Unlock()
listResponse, err := s.controller.ListVolumes(*req)
if err != nil {
// UNDEFINED
return nil, err
}
return &listResponse, nil
}
func (s *sp) GetCapacity(
ctx context.Context,
req *csi.GetCapacityRequest) (
*csi.GetCapacityResponse, error) {
response, err := s.controller.GetCapacity(*req)
if err != nil {
// UNDEFINED
return nil, err
}
return &response, nil
}
func (s *sp) ControllerGetCapabilities(
ctx context.Context,
req *csi.ControllerGetCapabilitiesRequest) (
*csi.ControllerGetCapabilitiesResponse, error) {
response, err := s.controller.ControllerGetCapabilities(*req)
if err != nil {
return nil, err
}
return &response, nil
}
////////////////////////////////////////////////////////////////////////////////
// Identity Service //
////////////////////////////////////////////////////////////////////////////////
// Server API for Identity service
//type IdentityServer interface {
// GetSupportedVersions(context.Context, *GetSupportedVersionsRequest) (*GetSupportedVersionsResponse, error)
// GetPluginInfo(context.Context, *GetPluginInfoRequest) (*GetPluginInfoResponse, error)
//}
func (s *sp) GetSupportedVersions(
ctx context.Context,
req *csi.GetSupportedVersionsRequest) (
*csi.GetSupportedVersionsResponse, error) {
response, err := s.controller.GetSupportedVersions(*req)
if err != nil {
return nil, err
}
return &response, nil
}
func (s *sp) GetPluginInfo(
ctx context.Context,
req *csi.GetPluginInfoRequest) (
*csi.GetPluginInfoResponse, error) {
response, err := s.controller.GetPluginInfo(*req)
if err != nil {
return nil, err
}
return &response, nil
}
////////////////////////////////////////////////////////////////////////////////
// Node Service //
////////////////////////////////////////////////////////////////////////////////
// Server API for Node service
//
//type NodeServer interface {
// NodePublishVolume(context.Context, *NodePublishVolumeRequest) (*NodePublishVolumeResponse, error)
// NodeUnpublishVolume(context.Context, *NodeUnpublishVolumeRequest) (*NodeUnpublishVolumeResponse, error)
// GetNodeID(context.Context, *GetNodeIDRequest) (*GetNodeIDResponse, error)
// ProbeNode(context.Context, *ProbeNodeRequest) (*ProbeNodeResponse, error)
// NodeGetCapabilities(context.Context, *NodeGetCapabilitiesRequest) (*NodeGetCapabilitiesResponse, error)
//}
func (s *sp) NodePublishVolume(
ctx context.Context,
req *csi.NodePublishVolumeRequest) (
*csi.NodePublishVolumeResponse, error) {
s.Lock()
defer s.Unlock()
response, err := s.controller.Mount(*req)
if err != nil {
return nil, err
}
return &response, nil
}
func (s *sp) NodeUnpublishVolume(
ctx context.Context,
req *csi.NodeUnpublishVolumeRequest) (
*csi.NodeUnpublishVolumeResponse, error) {
s.Lock()
defer s.Unlock()
response, err := s.controller.Unmount(*req)
if err != nil {
return nil, err
}
return &response, nil
}
func (s *sp) GetNodeID(
ctx context.Context,
req *csi.GetNodeIDRequest) (
*csi.GetNodeIDResponse, error) {
response, err := s.controller.GetNodeID(*req)
if err != nil {
return nil, err
}
return &response, nil
}
func (s *sp) ProbeNode(
ctx context.Context,
req *csi.ProbeNodeRequest) (
*csi.ProbeNodeResponse, error) {
response, err := s.controller.ProbeNode(*req)
if err != nil {
return nil, err
}
return &response, nil
}
func (s *sp) NodeGetCapabilities(
ctx context.Context,
req *csi.NodeGetCapabilitiesRequest) (
*csi.NodeGetCapabilitiesResponse, error) {
response, err := s.controller.GetNodeCapabilities(*req)
if err != nil {
return nil, err
}
return &response, nil
}
////////////////////////////////////////////////////////////////////////////////
// Utils //
////////////////////////////////////////////////////////////////////////////////
const (
kib uint64 = 1024
mib uint64 = kib * 1024
gib uint64 = mib * 1024
gib100 uint64 = gib * 100
tib uint64 = gib * 1024
tib100 uint64 = tib * 100
)
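The binary unit constants above follow the 1024-based convention. As a quick cross-check (a hypothetical Python sketch, not part of the Go package), each constant is the expected power of two:

```python
# Mirror of the Go unit constants above (1024-based binary units).
kib = 1024
mib = kib * 1024
gib = mib * 1024
tib = gib * 1024

# kib = 2^10, mib = 2^20, gib = 2^30, tib = 2^40
assert (kib, mib, gib, tib) == (2**10, 2**20, 2**30, 2**40)
assert gib * 100 == 100 * 2**30  # gib100
assert tib * 100 == 100 * 2**40  # tib100
```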
/*
* Removes the nth SpeciesFeatureType from the ListOfSpeciesFeatureTypes.
*/
SpeciesFeatureType*
MultiSpeciesType::removeSpeciesFeatureType(unsigned int n)
{
return mListOfSpeciesFeatureTypes.remove(n);
}
/**
* @version $Rev$ $Date$
*/
public class RequestContextImpl implements RequestContext {
private ProxyFactoryExtensionPoint proxyFactoryExtensionPoint;
public RequestContextImpl(RuntimeComponent component) {
ExtensionPointRegistry registry = component.getComponentContext().getExtensionPointRegistry();
proxyFactoryExtensionPoint = registry.getExtensionPoint(ProxyFactoryExtensionPoint.class);
}
public Subject getSecuritySubject() {
Subject subject = null;
for (Object header : ThreadMessageContext.getMessageContext().getHeaders()){
if (header instanceof Subject){
subject = (Subject)header;
break;
}
}
return subject;
}
public String getServiceName() {
return ThreadMessageContext.getMessageContext().getTo().getContract().getName();
}
public <B> CallableReference<B> getServiceReference() {
Message msgContext = ThreadMessageContext.getMessageContext();
// FIXME: [rfeng] Is this the service reference matching the caller side?
EndpointReference to = msgContext.getTo();
RuntimeComponentService service = (RuntimeComponentService) to.getContract();
RuntimeComponent component = (RuntimeComponent) to.getComponent();
CallableReference<B> callableReference = component.getComponentContext().getCallableReference(null, component, service);
ReferenceParameters parameters = msgContext.getFrom().getReferenceParameters();
((CallableReferenceExt<B>) callableReference).attachCallbackID(parameters.getCallbackID());
((CallableReferenceExt<B>) callableReference).attachConversation(parameters.getConversationID());
return callableReference;
}
public <CB> CB getCallback() {
CallableReference<CB> cb = getCallbackReference();
if (cb == null) {
return null;
}
return cb.getService();
}
@SuppressWarnings("unchecked")
public <CB> CallableReference<CB> getCallbackReference() {
Message msgContext = ThreadMessageContext.getMessageContext();
EndpointReference to = msgContext.getTo();
RuntimeComponentService service = (RuntimeComponentService) to.getContract();
RuntimeComponentReference callbackReference = (RuntimeComponentReference)service.getCallbackReference();
if (callbackReference == null) {
return null;
}
JavaInterface javaInterface = (JavaInterface) callbackReference.getInterfaceContract().getInterface();
Class<CB> javaClass = (Class<CB>)javaInterface.getJavaClass();
List<RuntimeWire> wires = callbackReference.getRuntimeWires();
ProxyFactory proxyFactory = new ExtensibleProxyFactory(proxyFactoryExtensionPoint);
CallbackReferenceImpl ref = CallbackReferenceImpl.newInstance(javaClass, proxyFactory, wires);
if (ref != null) {
//ref.resolveTarget();
ReferenceParameters parameters = msgContext.getFrom().getReferenceParameters();
ref.attachCallbackID(parameters.getCallbackID());
if (ref.getConversation() != null) {
ref.attachConversationID(parameters.getConversationID());
}
}
return ref;
}
}
#include <algorithm>
#include <bitset>
#include <cassert>
#include <cctype>
#include <cfloat>
#include <climits>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>
#include <deque>
#include <functional>
#include <iostream>
#include <limits>
#include <list>
#include <map>
#include <queue>
#include <set>
#include <sstream>
#include <stack>
#include <streambuf>
#include <string>
#include <utility>
#include <vector>
#define r(i, n) rb(i, 0, n)
#define rb(i, b, n) rbc(i, b, n, <)
#define re(i, n) rbe(i, 0, n)
#define rbe(i, b, n) rbc(i, b, n, <=)
#define rbc(i, b, n, c) for(int i = (b); i c ((int)(n)); i++)
#define ri r(i, n)
#define rj r(j, n)
#define rk r(k, n)
#define rim r(i, m)
#define rjm r(j, m)
#define rkm r(k, m)
#define pv(v) ri { p(v[i]); } pl;
#define pm(m) ri { rjm { p(m[i][j]); } pl; } pl;
#define p(x) cout << x << " "
#define pl cout << endl
#define pn(x) cout << x << endl
#define s(v) ((int) v.size())
#define all(v) v.begin(), v.end()
using namespace std;
typedef unsigned long long ull;
typedef long long ll;
typedef long double ld;
int main() {
int n, m;
while(cin >> n >> m) {
int tab[155][155] = {{0}};
ri {
rjm {
char c;
cin >> c;
if (c == 'W') {
tab[i][j] = 1;
}
}
}
// pm(tab);
int minj[155], maxj[155], last = 0;
ri {
minj[i] = 155;
maxj[i] = -1;
rjm {
if (tab[i][j]) {
minj[i] = min(minj[i], j);
maxj[i] = max(maxj[i], j);
}
if (minj[i] != 155) last = i;
}
}
minj[n] = minj[n - 1];
maxj[n] = maxj[n - 1];
// pv(minj);
// pv(maxj);
int cur = 0, res = 0;
ri {
if (i > last) break;
if (i % 2 == 0) {
int end = max(maxj[i], maxj[i + 1]);
if (end == -1) end = cur;
res += abs(end - cur);
cur = end;
} else {
int end = min(minj[i], minj[i + 1]);
if (end == 155) end = cur;
res += abs(cur - end);
cur = end;
}
// p(res);
}
cout << res + last << endl;
}
}
/**
* Utility class for internal needs.
*/
final class SecurityUtil {
private static final Logger LOGGER = Logger.getLogger(SecurityUtil.class.getName());
private SecurityUtil() {
}
static Set<Class<? extends Annotation>> getAnnotations(Map<SecurityProvider, Boolean> providers) {
Set<Class<? extends Annotation>> annotations = new HashSet<>();
for (SecurityProvider provider : providers.keySet()) {
annotations.addAll(provider.supportedAnnotations());
}
return annotations;
}
static Tracer getTracer(boolean tracingEnabled, Tracer builderTracer) {
if (tracingEnabled) {
return (builderTracer == null) ? GlobalTracer.get() : builderTracer;
} else {
return NoopTracerFactory.create();
}
}
static AuditProvider.TracedAuditEvent wrapEvent(String tracingId, AuditProvider.AuditSource auditSource, AuditEvent event) {
return new AuditProvider.TracedAuditEvent() {
@Override
public AuditProvider.AuditSource auditSource() {
return auditSource;
}
@Override
public String tracingId() {
return tracingId;
}
@Override
public String eventType() {
return event.eventType();
}
@Override
public Optional<Throwable> throwable() {
return event.throwable();
}
@Override
public List<AuditEvent.AuditParam> params() {
return event.params();
}
@Override
public String messageFormat() {
return event.messageFormat();
}
@Override
public AuditEvent.AuditSeverity severity() {
return event.severity();
}
@Override
public String toString() {
return event.toString();
}
};
}
static String forAuditNamed(List<? extends NamedProvider<?>> collection) {
return collection.stream().map(p -> p.getName() + ": " + p.getProvider().getClass().getName())
.collect(Collectors.toList()).toString();
}
static String forAudit(Collection<?> collection) {
return collection.stream().map(p -> p.getClass().getName()).collect(Collectors.toList()).toString();
}
static <T> T instantiate(String className, Class<? extends T> type, Config config) {
Class<?> clazz;
try {
clazz = Class.forName(className);
} catch (Exception e) {
throw new SecurityException("Failed to get class " + className, e);
}
Exception configException = null;
if (null != config) {
try {
return type.cast(config.as(clazz).get());
} catch (ClassCastException e) {
throw new SecurityException("Class " + className + " is not instance of expected type: " + type.getName());
} catch (ConfigMappingException e) {
LOGGER.log(Level.FINEST,
e,
() -> "Class " + className + " failed to get mapped by config. Will attempt public default "
+ "constructor");
configException = e;
}
}
// last chance - public default constructor
try {
return type.cast(clazz.getConstructor().newInstance());
} catch (Exception e) {
LOGGER.log(Level.SEVERE, "Could not instantiate: " + className + ". Class must either have a default public"
+ " constructor or be mappable by Config");
configException = ((null == configException) ? e : configException);
throw new SecurityException("Failed to load " + type
.getName() + " from class " + clazz + ", parsing from config failed with: "
+ extractExceptionDetails(configException), e);
}
}
private static String extractExceptionDetails(Exception configException) {
Throwable prev = configException;
Throwable cause;
StringBuilder details = new StringBuilder();
details.append(configException.getMessage());
while (true) {
cause = prev.getCause();
if ((null == cause) || (cause == prev)) {
break;
}
details.append(", caused by: ").append(cause.getMessage());
prev = cause;
}
return details.toString();
}
}
/**
* Add a column to the end of a table.
* @param o An array of identically structured objects with the
* same number of elements as other columns in the table.
*
* @return the new number of columns in the table
*
* @throws FitsException if the column cannot be added to the table
*/
public int addColumn(Object o) throws FitsException {
int primeDim = Array.getLength(o);
extendArrays(nCol + 1);
Class base = ArrayFuncs.getBaseClass(o);
if (isVarying(o)) {
flags[nCol] |= COL_VARYING;
dimens[nCol] = new int[] { 2 };
}
if (isVaryingComp(o)) {
flags[nCol] |= COL_VARYING | COL_COMPLEX;
dimens[nCol] = new int[] { 2 };
}
if ( !isVarCol(nCol)) {
int[] allDim = ArrayFuncs.getDimensions(o);
if (base == String.class) {
int[] xdim = new int[allDim.length + 1];
System.arraycopy(allDim, 0, xdim, 0, allDim.length);
xdim[allDim.length] = -1;
allDim = xdim;
}
if (allDim.length == 1) {
dimens[nCol] = new int[0];
} else {
dimens[nCol] = new int[allDim.length - 1];
System.arraycopy(allDim, 1, dimens[nCol], 0,
allDim.length - 1);
o = ArrayFuncs.flatten(o);
}
}
addFlattenedColumn(o, dimens[nCol]);
if ((nRow == 0) && (nCol == 0)) {
nRow = primeDim;
}
nCol += 1;
return getNCols();
}
package protoc
import "github.com/urfave/cli"
var Cmd = cli.Command{
Name: "protoc",
Aliases: []string{"p"},
Usage: "ox protoc tools",
Action: Run,
SkipFlagParsing: false,
UsageText: ProtocHelpTemplate,
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "grpc,g",
Usage: "whether to generate GRPC code",
Destination: &option.withGRPC,
},
&cli.BoolFlag{
Name: "server,s",
Usage: "whether to generate grpc server code",
Destination: &option.withServer,
},
&cli.StringFlag{
Name: "file,f",
Usage: "Path of proto file",
Required: true,
Destination: &option.protoFilePath,
},
&cli.StringFlag{
Name: "out,o",
Usage: "Path of code generation",
Required: true,
Destination: &option.outputDir,
},
&cli.StringFlag{
Name: "prefix,p",
Usage: "prefix(current project name)",
Required: false,
Destination: &option.prefix,
},
},
}
def _iterate_folder_db(self):
    folder = self._stack_item._convert_folder_to_db()
    self.bookmarks.append(folder)
    parent_id = folder.id
    for child in self._stack_item:
        child.parent_id = parent_id
        if child.type == "folder":
            if child.children:
                self._stack.append(child)
            else:
                folder = child._convert_folder_to_db()
                self.bookmarks.append(folder)
        else:
            url = child._convert_url_to_db()
            self.bookmarks.append(url)
/**
* Transposes the given tabular data, swapping rows with columns.
*
* @param <T> the type of the table
* @param table the table
* @return the transposed table, or an empty list if the given table is
*         {@code null} or empty
*/
public static <T> List<List<T>> transpose(final List<List<T>> table) {
if (isNullOrEmpty(table)) {
return new ArrayList<>();
}
final List<List<T>> result = new ArrayList<>();
for (int i = 0; i < table.get(0).size(); i++) {
final List<T> col = new ArrayList<>();
for (List<T> row : table) {
col.add(row.get(i));
}
result.add(col);
}
return result;
}
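For a rectangular table, the same transposition can be sketched in Python with `zip` (a hypothetical cross-check of the Java method above, not part of its codebase; note that `zip` silently truncates ragged rows, whereas the Java loop indexes every row at each column):

```python
def transpose(table):
    """Swap rows with columns; returns [] for None or empty input."""
    if not table:
        return []
    # zip(*table) yields one tuple per column of the original table
    return [list(col) for col in zip(*table)]

assert transpose([[1, 2, 3], [4, 5, 6]]) == [[1, 4], [2, 5], [3, 6]]
assert transpose(None) == []
assert transpose([]) == []
```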
export class MinBinaryHeap {
_values: number[];
constructor() {
this._values = [];
}
insert(value: number) {
let newValueIndex = this._values.push(value) - 1;
let parentIndex: number | undefined, temp: number;
// bubble up
while (parentIndex !== 0) {
parentIndex = Math.floor((newValueIndex - 1) / 2);
if (this._values[parentIndex] > this._values[newValueIndex]) {
temp = this._values[parentIndex];
this._values[parentIndex] = this._values[newValueIndex];
this._values[newValueIndex] = temp;
newValueIndex = parentIndex;
} else break;
}
return this;
}
extractMin() {
const last = this._values.pop();
if (!this._values.length) return last;
const min = this._values[0];
this._values[0] = last as number;
let parent = 0,
left: number | undefined,
right: number | undefined,
smallestChild: number | undefined,
temp: number | undefined;
// sink down
while (true) {
left = 2 * parent + 1;
right = left + 1;
smallestChild = this._values[right] < this._values[left] ? right : left;
if (this._values[smallestChild] < this._values[parent]) {
temp = this._values[parent];
this._values[parent] = this._values[smallestChild];
this._values[smallestChild] = temp;
parent = smallestChild;
} else {
break;
}
}
return min;
}
}
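The insert/extractMin behavior above can be cross-checked against Python's `heapq`, which implements the same array-backed binary min-heap (a reference sketch, not part of this class):

```python
import heapq

# Push in arbitrary order, then pop repeatedly: values must come out
# ascending, matching repeated insert()/extractMin() on the class above.
heap = []
for value in [5, 3, 8, 1, 4]:
    heapq.heappush(heap, value)   # bubble up
drained = [heapq.heappop(heap) for _ in range(len(heap))]  # sink down
assert drained == [1, 3, 4, 5, 8]
```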
/* Simple ... UifU the args and per-lane pessimise the results. */
static
IRAtom* binary8Ix16 ( MCEnv* mce, IRAtom* vatom1, IRAtom* vatom2 )
{
IRAtom* at;
at = mkUifUV128(mce, vatom1, vatom2);
at = mkPCast8x16(mce, at);
return at;
}
def unrange(data, rangechar, joinchar):
    """
    data - string, e.g. '8,10-13,20'
    rangechar - e.g. '-' for above string
    joinchar - e.g. ',' for above string
    returns - e.g. '8,10,11,12,13,20' string
    """
    result = []
    # check if range char is actually in data:
    if rangechar not in data:
        return data, None
    for item in data.split(rangechar):
        # form split list, checking that i is not empty
        item_split = [i for i in item.split(joinchar) if i]
        if result:
            start_int = int(result[-1])
            try:
                end_int = int(item_split[0])
            except ValueError as e:
                log.error(
                    "ttp.match.unrange: Unrange failed, data '{}', rangechar '{}', joinchar '{}', error: {}".format(
                        data, rangechar, joinchar, e
                    )
                )
                return data, None
            list_of_ints_range = [str(i) for i in range(start_int, end_int)]
            result += list_of_ints_range[1:] + item_split
        else:
            result = item_split
    data = joinchar.join(result)
    return data, None
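A standalone check of the expansion the docstring describes. The range logic is restated here so the example runs without the module's logger; `expand_range` is an illustrative name, not part of the module:

```python
def expand_range(data, rangechar, joinchar):
    # Minimal re-statement of the unrange() logic above, minus logging.
    if rangechar not in data:
        return data
    result = []
    for item in data.split(rangechar):
        parts = [p for p in item.split(joinchar) if p]
        if result:
            start, end = int(result[-1]), int(parts[0])
            # fill the gap between the range endpoints, then keep the rest
            result += [str(i) for i in range(start + 1, end)] + parts
        else:
            result = parts
    return joinchar.join(result)

assert expand_range("8,10-13,20", "-", ",") == "8,10,11,12,13,20"
assert expand_range("1,2,3", "-", ",") == "1,2,3"  # no range char: unchanged
```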
export class Student {
constructor(
public id: number,
public name: string,
public legalRepresentative: any,
public birthday: Date,
public address: string
) {}
}
import { Text } from '../db/Text';
import { IProxyText } from './IProxyText';
import { StringUtils } from '../../Utils/StringUtils';
import { ISO8601Util } from '../../Utils/ISO8601Util';
import { UberApplication } from '../../UberApplication';
export class ProxyText implements IProxyText
{
private _Text_id:number;
public get Text_id():number {return this._Text_id;}
public set Text_id(value:number) {this._Text_id = value;}
private _Title:string;
public get Title():string {return this._Title;}
public set Title(value:string) {this._Title = value;}
private _Author:string;
public get Author():string {return this._Author;}
public set Author(value:string) {this._Author = value;}
private _Genre:string;
public get Genre():string {return this._Genre;}
public set Genre(value:string) {this._Genre = value;}
private _Reading_level:string;
public get Reading_level():string {return this._Reading_level;}
public set Reading_level(value:string) {this._Reading_level = value;}
private _user_id:number;
public get User_id():number {return this._user_id;}
public set User_id(value:number) {this._user_id = value;}
private _ComplexText:boolean;
public get ComplexText():boolean {return this._ComplexText;}
private _deleted:boolean;
public get Deleted():boolean
{
return this._deleted;
}
public set Deleted(value:boolean)
{
this._deleted = value;
}
private _date:Date;
public get _Date():Date
{
return this._date;
}
public set _Date(value:Date)
{
this._date = value;
}
private _textType:string;
public get TextType():string
{
return this._textType;
}
public set TextType(type:string)
{
this._textType = type;
}
private _topic_id:number;
public get Topic_id():number
{
return this._topic_id;
}
public set Topic_id(value:number)
{
this._topic_id = value;
}
private _content_length:number = 0;
public get Content_length():number
{
return this._content_length;
}
public set Content_length(value:number)
{
this._content_length = value;
}
private _trialMode1:boolean;
public get Trial_Mode1_Enabled():boolean{ return this._trialMode1; }
public set Trial_Mode1_Enabled(type:boolean){ this._trialMode1 = type; }
private _trialMode2:boolean;
public get Trial_Mode2_Enabled():boolean{ return this._trialMode2; }
public set Trial_Mode2_Enabled(type:boolean){ this._trialMode2 = type; }
public init(text:Text):void
{
this.Text_id = text.Text_id;
this.Title = text.Title;
this.Author = text.Author;
this.Genre = text.Genre;
this.Reading_level = text.Reading_level;
this.User_id = text.User_id;
this.ComplexText = text.ComplexText;
this._Date = text._Date;
this.Deleted = text.Deleted;
}
public static fromJson(jsonObject:any):ProxyText
{
var retVal:ProxyText = new ProxyText();
retVal.Text_id = jsonObject.Text_id;
retVal.Title = StringUtils.DecodeFromJSONUri(jsonObject.Title);
retVal._Date = ISO8601Util.parseDateTimeString(jsonObject.Date);
retVal.Author = StringUtils.DecodeFromJSONUri(jsonObject.Author);
retVal.Genre = StringUtils.DecodeFromJSONUri(jsonObject.Genre);
retVal.Reading_level = jsonObject.Reading_level;
retVal.ComplexText = jsonObject.ComplexText;
retVal.Deleted = jsonObject.Deleted;
retVal.Content_length = jsonObject.Content_length;
if(jsonObject.Trial_mode1_enabled != null)
{
retVal.Trial_Mode1_Enabled = jsonObject.Trial_mode1_enabled;
}
if(jsonObject.Trial_mode2_enabled != null)
{
retVal.Trial_Mode2_Enabled = jsonObject.Trial_mode2_enabled;
}
if (jsonObject.User_id != null)
{
retVal.User_id = jsonObject.User_id;
retVal.TextType = UberApplication.GetInstance().GetUiTextByKey("USER_PROXY_TEXT_LABEL");
}
else
{
retVal.TextType = UberApplication.GetInstance().GetUiTextByKey("DEFAULT_PROXY_TEXT_LABEL");
}
if (jsonObject.Topic_id != null)
{
retVal.Topic_id = jsonObject.Topic_id;
}
return retVal;
}
public static fromLibrary(jsonObject:Text):ProxyText
{
var retVal:ProxyText = new ProxyText();
retVal.Text_id = jsonObject.Text_id;
retVal.Title = jsonObject.Title;
retVal._Date = jsonObject._Date;
retVal.Author = StringUtils.DecodeFromJSONUri(jsonObject.Author);
retVal.Genre = jsonObject.Genre;
retVal.Reading_level = jsonObject.Reading_level;
retVal.ComplexText = jsonObject.ComplexText;
retVal.Deleted = jsonObject.Deleted;
if(jsonObject.Content_length != null)
{
retVal.Content_length = jsonObject.Content_length;
}
if (jsonObject.Topic_id != null)
{
retVal.Topic_id = jsonObject.Topic_id;
}
if (jsonObject.User_id != null)
{
retVal.User_id = jsonObject.User_id;
retVal.TextType = UberApplication.GetInstance().GetUiTextByKey("USER_PROXY_TEXT_LABEL");
}
else
{
retVal.TextType = UberApplication.GetInstance().GetUiTextByKey("DEFAULT_PROXY_TEXT_LABEL");
}
return retVal;
}
}
Steelers Notebook: Colbert quietly team's 1st GM
The Steelers make known their big promotions by saying nothing. They have done that again through their usual means of merely changing a title in their media guide.
Kevin Colbert has been promoted to general manager by the team, becoming the first to hold that title in Steelers history. He held the title of director of football operations since he was hired in 2000, a title Tom Donahoe also held as his predecessor.
The team has had three presidents -- Art Rooney Sr., Dan Rooney and now Art Rooney II -- and has employed player personnel directors and directors of football operations, but never a general manager. Years ago, Dan Rooney was asked why he did not have a general manager, as many NFL teams did, and he responded, "Because I am the general manager."
That title was never used, however, until now and it was merely slipped into their 2011 media guide, which is available only in digital version at the moment on the team's website.
There also is no elaboration as to the promotion, merely a title change for Colbert and a line in his biography that he is in "his first year as the team's general manager." That is the same way the Steelers announced that Art Rooney II had succeeded his father Dan as team president, and the same way they announced that Dan Rooney had succeeded his father Art Sr. as team president.
Colbert, 54, has directed the Steelers scouting and player personnel moves since he joined them Feb. 15, 2000, from the Detroit Lions as their pro scouting director for 10 years. He also was a college scout for the Miami Dolphins after getting his start as a scout with BLESTO under its director Jack Butler, one of two seniors nominees for the Pro Football Hall of Fame this year.
A Pittsburgh native, Colbert graduated from North Catholic High School and Robert Morris University.
Five players cut
Five players were waived and rookie running back Baron Batch was placed on the team's injured reserve list Sunday as the Steelers pare their roster to get to the mandatory 80-player limit.
Released were tight end Vaughn Charlton, cornerback Kevin Dockery, wide receiver Eric Greenwood, wide receiver Kenneth Moore and tight end Miguel Chavis, who was waived/injured after sustaining a torn pectoral muscle Saturday night against the Atlanta Falcons.
The Steelers need to cut four more players by 4 p.m. Tuesday.
Batch, a seventh-round draft choice, tore the anterior cruciate ligament in his knee at practice at training camp several weeks ago.
Gerry Dulac contributed to this report. Ed Bouchette: [email protected]
First published on August 29, 2011 at 12:00 am