Both Microsoft and Sony have approached Bohemia Interactive about porting DayZ to new hardware, but creator Dean “Rocket” Hall is more in favour of the PlayStation 4 than Xbox One.
“We talked to both of them. But, as I’m sure you’re aware, Sony lets you self-publish and they don’t make you pay for updates,” Hall told Eurogamer.
“Microsoft requires you to have a publisher. They have no digital distribution strategy and they require you to pay $10,000, or whatever it is, for updates.”
Hall said that Bohemia likes Sony, and that he personally liked what he saw on the PlayStation 4.
“I like what I saw on the Xbox in a lot of cases as well,” he added. “I’m not shitting on them. I’m kind of hopeful that Microsoft has just forgotten to talk about its indie support. Maybe I’m being a bit naive.”
The architecture of both consoles means Hall doesn’t see many barriers to getting DayZ running on either of them.
“It’s totally achievable. The only barriers are the ones console manufacturers put up in the way of indie devs,” he said.
“That’s the only barrier now. I’d love to see this on Xbox Live. I’d love to see it on PS4.”
A console port is probably inevitable, but the DayZ team won’t start on it until the PC standalone release is finished.
Thanks, Massively.
# Compare two equal-length strings character by character:
# append '0' where they match and '1' where they differ.
a = input()
b = input()
c = []
for i in range(len(a)):
    if a[i] == b[i]:
        c.append('0')
    else:
        c.append('1')
print(''.join(c))
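The same character-by-character comparison can be written more compactly with `zip`; a minimal sketch with hard-coded inputs, assuming both strings have equal length:

```python
a = "1010"
b = "1001"
# zip pairs up characters at the same index; the generator emits
# '0' for a match and '1' for a mismatch.
result = ''.join('0' if x == y else '1' for x, y in zip(a, b))
# result == "0011"
```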
package controllers;
import router.Routes;
import models.*;
import play.data.Form;
import play.data.FormFactory;
import play.mvc.*;
import play.twirl.api.Html;
import views.html.*;
import javax.inject.Inject;
import java.util.List;
/**
* This controller contains an action to handle HTTP requests
* to the application's home page.
*/
public class HomeController extends Controller {
private final FormFactory formFactory;
@Inject
public HomeController(FormFactory formFactory) {
this.formFactory = formFactory;
}
/**
* An action that renders an HTML page with a welcome message.
* The configuration in the <code>routes</code> file means that
* this method will be called when the application receives a
* <code>GET</code> request with a path of <code>/</code>.
*/
public Result index() {
List<Capabilities> all = Capabilities.find.all();
List<Gallery> all2 = Gallery.find.all();
List<School> all3 = School.find.all();
List<Team> all4 = Team.find.all();
return ok(index.render(all, all2, all3, all4));
}
@Security.Authenticated(SecuredController.class)
public Result message(){
return ok(sendMessage.render());
}
@Security.Authenticated(SecuredController.class)
public Result more(){
User user = User.find.byId(session("email"));
return ok(registrUser.render(user));
}
}
#include<bits/stdc++.h>
using namespace std;
int main()
{
long long n,m;
int i;
int b;
int f1 = 0;
int a[100050];
int sum1 = 0, sum2 = 0;
int t1;
scanf("%lld%lld",&m,&n);
i=m/(pow(2.0,n)-1);
for(;f1!=1;i++)
{
if(m<(2*i-1+n)*n/2)
{
printf("NO\n");
break;
}
else if(m<=(pow(2.0,n)-1)*i) f1=1;
}
b=i-1;
if(f1==1)
{
printf("YES\n");
for(i=0;i<n;i++)
{
sum1+=i+b;
a[i]=i+b;
}
sum2=m-sum1;
for(i=n-1;i>0;i--)
{
t1=a[i-1]*2-a[i];
if(t1>=sum2) t1=sum2;
sum2-=t1;
a[i]+=t1;
if(i==1) i=n;
if(sum2==0) break;
}
for(i=0;i<n-1;i++) printf("%d ",a[i]);
printf("%d\n",a[n-1]);
}
return 0;
}
Nearly two decades into America’s fantasy sports craze, ESPN is giving fantasy a spot in its weekday lineup.
According to Variety, ESPN fantasy analyst Matthew Berry will host a show creatively dubbed The Fantasy Show, to air weekdays on ESPN2 during football season. The show will reportedly launch August 1 only on WatchESPN and other digital platforms, leading into an ESPN Fantasy Football Marathon on August 14 and 15. After that, The Fantasy Show will debut on ESPN2.
In addition, ESPN will debut Fantasy Football Now, a live Sunday-morning show featuring Berry, Tim Hasselbeck, Field Yates and Stephania Bell. That will start up on the first day of NFL action, Sept. 7, and continue throughout the season.
Via Variety:
“For so long, fantasy sports has been a great digital property for us,” said Norby Williamson, ESPN’s exec VP of production. “For the first time, we’re taking a franchise and really, truly making it integrated across platforms, clearly with the focus on the marathon.”
Fantasy sports, of course, have become a huge traffic driver for ESPN, with millions of users each year flocking to the site. Fantasy is a primary rooting interest for many football fans, particularly when their favorite team isn’t playing, making it relevant to a large portion of NFL TV viewers. Given the scope of the fantasy audience, it’s somewhat surprising ESPN didn’t already have a marquee fantasy program.
It sounds like ESPN plans to play up the fact that The Fantasy Show will be available not only on TV but also on phones, tablets and computers. Fantasy players obviously check their teams on those devices, so it figures that ESPN wants to stretch its fantasy coverage across many platforms.
“With all this activity, we wanted to establish a consistent fantasy show — Monday through Friday — that’s available on every screen our fans have access to,” said Williamson.
Berry is a longtime ESPN columnist known for his energy, enthusiasm, pop-culture references and personal anecdotes. Last August, ESPN announced it had signed Berry to a contract lasting through 2021, signaling a commitment to fantasy sports coverage, which is reinforced by Friday’s announcement.
Berry will be an extraordinarily busy man during football season, hosting The Fantasy Show during the week and Fantasy Football Now on Sunday, while also co-hosting the Fantasy Focus podcast, in addition to whatever writing he will still do.
[Variety]
def main():
    app = app_lib.app_state([], {}, True)
    if read_settings_from_file(app) == 'Error' or read_data_from_file(app) == 'Error':
        messagebox.showerror('BondMarket', 'An error occurred while reading the file.\n\nThe files are probably not compatible!')
        raise SystemExit  # the main window does not exist yet, so exit directly
    if app.settings.first_start is True:
        welcome_window()
    win = tk.Tk()
    style = ThemedStyle(win)
    style.theme_use(app.settings.appearance.ttk_theme)
    win.title('')
    try:
        win.iconbitmap('Icons/BondMarket_Icon.ico')
    except tk.TclError:
        win.iconbitmap('BondMarket/Icons/BondMarket_Icon.ico')
    win.protocol('WM_DELETE_WINDOW', lambda: exit(win, app))
    win.config(bg=app.settings.appearance.bg_color)
    win.geometry('1000x650')
    win.minsize(1000, 650)
    center_window(win, 1000, 650)
    data_frame = tk.Frame(win, bg=app.settings.appearance.bg_color, relief='flat')
    entry_frame = tk.Frame(win, bg=app.settings.appearance.bg_color, relief='flat')
    notebook = tk.Frame(win, bg=app.settings.appearance.bg_color, relief='flat')
    button_frame = tk.Frame(win, bg=app.settings.appearance.bg_color, relief='flat')
    tabs = ttk.Notebook(notebook)
    tab1 = ttk.Frame(tabs, relief='flat')
    tab2 = ttk.Frame(tabs, relief='flat')
    tab3 = ttk.Frame(tabs, relief='flat')
    tab4 = ttk.Frame(tabs, relief='flat')
    tabs.add(tab1, text=' Expenses ')
    tabs.add(tab2, text=' Debts ')
    tabs.add(tab3, text=' Settings ')
    tabs.add(tab4, text=' Help/Info ')
    tabs.pack(side='left', fill='both', padx=5, pady=5, ipadx=25)
    menubar = tk.Menu(win)
    app_menu = tk.Menu(menubar, relief='flat', tearoff=0)
    app_menu.add_command(label='Save', command=lambda: save(app))
    app_menu.add_separator()
    app_menu.add_command(label='Exit', command=lambda: exit(win, app))
    menubar.add_cascade(label='App', menu=app_menu)
    file_menu = tk.Menu(menubar, relief='flat', tearoff=0)
    file_menu.add_command(label='Open File', command=lambda: open_file(app))
    file_menu.add_command(label='New File', command=lambda: open_new_file(app))
    file_menu.add_separator()
    file_menu.add_command(label='Create Backup', command=lambda: save_backup_lib.create_backup(app))
    file_menu.add_command(label='Restore Backup', command=lambda: restore_backup(app))
    menubar.add_cascade(label='File', menu=file_menu)
    win.bind('<Control-s>', lambda event: save(app))
    win.bind('<Control-o>', lambda event: open_file(app))
    win.bind('<Control-n>', lambda event: open_new_file(app))
    win.bind('<Escape>', lambda event: exit(win, app))
    win.bind('<Return>', lambda event: safe_to_dataarray(app))
    win.bind('<Delete>', lambda event: delet_from_dataarray(app))
    table_(data_frame, app)
    entry_(entry_frame, app)
    expenses_(tab1, app)
    debts_(tab2, app)
    settings_(win, tab3, app, main)
    info_(tab4, app)
    button_(button_frame, app)
    if app.settings.appearance.name == 'LIGHT':
        try:
            logo = Image.open('Icons/BondMarket_Logo_dark.png')
        except FileNotFoundError:
            logo = Image.open('BondMarket/Icons/BondMarket_Logo_dark.png')
    if app.settings.appearance.name == 'DARK':
        try:
            logo = Image.open('Icons/BondMarket_Logo_white.png')
        except FileNotFoundError:
            logo = Image.open('BondMarket/Icons/BondMarket_Logo_white.png')
    logo = logo.resize((256, 50))
    logo = ImageTk.PhotoImage(logo)
    win.config(menu=menubar)
    tk.Label(win, text='Python 3.10.1 %s Version %s' % (code_copyright, code_version), font=tkinter.font.Font(family='Segoe UI', size=8), fg=app.settings.appearance.fg_color, bg=app.settings.appearance.bg_color, width=5000).pack(side='bottom', fill='x')
    button_frame.pack(side='bottom', anchor='se', fill='x')
    notebook.pack(side='right', padx=2, pady=2, fill='y')
    tk.Label(win, image=logo, bg=app.settings.appearance.bg_color).pack(side='top', anchor='sw', padx=5, pady=5)
    entry_frame.pack(side='bottom', anchor='w', padx=2, pady=2, fill='both')
    data_frame.pack(side='top', anchor='nw', padx=2, pady=2, fill='both')
    win.mainloop()
/**
* Translates internal query into PreparedStatement.
*/
public PreparedStatement createStatement() throws Exception {
long t1 = System.currentTimeMillis();
String sqlStr = createSqlString();
QueryLogger.logQuery(sqlStr, attributes, values, System.currentTimeMillis() - t1);
PreparedStatement stmt = connection.prepareStatement(sqlStr);
initStatement(stmt);
return stmt;
}
package com.gibarsin;
import com.gibarsin.nonnull.List;
public final class Main {
public static void main(final String[] args) {
final List<Integer> integers = new List<>();
integers.add(0);
integers.add(1);
integers.add(2);
integers.beConsumedBy(System.out::print);
System.out.println();
integers.deleteByIndex(1);
integers.beConsumedBy(System.out::print);
System.out.println();
integers.add(1);
integers.beConsumedBy(System.out::print);
System.out.println();
System.out.println(integers.exists(1));
System.out.println(integers.exists(3));
}
}
/**
 * Use the vectorized reader provided by Hive to read ORC files. We copy one column completely at a time,
 * instead of one row at a time.
 */
public class HiveORCVectorizedReader extends HiveAbstractReader {
/**
* For transactional orc files, the row data is stored in the struct vector at position 5
*/
static final int TRANS_ROW_COLUMN_INDEX = 5;
private org.apache.hadoop.hive.ql.io.orc.RecordReader hiveOrcReader;
private ORCCopier[] copiers;
private DremioORCRecordUtils.DefaultDataReader dataReader;
/**
 * The Hive vectorized ORC reader reads into this batch. It is a heap-based structure and is reused until the reader
 * exhausts the records. Most of the heap structures remain a constant size on the heap, but the variable-width
 * structures may get reallocated if there is not enough space.
 */
private VectorizedRowBatch hiveBatch;
// non-zero value indicates partially read batch in previous iteration.
private int offset;
public HiveORCVectorizedReader(final HiveTableXattr tableAttr, final SplitAndPartitionInfo split,
final List<SchemaPath> projectedColumns, final OperatorContext context, final JobConf jobConf,
final AbstractSerDe tableSerDe, final StructObjectInspector tableOI, final AbstractSerDe partitionSerDe,
final StructObjectInspector partitionOI, final ScanFilter filter, final Collection<List<String>> referencedTables,
final UserGroupInformation readerUgi) {
super(tableAttr, split, projectedColumns, context, jobConf, tableSerDe, tableOI, partitionSerDe, partitionOI, filter,
referencedTables, readerUgi);
}
private int[] getOrdinalIdsOfSelectedColumns(List<OrcProto.Type> types, List<Integer> selectedColumns, boolean isOriginal) {
int rootColumn = isOriginal ? 0 : TRANS_ROW_COLUMN_INDEX + 1;
int[] ids = new int[types.size()];
OrcProto.Type root = types.get(rootColumn);
// iterating over only direct children
for(int i = 0; i < root.getSubtypesCount(); ++i) {
if (selectedColumns.contains(i)) {
// find the position of this column in the types list
ids[i] = root.getSubtypes(i);
}
}
return ids;
}
class SearchResult {
public int index;
public ObjectInspector oI;
}
/*
PreOrder tree traversal
position.index will contain preorder tree traversal index starting at root=0
*/
private static boolean searchAllFields(final ObjectInspector rootOI,
final String name,
final int[] childCounts,
SearchResult position
) {
Category category = rootOI.getCategory();
if (category == Category.STRUCT) {
position.index++; // first child is immediately next to parent
StructObjectInspector sOi = (StructObjectInspector) rootOI;
for (StructField sf : sOi.getAllStructFieldRefs()) {
// We depend on the fact that caller takes care of calling current method
// once for each segment in the selected column path. So, we should always get
// searched field as immediate child
if (sf.getFieldName().equalsIgnoreCase(name)) {
position.oI = sf.getFieldObjectInspector();
return true;
} else {
if (position.index >= childCounts.length) {
return false;
}
position.index += childCounts[position.index];
}
}
} else if (category == Category.MAP) {
position.index++; // first child is immediately next to parent
if (name.equalsIgnoreCase(HiveUtilities.MAP_KEY_FIELD_NAME)) {
ObjectInspector kOi = ((MapObjectInspector) rootOI).getMapKeyObjectInspector();
position.oI = kOi;
return true;
}
if (position.index >= childCounts.length) {
return false;
}
position.index += childCounts[position.index];
if (name.equalsIgnoreCase(HiveUtilities.MAP_VALUE_FIELD_NAME)) {
ObjectInspector vOi = ((MapObjectInspector) rootOI).getMapValueObjectInspector();
position.oI = vOi;
return true;
}
}
return false;
}
// Takes SchemaPath and sets included bits of only fields in selected schema path
// and all children of last segment
// Example: if table schema is <col1: int, col2:struct<f1:int, f2:string>, col3: string>
// then childCounts will be [6, 1, 3, 1, 1, 1]
// calling this method for col2.f2 will set include[2] for the struct and include[4] for field f2
private void getIncludedColumnsFromTableSchema(ObjectInspector rootOI, int rootColumn, SchemaPath selectedField, int[] childCounts, boolean[] include) {
SearchResult searchResult = new SearchResult();
searchResult.index = rootColumn;
searchResult.oI = null;
List<String> nameSegments = selectedField.getNameSegments();
ListIterator<String> listIterator = nameSegments.listIterator();
while (listIterator.hasNext()) {
String name = listIterator.next();
boolean found = searchAllFields(rootOI, name, childCounts, searchResult);
if (found) {
rootColumn = searchResult.index;
rootOI = searchResult.oI;
if (rootColumn < include.length) {
if (listIterator.hasNext()) {
include[rootColumn] = true;
} else {
int childCount = childCounts[rootColumn];
for (int child = 0; child < childCount; ++child) {
include[rootColumn + child] = true;
}
}
}
} else {
break;
}
}
}
private void getIncludedColumnsFromTableSchema(ObjectInspector oi, int rootColumn, int[] childCounts, boolean[] include) {
if (rootColumn < include.length) {
include[rootColumn] = true;
}
Collection<SchemaPath> selectedColumns = getColumns();
for (SchemaPath selectedField: selectedColumns) {
getIncludedColumnsFromTableSchema(oi, rootColumn, selectedField, childCounts, include);
}
}
/*
For each root, populate total number of nodes in the tree starting from it
*/
private int getChildCountsFromTableSchema(ObjectInspector rootOI, int position, int[] counts) {
if (position >= counts.length) {
return 0;
}
Category category = rootOI.getCategory();
switch (category) {
case PRIMITIVE:
counts[position] = 1;
return counts[position];
case LIST:
// total count is children count and 1 extra for itself
counts[position] = getChildCountsFromTableSchema(((ListObjectInspector)rootOI).getListElementObjectInspector(),
position + 1, counts) + 1;
return counts[position];
case STRUCT: {
// total count is children count and 1 extra for itself
int totalCount = 1;
int childPosition = position + 1;
StructObjectInspector sOi = (StructObjectInspector) rootOI;
for (StructField sf : sOi.getAllStructFieldRefs()) {
int childCount = getChildCountsFromTableSchema(sf.getFieldObjectInspector(),
childPosition, counts);
childPosition += childCount;
totalCount += childCount;
}
counts[position] = totalCount;
return counts[position];
}
case MAP: {
// total count is children count and 1 extra for itself
int totalCount = 1;
int childPosition = position + 1;
ObjectInspector kOi = ((MapObjectInspector) rootOI).getMapKeyObjectInspector();
int childCount = getChildCountsFromTableSchema(kOi, childPosition, counts);
childPosition += childCount;
totalCount += childCount;
ObjectInspector vOi = ((MapObjectInspector) rootOI).getMapValueObjectInspector();
childCount = getChildCountsFromTableSchema(vOi, childPosition, counts);
totalCount += childCount;
counts[position] = totalCount;
return counts[position];
}
case UNION: {
// total count is children count and 1 extra for itself
int totalCount = 1;
int childPosition = position + 1;
for (ObjectInspector fOi : ((UnionObjectInspector) rootOI).getObjectInspectors()) {
int childCount = getChildCountsFromTableSchema(fOi, childPosition, counts);
childPosition += childCount;
totalCount += childCount;
}
counts[position] = totalCount;
return counts[position];
}
default:
throw UserException.unsupportedError()
.message("Vectorized ORC reader is not supported for datatype: %s", category)
.build(logger);
}
}
@Override
protected void internalInit(InputSplit inputSplit, JobConf jobConf, ValueVector[] vectors) throws IOException {
final OrcSplit fSplit = (OrcSplit)inputSplit;
final Path path = fSplit.getPath();
final OrcFile.ReaderOptions opts = OrcFile.readerOptions(jobConf);
// TODO: DX-16001 make enabling async configurable.
final FileSystem fs = new HadoopFileSystemWrapper(jobConf, path.getFileSystem(jobConf), this.context.getStats());
opts.filesystem(fs);
final Reader hiveReader = OrcFile.createReader(path, opts);
final List<OrcProto.Type> types = hiveReader.getTypes();
final Reader.Options options = new Reader.Options();
long offset = fSplit.getStart();
long length = fSplit.getLength();
options.schema(fSplit.isOriginal() ? hiveReader.getSchema() : hiveReader.getSchema().getChildren().get(TRANS_ROW_COLUMN_INDEX));
options.range(offset, length);
boolean[] include = new boolean[types.size()];
int[] childCounts = new int[types.size()];
getChildCountsFromTableSchema(finalOI, fSplit.isOriginal() ? 0 : TRANS_ROW_COLUMN_INDEX + 1, childCounts);
getIncludedColumnsFromTableSchema(finalOI, fSplit.isOriginal() ? 0 : TRANS_ROW_COLUMN_INDEX + 1, childCounts, include);
include[0] = true; // always include root. reader always includes, but setting it explicitly here.
Boolean zeroCopy = OrcConf.USE_ZEROCOPY.getBoolean(jobConf);
Boolean useDirectMemory = context.getOptions().getOption(Hive3PluginOptions.HIVE_ORC_READER_USE_DIRECT_MEMORY);
options.include(fSplit.isOriginal() ? include : Arrays.copyOfRange(include, TRANS_ROW_COLUMN_INDEX + 1, include.length));
dataReader = DremioORCRecordUtils.createDefaultDataReader(context.getAllocator(), DataReaderProperties.builder()
.withBufferSize(hiveReader.getCompressionSize())
.withCompression(hiveReader.getCompressionKind())
.withFileSystem(fs)
.withPath(path)
.withTypeCount(types.size())
.withZeroCopy(zeroCopy)
.build(), useDirectMemory);
options.dataReader(dataReader);
String[] selectedColNames = getColumns().stream().map(x -> x.getAsUnescapedPath().toLowerCase()).toArray(String[]::new);
// there is an extra level of nesting in the transactional tables
if (!fSplit.isOriginal()) {
selectedColNames = ArrayUtils.addAll(new String[]{"row"}, selectedColNames);
}
if (filter != null) {
final HiveProxyingOrcScanFilter orcScanFilter = (HiveProxyingOrcScanFilter) filter;
final SearchArgument sarg = HiveUtilities.decodeSearchArgumentFromBase64(orcScanFilter.getProxiedOrcScanFilter().getKryoBase64EncodedFilter());
options.searchArgument(sarg, OrcInputFormat.getSargColumnNames(selectedColNames, types, options.getInclude(), fSplit.isOriginal()));
}
hiveOrcReader = hiveReader.rowsOptions(options);
StructObjectInspector orcFileRootOI = (StructObjectInspector) hiveReader.getObjectInspector();
if (!fSplit.isOriginal()) {
orcFileRootOI = (StructObjectInspector)orcFileRootOI.getAllStructFieldRefs().get(TRANS_ROW_COLUMN_INDEX).getFieldObjectInspector();
}
hiveBatch = createVectorizedRowBatch(orcFileRootOI, fSplit.isOriginal());
final List<Integer> projectedColOrdinals = ColumnProjectionUtils.getReadColumnIDs(jobConf);
final int[] ordinalIdsFromOrcFile = getOrdinalIdsOfSelectedColumns(types, projectedColOrdinals, fSplit.isOriginal());
HiveORCCopiers.HiveColumnVectorData columnVectorData = new HiveORCCopiers.HiveColumnVectorData(include, childCounts);
copiers = HiveORCCopiers.createCopiers(columnVectorData,
projectedColOrdinals, ordinalIdsFromOrcFile,
vectors, hiveBatch, fSplit.isOriginal(), this.operatorContextOptions);
// Store the number of vectorized columns for stats/to find whether vectorized ORC reader is used or not
context.getStats().setLongStat(Metric.NUM_VECTORIZED_COLUMNS, vectors.length);
}
@Override
protected int populateData() {
try {
final int numRowsPerBatch = (int) this.numRowsPerBatch;
int outputIdx = 0;
// Consume the left over records from previous iteration
if (offset > 0 && offset < hiveBatch.size) {
int toRead = Math.min(hiveBatch.size - offset, numRowsPerBatch - outputIdx);
copy(offset, toRead, outputIdx);
outputIdx += toRead;
offset += toRead;
}
while (outputIdx < numRowsPerBatch && hiveOrcReader.nextBatch(hiveBatch)) {
offset = 0;
int toRead = Math.min(hiveBatch.size, numRowsPerBatch - outputIdx);
copy(offset, toRead, outputIdx);
outputIdx += toRead;
offset = toRead;
}
return outputIdx;
} catch (Throwable t) {
throw createExceptionWithContext("Failed to read data from ORC file", t);
}
}
private void copy(final int inputIdx, final int count, final int outputIdx) {
for (ORCCopier copier : copiers) {
copier.copy(inputIdx, count, outputIdx);
}
}
private boolean isSupportedType(Category category) {
return (category == Category.PRIMITIVE ||
category == Category.LIST ||
category == Category.STRUCT ||
category == Category.MAP ||
category == Category.UNION);
}
private List<ColumnVector> getVectors(StructObjectInspector rowOI) {
return rowOI.getAllStructFieldRefs()
.stream()
.map((Function<StructField, ColumnVector>) structField -> {
Category category = structField.getFieldObjectInspector().getCategory();
if (!isSupportedType(category)) {
throw UserException.unsupportedError()
.message("Vectorized ORC reader is not supported for datatype: %s", category)
.build(logger);
}
return getColumnVector(structField.getFieldObjectInspector());
})
.collect(Collectors.toList());
}
private ColumnVector getColumnVector(ObjectInspector oi) {
Category category = oi.getCategory();
switch (category) {
case PRIMITIVE:
return getPrimitiveColumnVector((PrimitiveObjectInspector)oi);
case LIST:
return getListColumnVector((ListObjectInspector)oi);
case STRUCT:
return getStructColumnVector((StructObjectInspector)oi);
case MAP:
return getMapColumnVector((MapObjectInspector)oi);
case UNION:
return getUnionColumnVector((UnionObjectInspector)oi);
default:
throw UserException.unsupportedError()
.message("Vectorized ORC reader is not supported for datatype: %s", category)
.build(logger);
}
}
private ColumnVector getUnionColumnVector(UnionObjectInspector uoi) {
ArrayList<ColumnVector> vectors = new ArrayList<>();
List<? extends ObjectInspector> members = uoi.getObjectInspectors();
for (ObjectInspector unionField: members) {
vectors.add(getColumnVector(unionField));
}
ColumnVector[] columnVectors = vectors.toArray(new ColumnVector[0]);
return new UnionColumnVector(VectorizedRowBatch.DEFAULT_SIZE, columnVectors);
}
private ColumnVector getMapColumnVector(MapObjectInspector moi) {
ColumnVector keys = getColumnVector(moi.getMapKeyObjectInspector());
ColumnVector values = getColumnVector(moi.getMapValueObjectInspector());
return new MapColumnVector(VectorizedRowBatch.DEFAULT_SIZE, keys, values);
}
private ColumnVector getStructColumnVector(StructObjectInspector soi) {
ArrayList<ColumnVector> vectors = new ArrayList<>();
List<? extends StructField> members = soi.getAllStructFieldRefs();
for (StructField structField: members) {
vectors.add(getColumnVector(structField.getFieldObjectInspector()));
}
ColumnVector[] columnVectors = vectors.toArray(new ColumnVector[0]);
return new StructColumnVector(VectorizedRowBatch.DEFAULT_SIZE, columnVectors);
}
private ColumnVector getListColumnVector(ListObjectInspector loi) {
ColumnVector lecv = getColumnVector(loi.getListElementObjectInspector());
return new ListColumnVector(VectorizedRowBatch.DEFAULT_SIZE, lecv);
}
private ColumnVector getPrimitiveColumnVector(PrimitiveObjectInspector poi) {
switch (poi.getPrimitiveCategory()) {
case BOOLEAN:
case BYTE:
case SHORT:
case INT:
case LONG:
case DATE:
return new LongColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
case TIMESTAMP:
return new TimestampColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
case FLOAT:
case DOUBLE:
return new DoubleColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
case BINARY:
case STRING:
case CHAR:
case VARCHAR:
return new BytesColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
case DECIMAL:
DecimalTypeInfo tInfo = (DecimalTypeInfo) poi.getTypeInfo();
return new DecimalColumnVector(VectorizedRowBatch.DEFAULT_SIZE,
tInfo.precision(), tInfo.scale()
);
default:
throw UserException.unsupportedError()
.message("Vectorized ORC reader is not supported for datatype: %s", poi.getPrimitiveCategory())
.build(logger);
}
}
/**
 * Helper method that creates a {@link VectorizedRowBatch}. For each selected column an input vector is created in the
 * batch. For unselected columns the vector entry is going to be null. The order of input vectors in the batch should
 * match the order of the columns in the ORC file.
 *
 * @param rowOI used to find the ordinal of the selected column.
 * @return the assembled batch.
 */
private VectorizedRowBatch createVectorizedRowBatch(StructObjectInspector rowOI, boolean isOriginal) {
final List<? extends StructField> fieldRefs = rowOI.getAllStructFieldRefs();
final List<ColumnVector> vectors = getVectors(rowOI);
final VectorizedRowBatch result = new VectorizedRowBatch(fieldRefs.size());
ColumnVector[] vectorArray = vectors.toArray(new ColumnVector[0]);
if (!isOriginal) {
vectorArray = createTransactionalVectors(vectorArray);
}
result.cols = vectorArray;
result.numCols = fieldRefs.size();
result.reset();
return result;
}
private ColumnVector[] createTransactionalVectors(ColumnVector[] dataVectors) {
ColumnVector[] transVectors = new ColumnVector[6];
transVectors[0] = new LongColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
transVectors[1] = new LongColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
transVectors[2] = new LongColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
transVectors[3] = new LongColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
transVectors[4] = new LongColumnVector(VectorizedRowBatch.DEFAULT_SIZE);
transVectors[5] = new StructColumnVector(dataVectors.length, dataVectors);
return transVectors;
}
@Override
public void close() throws IOException {
if (hiveOrcReader != null) {
hiveOrcReader.close();
hiveOrcReader = null;
}
if (dataReader != null) {
if (dataReader.isRemoteRead()) {
context.getStats().addLongStat(ScanOperator.Metric.NUM_REMOTE_READERS, 1);
} else {
context.getStats().addLongStat(ScanOperator.Metric.NUM_REMOTE_READERS, 0);
}
dataReader = null;
}
super.close();
}
}
Gaps in Material Qualification Requirement and Acceptance for Severe to Extreme Sour-Sweet Corrosive HPHT Environments Impacting Completion Design
Severe to extreme sour-corrosive environment-assisted cracking (EAC) phenomena are complex. Mandatory test qualification requirements and acceptance criteria for the fracture toughness of CRAs are non-existent in the relevant API and NACE standards. This paper, perhaps an industry first, attempts to highlight some of these gaps and how they translate into material strength uncertainties, thereby impacting tubing design and risk assessment. The materials in this context are the high-strength group 1 to 4 corrosion-resistant alloys of API 5CRA.
Fracture toughness, or critical stress intensity factor, is a measure of resistance to failure due to crack propagation - a key parameter for HPHT tubing material selection and design. Fracture toughness can be influenced by several factors such as microstructure, strength, hardness, heat treatment, and anisotropy. Low temperature is generally considered the worst case; nevertheless, at higher temperatures, well-environment-driven embrittlement can have a serious impact on the fracture toughness value. With so many influencing factors, its characterization is important to define the burst envelope of the tubing when exposed to the severe to extreme sour-sweet corrosive environment typical of HPHT wells.
A unique approach is followed to determine the brittle-burst tri-axial envelope of the selected tubing based on the minimum fracture toughness value of the CRA material, referred to as KIMAT for SSC (or EAC), as prescribed by the mill. Proportional radial scaling is proposed to generate a scaled-down von Mises brittle-burst envelope. The tubing loads and safety factors are analyzed against the shrunken envelope to visualize the risks of tubular failure under a sour-sweet corrosive environment. The analysis includes varying crack depths of 5% and 3%. In addition, the minimum KIMAT for SSC (or EAC) value required to achieve the full-scale VME is investigated to determine specific material property requirements.
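The proportional radial scaling described above can be sketched numerically. This is a minimal illustration only: it assumes the derating factor is simply the ratio of the mill-prescribed KIMAT to a toughness value taken as sufficient for the full-scale envelope (the paper's actual derivation of the factor may differ), and all numbers are hypothetical.

```python
def scale_vme_envelope(envelope_points, k_mat, k_full_vme):
    """Radially scale a von Mises (VME) burst envelope toward the origin.

    envelope_points: list of (axial load, internal pressure) points on the
    full-scale VME envelope. k_mat: mill-prescribed minimum fracture
    toughness (KIMAT for SSC/EAC). k_full_vme: toughness assumed sufficient
    for the full-scale envelope. Both toughness values are hypothetical.
    """
    factor = min(1.0, k_mat / k_full_vme)  # never inflate the envelope
    # Scaling every point by the same factor shrinks the envelope
    # proportionally along every radial direction.
    return [(x * factor, y * factor) for x, y in envelope_points]

# Hypothetical envelope points and toughness values, for illustration only
full_envelope = [(0.0, 100.0), (50.0, 80.0), (80.0, 0.0)]
shrunk = scale_vme_envelope(full_envelope, k_mat=30.0, k_full_vme=40.0)
# factor = 0.75, so e.g. (50.0, 80.0) -> (37.5, 60.0)
```

Tubing load cases can then be checked against `shrunk` rather than `full_envelope` to visualize the reduced safety factors under the sour-sweet corrosive environment.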
NACE TM0177 Method D covers measurement of fracture toughness KIMAT for sour service at ambient temperature only and does not address the context of EAC exposure at ambient or elevated conditions, i.e., KIMAT for EAC. This implies that a methodology for evaluating EAC risk is not yet available. Guidance on the potential for corrosion to cause cracking of CRAs is given in Table B.1 of ISO 15156-3, with primary and secondary failure mechanisms. However, a quantitative test to cover the risk of cracking of materials by specifying a minimum required KIMAT for EAC for each group type in 5CRA is non-existent. Even minimum KIMAT requirements for sour service with SSC as the primary failure mechanism, e.g., for group 1 CRAs, do not currently exist. Consequently, minimum KIMAT for EAC requirements are considered far-fetched. Additionally, the mill-prescribed KIMAT for SSC lacks a basis due to gaps in the minimum fracture toughness requirement stipulations for the group 1 to 4 CRA materials listed in API 5CRA. This paper therefore provides insights into the risk and potential of tubing failure that can lead to serious integrity issues in an HPHT well. A joint industry program or a joint API/NACE task group is proposed as a logical next step.
As interesting as the reading for my Women’s Studies class is, lately I’ve become more of an observer and less of an active participant. Sometimes my morning coffee hasn’t really sunk in yet, sometimes it’s because I fell asleep before finishing the reading, and sometimes it’s because I literally cannot get a damn word in edgewise.
And this is where you come in, boys. The few of you who feel not just the desire, but the privilege to speak all. the. time. No comment is too innocuous to sit on for a few minutes. No idea is too dim, no thought too dull, no analysis too self-evident to even consider not bestowing upon the rest of us. If it wasn’t enraging it could almost be endearing, the fact that you don’t even realize how you’re setting yourselves up as prime examples of the flaws in our system that feminism seeks to address.
Put simply, I’d put good money on the fact that not one of you boys has ever had the stomach twisting, pulse racing thought that maybe what you’re saying isn’t worth sharing. None of you have been told to shut up because what you’re saying is boring or not intelligent, none of you have ever stopped mid-thought to hastily explain to your audience, “never mind, I’m not even really sure”, or “this is probably stupid, but…” or even, “I’m probably wrong about this, but…”.
This culture of self-doubt, of sitting down and shutting up, has not been taught to you year after year, in classrooms and on playgrounds, in the workplace and in bars. Each class you monopolize precious minutes with your groundbreaking, fascinating inner monologue as the girls in the class raise their hands timidly, arms shaking in the air because they’ve been holding them up for like five goddamn minutes, waiting their turn, quietly. They expel their thoughts quickly and appropriately, like a time-sensitive faucet in a public bathroom. The boys take their time, a leisurely hot shower in the morning, or after sex, unaware and uncaring that time is flying by outside of their steamy little ecosystem.
It must be nice to take your time, to not grow anxious as you feel dozens of pairs of eyes bore into you as more words fall out of your mouth, and instead to feel empowered and encouraged by the attention. I wonder what it must feel like to not assume you’re wasting anyone’s time, but instead doing them a favor, a service, by sharing your knowledge.
But you do not wonder what that is like, and to that end you are not my ally. You do not question the significance of your presence in a classroom dedicated to understanding the plight of women in the history of the world. If you cannot recognize the irony of your contribution to a discussion about the ways in which men have systematically devalued women by, say, creating a society that does not appreciate or value anything a woman thinks says screams whispers mutters or dreams then you are not my ally. You are not my ally. You are not my ally. |
Although Valve says there's still no evidence that the Steam hack last November compromised passwords or credit card data, it looks like the intruders may have more information than we previously believed. Director Gabe Newell has admitted that it's "probable" the hacker or hackers obtained a backup copy of a file with information about Steam transactions between 2004 and 2008, including user names, email addresses, encrypted billing addresses, and encrypted credit card information.
Newell also said that Valve is in the process of working with law enforcement and "outside security experts" to determine the full extent of the intrusion. At this point, Valve believes the encrypted data has not been compromised, and passwords were not included in the backup file, so there's no immediate threat to Steam members. However, Newell notes that keeping an eye on your credit card and using Steam Guard for security is still a good idea. As for catching whoever's responsible for the hack, we're guessing all Valve will have to do is promise to hire them. |
// Reset sets sr's underlying io.Reader to r, and resets any reading/decoding state.
func (sr *NDJSONStreamReader) Reset(r io.Reader) {
sr.bufioReader.Reset(r)
sr.lineReader.Reset(sr.bufioReader)
sr.isEOF = false
sr.latestLine = nil
sr.latestLineReader.Reset(nil)
} |
History
Add-ons, customisation and community involvement
See also: Category: Microsoft Flight Simulator add-ons

The long history and consistent popularity of Flight Simulator has encouraged a very large body of add-on packages to be developed as both commercial and volunteer ventures. A formal software development kit and other tools for the simulator exist to further facilitate third-party efforts, and some third parties have also learned to 'tweak' the simulator in various ways by trial and error. The number of add-ons, tweaks, and modifications FS can accommodate depends solely on the user's hardware setup. The number is not limited by the simulator, and when multiple computers are linked together with multiple monitors, third-party software, and controls, flight-sim enthusiasts can build their own realistic home cockpits.

Aircraft

A PMDG Beech 1900D of "American Flight Airways" in AFA Express colors

Individual attributes of Flight Simulator aircraft that can be customized include: cockpit layout, cockpit image, aircraft model, aircraft model textures, aircraft flight characteristics, scenery models, scenery layouts and scenery textures, often with simple-to-use programs, or only a text editor such as 'Notepad'. Dedicated 'flightsimmers' have taken advantage of Flight Simulator's vast add-on capabilities, having successfully linked Flight Simulator to homebuilt hardware, some of which approaches the complexity of commercial full-motion flight simulators. The simulator's aircraft are made up of five parts:

- The model, a 3D CAD-style model of the aircraft's exterior and virtual cockpit, if applicable. Models consist of two distinct sections - the main chassis or "core", and accessories or dynamic parts, such as the landing gear or ailerons.
- The textures, bitmap images which the game layers onto the model. These can be easily edited (known as repainting), so that a model can adopt any paint scheme imaginable, fictional or real.
- The sounds, literally what the aircraft sounds like. This is determined by defining which WAV files the aircraft uses as its sound-set.
- The panel, a representation of the aircraft's cockpit. This includes one or more bitmap images of the panel, instrument gauge files, and sometimes its own sounds.
- The FDE, or Flight Dynamics Engine. This consists of the airfile (a *.air file), which contains hundreds of parameters that define the aircraft's flight characteristics, and the aircraft.cfg file, which contains more and easier-to-edit parameters.

Most versions of Microsoft Flight Simulator include some of the world's most popular aircraft from different categories, such as the Mooney Bravo and Beechcraft Baron 58, which fall into the general aviation category, the Airbus A321 and Boeing 737, which fall into the civil jets category, the Robinson R22, which falls into the helicopter category, the Air Scheffel 738, which falls into the general aviation category again, and many other planes commonly used around the world. Users are not limited to the default aircraft: add-on planes can be downloaded from many sources for free or purchased, and then installed into Microsoft Flight Simulator. The Beechcraft 1900D pictured above is an add-on aircraft. Similarly, add-on repaints can be added to default aircraft; these repaints are usually downloaded for free.

AI traffic

A growing add-on category for the series is AI (Artificial Intelligence) Traffic: the simulation of other vehicles in the FS landscape. This traffic plays an important role in the simulator, as it is possible to crash into traffic (this can be disabled), thus ending your session, and to interact with the traffic via the radio and ATC. This feature is active even with third-party traffic. Microsoft introduced AI traffic in MSFS 2002 with several airliners and private aircraft. This has since been supplemented with many files created by third-party developers.
Typically, third-party aircraft models have multiple levels of detail, which allow the AI traffic to be easier on frame rates while still being detailed during close looks. There are several prominent freeware developers. Some third-party AI traffic can even be configured for "real time" departures.

Scenery

FS2004 in the UK Lake District with VFR (Visual Flight Rules) photo scenery and additional terrain components

Scenery add-ons usually involve replacements for existing airports, with enhanced and more accurate detail, or large expanses of highly detailed ground scenery for specific regions of the world. Some types of scenery add-on replace or add structures to the simulator. Both freeware and payware scenery add-ons are very widely available. Airport enhancements, for example, range from simple add-ons that update runways or taxiways to very elaborate packages that reproduce every lamp, pavement marking, and structure at an airport with near-total accuracy, including animated effects such as baggage cars or marshalling agents. Wide-area scenery enhancements may use detailed satellite photos and 3-D structures to closely reproduce real-world regions, particularly those including large cities, landmarks, or spectacular natural wonders.

Flight networks

Virtual flight networks such as IVAO, VATSIM and Pilot Edge, as well as Virtual Skies, use special, small add-on modules for Flight Simulator to enable connection to their proprietary networks in multiplayer mode, and to allow for voice and text communication with other virtual pilots and controllers over the network. These networks allow players to enjoy and enhance realism in their game. These networks are for ATC (air traffic control).

Miscellaneous

Some utilities, such as FSUIPC, merely provide useful tweaks for the simulator to overcome design limitations or bugs, or to allow more extensive interfacing with other third-party add-ons. Sometimes certain add-ons require other utility add-ons in order to work correctly with the simulator. Other add-ons provide navigation tools, simulation of passengers, cameras that can view aircraft or scenery from any angle, more realistic instrument panels and gauges, and so on. Some software add-ons provide operability with specific hardware, such as game controllers and optical motion sensors. FSDeveloper.com is one website that hosts a forum-style knowledge base aimed at the development of add-on items, tools, and software.

Availability

A number of websites are dedicated to providing users with add-on files (such as airplanes from actual airlines, airport utility cars, actual buildings located in specific cities, textures, and city files). The wide availability over the internet of freeware add-on files for the simulation package has encouraged the development of a large and diverse virtual community, linked up by design-group and enthusiast message boards, online multiplayer flying, and 'virtual airlines'. The internet has also facilitated the distribution of 'payware' add-ons for the simulator, with the option of downloading the files, which reduces distribution costs.
Reception
PC Magazine in January 1983 called Flight Simulator "extraordinarily realistic ... a classic program, unique in the market". It praised the graphics and detailed scenery, and concluded "I think it's going to sell its share of IBM PCs, and will certainly sell some color/graphics adapters".[23] BYTE in December 1983 wrote that "this amazing package does an incredible job of making you think you're actually flying a small plane". While it noted the inability to use an RGB monitor or a joystick, the magazine concluded that "for $49.95 you can't have everything".[24] A pilot wrote in the magazine in March 1984 that he found the simulated Cessna 182 to be "surprisingly realistic". While criticizing the requirement of using the keyboard to fly, he concluded "Microsoft Flight Simulator is a tour de force of the programmer's art ... It can be an excellent introduction to how an aircraft actually operates for a budding or student pilot and can even help instrument pilots or those going for an instrument rating sharpen their skills".[25] Another pilot similarly praised Flight Simulator in PC Magazine that year, giving it 18 out of 18 points. He reported that its realism compared well to two $3 million hardware flight simulators he had recently flown, and that he could use real approach plates to land at and navigate airports Flight Simulator's manual did not document.[26] Compute! warned "if you don't know much about flying, this program may overwhelm you. It's not a simple simulation. It's a challenging program even for experienced pilots". The magazine concluded that Flight Simulator "is interesting, challenging, graphically superb, diverse, rewarding, and just plain fun ... sheer delight".[27] Microsoft Flight Simulator, Version 2.0 was reviewed in 1989 in Dragon #142 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column.
The reviewers gave the game 5 out of 5 stars.[28] Computer Gaming World stated in 1994 that Flight Simulator 5 "is closer to simulating real flight than ever before".[29] "Microsoft Flight Simulator X" was reviewed in 2006 by GameSpot. The reviewer gave the game an 8.4 out of 10 and commented on how it was realistic enough to be used for real-life flight training.[30]
Awards
The success of the Microsoft Flight Simulator series has led to Guinness World Records awarding the series seven world records in the Guinness World Records: Gamer's Edition 2008. These records include "Longest Running Flight Sim Series", "Most Successful Flight Simulator Series", and "Most Expensive Home Flight Simulator Cockpit", which was built by Australian trucking tycoon Matthew Sheil, and cost around $200,000 to build.[31]
See also
THYROID‐STIMULATING HORMONE AND GROWTH HORMONE RELEASE ALTERATIONS INDUCED BY MOSQUITO LARVAE PROTEINS ON PITUITARY CELLS
Mosquito larvae crude extract has been shown to modulate cell proliferation of different mouse epithelial as well as human mononuclear cell populations in vivo and in vitro. A soluble fraction of the extract, with a molecular weight ranging from 12 to 80 kD, also showed an inhibitory effect on the proliferation of mouse hepatocytes. This effect disappeared after heating the extract at 90°C for 60 min, suggesting that some proteinaceous molecule is involved. We report the effect of dialysed extract (MW >12 kD) on the concentration of both thyroid‐stimulating hormone (TSH) and growth hormone (GH) in an incubation medium of pituitary cells from normal and oestrogenised rats. The response of both hormones was time‐ and dose‐dependent: TSH levels increased, while GH concentrations were lower in treated than in control pituitary cells. The time elapsed before differences appeared suggests the presence in the mosquito extract of some protein that binds the hormone. The differences were not due to lethal toxic effects, since the Trypan blue viability test showed no differences between control and treated cells. Furthermore, the effect disappeared when the extract had previously been heated at 90°C for 60 min. Finally, our results suggest the presence of some proteins in the mosquito Culex pipiens L. larvae which would act as pituitary hormone regulators.
def on_action_floated(self, content):
self.set_guarded(floating=True) |
// PostProcess deletes any orphaned data that exists locally
func (l *Synchronizer) PostProcess(processing map[string]struct{}) {
nodes, err := l.GetAll()
if err != nil {
glog.Warningf("Could not access locally stored data: %s", err)
return
}
	for _, node := range nodes {
		if _, ok := processing[node.GetID()]; !ok {
			if err := l.Delete(node.GetID()); err != nil {
				glog.Warningf("Could not delete %s from locally stored data: %s", node.GetID(), err)
			}
		}
	}
} |
#!/usr/bin/python
import argparse, sys, os, random, json, zlib, base64, gzip
from shutil import rmtree
from multiprocessing import cpu_count
from tempfile import mkdtemp, gettempdir
from Bio.Format.Sam import BAMFile
from Bio.Format.Fasta import FastaData
from collections import Counter
from Bio.Errors import ErrorProfileFactory
# Create an output 'error profile' object that contains
# Quality information
# Context error information
# General error information
def main(args):
sys.stderr.write("Read reference fasta\n")
fasta = FastaData(open(args.reference_fasta).read())
sys.stderr.write("Read alignment file\n")
bf = BAMFile(args.bam_input,reference=fasta)
bf.read_index()
total_qualities = []
for j in range(0,100):
total_qualities.append([])
ef = ErrorProfileFactory()
mincontext = 0
alignments = 0
for i in range(0,args.max_alignments):
rname = random.choice(bf.index.get_names())
coord = bf.index.get_longest_target_alignment_coords_by_name(rname)
if not coord: continue
bam = bf.fetch_by_coord(coord)
qual = bam.value('qual')
do_qualities(total_qualities,qual)
if not bam.is_aligned(): continue
alignments += 1
ef.add_alignment(bam)
if i%100 == 0:
mincontext = ef.get_min_context_count('target')
if mincontext:
if mincontext >= args.min_context and alignments >= args.min_alignments: break
sys.stderr.write(str(i+1)+" lines "+str(alignments)+"/"+str(args.min_alignments)+" alignments "+str(mincontext)+"/"+str(args.min_context)+" mincontext \r")
sys.stderr.write("\n")
sys.stderr.write(str(mincontext)+" minimum contexts observed\n")
target_context = ef.get_target_context_error_report()
general_error_stats = ef.get_alignment_errors().get_stats()
general_error_report = ef.get_alignment_errors().get_report()
# convert report to table
general_all = [x.split("\t") for x in general_error_report.rstrip().split("\n")]
general_head = general_all[0]
#print [y for y in general_all[1:]]
general_data = [[y[0],y[1],int(y[2]),int(y[3])] for y in general_all[1:]]
general_error_report = {'head':general_head,'data':general_data}
quality_counts = []
for vals in total_qualities:
garr = []
grp = {}
for v in vals:
if v[0] not in grp: grp[v[0]] = {}# check ordinal
if v[1] not in grp[v[0]]: grp[v[0]][v[1]] = 0 # run length
grp[v[0]][v[1]]+=1
for ordval in sorted(grp.keys()):
for runlen in sorted(grp[ordval].keys()):
garr.append([ordval,runlen,grp[ordval][runlen]])
quality_counts.append(garr)
  #Quality counts now has 100 bins, each with an ordered array of
# [ordinal_quality, run_length, observation_count]
# Can prepare an output
output = {}
output['quality_counts'] = quality_counts
output['context_error'] = target_context
output['alignment_error'] = general_error_report
output['error_stats'] = general_error_stats
of = None
if args.output[-3:]=='.gz':
of = gzip.open(args.output,'w')
else: of = open(args.output,'w')
of.write(base64.b64encode(zlib.compress(json.dumps(output)))+"\n")
of.close()
# Temporary working directory step 3 of 3 - Cleanup
if not args.specific_tempdir:
rmtree(args.tempdir)
def do_qualities(total_qualities,qual):
qualities = []
for j in range(0,100):
qualities.append([])
# break qual on homopolymers
if not qual: return
if len(qual) <= 1: return
hp = [[qual[0]]]
for i in range(1,len(qual)):
if qual[i] ==hp[-1][0]: hp[-1]+=[qual[i]]
else: hp += [[qual[i]]]
ind = 0
for vals in hp:
frac = 100*float(ind)/float(len(qual))
qualities[int(frac)].append([ord(vals[0]),len(vals)])
ind += len(vals)
prev = []
for j in range(0,100):
if len(qualities[j]) > 0: prev = qualities[j]
else: qualities[j] = prev[:]
for j in range(0,100):
total_qualities[j]+=qualities[j]
def do_inputs():
parser=argparse.ArgumentParser(description="",formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('bam_input',help="INPUT FILE")
parser.add_argument('reference_fasta',help="Reference Fasta")
parser.add_argument('-o','--output',required=True,help="OUTPUTFILE can be gzipped")
parser.add_argument('--threads',type=int,default=cpu_count(),help="INT number of threads to run. Default is system cpu count")
parser.add_argument('--max_alignments',type=int,default=1000000,help="The absolute maximum number of alignments to try")
parser.add_argument('--min_alignments',type=int,default=1000,help="Visit at least this many alignments")
parser.add_argument('--min_context',type=int,default=10000,help="Stop after seeing this many of each context")
# Temporary working directory step 1 of 3 - Definition
group = parser.add_mutually_exclusive_group()
group.add_argument('--tempdir',default=gettempdir(),help="The temporary directory is made and destroyed here.")
group.add_argument('--specific_tempdir',help="This temporary directory will be used, but will remain after executing.")
args = parser.parse_args()
# Temporary working directory step 2 of 3 - Creation
setup_tempdir(args)
return args
def setup_tempdir(args):
if args.specific_tempdir:
if not os.path.exists(args.specific_tempdir):
os.makedirs(args.specific_tempdir.rstrip('/'))
args.tempdir = args.specific_tempdir.rstrip('/')
if not os.path.exists(args.specific_tempdir.rstrip('/')):
sys.stderr.write("ERROR: Problem creating temporary directory\n")
sys.exit()
else:
args.tempdir = mkdtemp(prefix="weirathe.",dir=args.tempdir.rstrip('/'))
if not os.path.exists(args.tempdir.rstrip('/')):
sys.stderr.write("ERROR: Problem creating temporary directory\n")
sys.exit()
if not os.path.exists(args.tempdir):
sys.stderr.write("ERROR: Problem creating temporary directory\n")
sys.exit()
return
def external_cmd(cmd,version=None):
cache_argv = sys.argv
sys.argv = cmd.split()
args = do_inputs()
main(args)
sys.argv = cache_argv
if __name__=="__main__":
#do our inputs
args = do_inputs()
  main(args)
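The profile file written above is a single line of base64-encoded, zlib-compressed JSON, optionally wrapped in a gzip container when the output name ends in `.gz`. A small sketch for reading such a profile back (the function name is my own, not part of the script):

```python
import base64
import gzip
import json
import zlib

def load_error_profile(path):
    # Mirror the writer: gzip container if the name ends in .gz, else plain file
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rb') as handle:
        payload = handle.read().rstrip(b"\n")
    # Undo base64 -> zlib -> JSON, the reverse of what main() wrote
    return json.loads(zlib.decompress(base64.b64decode(payload)))
```

The returned dict carries the keys assembled in main(): 'quality_counts', 'context_error', 'alignment_error', and 'error_stats'.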
|
package es.ubu.lsi.ubumonitor.clustering.chart;
import java.awt.Color;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import es.ubu.lsi.ubumonitor.clustering.controller.AlgorithmExecuter;
import es.ubu.lsi.ubumonitor.clustering.controller.PartitionalClusteringController;
import es.ubu.lsi.ubumonitor.clustering.controller.Connector;
import es.ubu.lsi.ubumonitor.clustering.data.ClusterWrapper;
import es.ubu.lsi.ubumonitor.clustering.data.UserData;
import es.ubu.lsi.ubumonitor.clustering.util.ExportUtil;
import es.ubu.lsi.ubumonitor.util.I18n;
import es.ubu.lsi.ubumonitor.model.EnrolledUser;
import es.ubu.lsi.ubumonitor.util.JSArray;
import es.ubu.lsi.ubumonitor.util.JSObject;
import es.ubu.lsi.ubumonitor.util.UtilMethods;
import javafx.concurrent.Worker;
import javafx.scene.web.WebEngine;
/**
* Clase que gestiona una diagrama de dispersión 2D.
*
* @author <NAME>
*
*/
public class Scatter2DChart extends ClusteringChart {
private static final Logger LOGGER = LoggerFactory.getLogger(Scatter2DChart.class);
private Connector connector;
private List<Map<UserData, double[]>> points;
/**
* Constructor.
*
* @param clusteringController controlador general
*/
public Scatter2DChart(PartitionalClusteringController clusteringController) {
super(clusteringController.getWebViewScatter());
WebEngine webEngine = getWebEngine();
connector = new Connector(clusteringController);
webEngine.getLoadWorker().stateProperty().addListener((ov, oldState, newState) -> {
if (Worker.State.SUCCEEDED != newState)
return;
netscape.javascript.JSObject window = (netscape.javascript.JSObject) webEngine.executeScript("window");
window.setMember("javaConnector", connector);
});
webEngine.load(getClass().getResource("/graphics/ClusterChart.html").toExternalForm());
}
/**
* {@inheritDoc}
*/
@Override
public void updateChart(List<ClusterWrapper> clusters) {
connector.setClusters(clusters);
LOGGER.debug("Clusters: {}", clusters);
points = AlgorithmExecuter.clustersTo(2, clusters);
Map<ClusterWrapper, Color> colors = UtilMethods.getRandomColors(clusters);
JSObject root = new JSObject();
JSArray datasets = new JSArray();
JSObject centers = new JSObject();
centers.putWithQuote("label", I18n.get("clustering.centroids"));
centers.putWithQuote("backgroundColor", "black");
JSArray centersData = new JSArray();
int total = clusters.stream().mapToInt(ClusterWrapper::size).sum();
for (int i = 0; i < points.size(); i++) {
JSObject group = new JSObject();
group.putWithQuote("label", getLegend(clusters.get(i), total));
group.put("backgroundColor", UtilMethods.colorToRGB(colors.get(clusters.get(i))));
JSArray data = new JSArray();
for (Map.Entry<UserData, double[]> userEntry : points.get(i).entrySet()) {
UserData user = userEntry.getKey();
JSObject coord = new JSObject();
double[] point = userEntry.getValue();
coord.put("x", point[0]);
coord.put("y", point.length == 2 ? point[1] : 0.0);
if (user == null) {
coord.putWithQuote("user", I18n.get("clustering.centroid"));
centersData.add(coord);
} else {
coord.putWithQuote("user", user.getEnrolledUser().getFullName());
data.add(coord);
}
}
group.put("data", data);
datasets.add(group);
}
if (!centersData.isEmpty()) {
centers.put("data", centersData);
datasets.add(centers);
}
root.put("datasets", datasets);
LOGGER.debug("2D series: {}", root);
getWebEngine().executeScript("updateChart(" + root + ")");
}
/**
* {@inheritDoc}
*/
@Override
protected void exportData(File file) throws IOException {
String[] head = new String[] { "UserId", "FullName", "Cluster", "X", "Y" };
List<List<Object>> data = new ArrayList<>();
for (Map<UserData, double[]> cluster : points) {
for (Entry<UserData, double[]> entry : cluster.entrySet()) {
UserData userData = entry.getKey();
if (userData == null) {
continue;
}
double[] point = entry.getValue();
EnrolledUser enrolledUser = userData.getEnrolledUser();
List<Object> row = new ArrayList<>();
row.add(enrolledUser.getId());
row.add(enrolledUser.getFullName());
row.add(userData.getCluster().getName());
row.add(point[0]);
row.add(point.length > 1 ? point[1] : 0.0);
data.add(row);
}
}
ExportUtil.exportCSV(file, head, data);
}
}
|
#include "opentelemetry/exporters/otlp/recordable.h"
OPENTELEMETRY_BEGIN_NAMESPACE
namespace exporter
{
namespace otlp
{
const int kAttributeValueSize = 14;
void Recordable::SetIds(trace::TraceId trace_id,
trace::SpanId span_id,
trace::SpanId parent_span_id) noexcept
{
span_.set_trace_id(reinterpret_cast<const char *>(trace_id.Id().data()), trace::TraceId::kSize);
span_.set_span_id(reinterpret_cast<const char *>(span_id.Id().data()), trace::SpanId::kSize);
span_.set_parent_span_id(reinterpret_cast<const char *>(parent_span_id.Id().data()),
trace::SpanId::kSize);
}
void PopulateAttribute(opentelemetry::proto::common::v1::KeyValue *attribute,
nostd::string_view key,
const opentelemetry::common::AttributeValue &value)
{
// Assert size of variant to ensure that this method gets updated if the variant
// definition changes
static_assert(
nostd::variant_size<opentelemetry::common::AttributeValue>::value == kAttributeValueSize,
"AttributeValue contains unknown type");
attribute->set_key(key.data(), key.size());
if (nostd::holds_alternative<bool>(value))
{
attribute->mutable_value()->set_bool_value(nostd::get<bool>(value));
}
else if (nostd::holds_alternative<int>(value))
{
attribute->mutable_value()->set_int_value(nostd::get<int>(value));
}
else if (nostd::holds_alternative<int64_t>(value))
{
attribute->mutable_value()->set_int_value(nostd::get<int64_t>(value));
}
else if (nostd::holds_alternative<unsigned int>(value))
{
attribute->mutable_value()->set_int_value(nostd::get<unsigned int>(value));
}
else if (nostd::holds_alternative<uint64_t>(value))
{
attribute->mutable_value()->set_int_value(nostd::get<uint64_t>(value));
}
else if (nostd::holds_alternative<double>(value))
{
attribute->mutable_value()->set_double_value(nostd::get<double>(value));
}
else if (nostd::holds_alternative<nostd::string_view>(value))
{
attribute->mutable_value()->set_string_value(nostd::get<nostd::string_view>(value).data(),
nostd::get<nostd::string_view>(value).size());
}
#ifdef HAVE_CSTRING_TYPE
else if (nostd::holds_alternative<const char *>(value))
{
attribute->mutable_value()->set_string_value(nostd::get<const char *>(value));
}
#endif
else if (nostd::holds_alternative<nostd::span<const bool>>(value))
{
for (const auto &val : nostd::get<nostd::span<const bool>>(value))
{
attribute->mutable_value()->mutable_array_value()->add_values()->set_bool_value(val);
}
}
else if (nostd::holds_alternative<nostd::span<const int>>(value))
{
for (const auto &val : nostd::get<nostd::span<const int>>(value))
{
attribute->mutable_value()->mutable_array_value()->add_values()->set_int_value(val);
}
}
else if (nostd::holds_alternative<nostd::span<const int64_t>>(value))
{
for (const auto &val : nostd::get<nostd::span<const int64_t>>(value))
{
attribute->mutable_value()->mutable_array_value()->add_values()->set_int_value(val);
}
}
else if (nostd::holds_alternative<nostd::span<const unsigned int>>(value))
{
for (const auto &val : nostd::get<nostd::span<const unsigned int>>(value))
{
attribute->mutable_value()->mutable_array_value()->add_values()->set_int_value(val);
}
}
else if (nostd::holds_alternative<nostd::span<const uint64_t>>(value))
{
for (const auto &val : nostd::get<nostd::span<const uint64_t>>(value))
{
attribute->mutable_value()->mutable_array_value()->add_values()->set_int_value(val);
}
}
else if (nostd::holds_alternative<nostd::span<const double>>(value))
{
for (const auto &val : nostd::get<nostd::span<const double>>(value))
{
attribute->mutable_value()->mutable_array_value()->add_values()->set_double_value(val);
}
}
else if (nostd::holds_alternative<nostd::span<const nostd::string_view>>(value))
{
for (const auto &val : nostd::get<nostd::span<const nostd::string_view>>(value))
{
attribute->mutable_value()->mutable_array_value()->add_values()->set_string_value(val.data(),
val.size());
}
}
}
void Recordable::SetAttribute(nostd::string_view key,
const opentelemetry::common::AttributeValue &value) noexcept
{
auto *attribute = span_.add_attributes();
PopulateAttribute(attribute, key, value);
}
void Recordable::AddEvent(nostd::string_view name,
core::SystemTimestamp timestamp,
const common::KeyValueIterable &attributes) noexcept
{
auto *event = span_.add_events();
event->set_name(name.data(), name.size());
event->set_time_unix_nano(timestamp.time_since_epoch().count());
attributes.ForEachKeyValue([&](nostd::string_view key, common::AttributeValue value) noexcept {
PopulateAttribute(event->add_attributes(), key, value);
return true;
});
}
void Recordable::AddLink(const opentelemetry::trace::SpanContext &span_context,
const common::KeyValueIterable &attributes) noexcept
{
auto *link = span_.add_links();
link->set_trace_id(reinterpret_cast<const char *>(span_context.trace_id().Id().data()),
trace::TraceId::kSize);
link->set_span_id(reinterpret_cast<const char *>(span_context.span_id().Id().data()),
trace::SpanId::kSize);
attributes.ForEachKeyValue([&](nostd::string_view key, common::AttributeValue value) noexcept {
PopulateAttribute(link->add_attributes(), key, value);
return true;
});
// TODO: Populate trace_state when it is supported by SpanContext
}
void Recordable::SetStatus(trace::CanonicalCode code, nostd::string_view description) noexcept
{
span_.mutable_status()->set_code(opentelemetry::proto::trace::v1::Status_StatusCode(code));
span_.mutable_status()->set_message(description.data(), description.size());
}
void Recordable::SetName(nostd::string_view name) noexcept
{
span_.set_name(name.data(), name.size());
}
void Recordable::SetSpanKind(opentelemetry::trace::SpanKind span_kind) noexcept
{
opentelemetry::proto::trace::v1::Span_SpanKind proto_span_kind =
opentelemetry::proto::trace::v1::Span_SpanKind::Span_SpanKind_SPAN_KIND_UNSPECIFIED;
switch (span_kind)
{
case opentelemetry::trace::SpanKind::kInternal:
proto_span_kind =
opentelemetry::proto::trace::v1::Span_SpanKind::Span_SpanKind_SPAN_KIND_INTERNAL;
break;
case opentelemetry::trace::SpanKind::kServer:
proto_span_kind =
opentelemetry::proto::trace::v1::Span_SpanKind::Span_SpanKind_SPAN_KIND_SERVER;
break;
case opentelemetry::trace::SpanKind::kClient:
proto_span_kind =
opentelemetry::proto::trace::v1::Span_SpanKind::Span_SpanKind_SPAN_KIND_CLIENT;
break;
case opentelemetry::trace::SpanKind::kProducer:
proto_span_kind =
opentelemetry::proto::trace::v1::Span_SpanKind::Span_SpanKind_SPAN_KIND_PRODUCER;
break;
case opentelemetry::trace::SpanKind::kConsumer:
proto_span_kind =
opentelemetry::proto::trace::v1::Span_SpanKind::Span_SpanKind_SPAN_KIND_CONSUMER;
break;
default:
// shouldn't reach here.
proto_span_kind =
opentelemetry::proto::trace::v1::Span_SpanKind::Span_SpanKind_SPAN_KIND_UNSPECIFIED;
}
span_.set_kind(proto_span_kind);
}
void Recordable::SetStartTime(opentelemetry::core::SystemTimestamp start_time) noexcept
{
span_.set_start_time_unix_nano(start_time.time_since_epoch().count());
}
void Recordable::SetDuration(std::chrono::nanoseconds duration) noexcept
{
const uint64_t unix_end_time = span_.start_time_unix_nano() + duration.count();
span_.set_end_time_unix_nano(unix_end_time);
}
} // namespace otlp
} // namespace exporter
OPENTELEMETRY_END_NAMESPACE
|
The Azure Cloud Presents a Perfect Storm for Business with Surprise Announcement of Microsoft Dynamics 365 at WPC
Didn’t attend WPC (Microsoft’s annual partner conference) in Toronto last week? Then you might have missed one of the most profound ERP- and CRM-related announcements Microsoft has made in the last decade.
After years of weaving “the cloud” into just about every press release, web page, blog post and meeting agenda, Microsoft seems to have finally figured out the techno-chemistry for bringing its enterprise business applications and office productivity services into one solution. It has consolidated its CRM and ERP clouds to share a common data model with Office 365, and the combined offering will now be sold under the Microsoft Dynamics 365 brand.
This has been a dream of the industry, but also for me personally since 2000 when Microsoft acquired Great Plains. When I led that group, we dreamt of doing things for our customers and partners that are now only possible because of widespread adoption of the cloud, proliferation of data, devices and sensors, and agile development environments. – Satya Nadella, CEO at Microsoft [Read Story by Forbes]
This is a dramatic change for Microsoft partners, but even more so for Microsoft Dynamics customers. For the last decade, Microsoft has offered at least four distinct ERP roadmaps under the Dynamics brand (SL, NAV, AX and GP) as well as its customer relationship management solution, Microsoft Dynamics CRM. Customers had to go through quite a bit of due diligence to determine the best fit for their organization and locate a partner that specialized in their business model to assist with implementation. By the end of 2016, Microsoft will be well on its way to a single brand under Microsoft Dynamics 365 and is expected to pour enormous resources and money into advancing its position across small, mid-sized and even the largest enterprise customers.
To be clear, this does not mean that all of the Dynamics ERP solutions will go away any time soon. In fact, Dynamics 365 will include Microsoft Dynamics AX and Project Madeira (which will become Dynamics Financials for Business). Dynamics GP, NAV, and SL are not part of Dynamics 365 and will remain standalone products, still available for sale and with their own roadmaps.
What is included in Microsoft Dynamics 365?
When a customer chooses Microsoft Dynamics 365, they will be subscribing to a service that includes Cortana Intelligence and Power BI, Azure IoT and Microsoft Office 365, along with deep operational and sales functionality derived from the Microsoft Dynamics suite. Dynamics AX and CRM components bring comprehensive sales, marketing, field service, customer service, project service automation and operations (finance, HR, etc.).
Better yet, partners and customers will still be able to deliver and consume 3rd party add-on solutions that cater to various industries and business needs through Microsoft AppSource. ERP, CRM and even partner solutions can reside in the Microsoft Azure cloud with the help of LCS (Lifecycle Services). SBS Group has already published multiple solutions in Microsoft AppSource, including AXIO Professional Services and AXIO Core Financials. Both enhance the use of Dynamics AX for professional services firms and other project-oriented companies.
Although changes to the makeup of Dynamics 365 as early as this year wouldn’t surprise anyone given the scope of this new offering, the licensing model announced last week is fairly simple compared to iterations past. There will be a “Business Edition” and an “Enterprise Edition”. The Business Edition will include financials, sales and marketing features, while an upgrade to the Enterprise Edition will add customer service, field service and project service automation components. The Business Edition is primarily targeted at businesses with under 250 employees, while the Enterprise Edition is targeted at those with more than 250 employees. Smaller businesses interested in Enterprise Edition functionality will need to step up and pay more.
Apps, Plans and Team Members. Within each edition, there will be additional building blocks: Apps, Plans and Team Members. An app represents a distinct piece of software, while a plan represents a grouping of apps. Light users can be licensed as “Team Members” who receive only read access to most apps and write access for simple things like time and expense entry. Companies will subscribe to various plans and apps based on the needs and responsibilities of individual employees. This role-based licensing provides the flexibility organizations need to consume the solution in the way that makes the best financial and logistical sense for them.
Pricing has not yet been published by Microsoft and is expected to be announced at launch later in 2016.
How will Dynamics 365 impact the user and partner community?
Moving from product suite purchasing to role-based subscriptions is an enormous change for Dynamics customers, but operating under a shared data model is the real game-changer.
As an example, firing off a workflow that creates a CRM lead and assigns an outbound call activity to an inside sales person is nothing new, right? Neither is creating alerts in the ERP system based on a shipment arriving or project milestones being met. These things are done all the time, but typically the interactions are confined to specific apps and the people who interact directly within them. It isn’t that these interactions aren’t possible today; they are. But the effort required to create integrations and workflows that move between multiple databases often outweighs the need, or gives way to other efforts that can be handled without reaching out to IT.
In a world where ERP, CRM, customer service, sales, marketing and project management systems share the same data model, the required effort decreases significantly. Actions and reactions can be automated through the front-end by employees on the ground much more easily. By removing these perceived and real barriers, users will be more willing to explore more complex automations. Productivity rises and ROI improves.
Agile Pricing Scenarios: Without knowing the actual pricing, we can only guess as to the ultimate financial impact. Speculation from partners at the conference was that most customers would see immediate benefit to the bottom line. However, the ability to license users based on individual roles should offer greater flexibility to outfit employees with what they need and not pay for unnecessary bells and whistles.
Business Intelligence and Analytics: Big Data just got bigger. This move will open up new possibilities for advanced analytics, predictive insights and actionable next steps. Customers will be able to take advantage of data analysis based on business activities happening in Microsoft Office as well as the more structured ERP and CRM applications. We’ll all get a much clearer view of how the business operates, with built-in insights and intelligence inside the business applications users are already working in, such as field service, sales, finance and operations.
User Experience: More important still will be the ease of use and overall improvements to the user experience. Integration between CRM and ERP systems has always been challenging, to say the least, but a shared data model will open up possibilities here that don’t require enormous development engagements. Work done in Microsoft Office, SharePoint and OneNote can easily incorporate operations and sales tasks happening in the CRM or ERP system. I really don’t see how competitors will be able to match up.
More information on Microsoft Dynamics 365
Stay tuned to the SBS Blog for more information over the coming weeks. SBS Group has invested heavily in helping our customers take advantage of Microsoft’s move to the cloud and will be sure to “unwrap” this latest announcement for you in more detail as we learn more.
You may also want to read a great Dynamics 365 article published during WPC by Jason Gumpert (editor, MSDynamicsWorld) with quotes from Microsoft general manager, Barb Edson and Mike Ehrenberg, Microsoft technical fellow and leader of Dynamics R&D.
Best Regards,
James Bowman, CEO and President, SBS Group
Join our next webcast “AXIO Professional Services and Dynamics 365 for Project-Oriented Companies” to learn more. |
// walk recursively walks over the ASN.1 structured data until no
// remaining bytes are left. For each non-compound element it calls
// the ASN.1 format checker.
func (l *Linter) walk(der []byte) {
var err error
var d asn1.RawValue
for len(der) > 0 {
der, err = asn1.Unmarshal(der, &d)
if err != nil {
l.e.Err(err.Error())
if len(d.Bytes) == 0 {
return
}
}
if d.IsCompound {
l.walk(d.Bytes)
} else {
l.CheckFormat(d)
}
}
} |
# import sys
# sys.stdin = open("#input.txt", "r")
t = input()
# echo the input followed by its reverse; s + s[::-1] is always a palindrome
print(t, t[::-1], sep='')
|
import {Component, Input} from '@angular/core'
import {Http} from '@angular/http'
import {NgbModal, NgbActiveModal, NgbModalOptions} from '@ng-bootstrap/ng-bootstrap';
@Component({
selector: 'checkinbutton',
template: `
<template ngbModalContainer #content >
<div class="modal-header">
<h4 class="modal-title">Please enter your name</h4>
<button type="button" class="close" aria-label="Close" (click)="close()">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body">
<input type="text" placeholder="enter your name"/>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" (click)="Checkin()">Check in</button>
<button type="button" class="btn btn-secondary" (click)="close()">Close</button>
</div>
</template>
<button id="singlebutton" name="singlebutton" class="btn btn-primary center-block" (click)="buttonclick(content)" style="margin-top: 15px;" >Check in</button>
`,
})
export class CheckInButtonComponent {
x: string
close(): void {
let modal = <HTMLElement>document.getElementsByClassName("modal-content")[0];
modal.style.display = "none";
}
    // Called from the modal footer "Check in" button; the actual
    // check-in logic is a placeholder here and simply dismisses the modal.
    Checkin(): void {
        this.close();
    }
constructor(private modalService: NgbModal) {
}
buttonclick(content: any) {
this.modalService.open(content);
}
}
|
use k8s_openapi::api::apps::v1::{Deployment, DeploymentSpec};
use k8s_openapi::api::core::v1::{Container, Pod, PodSpec, PodTemplateSpec};
use k8s_openapi::apimachinery::pkg::apis::meta::v1::{LabelSelector, ObjectMeta};
use kube::api::{ListParams, Meta, PostParams, WatchEvent};
use kube::runtime::Informer;
use kube::{Api, Client};
use kube_derive::CustomResource;
use serde::{Deserialize, Serialize};
use serde_json;
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug)]
#[kube(group = "cache.example.com", version = "v1alpha1", namespaced)]
#[kube(status = "MemcachedStatus")]
pub struct MemcachedSpec {
size: i32,
}
#[derive(Deserialize, Serialize, Clone, Debug, Default)]
pub struct MemcachedStatus {
nodes: Vec<String>,
}
#[no_mangle]
pub extern "C" fn run() {
let client = Client::default();
let mems: Api<Memcached> = Api::namespaced(client.clone(), "default");
let inform = Informer::new(mems).params(ListParams::default());
inform.poll(move |e| {
match e {
WatchEvent::Added(mut o) | WatchEvent::Modified(mut o) => {
reconcile(&client, &mut o).expect("Reconcile error");
}
WatchEvent::Error(e) => println!("Error event: {:?}", e),
e => println!("Not handled event: {:?}", e)
}
});
}
fn reconcile(client: &Client, mem: &mut Memcached) -> Result<(), kube::Error> {
let pods: Api<Pod> = Api::namespaced(client.clone(), "default");
let mems: Api<Memcached> = Api::namespaced(client.clone(), "default");
let deployments: Api<Deployment> = Api::namespaced(client.clone(), "default");
match deployments.get(&mem.name()) {
Ok(mut existing) => {
let existing_scale = existing
.spec
.as_ref()
.map(|spec| spec.replicas.as_ref())
.flatten();
if existing_scale == Some(&mem.spec.size) {
println!("Scale is already correct");
Ok(existing)
} else {
let mut spec = existing.spec.unwrap();
spec.replicas = Some(mem.spec.size);
existing.spec = Some(spec);
println!("Replacing deployment");
deployments.replace(&existing.name(), &PostParams::default(), &existing)
}
}
Err(kube::Error::Api(ae)) if ae.code == 404 => {
println!("Creating deployment");
deployments.create(&PostParams::default(), &memcached_deployment(mem))
}
e => e,
}
.and_then(|_| pods.list(&ListParams::default().labels(&format!("memcached_cr={}", mem.name()))))
.map(|mempods| {
let pod_names: Vec<String> = mempods.iter().map(Pod::name).collect();
mem.status = Some(MemcachedStatus { nodes: pod_names });
mems.replace_status(
&mem.name(),
&PostParams::default(),
serde_json::to_vec(&mem).unwrap(),
)
})
.map(|_| ())
}
fn memcached_deployment(mem: &Memcached) -> Deployment {
let mut labels = std::collections::BTreeMap::new();
labels.insert("memcached_cr".to_string(), mem.name());
labels.insert("app".to_string(), "memcached".to_string());
Deployment {
metadata: Some(ObjectMeta {
name: Some(mem.name()),
..Default::default()
}),
spec: Some(DeploymentSpec {
replicas: Some(mem.spec.size),
selector: LabelSelector {
match_labels: Some(labels.clone()),
..Default::default()
},
template: PodTemplateSpec {
metadata: Some(ObjectMeta {
labels: Some(labels),
..Default::default()
}),
spec: Some(PodSpec {
containers: vec![Container {
name: "memcached".to_string(),
image: Some("memcached:1.4.36-alpine".to_string()),
command: Some(vec![
"memcached".to_string(),
"-m=64".to_string(),
"-o".to_string(),
"modern".to_string(),
"-v".to_string(),
]),
..Default::default()
}],
..Default::default()
}),
..Default::default()
},
..Default::default()
}),
status: None,
}
}
|
package dev.lb.cellpacker.structure.resource;
import java.util.Comparator;
public class CompoundAtlasResource extends AtlasResource{
private String compoundFileName;
private int index;
public CompoundAtlasResource(String name, String path, int magic, byte[] data, String compoundFileName, int index) {
super(name, path, magic, data);
this.compoundFileName = compoundFileName;
this.index = index;
}
public String getCompoundFileName() {
return compoundFileName;
}
public int getIndex() {
return index;
}
	public static int compare(CompoundAtlasResource o1, CompoundAtlasResource o2) {
		return Integer.compare(o1.index, o2.index);
	}
@Override
public Resource clone() {
return new CompoundAtlasResource(getName(), getPath(), getMagicNumber(), getData(), getCompoundFileName(), getIndex());
}
	public static Comparator<CompoundAtlasResource> getIndexComparator(){
		return new Comparator<CompoundAtlasResource>() {
			@Override
			public int compare(CompoundAtlasResource o1, CompoundAtlasResource o2) {
				return Integer.compare(o1.getIndex(), o2.getIndex());
			}
		};
	}
}
|
import tensorflow as tf
import numpy as np
class Layer(object):  # construct a layer with specified dimensionality
    def __init__(self, input_size, output_size, var_scope):
        self.input_size = input_size
        self.output_size = output_size
        self.var_scope = var_scope
        with tf.variable_scope(self.var_scope):
            # create the layer's trainable parameters inside its own scope
            # (sketch: the original left this list empty)
            self.weights = [
                tf.get_variable("W", shape=[input_size, output_size]),
                tf.get_variable("b", shape=[output_size]),
            ]

    def get_vars(self):
        return list(self.weights)


class CNN(object):
    def __init__(self, input_size, scope, layers):
        self.input_size = input_size
        self.scope = scope
        with tf.variable_scope(self.scope):  # keep the input layer separate so we don't have to call it layers[0]
            self.input_layer = layers[0]
            self.layers = layers[1:]

    def get_vars(self):  # collect variables from each layer
        variables = self.input_layer.get_vars()
        for layer in self.layers:
            variables.extend(layer.get_vars())
        return variables
class DQN(object):
def __init__(self, ...): #alot of params go here |
The latest round of guest performers has been confirmed for the 2017 Grammy Awards, with A Tribe Called Quest set for a special performance alongside Anderson .Paak and Foo Fighters' Dave Grohl.
As Pitchfork reports, all three artists will perform together at this year’s ceremony, which will be broadcast live from Los Angeles’ Staples Center at 8 p.m. Eastern on February 12. Joining them on the bill will be The Weeknd and Daft Punk. French duo Daft Punk last performed live at The Grammys in 2014 alongside Pharrell and Stevie Wonder. The other new names for the 2017 ceremony are Alicia Keys and Maren Morris who will also perform together.
John Legend, Carrie Underwood, Keith Urban, and Metallica were named in the first round of performers for the 59th Annual Grammy Awards earlier this month. This year's ceremony will be hosted by James Corden. A full list of nominees can be found here. |
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/solo-io/wasm/tools/wasme/cli/operator/api/wasme/v1/filter_deployment.proto
package v1
import (
fmt "fmt"
math "math"
proto "github.com/gogo/protobuf/proto"
types "github.com/gogo/protobuf/types"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
// the state of the filter deployment
type WorkloadStatus_State int32
const (
WorkloadStatus_Pending WorkloadStatus_State = 0
WorkloadStatus_Succeeded WorkloadStatus_State = 1
WorkloadStatus_Failed WorkloadStatus_State = 2
)
var WorkloadStatus_State_name = map[int32]string{
0: "Pending",
1: "Succeeded",
2: "Failed",
}
var WorkloadStatus_State_value = map[string]int32{
"Pending": 0,
"Succeeded": 1,
"Failed": 2,
}
func (x WorkloadStatus_State) String() string {
return proto.EnumName(WorkloadStatus_State_name, int32(x))
}
func (WorkloadStatus_State) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_24d13e575ab7b28c, []int{6, 0}
}
// A FilterDeployment tells the Wasme Operator
// to deploy a filter with the provided configuration
// to the target workloads.
// Currently FilterDeployments support Wasm filters on Istio
type FilterDeploymentSpec struct {
// the spec of the filter to deploy
Filter *FilterSpec `protobuf:"bytes,1,opt,name=filter,proto3" json:"filter,omitempty"`
// Spec that selects one or more target workloads in the FilterDeployment namespace
Deployment *DeploymentSpec `protobuf:"bytes,2,opt,name=deployment,proto3" json:"deployment,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *FilterDeploymentSpec) Reset() { *m = FilterDeploymentSpec{} }
func (m *FilterDeploymentSpec) String() string { return proto.CompactTextString(m) }
func (*FilterDeploymentSpec) ProtoMessage() {}
func (*FilterDeploymentSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_24d13e575ab7b28c, []int{0}
}
func (m *FilterDeploymentSpec) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_FilterDeploymentSpec.Unmarshal(m, b)
}
func (m *FilterDeploymentSpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_FilterDeploymentSpec.Marshal(b, m, deterministic)
}
func (m *FilterDeploymentSpec) XXX_Merge(src proto.Message) {
xxx_messageInfo_FilterDeploymentSpec.Merge(m, src)
}
func (m *FilterDeploymentSpec) XXX_Size() int {
return xxx_messageInfo_FilterDeploymentSpec.Size(m)
}
func (m *FilterDeploymentSpec) XXX_DiscardUnknown() {
xxx_messageInfo_FilterDeploymentSpec.DiscardUnknown(m)
}
var xxx_messageInfo_FilterDeploymentSpec proto.InternalMessageInfo
func (m *FilterDeploymentSpec) GetFilter() *FilterSpec {
if m != nil {
return m.Filter
}
return nil
}
func (m *FilterDeploymentSpec) GetDeployment() *DeploymentSpec {
if m != nil {
return m.Deployment
}
return nil
}
// the filter to deploy
type FilterSpec struct {
// unique identifier that will be used
// to remove the filter as well as for logging.
// if id is not set, it will be set automatically to be the name.namespace
// of the FilterDeployment resource
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
// name of image which houses the compiled wasm filter
Image string `protobuf:"bytes,2,opt,name=image,proto3" json:"image,omitempty"`
// Filter/service configuration used to configure or reconfigure a plugin
// (proxy_on_configuration).
// `google.protobuf.Struct` is serialized as JSON before
// passing it to the plugin. `google.protobuf.BytesValue` and
// `google.protobuf.StringValue` are passed directly without the wrapper.
Config *types.Any `protobuf:"bytes,3,opt,name=config,proto3" json:"config,omitempty"`
// the root id must match the root id
// defined inside the filter.
// if the user does not provide this field,
// wasme will attempt to pull the image
// and set it from the filter_conf
// the first time it must pull the image and inspect it
// second time it will cache it locally
// if the user provides
RootID string `protobuf:"bytes,4,opt,name=rootID,proto3" json:"rootID,omitempty"`
// custom options if pulling from private / custom repositories
ImagePullOptions *ImagePullOptions `protobuf:"bytes,5,opt,name=imagePullOptions,proto3" json:"imagePullOptions,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *FilterSpec) Reset() { *m = FilterSpec{} }
func (m *FilterSpec) String() string { return proto.CompactTextString(m) }
func (*FilterSpec) ProtoMessage() {}
func (*FilterSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_24d13e575ab7b28c, []int{1}
}
func (m *FilterSpec) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_FilterSpec.Unmarshal(m, b)
}
func (m *FilterSpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_FilterSpec.Marshal(b, m, deterministic)
}
func (m *FilterSpec) XXX_Merge(src proto.Message) {
xxx_messageInfo_FilterSpec.Merge(m, src)
}
func (m *FilterSpec) XXX_Size() int {
return xxx_messageInfo_FilterSpec.Size(m)
}
func (m *FilterSpec) XXX_DiscardUnknown() {
xxx_messageInfo_FilterSpec.DiscardUnknown(m)
}
var xxx_messageInfo_FilterSpec proto.InternalMessageInfo
func (m *FilterSpec) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *FilterSpec) GetImage() string {
if m != nil {
return m.Image
}
return ""
}
func (m *FilterSpec) GetConfig() *types.Any {
if m != nil {
return m.Config
}
return nil
}
func (m *FilterSpec) GetRootID() string {
if m != nil {
return m.RootID
}
return ""
}
func (m *FilterSpec) GetImagePullOptions() *ImagePullOptions {
if m != nil {
return m.ImagePullOptions
}
return nil
}
type ImagePullOptions struct {
// if a username/password is required,
// specify here the name of a secret:
// with keys:
// * username: <username>
// * password: <password>
//
// the secret must live in the same namespace
// as the FilterDeployment
PullSecret string `protobuf:"bytes,1,opt,name=pullSecret,proto3" json:"pullSecret,omitempty"`
// skip verifying the image server's TLS certificate
InsecureSkipVerify bool `protobuf:"varint,2,opt,name=insecureSkipVerify,proto3" json:"insecureSkipVerify,omitempty"`
// use HTTP instead of HTTPS
PlainHttp bool `protobuf:"varint,3,opt,name=plainHttp,proto3" json:"plainHttp,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ImagePullOptions) Reset() { *m = ImagePullOptions{} }
func (m *ImagePullOptions) String() string { return proto.CompactTextString(m) }
func (*ImagePullOptions) ProtoMessage() {}
func (*ImagePullOptions) Descriptor() ([]byte, []int) {
return fileDescriptor_24d13e575ab7b28c, []int{2}
}
func (m *ImagePullOptions) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ImagePullOptions.Unmarshal(m, b)
}
func (m *ImagePullOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ImagePullOptions.Marshal(b, m, deterministic)
}
func (m *ImagePullOptions) XXX_Merge(src proto.Message) {
xxx_messageInfo_ImagePullOptions.Merge(m, src)
}
func (m *ImagePullOptions) XXX_Size() int {
return xxx_messageInfo_ImagePullOptions.Size(m)
}
func (m *ImagePullOptions) XXX_DiscardUnknown() {
xxx_messageInfo_ImagePullOptions.DiscardUnknown(m)
}
var xxx_messageInfo_ImagePullOptions proto.InternalMessageInfo
func (m *ImagePullOptions) GetPullSecret() string {
if m != nil {
return m.PullSecret
}
return ""
}
func (m *ImagePullOptions) GetInsecureSkipVerify() bool {
if m != nil {
return m.InsecureSkipVerify
}
return false
}
func (m *ImagePullOptions) GetPlainHttp() bool {
if m != nil {
return m.PlainHttp
}
return false
}
// how to deploy the filter
type DeploymentSpec struct {
// Types that are valid to be assigned to DeploymentType:
// *DeploymentSpec_Istio
DeploymentType isDeploymentSpec_DeploymentType `protobuf_oneof:"deploymentType"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *DeploymentSpec) Reset() { *m = DeploymentSpec{} }
func (m *DeploymentSpec) String() string { return proto.CompactTextString(m) }
func (*DeploymentSpec) ProtoMessage() {}
func (*DeploymentSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_24d13e575ab7b28c, []int{3}
}
func (m *DeploymentSpec) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_DeploymentSpec.Unmarshal(m, b)
}
func (m *DeploymentSpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_DeploymentSpec.Marshal(b, m, deterministic)
}
func (m *DeploymentSpec) XXX_Merge(src proto.Message) {
xxx_messageInfo_DeploymentSpec.Merge(m, src)
}
func (m *DeploymentSpec) XXX_Size() int {
return xxx_messageInfo_DeploymentSpec.Size(m)
}
func (m *DeploymentSpec) XXX_DiscardUnknown() {
xxx_messageInfo_DeploymentSpec.DiscardUnknown(m)
}
var xxx_messageInfo_DeploymentSpec proto.InternalMessageInfo
type isDeploymentSpec_DeploymentType interface {
isDeploymentSpec_DeploymentType()
}
type DeploymentSpec_Istio struct {
Istio *IstioDeploymentSpec `protobuf:"bytes,2,opt,name=istio,proto3,oneof" json:"istio,omitempty"`
}
func (*DeploymentSpec_Istio) isDeploymentSpec_DeploymentType() {}
func (m *DeploymentSpec) GetDeploymentType() isDeploymentSpec_DeploymentType {
if m != nil {
return m.DeploymentType
}
return nil
}
func (m *DeploymentSpec) GetIstio() *IstioDeploymentSpec {
if x, ok := m.GetDeploymentType().(*DeploymentSpec_Istio); ok {
return x.Istio
}
return nil
}
// XXX_OneofWrappers is for the internal use of the proto package.
func (*DeploymentSpec) XXX_OneofWrappers() []interface{} {
return []interface{}{
(*DeploymentSpec_Istio)(nil),
}
}
// how to deploy to Istio
type IstioDeploymentSpec struct {
// the kind of workload to deploy the filter to
// can either be Deployment, DaemonSet or Statefulset
Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"`
// deploy the filter to workloads with these labels
// the workload must live in the same namespace as the FilterDeployment
// if empty, the filter will be deployed to all workloads in the namespace
Labels map[string]string `protobuf:"bytes,2,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
// the namespace where the Istio control plane is installed.
// defaults to `istio-system`.
IstioNamespace string `protobuf:"bytes,3,opt,name=istioNamespace,proto3" json:"istioNamespace,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *IstioDeploymentSpec) Reset() { *m = IstioDeploymentSpec{} }
func (m *IstioDeploymentSpec) String() string { return proto.CompactTextString(m) }
func (*IstioDeploymentSpec) ProtoMessage() {}
func (*IstioDeploymentSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_24d13e575ab7b28c, []int{4}
}
func (m *IstioDeploymentSpec) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_IstioDeploymentSpec.Unmarshal(m, b)
}
func (m *IstioDeploymentSpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_IstioDeploymentSpec.Marshal(b, m, deterministic)
}
func (m *IstioDeploymentSpec) XXX_Merge(src proto.Message) {
xxx_messageInfo_IstioDeploymentSpec.Merge(m, src)
}
func (m *IstioDeploymentSpec) XXX_Size() int {
return xxx_messageInfo_IstioDeploymentSpec.Size(m)
}
func (m *IstioDeploymentSpec) XXX_DiscardUnknown() {
xxx_messageInfo_IstioDeploymentSpec.DiscardUnknown(m)
}
var xxx_messageInfo_IstioDeploymentSpec proto.InternalMessageInfo
func (m *IstioDeploymentSpec) GetKind() string {
if m != nil {
return m.Kind
}
return ""
}
func (m *IstioDeploymentSpec) GetLabels() map[string]string {
if m != nil {
return m.Labels
}
return nil
}
func (m *IstioDeploymentSpec) GetIstioNamespace() string {
if m != nil {
return m.IstioNamespace
}
return ""
}
// the current status of the deployment
type FilterDeploymentStatus struct {
// the observed generation of the FilterDeployment
ObservedGeneration int64 `protobuf:"varint,1,opt,name=observedGeneration,proto3" json:"observedGeneration,omitempty"`
// for each workload, was the deployment successful?
Workloads map[string]*WorkloadStatus `protobuf:"bytes,2,rep,name=workloads,proto3" json:"workloads,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
// a human-readable string explaining the error, if any
Reason string `protobuf:"bytes,3,opt,name=reason,proto3" json:"reason,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *FilterDeploymentStatus) Reset() { *m = FilterDeploymentStatus{} }
func (m *FilterDeploymentStatus) String() string { return proto.CompactTextString(m) }
func (*FilterDeploymentStatus) ProtoMessage() {}
func (*FilterDeploymentStatus) Descriptor() ([]byte, []int) {
return fileDescriptor_24d13e575ab7b28c, []int{5}
}
func (m *FilterDeploymentStatus) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_FilterDeploymentStatus.Unmarshal(m, b)
}
func (m *FilterDeploymentStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_FilterDeploymentStatus.Marshal(b, m, deterministic)
}
func (m *FilterDeploymentStatus) XXX_Merge(src proto.Message) {
xxx_messageInfo_FilterDeploymentStatus.Merge(m, src)
}
func (m *FilterDeploymentStatus) XXX_Size() int {
return xxx_messageInfo_FilterDeploymentStatus.Size(m)
}
func (m *FilterDeploymentStatus) XXX_DiscardUnknown() {
xxx_messageInfo_FilterDeploymentStatus.DiscardUnknown(m)
}
var xxx_messageInfo_FilterDeploymentStatus proto.InternalMessageInfo
func (m *FilterDeploymentStatus) GetObservedGeneration() int64 {
if m != nil {
return m.ObservedGeneration
}
return 0
}
func (m *FilterDeploymentStatus) GetWorkloads() map[string]*WorkloadStatus {
if m != nil {
return m.Workloads
}
return nil
}
func (m *FilterDeploymentStatus) GetReason() string {
if m != nil {
return m.Reason
}
return ""
}
type WorkloadStatus struct {
State WorkloadStatus_State `protobuf:"varint,1,opt,name=state,proto3,enum=wasme.io.WorkloadStatus_State" json:"state,omitempty"`
// a human-readable string explaining the error, if any
Reason string `protobuf:"bytes,2,opt,name=reason,proto3" json:"reason,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *WorkloadStatus) Reset() { *m = WorkloadStatus{} }
func (m *WorkloadStatus) String() string { return proto.CompactTextString(m) }
func (*WorkloadStatus) ProtoMessage() {}
func (*WorkloadStatus) Descriptor() ([]byte, []int) {
return fileDescriptor_24d13e575ab7b28c, []int{6}
}
func (m *WorkloadStatus) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_WorkloadStatus.Unmarshal(m, b)
}
func (m *WorkloadStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_WorkloadStatus.Marshal(b, m, deterministic)
}
func (m *WorkloadStatus) XXX_Merge(src proto.Message) {
xxx_messageInfo_WorkloadStatus.Merge(m, src)
}
func (m *WorkloadStatus) XXX_Size() int {
return xxx_messageInfo_WorkloadStatus.Size(m)
}
func (m *WorkloadStatus) XXX_DiscardUnknown() {
xxx_messageInfo_WorkloadStatus.DiscardUnknown(m)
}
var xxx_messageInfo_WorkloadStatus proto.InternalMessageInfo
func (m *WorkloadStatus) GetState() WorkloadStatus_State {
if m != nil {
return m.State
}
return WorkloadStatus_Pending
}
func (m *WorkloadStatus) GetReason() string {
if m != nil {
return m.Reason
}
return ""
}
func init() {
proto.RegisterEnum("wasme.io.WorkloadStatus_State", WorkloadStatus_State_name, WorkloadStatus_State_value)
proto.RegisterType((*FilterDeploymentSpec)(nil), "wasme.io.FilterDeploymentSpec")
proto.RegisterType((*FilterSpec)(nil), "wasme.io.FilterSpec")
proto.RegisterType((*ImagePullOptions)(nil), "wasme.io.ImagePullOptions")
proto.RegisterType((*DeploymentSpec)(nil), "wasme.io.DeploymentSpec")
proto.RegisterType((*IstioDeploymentSpec)(nil), "wasme.io.IstioDeploymentSpec")
proto.RegisterMapType((map[string]string)(nil), "wasme.io.IstioDeploymentSpec.LabelsEntry")
proto.RegisterType((*FilterDeploymentStatus)(nil), "wasme.io.FilterDeploymentStatus")
proto.RegisterMapType((map[string]*WorkloadStatus)(nil), "wasme.io.FilterDeploymentStatus.WorkloadsEntry")
proto.RegisterType((*WorkloadStatus)(nil), "wasme.io.WorkloadStatus")
}
func init() {
proto.RegisterFile("github.com/solo-io/wasm/tools/wasme/cli/operator/api/wasme/v1/filter_deployment.proto", fileDescriptor_24d13e575ab7b28c)
}
var fileDescriptor_24d13e575ab7b28c = []byte{
// 641 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x54, 0x4d, 0x6f, 0xd3, 0x40,
0x10, 0xad, 0x9d, 0x26, 0x34, 0x13, 0x11, 0x59, 0x4b, 0x55, 0x99, 0x08, 0xaa, 0xca, 0x07, 0x54,
0x24, 0xb0, 0xd5, 0x02, 0x52, 0xe1, 0xd6, 0xaa, 0x84, 0x56, 0xe2, 0xa3, 0x72, 0xa0, 0x08, 0x2e,
0x68, 0x63, 0x4f, 0xcc, 0x2a, 0x9b, 0x5d, 0xcb, 0x5e, 0xa7, 0xf2, 0x05, 0x71, 0xe3, 0xc8, 0x5f,
0xe2, 0x1f, 0xf0, 0x97, 0x90, 0xd7, 0x4e, 0xed, 0xa4, 0x01, 0x71, 0xf2, 0xee, 0xf8, 0xbd, 0x37,
0xf3, 0x76, 0x66, 0x17, 0x3e, 0x44, 0x4c, 0x7d, 0xcd, 0xc6, 0x6e, 0x20, 0x67, 0x5e, 0x2a, 0xb9,
0x7c, 0xcc, 0xa4, 0x77, 0x45, 0xd3, 0x99, 0xa7, 0xa4, 0xe4, 0xa9, 0x5e, 0xa2, 0x17, 0x70, 0xe6,
0xc9, 0x18, 0x13, 0xaa, 0x64, 0xe2, 0xd1, 0x98, 0x55, 0xe1, 0xf9, 0x81, 0x37, 0x61, 0x5c, 0x61,
0xf2, 0x25, 0xc4, 0x98, 0xcb, 0x7c, 0x86, 0x42, 0xb9, 0x71, 0x22, 0x95, 0x24, 0x5b, 0x1a, 0xe1,
0x32, 0x39, 0xb8, 0x1b, 0x49, 0x19, 0x71, 0xf4, 0x74, 0x7c, 0x9c, 0x4d, 0x3c, 0x2a, 0xf2, 0x12,
0xe4, 0x7c, 0x83, 0xed, 0xa1, 0xe6, 0x9f, 0x5e, 0xd3, 0x47, 0x31, 0x06, 0xe4, 0x11, 0x74, 0x4a,
0x5d, 0xdb, 0xd8, 0x33, 0xf6, 0x7b, 0x87, 0xdb, 0xee, 0x42, 0xcd, 0x2d, 0xf1, 0x05, 0xca, 0xaf,
0x30, 0xe4, 0x08, 0xa0, 0x4e, 0x6f, 0x9b, 0x9a, 0x61, 0xd7, 0x8c, 0x65, 0x6d, 0xbf, 0x81, 0x75,
0x7e, 0x19, 0x00, 0xb5, 0x20, 0xe9, 0x83, 0xc9, 0x42, 0x9d, 0xb2, 0xeb, 0x9b, 0x2c, 0x24, 0xdb,
0xd0, 0x66, 0x33, 0x1a, 0xa1, 0xd6, 0xec, 0xfa, 0xe5, 0xa6, 0x28, 0x2e, 0x90, 0x62, 0xc2, 0x22,
0xbb, 0x55, 0x15, 0x57, 0x1a, 0x74, 0x17, 0x06, 0xdd, 0x63, 0x91, 0xfb, 0x15, 0x86, 0xec, 0x40,
0x27, 0x91, 0x52, 0x9d, 0x9f, 0xda, 0x9b, 0x5a, 0xa4, 0xda, 0x91, 0x21, 0x58, 0x5a, 0xee, 0x22,
0xe3, 0xfc, 0x5d, 0xac, 0x98, 0x14, 0xa9, 0xdd, 0xd6, 0x7a, 0x83, 0xba, 0xf4, 0xf3, 0x15, 0x84,
0x7f, 0x83, 0xe3, 0x7c, 0x37, 0xc0, 0x5a, 0x85, 0x91, 0x5d, 0x80, 0x38, 0xe3, 0x7c, 0x84, 0x41,
0x82, 0xaa, 0x32, 0xd4, 0x88, 0x10, 0x17, 0x08, 0x13, 0x29, 0x06, 0x59, 0x82, 0xa3, 0x29, 0x8b,
0x2f, 0x31, 0x61, 0x93, 0x5c, 0xbb, 0xdc, 0xf2, 0xd7, 0xfc, 0x21, 0xf7, 0xa0, 0x1b, 0x73, 0xca,
0xc4, 0x99, 0x52, 0xb1, 0x76, 0xbd, 0xe5, 0xd7, 0x01, 0xe7, 0x13, 0xf4, 0x57, 0xfa, 0xf7, 0x0c,
0xda, 0x2c, 0x55, 0x4c, 0x56, 0xcd, 0xb8, 0xdf, 0x70, 0x54, 0x84, 0x97, 0xd1, 0x67, 0x1b, 0x7e,
0x89, 0x3e, 0xb1, 0xa0, 0x5f, 0x37, 0xe7, 0x7d, 0x1e, 0xa3, 0xf3, 0xdb, 0x80, 0x3b, 0x6b, 0x28,
0x84, 0xc0, 0xe6, 0x94, 0x89, 0x45, 0xaf, 0xf4, 0x9a, 0x1c, 0x43, 0x87, 0xd3, 0x31, 0xf2, 0xd4,
0x36, 0xf7, 0x5a, 0xfb, 0xbd, 0xc3, 0x87, 0xff, 0xcc, 0xea, 0xbe, 0xd6, 0xd8, 0x97, 0x42, 0x25,
0xb9, 0x5f, 0x11, 0xc9, 0x03, 0xe8, 0xeb, 0x4a, 0xde, 0xd2, 0x19, 0xa6, 0x31, 0x0d, 0x50, 0x9b,
0xed, 0xfa, 0x2b, 0xd1, 0xc1, 0x73, 0xe8, 0x35, 0xe8, 0xc4, 0x82, 0xd6, 0x14, 0xf3, 0xaa, 0x98,
0x62, 0x59, 0x4c, 0xce, 0x9c, 0xf2, 0xec, 0x7a, 0x72, 0xf4, 0xe6, 0x85, 0x79, 0x64, 0x38, 0x3f,
0x4c, 0xd8, 0xb9, 0x31, 0xf3, 0x8a, 0xaa, 0x2c, 0x2d, 0xba, 0x22, 0xc7, 0x29, 0x26, 0x73, 0x0c,
0x5f, 0xa1, 0x28, 0x2e, 0x1b, 0x93, 0x42, 0xab, 0xb6, 0xfc, 0x35, 0x7f, 0xc8, 0x1b, 0xe8, 0x5e,
0xc9, 0x64, 0xca, 0x25, 0x0d, 0x17, 0x9e, 0xbd, 0xd5, 0x8b, 0xb2, 0x9a, 0xc4, 0xfd, 0xb8, 0x60,
0x94, 0xce, 0x6b, 0x05, 0x3d, 0xa9, 0x48, 0x53, 0x29, 0x2a, 0xd3, 0xd5, 0x6e, 0x70, 0x09, 0xfd,
0x65, 0xd2, 0x1a, 0xbf, 0x6e, 0xd3, 0xef, 0xd2, 0xed, 0x5b, 0x50, 0xcb, 0xf4, 0xcd, 0x93, 0xf8,
0x69, 0xd4, 0xc2, 0xd5, 0x09, 0x3c, 0x85, 0x76, 0xaa, 0xa8, 0x42, 0x2d, 0xdd, 0x3f, 0xdc, 0xfd,
0x9b, 0x8c, 0x5b, 0x7c, 0xd0, 0x2f, 0xc1, 0x8d, 0xc2, 0xcd, 0x66, 0xe1, 0x8e, 0x07, 0x6d, 0x8d,
0x23, 0x3d, 0xb8, 0x75, 0x81, 0x22, 0x64, 0x22, 0xb2, 0x36, 0xc8, 0x6d, 0xe8, 0x8e, 0xb2, 0x20,
0x40, 0x0c, 0x31, 0xb4, 0x0c, 0x02, 0xd0, 0x19, 0x52, 0xc6, 0x31, 0xb4, 0xcc, 0x93, 0xe1, 0xe7,
0xd3, 0xff, 0x7d, 0x0c, 0xe3, 0x69, 0xb4, 0xe6, 0x41, 0x74, 0x99, 0xf4, 0xe6, 0x07, 0xe3, 0x8e,
0x7e, 0x09, 0x9e, 0xfc, 0x09, 0x00, 0x00, 0xff, 0xff, 0x82, 0x66, 0x05, 0xe8, 0x5b, 0x05, 0x00,
0x00,
}
/*************************************************************************************
 * Functions used in the removeDuplicatesCheck and removeDuplicatesComplexity programs.
 * Commented-out couts are left on purpose in case anybody wants to see the process
 * that takes place when executing the program.
 *
 * Code by <NAME> on 27/12/2018.
 *************************************************************************************/
#include <iostream>
#include <stdlib.h>
#include <time.h>
using namespace std;
static void mostrarTaulaEntre(int a[], unsigned ini, unsigned final){
    // Pre: 0 <= ini <= final;
    // Post: prints a[ini..final] to the screen;
    for(unsigned i = ini; i <= final; i++){
        cout << a[i] << " ";
    }
    cout << endl;
}
static void mostrarTaula(int a[], unsigned n){
    // Pre: a has size n;
    // Post: prints the array a to the screen;
    for(unsigned i = 0; i<n; i++){
        cout << a[i] << " ";
    }
    cout << endl;
}
static void fusio(int a[], unsigned n, int esq, int ini2, int dre){
    // Pre: 0<=esq<ini2<=dre<n and a is sorted ascending from esq to ini2-1 and from ini2 to dre, and a=A
    // Post: a[esq..dre] contains a sorted permutation of the values of A[esq..dre]
    // First copy to the auxiliary array, up to ini2-1 in ascending order
    int aux[dre-esq+1]; // auxiliary array needed (dre-esq+1 positions)
    int n_elem = dre-esq+1; int i_aux=0; // index into aux
    //cout << "Before merge, original array: "; mostrarTaulaEntre(a,esq, dre); cout << endl;
    for (int k=esq; k<ini2; k++) {
        aux[i_aux]=a[k]; i_aux++;
    }
    // From ini2 onward the values are copied in descending order
    for (int k=dre; k>=ini2; k--) { // In reverse
        aux[i_aux]=a[k]; i_aux++;
    }
    //cout << "Before merge, auxiliary array: "; mostrarTaula(aux, n_elem); cout << endl;
    int i = 0, j = n_elem-1, k = esq;
    while (i <= j) { // Walk the auxiliary array from both ends
        if (aux[i] <= aux[j]) { // We want a stable sort, hence <=
            a[k] = aux[i];
            i++;
        }else{
            a[k] = aux[j];
            j--;
        }
        k++;
    }
    //cout << "After merge, original array: "; mostrarTaulaEntre(a, esq, dre); cout << endl;
    //cout << "After merge, auxiliary array: "; mostrarTaula(aux, n_elem); cout << endl;
}
static void iMergeSort(int a[], unsigned n, int esq, int dre){
    // Pre: 0<=esq<=dre<n<=MAX and a=A
    // Post: a[esq..dre] contains a sorted permutation of the values of A[esq..dre]
    int mig;
    if (esq<dre){
        mig=(esq+dre)/2;
        cout << "1: "; mostrarTaulaEntre(a, esq, dre); cout << endl;
        iMergeSort(a,n,esq,mig);
        cout << "2: "; mostrarTaulaEntre(a, esq, dre); cout << endl;
        iMergeSort(a,n,mig+1,dre);
        cout << "3: "; mostrarTaulaEntre(a, esq, dre); cout << endl;
        fusio(a,n,esq,mig+1,dre);
        cout << "4: "; mostrarTaulaEntre(a, esq, dre); cout << endl;
    }
}
static void mergeSort(int t[], unsigned n){
    // Pre: t[0..n-1] contains the values to sort
    // Post: t[0..n-1] is sorted in ascending order
    iMergeSort(t, n, 0, n-1);
}
static void removeDuplicatesNaive(int a[], unsigned n, int b[], unsigned& m){
    // Pre: a and b have size n /\ a = A /\ m = 0 on entry
    // Post: the first m positions of b contain the elements of A without duplicates, in the same order;
    // cost O(n^2)
    for(unsigned i = 0; i<n; i++){
        int num = a[i];
        unsigned j = 0;
        while(j < m && b[j] != num) j++;
        if(j == m){
            b[m] = num;
            m++;
        }
    }
}
static void removeDuplicatesDivideAndConquer(int a[], unsigned n, int b[], unsigned& m){
    // Pre: a and b have size n /\ a = A /\ all values in a are >= 0
    // Post: the first m positions of b contain the elements of A without duplicates, in the same order;
    // cost O(n log n) in n (plus O(max) to initialise the position table)
    // Find the maximum
    int max = -1;
    for(unsigned i = 0; i < n; i++) if(a[i]>max) max = a[i];
    // Create an array with a slot for every possible value
    int posicions[max+1];
    for(int i = 0; i <= max; i++) posicions[i] = -1;
    // Record the order in which the values are first found in the original array
    int ordre = 0;
    for(unsigned i = 0; i < n; i++){
        if(posicions[a[i]] == -1){
            posicions[a[i]] = ordre;
            ordre++;
        }
    }
    // Sort the original array
    mergeSort(a, n);
    // Remove the duplicates, storing the result in the auxiliary array auxT
    int lastAdded;
    int auxT[n];
    m = 0;
    if(0<n){
        auxT[0] = a[0];
        m++;
        lastAdded = a[0];
    }
    for(unsigned i = 1; i<n; i++){
        if(a[i] != lastAdded){
            auxT[m] = a[i];
            lastAdded = a[i];
            m++;
        }
    }
    // Put the elements back in their original order
    for(unsigned i = 0; i<m; i++){
        b[posicions[auxT[i]]] = auxT[i];
    }
}
static void emplenarTaula(int a[], unsigned n, unsigned valorsDiferents){
    // Pre: a has size n;
    // Post: a[0..n-1] filled with values generated at random between 0 and valorsDiferents-1;
    srand(time(NULL));
    for(unsigned i = 0; i<n; i++){
        a[i] = rand() % valorsDiferents;
    }
}
import React, { useCallback } from "react";
import { Canvas } from "react-three-fiber";
import Controls from "./Controls";
import { useToggle, useVRM } from "./hooks";
import Inputs from "./Inputs";
import VRM from "./VRM";
const App: React.FC = () => {
const [vrm, loadVRM] = useVRM();
const [showGrid, showGridToggle] = useToggle(false);
const handleFileChange = useCallback(
async (event: React.ChangeEvent<HTMLInputElement>) => {
// eslint-disable-next-line @typescript-eslint/no-non-null-assertion
const url = URL.createObjectURL(event.target.files![0]);
await loadVRM(url);
URL.revokeObjectURL(url);
},
[loadVRM]
);
return (
<>
<Inputs
onFileChange={handleFileChange}
checked={showGrid}
onCheckChange={showGridToggle}
/>
<Canvas camera={{ position: [0, 1, 2] }}>
<directionalLight />
<VRM vrm={vrm} />
<Controls />
{showGrid && (
<>
<gridHelper />
<axesHelper />
</>
)}
</Canvas>
</>
);
};
export default App;
Contents
The PA0RDT Mini Whip
How Active Antennas Work
How the PA0RDT Mini Whip Works
The PA0RDT Mini Whip
This is a picture of the PA0RDT mini whip, an active antenna for the VLF and shortwave bands, together with its power feed unit:
The whole antenna is smaller than a ballpoint pen! How can anybody believe this toy can replace a full-length dipole?!
But yes – if we think of the wavelengths in the VLF segment and even in the lower shortwave bands, is there perhaps a chance that this antenna works and solves our problems with a landlord who is not very enthusiastic about our nice wire entanglements?
Let us look at active antennas and how they work.
Fig. 1: The PA0RDT antenna together with its power feed unit to the right
How Active Antennas Work
Assume a full-sized antenna picking up a signal and external noise, feeding both to a preamplifier stage. The signal to noise ratio (SNR) is determined by the external noise, as the noise added by the preamplifier is at a much lower level (see Fig. 2).
Fig. 2: A full-sized antenna picking up a signal and external noise
Now think of the antenna shortened to a whip of, say, 2 m length, with the preamplifier being the amplifier of an active antenna. Because of its reduced length, the antenna delivers a much lower signal level. But – and this is crucial – the external noise is on a much lower level, too. So the signal to noise ratio (SNR) remains the same (see Fig. 3).
With both antennas, the full-sized one of Fig. 2 and the shortened one of Fig. 3, you can achieve the same sensitivity [1]!
Fig. 3: An active antenna picking up a signal and external noise
For sure, an essential condition for this statement is that both antennas are in the same environment (outside the house). Do not place the active antenna beneath your plasma TV just because it is so small, and then blame it for its lower performance!
Now we will go a step further to the point, where the active antenna becomes too short (see Fig. 4).
When the antenna is shortened even further, the levels of the external noise and of the signal decrease further as well. At some point, the amplifier's noise level becomes higher than that of the external noise. Now the signal to noise ratio (SNR) is degraded, and small signals that could be detected with a longer antenna are buried in noise [1].
Fig. 4: An active antenna, too short, reducing sensitivity
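The two regimes of Figs. 3 and 4 can be illustrated with a few lines of arithmetic. In the following Python sketch the microvolt levels are made-up, purely illustrative numbers; the point is only that uncorrelated noise powers add, so the SNR stays constant as long as the external noise dominates the amplifier noise, and collapses once it no longer does.

```python
import math

def snr_db(signal_uv, ext_noise_uv, amp_noise_uv):
    """SNR in dB; uncorrelated noise voltages add as powers."""
    total_noise = math.sqrt(ext_noise_uv**2 + amp_noise_uv**2)
    return 20 * math.log10(signal_uv / total_noise)

AMP_NOISE_UV = 0.1  # amplifier input noise, fixed by the semiconductor

# Full-sized antenna (Fig. 2): strong signal, strong external noise
print(round(snr_db(100.0, 10.0, AMP_NOISE_UV), 2))   # ~20.0 dB

# Shortened 20x (Fig. 3): signal AND external noise drop together
print(round(snr_db(5.0, 0.5, AMP_NOISE_UV), 2))      # ~19.8 dB, almost unchanged

# Shortened 2000x (Fig. 4): external noise now below amplifier noise
print(round(snr_db(0.05, 0.005, AMP_NOISE_UV), 2))   # ~-6.0 dB, signal buried
```

This is exactly why the antenna may be shortened only to the point where the external noise still exceeds the amplifier's own noise.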
How the PA0RDT Mini Whip Works
Roelof, PA0RDT, uses a J310 JFET transistor as the first amplifier stage of his active antenna [2]. This semiconductor is designed for VHF/UHF amplifiers and adds only little noise to the signal picked up by the antenna. The external noise at 30 MHz (upper shortwave band) exceeds 20 dB, and at 100 kHz (VLF) it reaches a magnitude of 90 dB [1], relative to the thermal noise at standard temperature conditions.
With this low-noise semiconductor as the first amplifier stage, Roelof could shorten his antenna to an extent where the external noise still produces a higher voltage than the internal noise of the UHF transistor [3]. At that point, the antenna still performs as depicted in Fig. 3. Roelof has run lots of experiments to prove that the length of the (shortened) antenna does not affect the ratio between the wanted signal and the external noise – in other words, that it has no impact on the sensitivity of the antenna. Read Roelof's full report on his experimental results.
Compared to other active antennas, the PA0RDT Mini Whip is very small. This is because it has a copper plate in place of a rod acting as the antenna. Like any other shortened (active) antenna, the PA0RDT Mini Whip is a capacitance coupled to the electromagnetic field. The electromagnetic field doesn't care whether this capacitance is formed as a whip or as a copper plate - it works in both cases. ;-)
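How loosely such a small plate couples to the field shows up in its source impedance. A quick Python estimate (the 10 pF coupling capacitance is an assumed, illustrative figure, not a measured value for the Mini Whip) makes clear why a high-input-impedance JFET like the J310 is a natural choice for the first stage:

```python
import math

def reactance_ohms(freq_hz, capacitance_farad):
    """Magnitude of capacitive reactance: |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * freq_hz * capacitance_farad)

C_PLATE = 10e-12  # assumed coupling capacitance of the plate, ~10 pF

for f_hz in (100e3, 1e6, 30e6):
    z_kohm = reactance_ohms(f_hz, C_PLATE) / 1e3
    print(f"{f_hz/1e6:6.2f} MHz -> source impedance ~{z_kohm:7.1f} kOhm")
```

At 100 kHz the tiny plate presents a source impedance of roughly 160 kOhm, so any low-impedance amplifier input would simply short out the received voltage.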
When you place this tiny antenna outside the house (!), it will perform like a full-sized antenna for the VLF and shortwave bands. Take advantage of the electric shielding a building provides: local noise, generated by the electric and electronic components of modern life, is attenuated markedly by walls [1, 3].
So put your active antenna out, out, out of the house as high as you can! Get it attached to a broomstick and fix it to the window frame!
For more information on the PA0RDT Mini Whip, write an email to Roelof Bakker, PA0RDT, roelof+++ndb.demon.nl
[1] ITU-R: Radio Noise; Recommendation ITU-R P.372; P Series; Radiowave Propagation
[2] Roelof Bakker, PA0RDT: The PA0RDT-Mini-Whip
[3] Roelof Bakker, PA0RDT: The PA0RDT-Mini-Whip, an active receiving antenna for 10 kHz to 20 MHz
// Copyright 2021 BlockPuppets developers.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#[cfg(feature = "serde")]
use serde::{Deserialize, Deserializer, Serialize, Serializer};
#[cfg(feature = "serde")]
use serde_bytes::{ByteBuf as SerdeByteBuf, Bytes as SerdeBytes};
use super::KEY_BYTES_SIZE;
construct_fixed_hash! {
/// 256 bit hash type.
pub struct PublicKey(KEY_BYTES_SIZE);
}
#[cfg(feature = "serde")]
impl Serialize for PublicKey {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: Serializer,
{
let bytes = self.as_bytes();
SerdeBytes::new(bytes).serialize(serializer)
}
}
#[cfg(feature = "serde")]
impl<'d> Deserialize<'d> for PublicKey {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: Deserializer<'d>,
{
let bytes = <SerdeByteBuf>::deserialize(deserializer)?;
Ok(PublicKey::from_slice(bytes.as_ref()))
}
}
/**
* Asks the user to update the game app
*/
void updateApp() {
AlertDialog.Builder b = new AlertDialog.Builder(this);
b.setMessage("Your version of the game has to be updated first to join this match!");
b.setPositiveButton(android.R.string.ok, new DialogInterface.OnClickListener() {
@Override
public void onClick(final DialogInterface dialogInterface, int i) {
dialogInterface.dismiss();
startActivity(new Intent(Intent.ACTION_VIEW,
Uri.parse("market://details?id=" + getPackageName()))
.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK));
}
});
b.create().show();
} |
/**
* Tests related to {@link SubscriptionsManagerImpl}.
*/
@RunWith(ChromeJUnit4ClassRunner.class)
@CommandLineFlags.Add({ChromeSwitches.DISABLE_FIRST_RUN_EXPERIENCE})
@Batch(Batch.PER_CLASS)
@DisabledTest(message = "crbug.com/1194736 Enable this test if the bug is resolved")
public class SubscriptionsManagerImplTest {
@ClassRule
public static ChromeTabbedActivityTestRule sActivityTestRule =
new ChromeTabbedActivityTestRule();
@Rule
public BlankCTATabInitialStateRule mBlankCTATabInitialStateRule =
new BlankCTATabInitialStateRule(sActivityTestRule, false);
private static final String OFFER_ID_1 = "offer_id_1";
private static final String OFFER_ID_2 = "offer_id_2";
private static final String OFFER_ID_3 = "offer_id_3";
private static final String OFFER_ID_4 = "offer_id_4";
private CommerceSubscriptionsStorage mStorage;
private SubscriptionsManagerImpl mSubscriptionsManager;
private CommerceSubscription mSubscription1;
private CommerceSubscription mSubscription2;
private CommerceSubscription mSubscription3;
private CommerceSubscription mSubscription4;
@Before
public void setUp() throws Exception {
TestThreadUtils.runOnUiThreadBlocking(() -> {
mStorage = new CommerceSubscriptionsStorage(Profile.getLastUsedRegularProfile());
mSubscriptionsManager = new SubscriptionsManagerImpl();
});
mSubscription1 =
new CommerceSubscription(CommerceSubscription.CommerceSubscriptionType.PRICE_TRACK,
OFFER_ID_1, CommerceSubscription.SubscriptionManagementType.CHROME_MANAGED,
CommerceSubscription.TrackingIdType.OFFER_ID);
mSubscription2 =
new CommerceSubscription(CommerceSubscription.CommerceSubscriptionType.PRICE_TRACK,
OFFER_ID_2, CommerceSubscription.SubscriptionManagementType.CHROME_MANAGED,
CommerceSubscription.TrackingIdType.OFFER_ID);
mSubscription3 =
new CommerceSubscription(CommerceSubscription.CommerceSubscriptionType.PRICE_TRACK,
OFFER_ID_3, CommerceSubscription.SubscriptionManagementType.CHROME_MANAGED,
CommerceSubscription.TrackingIdType.OFFER_ID);
mSubscription4 =
new CommerceSubscription(CommerceSubscription.CommerceSubscriptionType.PRICE_TRACK,
OFFER_ID_4, CommerceSubscription.SubscriptionManagementType.CHROME_MANAGED,
CommerceSubscription.TrackingIdType.OFFER_ID);
}
@After
public void tearDown() throws Exception {
TestThreadUtils.runOnUiThreadBlocking(() -> {
mStorage.deleteAll();
mStorage.destroy();
mSubscriptionsManager.setRemoteSubscriptionsForTesting(null);
});
}
@MediumTest
@Test
public void testSubscribeSingle() throws TimeoutException {
// Since remoteSubscriptions reflect the latest subscriptions from server-side, it should
// contain newSubscription.
CommerceSubscription newSubscription = mSubscription4;
List<CommerceSubscription> remoteSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription1, mSubscription2, newSubscription));
List<CommerceSubscription> localSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription2, mSubscription3));
List<CommerceSubscription> expectedSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription1, mSubscription2, newSubscription));
// Simulate subscription state in local database and remote server.
for (CommerceSubscription subscription : localSubscriptions) {
save(subscription);
loadSingleAndCheckResult(
CommerceSubscriptionsStorage.getKey(subscription), subscription);
}
mSubscriptionsManager.setRemoteSubscriptionsForTesting(remoteSubscriptions);
// Test local cache is updated after single subscription.
ThreadUtils.runOnUiThreadBlocking(() -> mSubscriptionsManager.subscribe(newSubscription));
loadSingleAndCheckResult(CommerceSubscriptionsStorage.getKey(mSubscription3), null);
loadPrefixAndCheckResult(
CommerceSubscription.CommerceSubscriptionType.PRICE_TRACK, expectedSubscriptions);
}
@MediumTest
@Test
public void testSubscribeList() throws TimeoutException {
// Since remoteSubscriptions reflect the latest subscriptions from server-side, it should
// contain all subscriptions from newSubscriptions.
List<CommerceSubscription> newSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription1, mSubscription2));
List<CommerceSubscription> remoteSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription1, mSubscription2, mSubscription4));
List<CommerceSubscription> localSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription2, mSubscription3));
List<CommerceSubscription> expectedSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription1, mSubscription2, mSubscription4));
// Simulate subscription state in local database and remote server.
for (CommerceSubscription subscription : localSubscriptions) {
save(subscription);
loadSingleAndCheckResult(
CommerceSubscriptionsStorage.getKey(subscription), subscription);
}
mSubscriptionsManager.setRemoteSubscriptionsForTesting(remoteSubscriptions);
// Test local cache is updated after subscribing a list of subscriptions.
ThreadUtils.runOnUiThreadBlocking(() -> mSubscriptionsManager.subscribe(newSubscriptions));
loadSingleAndCheckResult(CommerceSubscriptionsStorage.getKey(mSubscription3), null);
loadPrefixAndCheckResult(
CommerceSubscription.CommerceSubscriptionType.PRICE_TRACK, expectedSubscriptions);
}
@MediumTest
@Test
public void testUnsubscribe() throws TimeoutException {
// Since remoteSubscriptions reflect the latest subscriptions from server-side, it should
// not contain removedSubscription.
CommerceSubscription removedSubscription = mSubscription3;
List<CommerceSubscription> remoteSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription2, mSubscription4));
List<CommerceSubscription> localSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription2, removedSubscription, mSubscription4));
List<CommerceSubscription> expectedSubscriptions =
new ArrayList<>(Arrays.asList(mSubscription2, mSubscription4));
// Simulate subscription state in local database and remote server.
for (CommerceSubscription subscription : localSubscriptions) {
save(subscription);
loadSingleAndCheckResult(
CommerceSubscriptionsStorage.getKey(subscription), subscription);
}
mSubscriptionsManager.setRemoteSubscriptionsForTesting(remoteSubscriptions);
// Test local cache is updated after unsubscription.
ThreadUtils.runOnUiThreadBlocking(
() -> mSubscriptionsManager.unsubscribe(removedSubscription));
loadSingleAndCheckResult(CommerceSubscriptionsStorage.getKey(removedSubscription), null);
loadPrefixAndCheckResult(
CommerceSubscription.CommerceSubscriptionType.PRICE_TRACK, expectedSubscriptions);
}
@MediumTest
@Test
public void testGetLocalSubscriptions() throws TimeoutException {
List<CommerceSubscription> subscriptions =
new ArrayList<>(Arrays.asList(mSubscription1, mSubscription2));
for (CommerceSubscription subscription : subscriptions) {
save(subscription);
loadSingleAndCheckResult(
CommerceSubscriptionsStorage.getKey(subscription), subscription);
}
SubscriptionsLoadCallbackHelper ch = new SubscriptionsLoadCallbackHelper();
int chCount = ch.getCallCount();
ThreadUtils.runOnUiThreadBlocking(
()
-> mSubscriptionsManager.getSubscriptions(
CommerceSubscription.CommerceSubscriptionType.PRICE_TRACK, false,
(res) -> ch.notifyCalled(res)));
ch.waitForCallback(chCount);
List<CommerceSubscription> results = ch.getResultList();
assertNotNull(results);
assertEquals(subscriptions.size(), results.size());
for (int i = 0; i < subscriptions.size(); i++) {
assertEquals(subscriptions.get(i), results.get(i));
}
}
private void save(CommerceSubscription subscription) throws TimeoutException {
CallbackHelper ch = new CallbackHelper();
int chCount = ch.getCallCount();
TestThreadUtils.runOnUiThreadBlocking(() -> {
mStorage.saveWithCallback(subscription, new Runnable() {
@Override
public void run() {
ch.notifyCalled();
}
});
});
ch.waitForCallback(chCount);
}
private void loadSingleAndCheckResult(String key, CommerceSubscription expected)
throws TimeoutException {
SubscriptionsLoadCallbackHelper ch = new SubscriptionsLoadCallbackHelper();
int chCount = ch.getCallCount();
ThreadUtils.runOnUiThreadBlocking(() -> mStorage.load(key, (res) -> ch.notifyCalled(res)));
ch.waitForCallback(chCount);
CommerceSubscription actual = ch.getSingleResult();
if (expected == null) {
assertNull(actual);
return;
}
assertNotNull(actual);
assertEquals(expected, actual);
}
private void loadPrefixAndCheckResult(String prefix, List<CommerceSubscription> expected)
throws TimeoutException {
SubscriptionsLoadCallbackHelper ch = new SubscriptionsLoadCallbackHelper();
int chCount = ch.getCallCount();
ThreadUtils.runOnUiThreadBlocking(
() -> mStorage.loadWithPrefix(prefix, (res) -> ch.notifyCalled(res)));
ch.waitForCallback(chCount);
List<CommerceSubscription> actual = ch.getResultList();
assertNotNull(actual);
assertEquals(expected.size(), actual.size());
for (int i = 0; i < expected.size(); i++) {
assertEquals(expected.get(i), actual.get(i));
}
}
} |
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""
log_utilities module:
This module provides helper functions for logging
"""
import hashlib
import re
from SentinelExceptions import InputError
class LogUtilities:
""" This class provides static methods to support logging """
@staticmethod
def generate_hash(text):
""" Generate hash to replace user-related information """
hash_val = hashlib.md5(text.encode())
return hash_val.hexdigest()
@staticmethod
def validate_input(name, text):
""" Validate input: must not be None or empty """
if not text:
raise InputError(name)
@staticmethod
def is_external_tenant(tenant_domain):
""" Check if a tenant is external """
if tenant_domain.strip().lower() == 'microsoft.onmicrosoft.com':
return False
return True
@staticmethod
def sanitize_input(text):
""" Remove special chars, and limit size to 500 characters """
if not text:
return None
replaced = re.sub('[^a-zA-Z0-9._,!-]', ' ', text)
if not replaced:
return None
if len(replaced) > 500:
return replaced[0:500]
return replaced
|
SHuffle, a novel Escherichia coli protein expression strain capable of correctly folding disulfide bonded proteins in its cytoplasm
Background
Production of correctly disulfide bonded proteins to high yields remains a challenge. Recombinant protein expression in Escherichia coli is the popular choice, especially within the research community. While there is an ever growing demand for new expression strains, few strains are dedicated to post-translational modifications, such as disulfide bond formation. Thus, new protein expression strains must be engineered and the parameters involved in producing disulfide bonded proteins must be understood.
Results
We have engineered a new E. coli protein expression strain named SHuffle, dedicated to producing correctly disulfide bonded active proteins to high yields within its cytoplasm. This strain is based on the trxB gor suppressor strain SMG96 where its cytoplasmic reductive pathways have been diminished, allowing for the formation of disulfide bonds in the cytoplasm. We have further engineered a major improvement by integrating into its chromosome a signal sequenceless disulfide bond isomerase, DsbC. We probed the redox state of DsbC in the oxidizing cytoplasm and evaluated its role in assisting the formation of correctly folded multi-disulfide bonded proteins. We optimized protein expression conditions, varying temperature, induction conditions, strain background and the co-expression of various helper proteins. We found that temperature has the biggest impact on improving yields and that the E. coli B strain background of this strain was superior to the K12 version. We also discovered that auto-expression of substrate target proteins using this strain resulted in higher yields of active pure protein. Finally, we found that co-expression of mutant thioredoxins and PDI homologs improved yields of various substrate proteins.
Conclusions
This work is the first extensive characterization of the trxB gor suppressor strain. The results presented should help researchers design the appropriate protein expression conditions using SHuffle strains.
Background
Many research applications require the purification of high yields of an active and correctly folded protein for either its study (biochemical analysis, X-ray crystallography, etc.), or for its direct use (e.g. as in therapeutic and diagnostic applications). In general, protein overexpression and the generation of high yields are oftentimes difficult and unpredictable. It becomes even more arduous when the protein of interest contains post-translational modifications, such as disulfide bonds, which are critical for proper protein folding, stability, and/or activity. Disulfide bonds are formed by the oxidation of sulfhydryl groups between two cysteine side chains resulting in a covalent bond, greatly increasing the stability of a protein. A large proportion of proteins contain disulfide bonds. For example, analysis of the human genome revealed that 30% of the proteins are predicted to be targeted to the endoplasmic reticulum (ER) where disulfide bond formation is compartmentalized and of those, half are predicted to form disulfide bonds. Since disulfide bonds increase the stability of proteins, most disulfide-bonded proteins are secreted or remain anchored to the plasma membrane, exposed to the environment. This feature of disulfide-bonded proteins makes them excellent therapeutic agents or targets for the pharmaceutical industry. Recent market analysis of therapeutic proteins indicates that all classes of therapeutic proteins are composed mostly or exclusively of proteins containing disulfide bonds. It is therefore critical to have multiple expression systems which can express disulfide-bonded proteins rapidly with relative ease and low cost. Additional molecular tools must also be developed to fine tune the protein expression conditions for a given substrate protein, to achieve maximal yields to high purity.
Currently there are several expression systems available for the production of disulfide-bonded proteins, with each system having its own advantages and disadvantages. Although eukaryotic expression systems such as Chinese Hamster Ovary (CHO), yeast or insect cells offer the capacity to express complex multi-disulfide-bonded proteins, these systems are slow and expensive. Cell-free expression systems may have circumvented the problem of speed but are not feasible for scale-up. For most applications, prokaryotic expression remains the most attractive expression system due to its relatively low cost, high speed, ease of use, high yields, and the availability of large numbers of genetic tools for optimization purposes.
Escherichia coli is the most popular choice for recombinant protein production. Currently there are only a handful of E. coli expression strains commercially available. There is an ever growing demand for new, versatile and improved protein expression strains, especially those that are engineered to handle post-translational modifications such as disulfide bond formation. So far, production of soluble and active disulfide-bonded proteins to high yields in E. coli remains a challenge. This is mainly due to the fact that for most overexpression systems, the recombinant protein is expressed in the cytoplasm, but disulfide bond formation is compartmentalized to the periplasm, where E. coli is poorly adapted for producing multi-disulfide-bonded proteins in high yields. Since all living cells studied to date have enzymes dedicated to reducing disulfide bonds in their cytoplasm, the formation of disulfide bonds has been compartmentalized to extra-cytoplasmic compartments such as the periplasm in gram-negative bacteria or the ER in eukaryotes. Thus, proteins which require disulfide bonds for their folding and stability are poorly expressed, misfolded, and not active when expressed in the cytoplasm of E. coli.
A major breakthrough came through the pioneering work conducted by Beckwith and co-workers during their studies into the redox pathways of E. coli. The culmination of their work, along with that of several other labs, elucidated the cytoplasmic redox pathways and enzymes of E. coli. This knowledge enabled the Beckwith lab to engineer a mutant E. coli strain capable of promoting disulfide bond formation in the cytoplasm.
The formation of a disulfide bond is catalyzed by enzymes belonging to the thioredoxin super-family. In E. coli, disulfide bond formation is catalyzed in the periplasmic space by the enzyme DsbA. DsbA is one of the strongest oxidases measured and will oxidize cysteine residues consecutively as they enter the periplasm. Proteins which require multiple nonconsecutive disulfide bonds require the action of a disulfide bond isomerase to shuffle the disulfide bonds within the mis-oxidized protein to produce its native folded state. E. coli's periplasmic disulfide bond isomerase is DsbC, a homodimeric "V"-shaped protein, where each arm of the "V" is a thioredoxin fold brought together by a dimerization domain. The cleft formed by the V-shaped DsbC is hydrophobic and is thought to preferentially interact with mis-oxidized proteins that have their core hydrophobic residues exposed. This hydrophobic cleft is also hypothesized to mediate the chaperone property of DsbC, which is independent of its redox-active cysteines. Over-expression of DsbC greatly enhances the amount of correctly folded protein in vivo, both in the periplasm and in the cytoplasm. Incubation of DsbC in vitro in cell-free expression systems has also been shown to enhance the amounts of correctly folded disulfide bonded proteins.
Engineering an E. coli strain to produce large quantities of cytoplasmic protein with disulfide bonds requires altering the two reductive pathways (thioredoxin and glutaredoxin/glutathione) in the cytoplasm. Due to the presence of numerous thiol reductases (Grx1, Grx2, Grx3, Trx1, Trx2), glutathione, and small thiol reductants, cysteines are maintained in their reduced state in the cytoplasm of wild type E. coli and are not able to form stable disulfide bonds (they may still form transiently). To genetically engineer a strain that allows the formation of stable disulfide bonded proteins within the cytoplasm, thioredoxin reductase (trxB) and glutathione reductase (gor) were mutated. Mutant E. coli cells carrying deletions of trxB gor are nonviable, as certain essential proteins, such as ribonucleotide reductase, cannot be re-cycled back to their active reduced states. A suppressor screen for trxB gor lethality generated a strain (FÅ113) whose mutant peroxidase AhpC* had gained the ability to reduce Grx1, restoring reducing power to the cell. Thioredoxins remain in their oxidized state and can oxidize protein substrates which require disulfide bonds for their folding. This mutant E. coli strain (FÅ113) is sold commercially under the name Origami by Novagen. However, in this strain, thioredoxins, like DsbA, form disulfide bonds indiscriminately, resulting in some proteins being mis-oxidized and inactive. A marked increase in activity of some cytoplasmically expressed proteins was observed when DsbC lacking its signal sequence was co-expressed in the cytoplasm. Recently, co-expression of the yeast sulfhydryl oxidase Erv1p has also been shown to improve production of disulfide bonded proteins in the cytoplasm of E. coli. Even though this work demonstrates the various methods of producing disulfide bonded proteins, expression of cytoplasmic DsbC was still crucial in achieving high yields of correctly folded substrate protein. While this method is in its infancy, the utility of this system has already been demonstrated.
The E. coli trxB gor suppressor has been a useful strain for producing disulfide bonded proteins, resulting in hundreds of publications since the utility of this strain was first shown in 1999. However, no comprehensive study has been conducted on the parameters involved in producing correctly folded protein within this strain. Furthermore, although the co-expression of cytoplasmic DsbC had been shown to improve protein folding, no such strain was engineered or studied in detail. Here, we present a novel protein expression strain based on a different trxB gor suppressor strain (SMG96). We engineered this strain to cytoplasmically over-express DsbC under the relatively strong and highly regulated rRNA promoter rrnB. We characterized the redox state of the strain and investigated the effects of varying three common parameters (temperature, time and strength of induction) on protein expression. Using the optimized conditions, we expressed and purified eight different substrate proteins and showed their relative solubility. Finally, we co-expressed a set of helper proteins and evaluated their ability to increase the folding of a subset of proteins. This strain is currently commercially available under the name SHuffle from New England Biolabs.
The redox state of SHuffle cells is altered to permit oxidative folding
We constructed a mutant E. coli strain with an altered redox state that permits the formation of stable disulfide bonds within its cytoplasm. This strain's parent is the previously described E. coli strain SMG96, which itself is based on the strain FÅ113. SMG96 lacks the gor and trxB reductases; the lethality conferred by these mutations is suppressed by a mutation in the peroxidase ahpC*. Figure 1 shows a schematic of this altered redox pathway, which results in the reduction of Grx1 by AhpC*, restoring viability. Trx1 remains oxidized and therefore catalyzes the formation of disulfide bonds within the cytoplasm (Figure 1B). We have further engineered the strain to express DsbC in the cytoplasm, which should isomerize mis-oxidized proteins to their native states (Figure 1C).
Expression of cytoplasmic DsbC in SHuffle can improve oxidative folding
DsbC is an oxido-reductase chaperone, capable of enhancing the oxidative folding of proteins both in its native periplasmic compartment and when expressed cytoplasmically. To investigate the role of cytoplasmic DsbC in SHuffle cells, we compared the activity of three different proteins which require disulfide bonds to achieve their native folded state (Figure 2). Gaussia luciferase has 10 cysteines which are all involved in disulfide bonds, although the pattern of disulfide bonds remains unknown. As schematically depicted in Table 1, urokinase and vtPA both have non-consecutive disulfide bonds, with 18 and 12 cysteines respectively, making them ideal candidates for testing the role of cytoplasmic DsbC.
We measured the activities of the three candidate enzymes in four different strain backgrounds to determine what effects an oxidizing cytoplasm and the presence of DsbC in the cytoplasm have on their activity. As expected, no or very little enzyme activity was detected in cell lysates lacking the gene of interest (GOI). These results suggest that DsbC can be absolutely essential for folding of certain protein substrates. We suggest that SHuffle is an important strain background for researchers to use when expressing disulfide-bonded proteins that display low activity in other strain backgrounds. Furthermore, we conclude that SHuffle's effect on the folding of disulfide-bonded proteins is substrate protein specific.
Expression of proteins in SHuffle B strains results in greater yields compared to SHuffle K12 strains
During the course of our experiments, we noticed differences in the activities of proteins measured from SHuffle cells constructed in the K12 vs. the B strain backgrounds. To rule out differences in growth rate, we measured growth of cultures at 30°C and observed no significant difference between SHuffle cells and their parental wild type (Additional file 1). To directly compare the effect of strain background, we measured the activities of three different substrate proteins expressed in either SHuffle K12 (C3025 or C3026) or SHuffle B (C3028 or C3029) (Figure 3). Luciferase and urokinase activities were approximately 2-fold higher in the B background than in K12. Expression of vtPA did not result in any detectable activity when produced in the K12 background, but was active in the B background. We confirmed our observation with western blot analysis and detected vtPA only in SHuffle B strains and not in SHuffle K12 (Supplementary material Figure 2). Thus, in the case of all three substrate proteins, we observed consistently higher enzyme activities in SHuffle B strains compared to K12.
We wished to explore whether the observed differences were due to differences in the mechanism of suppression of trxB gor lethality. Therefore, we sequenced the ahpC gene in SHuffle K12, SHuffle B, their parental wild type strains, and 16 new suppressors isolated using the method described previously. While SHuffle K12 contained the previously described triplet codon expansion ahpC* allele, 15 out of the 16 newly isolated SHuffle B strains had a novel triplet codon contraction allele (ahpCΔ) and only one isolate had the classic triplet codon expansion (Table 2). We did not observe any significant difference in vtPA activity in SHuffle B ahpC* versus ahpCΔ cells (data not shown). Even though the mechanism of disulfide bond formation did not appear to vary between the two suppressors, E. coli K12 and B might have distinct cellular responses to oxidative stress. To test this hypothesis, we grew cells in microtiter dishes with varying amounts of hydrogen peroxide. E. coli B cells ceased to grow at concentrations above 4 mM hydrogen peroxide, while E. coli K12 strains ceased to grow above 10 mM hydrogen peroxide (data not shown). We also compared the hydrogen peroxide sensitivity of SHuffle B cells carrying either the ahpC* or ahpCΔ suppressor mutation. Both strains displayed similar levels of sensitivity and ceased to grow at hydrogen peroxide concentrations above 6 mM (data not shown). Thus, we conclude that the differences in enzyme activities observed for K12 and B strains (Figure 3) are not due to the nature of the suppressing mutation in the two strain backgrounds but instead are more likely due to general genetic differences between the two strains.
Cytoplasmic DsbC in SHuffle cells is in its active hemi-reduced state
The redox state of DsbC is critical for its isomerase/reductase activity both in vivo and in vitro. In order to function as a disulfide bond isomerase, DsbC must be maintained in its hemi-reduced state. Each DsbC monomer contains 4 cysteine residues. The N-terminal redox active cysteines (Cys98-Cys101) face the hydrophobic cleft and are maintained in a reduced form in the periplasm by the inner membrane protein DsbD. The C-terminal pair (Cys140-Cys163) form a stable disulfide bond that is critical for the folding and stability of DsbC. In the absence of DsbD, DsbC becomes oxidized and cannot function as an isomerase/reductase; instead it can now function as an oxidase. Unlike the periplasm, the cytoplasm lacks a dedicated reductase such as DsbD to maintain the active site cysteines of DsbC in their reduced state. Furthermore, the reducing/oxidizing conditions of the cytoplasm of SHuffle cells may not be able to maintain cytoplasmic DsbC in its hemi-reduced state. It is therefore critical to understand the exact redox state of cytoplasmic DsbC in SHuffle cells.
We investigated the redox state of DsbC using AMS alkylation followed by western blot analysis with anti-DsbC antibody (Figure 4). AMS alkylates any free thiol group found in the side chains of cysteine residues, covalently adding ~500 Daltons per thiol and resulting in a mobility shift in SDS-PAGE analysis. Since SHuffle cells contain both periplasmic and cytoplasmic copies of DsbC, we first investigated the redox state of periplasmic DsbC in the parent strains of SHuffle K12 and SHuffle B. In both wild type E. coli K12 and B strains, periplasmic DsbC was detected mostly in its active hemi-reduced state, at similar levels of expression (Figure 4A, lanes 1 and 2). Similar amounts of periplasmic DsbC were detected in K12 and B strains which carried the trxB, gor, ahpC* mutations (Figure 4B, lanes 1 and 2). A significantly higher amount of hemi-reduced DsbC was detected in SHuffle K12 cells, indicating that cytoplasmic DsbC is overexpressed from the chromosome and is in the correct redox state to function as a disulfide bond isomerase (Figure 4B, lane 3). However, SHuffle B cells did not over-express cytoplasmic DsbC to the same level as SHuffle K12 cells (Figure 4B, lane 4). This may have to do with differential regulation of the rrnB promoter, which controls the expression of cytoplasmic DsbC, in E. coli B cells in comparison to E. coli K12. To determine whether cytoplasmic DsbC is under-expressed and limiting in SHuffle B cells, we constructed two more SHuffle B strains in which dsbC was under the regulation of rrnB promoter variants with 9 or 70 times higher transcriptional activity. These strains did not show any improvement in the activity of urokinase when compared to SHuffle B, suggesting that cytoplasmic DsbC is sufficiently over-expressed (data not shown).
Taken together with the in vivo protein expression data, these results indicate that the majority of cytoplasmic DsbC is in its active hemi-reduced state, essential for its disulfide bond isomerase activity. We also observed significant amounts of oxidized cytoplasmic DsbC in SHuffle cells, which may directly contribute to the oxidation of substrate proteins.
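The expected AMS band shifts for the three DsbC redox states follow directly from the free-thiol counts described above. The short sketch below illustrates the arithmetic; the monomer mass and the ~0.5 kDa per-adduct shift are approximations introduced here for illustration only, not values from this study.

```python
# Expected AMS-induced mass shifts for DsbC redox states.
# Assumptions (illustrative): AMS adds ~0.5 kDa per free thiol;
# the mature DsbC monomer mass is taken as ~23.4 kDa.

AMS_SHIFT_KDA = 0.5          # approximate mass added per alkylated thiol
DSBC_MONOMER_KDA = 23.4      # assumed mature DsbC monomer mass

# AMS-reactive (free) cysteines per monomer in each redox state:
# oxidized     = both Cys98-Cys101 and Cys140-Cys163 pairs bonded
# hemi-reduced = active-site pair free, structural pair bonded
# reduced      = all four cysteines free
FREE_THIOLS = {"oxidized": 0, "hemi-reduced": 2, "reduced": 4}

def apparent_mass(state: str) -> float:
    """Apparent monomer mass (kDa) after AMS alkylation."""
    return DSBC_MONOMER_KDA + FREE_THIOLS[state] * AMS_SHIFT_KDA

for state in ("oxidized", "hemi-reduced", "reduced"):
    print(f"{state:>12}: {apparent_mass(state):.1f} kDa")
```

The three states therefore resolve as three distinct bands on the gel, which is what allows the hemi-reduced species to be distinguished from fully oxidized DsbC in Figure 4.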
Optimization of protein expression conditions
To optimize production of proteins in SHuffle cells, we investigated the effects of three parameters on the expression of seven different substrate proteins. In consideration of the average researcher who expresses proteins using a shake flask system with limited time and resources, we chose the three most commonly modified parameters: temperature, time of induction, and concentration of inducer (IPTG).
Temperature
The effect of temperature on protein folding has been well documented, and temperature is one of the most common factors to be optimized during production of proteins. We therefore investigated the role of temperature in protein expression by growing SHuffle cells in rich medium initially at 30°C until the cells reached mid-log growth phase. Protein expression was induced with 1 mM IPTG and the growth temperature was shifted to 16°C, 25°C, 30°C or 37°C. At the end of exponential growth, activity of the substrate protein was measured. As shown in Table 1, the optimal temperature varied among the seven proteins: for two it was 16°C, for three it was 25°C, and for the final two it was 37°C. We conclude that the effect of temperature was protein specific.
Time of induction
Using the optimal temperature discovered in the prior experiment, we investigated the effect of inducing at various growth phases. SHuffle cells were grown at the optimal temperature and were induced with 1 mM IPTG at the time of inoculation, at mid-log or at late-log growth phase. Further downstream processes were the same as described above. In the case of the two cellulases (CelZ and Cel9A), an additional method of induction, termed here 'autoexpression', was tried and found to be superior to standard IPTG induction (Table 1). Autoexpression relies on the diauxic response of E. coli when grown in multiple carbon sources such as glucose and lactose, resulting in induction of the lac promoter upon depletion of glucose. Using Magic Media supplied by Invitrogen, cells were grown overnight without induction and enzymatic assays were performed the next day. Further characterization of autoexpression was performed by comparing the yields obtained for a poorly folding protein such as vtPA when expressed under optimized IPTG conditions vs. autoexpression. The yields of purified vtPA increased from marginally detectable amounts to over 1 mg/l, indicating that autoexpression may be a suitable method of protein production in SHuffle cells (Table 1).
Concentration of inducer
Using the optimal expression temperature and time of induction identified above, the concentration of inducer was optimized. SHuffle cells were grown at the optimal temperature and were induced with various concentrations of IPTG (0, 0.01, 0.05, 0.1 and 1 mM) at the optimal growth phase. The optimal concentration of inducer was protein-specific, varying from 0.01 mM to 1 mM (Table 1).
An example of this optimization process is shown for vtPA (Figure 5). Using our optimization process, the optimum shake flask expression condition for vtPA was growth at 16°C during protein expression, with 1 mM IPTG induction at mid-log growth phase. Overall, our results indicate that the optimal conditions for protein expression in SHuffle cells are protein-specific. However, we did note that temperature had the most profound effect, and lowering the growth temperature during induction usually resulted in improved yields. While we did not investigate autoexpression systematically with all the proteins, this induction method also gave improved yields where it was used. Thus, a thorough study is required to optimize the expression conditions for any given new protein of interest.
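The three-step screen described above amounts to a sequential, one-factor-at-a-time search over temperature, induction phase and inducer concentration. The sketch below captures that logic; `express_and_assay` is a hypothetical stand-in for the wet-lab expression-and-activity-assay step, not part of any real software.

```python
# Sequential one-factor-at-a-time optimization of expression conditions,
# mirroring the screen described in the text. Each factor is optimized
# while the remaining factors are held at the defaults used in the paper
# (mid-log induction, 1 mM IPTG).

TEMPERATURES_C = [16, 25, 30, 37]
INDUCTION_PHASES = ["inoculation", "mid-log", "late-log"]
IPTG_MM = [0, 0.01, 0.05, 0.1, 1.0]

def optimize(express_and_assay):
    """express_and_assay(temp_C, phase, iptg_mM) -> measured activity."""
    # Step 1: temperature, holding induction at mid-log with 1 mM IPTG.
    best_temp = max(TEMPERATURES_C,
                    key=lambda t: express_and_assay(t, "mid-log", 1.0))
    # Step 2: induction phase at the best temperature.
    best_phase = max(INDUCTION_PHASES,
                     key=lambda p: express_and_assay(best_temp, p, 1.0))
    # Step 3: inducer concentration under the best temperature and phase.
    best_iptg = max(IPTG_MM,
                    key=lambda c: express_and_assay(best_temp, best_phase, c))
    return best_temp, best_phase, best_iptg
```

Note that this sequential design assumes the three factors are largely independent; a full factorial screen would require 4 × 3 × 5 = 60 cultures per protein, whereas this approach needs only 12.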
Proteins expressed in SHuffle cells display diverse levels of solubility
The solubility of a protein is an important indicator of its correct folding, complementing functional binding or enzymatic assays. Determining a protein's solubility helps the researcher design the correct experimental procedure to improve its yield. For example, a protein with only 5% of the total expressed protein in the soluble fraction will require optimization of its folding pathway, while another protein with 90% solubility might simply require increased expression levels to improve yields. We therefore quantified the solubility of each of the proteins we expressed, to assess how successfully they fold in SHuffle strains.
Using the panel of seven substrate proteins expressed under the optimum conditions discovered previously, cell lysates were produced as described in Methods. An aliquot of each lysate was removed to represent the total amount of protein (T). Samples were subjected to centrifugation, with the supernatant representing the soluble fraction (S) and the pellet representing the insoluble fraction (P). Samples were analyzed by western blot with the appropriate antibody. As a control for proper fractionation, samples were also probed with anti-GroEL antibody to detect GroEL in the soluble fraction. As expected, protein solubility varied a great deal, ranging from 5% for poorly folding substrate proteins such as vtPA and urokinase to 95% for protein substrates that fold efficiently, such as PhoA (Figure 6). These data highlight the fact that the solubility of a protein is highly dependent on the nature of the protein and that high levels of soluble protein can be achieved when over-expressed in SHuffle cells.
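The percent-solubility figures quoted above reduce to a simple ratio of band intensities from the fractionation blot. The sketch below shows that arithmetic; the band intensities are invented example values chosen to match the ~95% and ~5% extremes reported for PhoA and vtPA, not measured data.

```python
# Densitometry arithmetic behind the fractionation assay: the soluble
# percentage is the soluble-fraction signal over soluble plus pellet.

def percent_soluble(soluble: float, pellet: float) -> float:
    """Soluble fraction as a percentage of recovered protein (S + P)."""
    return 100.0 * soluble / (soluble + pellet)

# (S, P) band intensities in arbitrary units; hypothetical values chosen
# to reproduce the extremes reported in the text.
bands = {"PhoA": (95.0, 5.0), "vtPA": (5.0, 95.0)}
for protein, (s, p) in bands.items():
    print(f"{protein}: {percent_soluble(s, p):.0f}% soluble")
```

Comparing S + P against the total-lysate band (T) also provides an internal check that protein was not lost during fractionation.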
Co-expression of helper proteins can improve oxidative folding
Folding of disulfide bonded eukaryotic proteins in a prokaryotic host is challenging. For any given protein, there may be one or more bottlenecks in its folding pathway that occur when the folding of the protein is decoupled from its native host environment. Reasons for inefficient folding are diverse and unique for each protein, and may be due to: the lack of intrinsic folding properties of the protein (e.g. rate of translation governed by codon usage), the physical environment (e.g. folding in a specialized compartment) or the dependence on a set of chaperones dedicated to the folding of the nascent polypeptide in the native host. This problem is highlighted by the variation in the solubility of the proteins we expressed in SHuffle. To increase the capacity of SHuffle cells to fold a greater variety of disulfide bonded proteins, we co-expressed numerous "helper" proteins, based on our assumption that they may alleviate a folding bottleneck that may exist for a given protein. We therefore chose our least soluble proteins (vtPA, urokinase and chitinase) as indicators of folding improvement, as we hypothesized that these proteins would allow the largest range of improvement. To facilitate improved folding of these proteins, we co-expressed 16 different helper proteins, which could be subdivided into three general categories: redox active, chaperone and oxidative stress. All of the helper genes were cloned into the pBAD33 expression vector with a pACYC origin of replication, under the regulation of the arabinose promoter. A second set of C-terminally FLAG-tagged constructs was made in order to assess the expression levels of the helper proteins using western blots probed with anti-FLAG antibodies. Full length proteins were detected for all of the helper proteins except PDI, which could only be detected upon longer exposure (Supplementary material Figure 3).
SHuffle cells expressing vtPA, urokinase or chitinase along with one of the helper plasmids were grown under the optimal expression conditions identified above. Expression of the helper protein was induced at the beginning of growth by adding L-arabinose to a final concentration of 0.2% (w/v), and the substrate protein was induced once the cells reached mid-log growth phase. Enzymatic activities were measured and normalized to cells expressing vector alone (pBAD33). The results are summarized in Table 3. Overall, we found that co-expression of helper proteins dramatically improved the yield of vtPA (up to 11-fold) while only slightly improving the yields of urokinase and chitinase (less than 2-fold for the best helper). An in-depth description of these results follows below.
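The normalization behind the fold-improvement values in Table 3 is the ratio of each helper's activity to the empty-vector (pBAD33) control. The sketch below shows that calculation and a ranking step; the raw activity units are invented for illustration, chosen only to loosely mirror fold changes quoted in the text (e.g. katG ~12-fold, QSOX ~8-fold for vtPA).

```python
# Fold-improvement normalization for helper co-expression screens:
# activity with each helper divided by the empty-vector (pBAD33) control.

def fold_improvement(activity_with_helper: float,
                     activity_vector_only: float) -> float:
    """Activity relative to the empty-vector control."""
    return activity_with_helper / activity_vector_only

# Hypothetical raw activities (arbitrary units), vector control included.
vtpa_activity = {"pBAD33": 1.0, "katG": 12.0, "MPD2": 9.0, "QSOX": 8.0}

baseline = vtpa_activity["pBAD33"]
ranked = sorted(
    ((fold_improvement(act, baseline), helper)
     for helper, act in vtpa_activity.items() if helper != "pBAD33"),
    reverse=True,
)
print(ranked)  # best helper first
```

Normalizing to the in-plate vector control rather than to absolute units makes the fold changes comparable across substrates and growth batches.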
Redox active helpers
It is possible that the mechanism of disulfide bond formation in the cytoplasm of SHuffle cells is not optimal for the correct folding of a given protein. In particular, there may not be sufficient disulfide bond isomerase (DsbC) for the abundance of overexpressed substrate protein. To assess this, we expressed additional DsbC lacking its native signal peptide. No significant improvement in the activity of urokinase or chitinase was detected upon increased levels of cytoplasmic DsbC (Table 3), indicating that sufficient amounts of DsbC are expressed in SHuffle cells and that disulfide bond isomerization is not the folding bottleneck for these proteins. However, vtPA activity was reduced ~5-fold in SHuffle strains in comparison to isogenic strains lacking cytoplasmic DsbC (Table 3).
The role of thioredoxins in the formation of disulfide bonds within trxB suppressor strains has already been demonstrated. Furthermore, co-expressing mutant thioredoxins with altered active sites has resulted in significant improvement in protein production. We therefore chose two mutant thioredoxins with altered active sites along with the wild type (CGPC = wt, CPYC = Grx1, CPHC = DsbA) to assess whether co-expressing thioredoxins could assist in the formation of correctly oxidized substrates. Co-expression of thioredoxins increased the activity of vtPA up to 10-fold but did not result in any improvement in the case of urokinase and chitinase.
Protein disulfide isomerase (PDI) is an essential ER-resident oxido-reductase involved in the oxidation and isomerization of disulfide bonded proteins in eukaryotes. In vitro, it catalyzes the oxidative formation, reduction, or isomerization of disulfide bonds depending on the redox potential of the environment. Expression of PDI in E. coli has already been demonstrated with mixed success. Co-expression of yeast PDI in the periplasm resulted in a 50% increase in the yield of tissue plasminogen activator (tPA), while rat PDI had no beneficial effect either in the periplasm or in the cytoplasm. Due to this apparent substrate specificity of PDIs, we decided to co-express various PDI homologs from Saccharomyces cerevisiae (PDI, EUG1, MPD1 and MPD2). The PDI homologs were the most successful class of helper proteins. In the case of urokinase and chitinase, PDI homologs were the best helper proteins, while in the case of vtPA one PDI homolog (MPD2) was the second best helper protein (Table 3).
Sulfhydryl oxidases, such as human quiescin-sulfhydryl oxidase (QSOX), can catalyze the formation of disulfide bonds through their FAD cofactor, resulting in the reduction of oxygen to hydrogen peroxide. We chose QSOX as a helper protein, as co-expression of other sulfhydryl oxidases enhances production of disulfide bonded proteins in the cytoplasm of E. coli. Although co-expression of QSOX increased vtPA activity 8-fold, it had no positive influence on the expression of urokinase and chitinase (Table 3). (In Table 3, bracketed values indicate the best fold improvement for each substrate, while bold values indicate the next best.)
Another candidate helper protein was the archaeal cytoplasmic protein disulfide oxidoreductase (PDO), which can catalyze disulfide bond formation in vitro. We chose the PDO from Aquifex aeolicus, as this species has been predicted to have one of the most oxidizing cytoplasms. Co-expression of the A. aeolicus VF5 PDO did not result in any significant improvement in the yields of vtPA, urokinase or chitinase (Table 3).
Chaperone helpers
As a fusion protein, maltose binding protein (MBP) promotes folding and increases the solubility of its fused cargo. We co-expressed MBP as a helper protein but did not observe any significant improvement in the yields of vtPA, urokinase and chitinase. This may reflect the observation that MBP is most successful at increasing solubility when fused N-terminally, indicating that MBP may need to act on the elongating polypeptide and may not act as a chaperone post-translationally when not fused. Another periplasmic chaperone we expressed within the cytoplasm of SHuffle was the seventeen-kilodalton protein (Skp), known to have a broad range of interacting substrates. Cytoplasmic co-expression of Skp improves the folding of certain eukaryotic proteins. However, no positive effects on folding of our test proteins were observed when Skp was co-expressed (Table 3).
Oxidative stress helpers
SHuffle cells lack trxB and gor and cannot efficiently reduce oxidized proteins. This results in the buildup of oxidized inactive proteins, which induces a general oxidative stress response mediated by the transcription factor OxyR and the SoxRS regulon. In addition, AhpC* has lost its function as a peroxidase, resulting in the accumulation of hydrogen peroxide. This can cause oxidative damage to proteins and may diminish cell viability, which in turn may lower the yield of recombinant protein production. Under such conditions, the expression of the catalase gene katG, which scavenges and removes hydrogen peroxide, and of the peroxidase AhpC is highly upregulated. However, these native defense mechanisms may not be sufficient, as SHuffle cells have three of their reductive pathways disrupted (glutathione, thioredoxin and peroxiredoxin pathways). We therefore chose KatG, AhpCF and the peroxidase-deficient mutant AhpC*F as candidate helper proteins to combat oxidative stress. Expression of katG resulted in a 12-fold increase in the activity of vtPA, making it the best helper protein, while expression of AhpCF and AhpC*F had modest effects on vtPA. In the case of urokinase, co-expression of either AhpCF or AhpC*F resulted in the best improvements in activity. In the case of chitinase, none of these helpers had any effect (Table 3). Taken together, these results further highlight the protein-specific nature of protein folding and the lack of predictability in deciding which molecular chaperone system will improve protein solubility.
Discussion
In this manuscript we present a novel E. coli strain based on the trxB gor suppressor strain SMG96. We further engineered into its chromosome a dsbC gene lacking its signal sequence, under the regulation of the strong ribosomal promoter rrnB. These strains were engineered in both E. coli K12 and B strain backgrounds. We carried out a detailed characterization of the SHuffle strains, along with the parameters involved in protein production at bench scale (non-high-throughput).
To expand our understanding of the mechanism of disulfide bond formation within SHuffle strains, we investigated the redox state of cytoplasmic DsbC. We showed that the majority of cytoplasmic DsbC is in its hemi-reduced state, which is essential for its disulfide bond isomerase activity. However, oxidized DsbC species were also detected within the oxidizing cytoplasm, which could result in DsbC directly oxidizing reduced substrates. This is not surprising, as mutant DsbB variants that have gained the ability to oxidize DsbC are in turn capable of oxidizing proteins in the periplasm. Oxidized DsbC species may not always be beneficial to the folding of reduced proteins which require disulfide bonds. This may explain the drop in activity observed for Gaussia luciferase when expressed in cells with cytoplasmic DsbC. Similar observations were made when expressing parathyroid hormone in trxB gor strains. In that study, co-expression of cytoplasmic DsbC had no positive influence in vivo, but did dramatically reduce the amount of misfolded species when DsbC was co-incubated with the hormone in the presence of oxidized and reduced glutathione.
E. coli B strains such as BL21 are the preferred hosts for protein expression, as they generally give higher yields for the large majority of proteins. Some of the reasons for this may be that, unlike its K12 cousin, B has not been subjected to extensive domestication for the purpose of DNA manipulation, and it lacks the cytoplasmic protease Lon, known to play a key role in protein quality control. Similarly, when we compared the expression of three proteins in SHuffle K12 vs. SHuffle B strains, we consistently observed higher yields in the B strain background. However, we also observed differences between the two strains at the level of redox states of proteins. Unlike in SHuffle K12, a fraction of periplasmic DsbC was observed to be in its reduced state in the SHuffle B strain. Further redox differences were observed in the ahpC mutations between the two strains. While the SHuffle K12 ahpC gene has the triplet TTC codon expansion, the SHuffle B ahpC gene has the triplet codon contraction, lacking one of the three TTC codons. These differences highlight the distinct biological differences between the two SHuffle strains and will require detailed studies to elucidate their biological roles.
To define conditions critical for the folding and correct formation of disulfide bonds, we tested the impact of the three most commonly manipulated physical parameters: temperature, time and strength of induction. We consistently observed that growth temperature had the most profound impact on improving protein production in SHuffle cells. This may be due to the fact that SHuffle cells are under oxidative stress, and the resulting detrimental effects may be compounded by high metabolic activity during growth at higher temperatures such as 37°C. This hypothesis is supported by the observation that over-expression of poorly folding proteins such as vtPA at 37°C in SHuffle cells is toxic (data not shown).
We observed very efficient, high-yield protein production when SHuffle cells were grown overnight in Magic Media, reaching final yields of 400 mg/l in the case of a cellulase (with a single disulfide bond). To validate the role of the media, we produced vtPA in Magic Media and observed a 6-fold increase in final yield compared to standard expression conditions using IPTG as an inducer. This form of protein expression in SHuffle cells may indeed be optimal, even though the mechanism of expression is not clear. Although the exact composition of Magic Media is not disclosed, it is designed to be used for the auto-expression of proteins under the control of the lac promoter. The principle of autoexpression is based on diauxic regulation: glucose is the preferred carbon source and represses the lac promoter, and upon its consumption cells switch to growth on lactose, which induces the lac promoter. However, β-galactosidase activity is needed to convert lactose to allolactose, the natural inducer of the lactose operon. In the case of the SHuffle B T7 cells, the T7 RNA polymerase gene 1 is inserted into the lacZ gene, rendering it inactive. Thus, a mechanism other than autoinduction must be at work, which is why we termed this form of expression "autoexpression" instead of autoinduction.
In this study, we focused on improving the folding of target substrate proteins by manipulating the strain and the conditions of expression. However, for optimal expression, many other parameters must be considered. For example, proteins which require disulfide bonds for their folding are normally secreted to compartments where disulfide bond formation can occur, and thus carry a signal sequence at their N-terminus. To express these proteins in the cytoplasm, a signal-sequenceless version of the target protein must be used. Removal of the 5' signal sequence will alter the composition and structure of the mRNA, which is known to play a key role in the expression level of the target protein. One remedy to this potential problem is to fuse the target protein to the carboxyl terminus of MBP, which is known to enhance solubility and can be proteolytically removed post production. Otherwise, an appropriate expression vector with the optimal promoter, codon usage and ribosome binding site needs to be chosen for optimal expression of the target protein.
Since bottlenecks in the folding pathway of any given protein are specific to that protein, we explored whether we could increase protein yield by co-expressing various helper proteins. We chose a subset of helper proteins based either on prior experimentation demonstrating their utility, or on assumptions about the helper proteins' properties. Redox-active helper proteins had the biggest effect: co-expression of mutant thioredoxins and PDI homologs was the most successful class of helpers. Surprisingly, co-expression of the catalase KatG improved the activity of vtPA 10-fold. This observation supports the notion that SHuffle cells are under oxidative stress and that boosting the cell's defenses against oxidative damage can increase its capacity to produce correctly folded disulfide bonded proteins. However, the decrease in vtPA activity when additional DsbC was expressed from the helper plasmid accentuates the fact that, for each individual protein, there can be an optimum level of a redox helper, with a decrease in activity at amounts higher or lower than that optimum. A similar decrease in activity was observed in the case of periplasmic expression of vtPA: overexpression of periplasmic DsbC resulted in loss of vtPA activity and eventually in loss of viability. The authors attributed the loss in viability to a dramatic reduction in the oxygen uptake rate when DsbC was over-expressed. It is plausible that a similar interaction is occurring in the cytoplasm. This drop in activity was not observed when the putative disulfide bond isomerase from Aquifex aeolicus (cAaDsbC) was co-expressed. This difference highlights the protein specificities that govern the interaction between the oxido-reductase and its substrate protein.
Expression of proteins in the cytoplasm instead of the periplasm is of great advantage. Not only does one avoid the complication of having to secrete the target substrate, but the periplasm is devoid of ATP, has only a few ATP-independent chaperones, and is only ~20% of the volume of the cytoplasm. The advantage of cytoplasmic expression was observed in the case of vtPA, which showed a two-fold increase in activity when expressed in the cytoplasm. Similarly, we observed a 7-fold increase in the activity of an α1,3-galactosidase from Xanthomonas manihotis, which has a single disulfide bond, when it was expressed in the cytoplasm instead of the periplasm (data not shown).
Although cytoplasmic expression may improve the activity of certain proteins, cytoplasmic disulfide bond formation may sometimes be detrimental to certain biological processes. For example, cytoplasmic assembly of the E. coli phage M13 appears to be problematic, as SHuffle strains were incapable of forming infective phage (data not shown). In addition, SHuffle cells grown in minimal media under high dissolved oxygen rates showed poor growth when glycerol was the sole carbon source (data not shown). This may be due to an altered redox state of the SHuffle cells' metabolome. For example, the cydAB operon, which is under the regulation of the ArcAB two-component system, shows a delayed transcriptional response when shifting from aerobiosis to anaerobiosis in SHuffle cells (data not shown). This is most likely due to the silencing of ArcB kinase activity by the oxidation of its cytoplasmic redox-active cysteine residues. These observations highlight our current lack of understanding of the redox biology of SHuffle cells, with many important questions remaining unanswered. How do SHuffle cells cope with oxidizing and reducing conditions within the cytoplasm? Which reductases are involved in the oxidation of substrate proteins? What is the role of cytoplasmic oxidized DsbC in disulfide bond formation? How do SHuffle cells perform in high-density fermentations? Proteomic and mass spectrometric approaches to address these questions are now in progress.
The SHuffle strains and the expression conditions presented in this report represent the first detailed analysis of the conditions required for efficient cytoplasmic expression and folding of disulfide bonded proteins. The results should enable the production of previously inaccessible proteins in E. coli. These SHuffle strains greatly expand the cell biologist's toolkit by enabling the use of bacterial production in place of more cumbersome eukaryotic expression systems.
Conclusions
We have demonstrated the value of engineering an E. coli trxB gor suppressor strain which expresses active cytoplasmic DsbC. We found that temperature is of paramount importance and should be tuned for each substrate protein. Autoexpression of proteins using Magic Media was also very helpful in improving yield. We found several intriguing redox-related differences between the E. coli B and K12 versions of this strain and showed that the B version of the SHuffle strains was superior to its K12 counterpart. Further improvements were made by co-expressing various helper proteins. These SHuffle strains, along with the knowledge gained regarding their use, will be of great value to the protein expression community.
Bacterial strains, media, and chemicals
Bacterial strains and plasmids were constructed using standard genetic procedures. A list of the strains used is summarized in supplementary materials Table 1. SHuffle K12 cells were engineered based on the trxB gor suppressor SMG96. A signal-sequenceless dsbC construct under the regulation of the rrnB promoter was integrated into SMG96 using the lambda InCh method. SHuffle B strains are based on NEB Express cells (C2523) and were constructed using the dithiothreitol (DTT) filter disk method, as described previously. While the commercial names of the SHuffle strains are SHuffle (for the K12 versions) and SHuffle Express (for the B versions), we will refer to these strains as SHuffle K12 or SHuffle B for clarity. Further versions were engineered by integrating T7 gene 1, which encodes the T7 RNA polymerase, into lacZ, allowing for expression of genes under the regulation of the T7 promoter. A list of the plasmids used in this study, along with their construction, is summarized in supplementary materials Tables 2 and 3. Synthetic genes were purchased from GenScript (www.genscript.com). Cells were grown in Rich Media (10 g/L tryptone, 5 g/L yeast extract, 5 g/L NaCl, NaOH to pH 7.2) or in Magic Media (Invitrogen cat# K6803).
Optimization of protein expression
Three parameters were optimized sequentially in the following order: temperature of growth, time of induction and strength of induction. All experiments were conducted on duplicate samples. Initially, -80°C strain stocks were used to inoculate 5 ml rich media with the appropriate antibiotics (200 μg/ml ampicillin, 40 μg/ml kanamycin or 10 μg/ml chloramphenicol). The following day, 25 ml of rich media in a 125 ml shaker flask supplemented with antibiotics was inoculated with 250 μl (1/100th) of the overnight culture and grown at 30°C for 3 hours until mid-log phase, set as the default time of induction for the first step of optimization. The cultures were induced with 1 mM isopropyl-β-D-thiogalactopyranoside (IPTG), set as the default concentration, and the temperature was shifted to 16°C, 25°C, 30°C or 37°C; cultures were grown overnight at the low temperatures (16°C or 25°C) or for another 7 h at the higher temperatures (30°C or 37°C). Cells were harvested by centrifugation and lysed by sonication, and samples were standardized to equal amounts of protein using Bradford reagent. The optimal temperature of protein expression was determined by measuring the enzymatic activities of crude lysates with the appropriate enzymatic tests. The second step of optimization focused on the time of induction, using the optimal temperature from the previous step. Cultures were inoculated as described above and induced either at the time of inoculation (early induction), at mid-log phase (mid induction) or at late-log phase of growth (late induction). Downstream processing was the same as described above. Strength of induction was tested by inducing cultures at IPTG concentrations from 0 mM to 1 mM. Cells were inoculated as described above and grown at 30°C until the optimal time of induction. Various amounts of IPTG were added and the cultures were incubated at the optimal temperature for protein production.
Enzymatic activities were measured from crude lysates as previously described.
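The sequential, one-factor-at-a-time procedure above can be sketched as follows. This is an illustrative outline, not the authors' code; `measure_activity` is a hypothetical stand-in for assaying a crude lysate under a given set of expression conditions.

```python
def optimize_expression(measure_activity):
    """Optimize temperature, then induction time, then IPTG strength.

    measure_activity(temp_C, induction, iptg_mM) returns an activity value.
    Each parameter is fixed at its best value before the next is scanned,
    mirroring the sequential optimization described in the text.
    """
    temperatures = [16, 25, 30, 37]          # growth temperature (deg C)
    inductions = ['early', 'mid', 'late']    # phase of growth at induction
    iptg_levels = [0.0, 0.01, 0.1, 1.0]      # IPTG concentration (mM)

    # Step 1: scan temperature, with default mid-log induction and 1 mM IPTG
    best_temp = max(temperatures, key=lambda t: measure_activity(t, 'mid', 1.0))
    # Step 2: scan induction time at the optimal temperature
    best_ind = max(inductions, key=lambda i: measure_activity(best_temp, i, 1.0))
    # Step 3: scan IPTG strength at the optimal temperature and induction time
    best_iptg = max(iptg_levels, key=lambda c: measure_activity(best_temp, best_ind, c))
    return best_temp, best_ind, best_iptg
```

Note that one-factor-at-a-time scanning assumes the parameters interact only weakly; a full factorial design would otherwise be needed.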
Co-expression of helper proteins
Cultures were grown in rich media supplemented with 0.2% L-arabinose (Sigma Aldrich A3256) to induce coexpression of helper proteins and grown with optimal growth and induction conditions as previously described. Appropriate enzymatic activities were measured from crude lysates using protocol described previously.
Autoexpression
Cultures were inoculated and grown in Magic Media (Invitrogen cat# K6803) until reaching optimal time of induction. The temperature was shifted to the optimal temperature of production.
Protein activity assays
Urokinase assay
Urokinase activity was quantified using a coupled reaction in a microtiter plate. 50 μl of soluble protein was added to wells containing 50 mM Tris pH 8, 60 mM 6-aminohexanoic acid (Sigma Aldrich, cat# 07260), 0.1 mg/ml bovine plasminogen (American Diagnostica, cat# 416) and 0.4 mM Spectrozyme PL (American Diagnostica, cat# 251) to a final volume of 150 μl. The plate was incubated at 37°C and absorbance at 405 nm was measured for 2 or 3 h until reaching a plateau. Activity is directly proportional to A405nm in the linear range, standardized to protein amount at A595nm using Bradford reagent.
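The normalization used throughout these assays — activity read as the slope of A405nm over time in the linear range, divided by the protein amount from a Bradford (A595nm) measurement — can be sketched as follows. The function name and inputs are illustrative assumptions, not part of the published protocol.

```python
def specific_activity(a405_readings, times_min, protein_mg_per_ml):
    """Return activity (dA405/min per mg protein) from linear-range readings."""
    n = len(times_min)
    mean_t = sum(times_min) / n
    mean_a = sum(a405_readings) / n
    # least-squares slope of A405 versus time (linear range assumed)
    numerator = sum((t - mean_t) * (a - mean_a)
                    for t, a in zip(times_min, a405_readings))
    denominator = sum((t - mean_t) ** 2 for t in times_min)
    slope = numerator / denominator
    # standardize to the protein amount estimated from the Bradford assay
    return slope / protein_mg_per_ml
```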
tPA assay
Plasminogen activation was quantified using a coupled reaction in a microtiter plate. 50 μl of soluble protein was added to wells containing 50 mM Tris-HCl (pH 7.4), 0.01% Tween 80, 0.04 mg/ml human glu-plasminogen (American Diagnostica, cat# 400), and 0.4 mM Spectrozyme PL (American Diagnostica, cat# 251), to a final volume of 250 μl. The plate was incubated at 37°C and absorbance at 405 nm was measured over 2 or 3 h until reaching a plateau. Activity is directly proportional to A405nm in the linear range, standardized to protein amount at A595nm using Bradford reagent.
PhoA assay
PhoA activity was quantified using a chromogenic reaction in a clear-bottom microtiter plate. 20 μl of soluble protein was added to wells containing 180 μl of 20 mM para-nitrophenyl phosphate (pNPP, Sigma Aldrich, cat# N4645), 1 M Tris pH 8, 1 mM ZnAc. The plate was incubated at 37°C and absorbance at 410 nm was measured for 20 minutes. Activity is directly proportional to A410nm in the linear range, standardized to protein amount at A595nm using Bradford reagent.
AppA assay
AppA activity was quantified as described earlier, with slight modifications. Assays were performed in microtiter plates with 20 μl of appropriately diluted soluble protein.
The reaction was stopped with 50 μl 5 M NaOH. AppA activity was measured at A410nm and standardized to protein amount at A595nm using Bradford reagent.
Chitinase assay
Chitinase activity was quantified by a fluorometric assay as follows. In a white opaque microtiter plate, a serial dilution (1:4 to 1:256) of 50 μl of soluble protein was added to wells containing 20 mM NaPO4, 200 mM NaCl, 1 mM EDTA and 20 μM 4-methylumbelliferyl-N,N′,N″-triacetyl-β-chitotrioside (stock in 100% DMSO) (Calbiochem) to a final volume of 200 μl. The plate was incubated at 25°C and fluorescence (excitation 320 nm, emission 460 nm) was measured for 2 to 3 h. Activity in the linear range is directly proportional to A460nm, standardized to protein amount at A595nm using Bradford reagent.
CelZ assay
Activity was measured by incubating known quantities of CelZ with the chromogenic substrate p-nitrophenyl-cellobioside at 50°C in 50 mM HEPES, pH 7.2, for 30-60 min in 50 μL volumes. Reactions were stopped and color was developed by the addition of 12.5 μL 10% w/v Na2CO3, and read at 410 nm.
Cel9A assay
Activity was measured by digestion of carboxymethylcellulose (CMC). Reactions were carried out with known quantities of protein in 50 μL volumes of 1% w/v CMC (medium viscosity, Fluka) for 30-60 min at 50°C in 50 mM HEPES, pH 7.2. Reducing sugars liberated were measured using the 3,5-dinitrosalicylic acid (DNS) method against a panel of glucose standards, read at 540 nm. Activity is expressed in glucose equivalents.
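Converting DNS A540nm readings to glucose equivalents against a panel of standards amounts to a linear fit and inversion, which can be sketched as follows. The standard concentrations in the example are illustrative, not the values used in the study.

```python
def glucose_equivalents(a540_sample, standards):
    """Interpolate a sample A540 reading against (glucose_mg_ml, A540) standards.

    Fits a least-squares line A540 = m * glucose + b through the standards
    and inverts it for the sample reading, giving glucose equivalents.
    """
    n = len(standards)
    mean_x = sum(g for g, _ in standards) / n
    mean_y = sum(a for _, a in standards) / n
    numerator = sum((g - mean_x) * (a - mean_y) for g, a in standards)
    denominator = sum((g - mean_x) ** 2 for g, _ in standards)
    m = numerator / denominator          # slope of the standard curve
    b = mean_y - m * mean_x              # intercept
    return (a540_sample - b) / m         # invert the fit for the sample
```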
Protein purification
vtPA and GLuc
Cells expressing either His-tagged vtPA or His-tagged GLuc from various plasmids were grown with shaking in 500 mL Rich Medium supplemented with the appropriate antibiotics. The optimal amount of IPTG was added after the optimal time of growth at 30°C and the cultures were grown for an additional period at the optimal temperature. Cells were harvested by centrifugation (12000 rpm, 20 min, 4°C), resuspended in Phosphate Buffer (20 mM phosphate buffer, 500 mM NaCl, 20 mM imidazole), and lysed by sonication (8 × 30 s). The insoluble fractions were removed by centrifugation (14000 rpm, 30 min, 4°C). Protein was purified using a HiTrap IMAC FF 1 mL column (GE Healthcare), eluting with 1 M imidazole. Fractions containing protein were pooled, dialyzed into storage buffer (200 mM NaCl, 20 mM Tris HCl pH 7.5, 1 mM EDTA, 0.1% Triton X-100, 50% glycerol), and loaded on an SDS-PAGE gel. Protein amount was determined by Bradford assay using BSA as the standard. The corresponding assays were performed on the purified samples as described above.
Chitinase, AppA and PhoA
Cells expressing His-tagged Chitinase, AppA or PhoA from various plasmids were grown and harvested as described above. The pellet was resuspended in Tris binding buffer (20 mM Tris pH 8, 300 mM NaCl, 10 mM imidazole) and purification was performed as described above.
Cellulase purification
Individual colonies were picked in duplicate and used to inoculate 5 mL LB-carb starter cultures at 37°C. Starter cultures were measured for growth by OD600nm and used to inoculate either 50 or 100 mL cultures of Magic Media + 100 μg/mL carbenicillin in 250 or 500 mL (respectively) baffled flasks to a density of 0.05. Cells were grown at 37°C until the OD600nm reached 1.0, at which time the temperature was dropped to 22-25°C; cultures were grown for a total of 24 h and harvested when 2 consecutive OD600nm measurements (taken at 0.5 h intervals) showed no increase in density. Cells were immediately put on ice, transferred to cold 50 mL conical-bottom tubes, then centrifuged at 4°C for 30 min at 3500 rpm. Cells were resuspended in 10 mL lysis buffer: 1× PBS (Teknova), PMSF, leupeptin, pepstatin, 1 mg/mL lysozyme (egg white, Sigma), 1 U/mL DNase I. Pellets were disrupted by sonicating for 5 minutes (30 s on, 30 s off) on ice. A sample was taken for T. Disrupted cells were spun down at 3500 rpm for 30 min at 4°C. 4 mL fractions of the supernatant were diluted with 2× binding buffer (40 mM imidazole, 1 M NaCl, 0.1 M phosphate, pH 7.5) and centrifuged cold to remove any new precipitate. 8 mL volumes were loaded onto a 1 mL HisTrap FF column, washed with 12 column volumes (CV) of binding buffer, and eluted on a 20-140 mM imidazole gradient, collected in 5 mL fractions (Bio-Rad BioLogic LP + Bio-Frac). Purified proteins were quantified by the Bradford method (Bio-Rad kit). Specific activity was determined using the corresponding enzymatic assay.
Protein sample analysis
AMS alkylation
Cells were grown in rich media supplemented with antibiotics until reaching late-log phase of growth (5 h). OD600nm was measured and cultures were diluted to the lowest OD. Three samples of 1 ml culture were incubated on ice for at least 15 minutes with 15% trichloroacetic acid (TCA). The supernatant was discarded after centrifuging for 10 min at maximum speed. The pellets were washed with 500 μl acetone, mixed by vortexing and centrifuged for 5 min at maximum speed. The pellets were air dried and resuspended in 150 μl of either loading buffer (1× loading buffer, 1% SDS, 0.1 M Tris pH 8), 4-acetamido-4′-maleimidylstilbene-2,2′-disulfonic acid (AMS) buffer (15 mM AMS, 1× loading buffer, 1% SDS, 0.1 M Tris pH 8) or DTT buffer (100 mM DTT, 1× loading buffer, 1% SDS, 0.1 M Tris pH 8). The samples were boiled for 20 minutes at 95°C and incubated at 4°C overnight. Samples resuspended in DTT buffer were incubated on ice for at least 15 minutes with 15% TCA and centrifuged for 10 minutes at maximum speed. The pellet was washed with 500 μl acetone, air dried, and resuspended in AMS buffer. 15 μl of each sample was loaded on an SDS-PAGE gel and probed with the appropriate antibody.
Western blot
Samples were diluted 1:3 in 1× Loading Buffer (New England Biolabs, B7709) supplemented with 1× DTT. Samples were loaded on Daiichi pre-cast 10/20 gels (Cosmo Bio Co. LTD, cat# 414893) and run for 1 h at 30 mA per gel. Proteins were transferred to PVDF membranes (IPVH00010, Millipore) using a wet transfer method for 1.5 h at 500 mA. The membrane was blocked with 5% dry milk (BioRad, 170-6404XTU) in PBS (Gibco, AM9625) for 1 h at room temperature or overnight at 4°C. The membrane was washed 3 × 5 min in PBS, 0.05% Tween and incubated with the appropriate antibody diluted in PBS-T with 1% dry milk for 1 h at room temperature. After washing the membrane as described above, it was incubated, if needed, with a secondary antibody diluted in PBS-T for 1 h at room temperature. After a final wash, the membrane was overlaid with 20X LumiGLO® Reagent and 20X Peroxide (#7003, Cell Signaling Technology) for 30 s. The signal intensity was then measured.
import {connect} from 'react-redux';
import StateType from 'types/StateType';
import {notificationsSelector, newNotificationsFromSelector} from '../../../selectors';
import {setNewNotificationsFromAction} from '../../../actions';
import page from './page';
const mapStateToProps = (state: StateType) => ({
notifications: notificationsSelector(state),
newNotificationsFrom: newNotificationsFromSelector(state),
});
const mapDispatchToProps = {
setNewNotificationsFrom: setNewNotificationsFromAction,
};
export default connect(mapStateToProps, mapDispatchToProps)(page);
|
Hamas political chief Khaled Mashaal will make a diplomatic visit to Russia in July, the Islamist Palestinian group said Monday.
The Russian foreign ministry extended the invitation in May following a meeting between Russian Deputy Foreign Minister Mikhail Bogdanov and the Hamas leadership in Qatar, said Hamas spokesman Husam Badran, who added that the trip would cover Palestinian reconciliation and economic relations, according to Russia Today's Arabic edition.
Russia has expressed support for the Palestinian unity deal, as long as the new government upheld existing treaties with Israel, recognized its right to exist and renounced violence.
However, despite a public promise from Palestinian Authority President Mahmoud Abbas and initial reports that it would agree to such terms, Hamas has stuck to its refusal to recognize Israel’s right to exist and insisted that it would not renounce its violent struggle.
The Palestinian Authority was planning to go ahead with the swearing in of a unity government Monday, although some differences with Hamas remain over the identity of its ministers.
Hamas and Fatah struck a surprise reconciliation deal on April 23. The two entities had been at odds since 2007, when Hamas violently seized the Gaza Strip from the internationally backed Abbas. Hamas, which has carried out scores of bombing, shooting and rocket attacks against Israeli civilians, is considered a terror group by Israel and the West.
Following the announcement of the unity deal, Israel suspended peace negotiations with Abbas’s PA. It has since steadfastly refused to negotiate with any government that includes Hamas unless it officially recognizes Israel and renounces violence.
Top Israeli cabinet members decided on Sunday not to hold any further negotiations with the PA as long as Hamas takes part in government affairs, Army Radio reported. The cabinet further ruled that Israel would reallocate some Palestinian tax money and use the funds to pay off the Authority’s debts to Israeli companies, according to the report. The cabinet also barred three Hamas ministers from attending Monday’s government swearing-in ceremony, according to Israel Radio.
The Israel Air Force struck two targets in Gaza Monday morning in response to rocket fire from the strip over the weekend.
Adiv Sterman and AP contributed to this report. |
from numpy import asarray


def print_matlab(arr):
    """Print an array in MATLAB literal syntax (rows separated by ';')."""
    arr = asarray(arr)
    # number of columns per row; a 1-D array is printed one element per row
    N = 1
    if len(arr.shape) > 1:
        N = arr.shape[1]
    print('[ ', end='')
    count = 0
    for v in arr.ravel():
        if count == N:
            # the current row is full: emit a MATLAB row separator
            print('; ', end='')
            count = 0
        print('%s ' % (v,), end='')
        count += 1
    print(']')

# Example: print_matlab([[1, 2], [3, 4]]) prints "[ 1 2 ; 3 4 ]"
/** \brief Amount of free space (in bytes) between end of slot vector and begin of payloads. */
size_t free_space() const
{
return header_.payload_begin * sizeof(PayloadBlock)
- header_.slot_end * sizeof(Slot);
} |
package model
import (
"database/sql"
"errors"
"fmt"
"github.com/greatdanton/analytics/src/global"
"github.com/greatdanton/analytics/src/memory"
)
// WebsiteURLExist checks if url for this particular user
// already exist
func WebsiteURLExist(userID string, url string) (bool, error) {
var id string
err := global.DB.QueryRow(`SELECT id from website
where owner = $1
AND website_url = $2`, userID, url).Scan(&id)
if err != nil {
// if there are no rows website url does not exist in database
if err == sql.ErrNoRows {
return false, nil
}
		// an actual error occurred during the lookup; return it
		return true, err
	}
	// no error occurred; the url is already present in the database
return true, nil
}
// TrackNewWebsite adds website to the database => the software starts
// tracking records for this website
func TrackNewWebsite(userID string, websiteName string, websiteURL string) error {
// TODO: create short url that do not exist in the in memory database
shortURL, err := CreateUniqueShortURL()
if err != nil {
return fmt.Errorf("TrackNewWebsite: CreateUniqueShortURL error: %v", err)
}
var id string
err = global.DB.QueryRow(`INSERT into website(owner, name, short_url, website_url)
values($1, $2, $3, $4)
RETURNING id`, userID, websiteName, shortURL, websiteURL).Scan(&id)
if err != nil {
return fmt.Errorf("TrackNewWebsite: error while inserting into website db: %v", err)
}
err = memory.Memory.AddWebsite(id, userID, shortURL, websiteURL)
if err != nil {
return fmt.Errorf("Memory.AddWebsite: error: %v", err)
}
return nil
}
// ErrorShortURLExist is returned when a website with the given short url
// already exists in memory (and therefore in the database).
var ErrorShortURLExist = errors.New("Website with this short url already exists")
// EditWebsite handles updating website row
func EditWebsite(userID string, oldWebsite, newWebsite Website) error {
	// if the new short URL already exists in memory (and therefore in the
	// database) and belongs to a different user, refuse the edit; a user
	// may keep their own existing short URL
	if memory.Memory.ShortURLExist(newWebsite.ShortURL) {
		owner, err := memory.Memory.GetOwner(newWebsite.ShortURL)
		if err != nil {
			return err
		}
		if owner != userID {
			return ErrorShortURLExist
		}
	}

	// the short URL is available (or owned by this user); update the database
	_, err := global.DB.Exec(`UPDATE website
SET name = $1, website_url = $2, short_url = $3
where owner = $4
and id = $5`, newWebsite.Name, newWebsite.URL, newWebsite.ShortURL, userID, oldWebsite.ID)
if err != nil {
return err
}
err = memory.Memory.EditWebsite(oldWebsite.ShortURL, newWebsite.ShortURL, newWebsite.URL)
if err != nil {
return err
}
return nil
}
// DeleteWebsite handles website deletion from the db
func DeleteWebsite(userID string, website Website) error {
_, err := global.DB.Exec(`DELETE from website
where owner = $1
and id = $2`, userID, website.ID)
if err != nil {
return err
}
// Delete website from memory
memory.Memory.DeleteWebsite(website.ShortURL)
return nil
}
|
/*
* Copyright (C) 2000-2007 <NAME>, <NAME> and various contributors
* Copyright (C) 2004-2013 <NAME>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies of the Software, its documentation and marketing & publicity
* materials, and acknowledgment shall be given in the documentation, materials
* and software packages that this Software was used.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include "E.h"
#include "animation.h"
#include "backgrounds.h"
#include "eobj.h"
#include "iclass.h"
#include "xwin.h"
static EObj *init_win1 = NULL;
static EObj *init_win2 = NULL;
static char bg_sideways = 0;
void
StartupWindowsCreate(void)
{
Win w1, w2, win1, win2, b1, b2;
Background *bg;
ImageClass *ic;
int x, y, bx, by, bw, bh, dbw;
EObj *eo;
/* Acting only as boolean? */
if (BackgroundFind("STARTUP_BACKGROUND_SIDEWAYS"))
bg_sideways = 1;
ic = ImageclassFind("STARTUP_BAR", 0);
if (!ic)
ic = ImageclassFind("DESKTOP_DRAGBUTTON_HORIZ", 0);
bg = BackgroundFind("STARTUP_BACKGROUND");
if (!ic || !bg)
return;
dbw = Conf.desks.dragbar_width;
if (dbw <= 0)
dbw = 16;
if (bg_sideways)
{
x = WinGetW(VROOT) / 2;
y = 0;
bx = WinGetW(VROOT) - dbw;
by = 0;
bw = dbw;
bh = WinGetH(VROOT);
}
else
{
x = 0;
y = WinGetH(VROOT) / 2;
bx = 0;
by = WinGetH(VROOT) - dbw;
bw = WinGetW(VROOT);
bh = dbw;
}
eo = EobjWindowCreate(EOBJ_TYPE_MISC,
-x, -y, WinGetW(VROOT), WinGetH(VROOT), 1, "Init-1");
if (!eo)
return;
init_win1 = eo;
w1 = EobjGetWin(eo);
win1 = ECreateWindow(w1, x, y, WinGetW(VROOT), WinGetH(VROOT), 0);
eo = EobjWindowCreate(EOBJ_TYPE_MISC,
x, y, WinGetW(VROOT), WinGetH(VROOT), 1, "Init-2");
if (!eo)
return;
init_win2 = eo;
w2 = EobjGetWin(eo);
win2 = ECreateWindow(w2, -x, -y, WinGetW(VROOT), WinGetH(VROOT), 0);
EMapWindow(win1);
EMapWindow(win2);
if (bw > 0 && bh > 0)
{
b1 = ECreateWindow(w1, bx, by, bw, bh, 0);
b2 = ECreateWindow(w2, 0, 0, bw, bh, 0);
EMapRaised(b1);
EMapRaised(b2);
ImageclassApply(ic, b1, 0, 0, 0, ST_SOLID);
ImageclassApply(ic, b2, 0, 0, 0, ST_SOLID);
}
BackgroundSet(bg, win1, WinGetW(VROOT), WinGetH(VROOT));
BackgroundSet(bg, win2, WinGetW(VROOT), WinGetH(VROOT));
StartupBackgroundsDestroy();
EobjMap(init_win1, 0);
EobjMap(init_win2, 0);
EobjsRepaint();
}
void
StartupBackgroundsDestroy(void)
{
BackgroundDestroyByName("STARTUP_BACKGROUND");
BackgroundDestroyByName("STARTUP_BACKGROUND_SIDEWAYS");
}
static int
doStartupWindowsOpen(EObj * eobj __UNUSED__, int remaining,
void *state __UNUSED__)
{
int k, x, y, xOffset, yOffset;
k = 1024 - remaining;
if (bg_sideways)
{ /* so we can have two different slide methods */
x = WinGetW(VROOT) / 2;
xOffset = (x * k) >> 10;
y = 0;
yOffset = 0;
}
else
{
x = 0;
xOffset = 0;
y = WinGetH(VROOT) / 2;
yOffset = (y * k) >> 10;
}
EobjMove(init_win1, -x - xOffset, -y - yOffset);
EobjMove(init_win2, x + xOffset, y + yOffset);
if (remaining > 0)
return 0;
Mode.place.enable_features++;
EobjWindowDestroy(init_win1);
EobjWindowDestroy(init_win2);
init_win1 = NULL;
init_win2 = NULL;
return ANIM_RET_CANCEL_ANIM;
}
void
StartupWindowsOpen(void)
{
int speed, duration;
if (!init_win1 || !init_win2)
return;
Mode.place.enable_features--;
speed = Conf.desks.slidespeed > 0 ? Conf.desks.slidespeed : 500;
duration = 2000000 / speed;
AnimatorAdd(NULL, ANIM_STARTUP, doStartupWindowsOpen, duration, 0, 0, NULL);
}
|
/*******************************************************************************
* This file is part of Shadowfax
* Copyright (C) 2015 Bert Vandenbroucke ([email protected])
*
* Shadowfax is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Shadowfax is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with Shadowfax. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/
/**
* @file SnapshotHandler.hpp
*
* @brief General interfaces for snapshot readers and writers: header
*
* @author Bert Vandenbroucke ([email protected])
*/
#ifndef SNAPSHOTHANDLER_HPP
#define SNAPSHOTHANDLER_HPP
#include "MPIGlobal.hpp" // for rank, local_rank, etc
#include "RestartFile.hpp" // for RestartFile
#include <fstream>  // for ifstream, ofstream
#include <stdio.h> // for remove
#include <string> // for allocator, string, etc
class Header;
class ParticleVector;
class UnitSet;
/**
* @brief General interface for classes that read or write snapshots
*
* Stores the name of the snapshot (or the basic name for snapshot writers) and
* a reference to the internal UnitSet.
*/
class SnapshotHandler {
protected:
/*! @brief Name of the snapshot. Can be either a generic name for snapshot
* writers or an actual filename for snapshot readers */
std::string _name;
/*! @brief Internal simulation UnitSet */
UnitSet& _units;
std::string get_snapshot_name(unsigned int nr, int rank = -1, int size = 1);
public:
SnapshotHandler(std::string name, UnitSet& units);
virtual ~SnapshotHandler() {}
void dump(RestartFile& rfile);
SnapshotHandler(RestartFile& rfile, UnitSet& units);
};
/**
* @brief Abstract interface for snapshot writers
*
* The interface implements a counter keeping the index of the last snapshot
* that was written and keeps track of the internal units, output units and
* basic name of the snapshots.
* Actually writing the snapshot should be done in child classes that implement
* this interface.
*/
class SnapshotWriter : public SnapshotHandler {
protected:
/*! @brief UnitSet to be used in the output file */
UnitSet& _output_units;
/*! @brief Counter in the name of the next snapshot that will be written */
int _lastsnap;
/*! @brief Flag indicating if a separate snapshot file should be written for
* different nodes (necessary when e.g. running OpenMPI over SSH) */
bool _per_node_output;
public:
/**
* @brief Constructor
*
* We check if we have to write per node snapshots or not. This needs to be
* done when the output directory is on a node specific filesystem, in which
* case processes on different nodes cannot access the files created by each
* other.
*
* @param basename Generic name for the snapshots that will be written
* @param units Internal simulation UnitSet
* @param output_units UnitSet used in the output file
* @param lastsnap Counter in the name of the first snapshot that will be
* written
* @param per_node_output Flag indicating if each node should write a
* separate snapshot file or all nodes should write to the same file (if
* possible)
*/
SnapshotWriter(std::string basename, UnitSet& units, UnitSet& output_units,
int lastsnap = 0, bool per_node_output = false)
: SnapshotHandler(basename, units), _output_units(output_units),
_lastsnap(lastsnap), _per_node_output(per_node_output) {
// check if we have different nodes
if(!_per_node_output) {
if(MPIGlobal::local_size < MPIGlobal::size) {
// check if all nodes write to the same filesystem
std::string testname = basename + ".tmp";
if(!MPIGlobal::rank) {
std::ofstream testfile(testname);
testfile << "Written by process with rank 0\n";
}
MyMPI_Barrier();
if(MPIGlobal::rank && !MPIGlobal::local_rank) {
std::ifstream testfile(testname);
if(!testfile) {
_per_node_output = true;
}
}
if(!MPIGlobal::rank) {
remove(testname.c_str());
}
// communicating is not so easy: we cannot broadcast from a
// single process, since not all processes know which ranks
// correspond to local rank 0's
// we therefore perform an allreduce over all processes
// since msg_send will be 0 for all processes other than local
// rank 0's, their contributions will tell us if theirs is 0 or
// 1
int msg_send = _per_node_output;
int msg_recv;
MyMPI_Allreduce(&msg_send, &msg_recv, 1, MPI_INT, MPI_SUM);
if(msg_recv) {
_per_node_output = true;
}
}
}
}
virtual ~SnapshotWriter() {}
/**
* @brief Get a tag discriminating different implementations
*
* @return A std::string tag that is unique for every implementation
*/
virtual std::string get_tag() = 0;
/**
* @brief Write a snapshot file with the given ParticleVector at the given
* time
*
* @param t Current time of the simulation
* @param particles ParticleVector to write out
* @param write_mass Should the mass be written to the snapshot?
*/
virtual void write_snapshot(double t, ParticleVector& particles,
bool write_mass = false) = 0;
/**
* @brief Get the current value of the snapshot counter
*
* @return The snapshot counter
*/
unsigned int get_lastsnap() {
return _lastsnap;
}
/**
* @brief Dump the snapshot writer to the given RestartFile
*
* @param rfile RestartFile to write to
*/
virtual void dump(RestartFile& rfile) {
SnapshotHandler::dump(rfile);
rfile.write(_lastsnap);
rfile.write(_per_node_output);
}
/**
* @brief Restart constructor. Initialize the snapshot writer from the given
* RestartFile
*
* @param rfile RestartFile to read from
* @param units Internal simulation UnitSet
* @param output_units UnitSet used in the output file
*/
SnapshotWriter(RestartFile& rfile, UnitSet& units, UnitSet& output_units)
: SnapshotHandler(rfile, units), _output_units(output_units) {
rfile.read(_lastsnap);
rfile.read(_per_node_output);
}
};
/**
* @brief Abstract interface for snapshot readers
*
* This interface does nothing in itself but defines the read_snapshot function
* that should be implemented by child classes.
*/
class SnapshotReader : public SnapshotHandler {
public:
/**
* @brief Constructor
*
* @param name Filename to read
* @param units Internal simulation UnitSet
*/
SnapshotReader(std::string name, UnitSet& units)
: SnapshotHandler(name, units) {}
/**
* @brief Read the snapshot and store its contents in the given
* ParticleVector
*
* @param particles ParticleVector to fill
* @param read_mass Should the mass be read from the snapshot?
* @return Header containing general information about the snapshot
*/
virtual Header read_snapshot(ParticleVector& particles,
bool read_mass = false) = 0;
};
#endif // SNAPSHOTHANDLER_HPP
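The constructor's shared-filesystem probe above (rank 0 writes a test file; local rank 0 on every other node checks whether it can read it) can be sketched without MPI. The helper below is a hypothetical, single-process analogue of that decision and is not part of Shadowfax:

```python
import os
import tempfile

def needs_per_node_output(directory):
    # Rank-0 side: drop a marker file in the candidate output directory.
    marker = os.path.join(directory, "shadowfax.tmp")
    with open(marker, "w") as f:
        f.write("Written by process with rank 0\n")
    # Other-node side: if the marker cannot be read back, the directory is on
    # a node-local filesystem and each node must write its own snapshot file.
    readable = os.path.exists(marker)
    os.remove(marker)
    return not readable

with tempfile.TemporaryDirectory() as d:
    print(needs_per_node_output(d))  # shared directory -> False
```

In the real constructor the same check is followed by an `MPI_Allreduce` so that every rank, not just the local rank 0's, learns the outcome.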
|
/// AllocateReg - Attempt to allocate one of the specified registers. If none
/// are available, return zero. Otherwise, return the first one available,
/// marking it and any aliases as allocated.
unsigned AllocateReg(const unsigned *Regs, unsigned NumRegs) {
unsigned FirstUnalloc = getFirstUnallocated(Regs, NumRegs);
if (FirstUnalloc == NumRegs)
return 0;
unsigned Reg = Regs[FirstUnalloc];
MarkAllocated(Reg);
return Reg;
} |
/// Returns the device's address on the bus that it's connected to.
pub fn address(&self) -> u8 {
unsafe {
libusb_get_device_address(self.device)
}
} |
/*!
\class ta_t
\brief Timed automaton over a system of synchronized timed processes
*/
class ta_t final : public tchecker::ts::full_ts_t<tchecker::ta::state_sptr_t, tchecker::ta::const_state_sptr_t,
tchecker::ta::transition_sptr_t, tchecker::ta::const_transition_sptr_t,
tchecker::ta::initial_range_t, tchecker::ta::outgoing_edges_range_t,
tchecker::ta::initial_value_t, tchecker::ta::outgoing_edges_value_t> {
public:
ta_t(std::shared_ptr<tchecker::ta::system_t const> const & system, std::size_t block_size);
ta_t(tchecker::ta::ta_t const &) = delete;
ta_t(tchecker::ta::ta_t &&) = delete;
virtual ~ta_t() = default;
tchecker::ta::ta_t & operator=(tchecker::ta::ta_t const &) = delete;
tchecker::ta::ta_t & operator=(tchecker::ta::ta_t &&) = delete;
using tchecker::ts::full_ts_t<tchecker::ta::state_sptr_t, tchecker::ta::const_state_sptr_t, tchecker::ta::transition_sptr_t,
tchecker::ta::const_transition_sptr_t, tchecker::ta::initial_range_t,
tchecker::ta::outgoing_edges_range_t, tchecker::ta::initial_value_t,
tchecker::ta::outgoing_edges_value_t>::status;
using tchecker::ts::full_ts_t<tchecker::ta::state_sptr_t, tchecker::ta::const_state_sptr_t, tchecker::ta::transition_sptr_t,
tchecker::ta::const_transition_sptr_t, tchecker::ta::initial_range_t,
tchecker::ta::outgoing_edges_range_t, tchecker::ta::initial_value_t,
tchecker::ta::outgoing_edges_value_t>::state;
using tchecker::ts::full_ts_t<tchecker::ta::state_sptr_t, tchecker::ta::const_state_sptr_t, tchecker::ta::transition_sptr_t,
tchecker::ta::const_transition_sptr_t, tchecker::ta::initial_range_t,
tchecker::ta::outgoing_edges_range_t, tchecker::ta::initial_value_t,
tchecker::ta::outgoing_edges_value_t>::transition;
virtual tchecker::ta::initial_range_t initial_edges();
virtual void initial(tchecker::ta::initial_value_t const & init_edge, std::vector<sst_t> & v);
virtual tchecker::ta::outgoing_edges_range_t outgoing_edges(tchecker::ta::const_state_sptr_t const & s);
virtual void next(tchecker::ta::const_state_sptr_t const & s, tchecker::ta::outgoing_edges_value_t const & out_edge,
std::vector<sst_t> & v);
using tchecker::ts::full_ts_t<tchecker::ta::state_sptr_t, tchecker::ta::const_state_sptr_t, tchecker::ta::transition_sptr_t,
tchecker::ta::const_transition_sptr_t, tchecker::ta::initial_range_t,
tchecker::ta::outgoing_edges_range_t, tchecker::ta::initial_value_t,
tchecker::ta::outgoing_edges_value_t>::initial;
using tchecker::ts::full_ts_t<tchecker::ta::state_sptr_t, tchecker::ta::const_state_sptr_t, tchecker::ta::transition_sptr_t,
tchecker::ta::const_transition_sptr_t, tchecker::ta::initial_range_t,
tchecker::ta::outgoing_edges_range_t, tchecker::ta::initial_value_t,
tchecker::ta::outgoing_edges_value_t>::next;
virtual bool satisfies(tchecker::ta::const_state_sptr_t const & s, boost::dynamic_bitset<> const & labels) const;
virtual void attributes(tchecker::ta::const_state_sptr_t const & s, std::map<std::string, std::string> & m) const;
virtual void attributes(tchecker::ta::const_transition_sptr_t const & t, std::map<std::string, std::string> & m) const;
tchecker::ta::system_t const & system() const;
private:
std::shared_ptr<tchecker::ta::system_t const> _system;
tchecker::ta::state_pool_allocator_t _state_allocator;
tchecker::ta::transition_pool_allocator_t _transition_allocator;
};
/*! Same as rotate(const Quaternion&) but \p q may be modified to satisfy the rotation constraint().
Its new value corresponds to the rotation that has actually been applied to the Frame. */
void Frame::rotate(Quaternion& q)
{
if (constraint())
constraint()->constrainRotation(q, this);
q_ *= q;
q_.normalize();
Q_EMIT modified();
} |
class MelodicInterval:
"""Represents an actual MelodicInterval.
Attributes
----------
interval : Interval
order : Order
octaves : int
Number of octaves separating the 2 notes.
"""
def __init__(self, interval, order, octaves):
"""Constructor method.
Parameters
----------
interval : Interval
order : Order
octaves : int
"""
self.interval = interval
self.order = order
self.octaves = octaves
def __eq__(self, other):
return (
self.__class__ == other.__class__
and self.interval == other.interval
and self.order == other.order
and self.octaves == other.octaves
)
@classmethod
def create_melodic_interval(cls, bottom_note_int, top_note_int):
"""Constructs a MelodicInterval given 2 note integers,
Parameters
----------
bottom_note_int : int
top_note_int : int
Returns
-------
MelodicInterval
"""
number_of_octaves = (top_note_int - bottom_note_int) // 12
if number_of_octaves < 0:
number_of_octaves = (number_of_octaves + 1) * -1
return cls(
Interval.get_interval(bottom_note_int, top_note_int),
Order.check_order(bottom_note_int, top_note_int),
number_of_octaves,
)
def swap_order(self):
"""Changes a MelodicInterval with Ascending Order to Descending Order and
vice versa.
Returns
-------
MelodicInterval
"""
new_order = self.order.swap_order()
return MelodicInterval(self.interval, new_order, self.octaves)
def create_note_int(self, note_int):
"""Generates a note_int based on MelodicInterval.
Parameters
----------
note_int : int
Returns
-------
int
"""
if self.order == Order.Static:
return note_int
octave_intervals = self.octaves * 12
if self.order == Order.Ascending:
new_note_int = note_int + self.interval.value + octave_intervals
elif self.order == Order.Descending:
new_note_int = note_int - self.interval.value - octave_intervals
# TODO: Find more elegant way of avoiding negative Note values.
while new_note_int < 0:
new_note_int += 12
return new_note_int |
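The floor-division step in `create_melodic_interval` is the subtle part: Python's `//` rounds toward negative infinity, so a descending pair yields one octave too many, which the `(n + 1) * -1` branch corrects. A small standalone helper (hypothetical, no Interval/Order needed) reproduces just that arithmetic:

```python
def octaves_between(bottom_note_int, top_note_int):
    # Signed octave span via floor division, as in create_melodic_interval.
    n = (top_note_int - bottom_note_int) // 12
    # For descending pairs, floor division overshoots by one octave;
    # (n + 1) * -1 maps the result back to a non-negative octave count.
    if n < 0:
        n = (n + 1) * -1
    return n

print(octaves_between(60, 76))  # ascending major third plus an octave -> 1
print(octaves_between(76, 60))  # same notes descending -> 1
```

Without the correction, `(60 - 76) // 12` would give `-2`, overstating the distance between the two notes.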
from unittest import TestCase
from lib.query_executor.executors.sqlalchemy import is_dialect_available
class IsDialectAvailableTestCase(TestCase):
def test_existing_dialect(self):
# We can guarantee sqlite should be always available
self.assertTrue(is_dialect_available("sqlite"))
def test_non_existent_dialect(self):
self.assertFalse(is_dialect_available("fakeMysql"))
|
Manchester United have snapped up five of Huddersfield Town’s fledgling talents after the downgrading of the Yorkshire club’s academy.
Sportsmail understands the teenagers signed deals at Old Trafford this week after trials which included friendlies behind closed doors.
Huddersfield decided to scrap teams from their Under 16s down earlier this month.
Man United academy boss Nicky Butt has seen his ranks boosted by five Huddersfield kids
That left tens of youngsters — two of whom recently earned England Under 16 call-ups — released without warning.
United have been quick off the mark to tie some down for nothing and Premier League rivals are battling for the others.
United’s new recruits have the option to carry on their education in Yorkshire or could switch to Ashton-on-Mersey School, a 10-minute drive from United’s Carrington training base.
Meanwhile, United fear that Paul Pogba’s hamstring injury could keep the French midfielder out of action until after the international break in mid-November. |
/// Returns the RGB pixel values of the specified coordinate.
fn orig_at(&self, coords: (u32, u32)) -> (u8, u8, u8) {
let idx = (coords.1 * self.size.0 + coords.0) as usize;
let red = self.orig_buf[idx * self.bytes_per_pixel + 0];
let green = self.orig_buf[idx * self.bytes_per_pixel + 1];
let blue = self.orig_buf[idx * self.bytes_per_pixel + 2];
(red, green, blue)
}
***Update: Ubi and SmartThings will work together to provide a complete voice-controlled home automation platform (check out the Update tab above for videos). Check out SmartThings here. ***
New Video: Ubi & SmartThings
Ubi - Always on. Always ready to help.
Ubi is a voice-activated computer that plugs into a wall outlet. You talk to the Ubi and it talks back. It directly connects to the Internet through wifi.
"Look ma, no hands!"
We believe people want to do things when they're at home - they clean, they fold laundry, they cook, they eat, they spend time with loved ones. These are all things that (for the most part) take up use of our arms and hands. When we're at home, we'd rather use our limbs for other activities than typing, scrolling, or swiping.
Ubi is short for ubiquitous computer because it's always on, always listening, always ready to help. It can scribe, listen, analyze. Ubi will either talk back to you the information you seek or indicate information through multi-color lights.
Ubi listens to its environment and senses it through sound, temperature, light, pressure, and humidity. It can record this information or use it to trigger events and communication.
What can it do?
Ubi can be used for potentially hundreds of applications. The applications we plan to ship with the Ubi are:
Voice-enabled Internet search
Speakerphone
Indicator light (light changing based on events, e.g. weather, stock, email)
Home speaker system with sound piping
Virtual assistant (audio calendar, feed reader, podcast etc)
Voice memos
Alarm clock
Intercom system
Baby monitor
Noise pollution monitor
Controlling the climate of your home perfectly (through web enabled thermostats like Nest)
A Helping Hand
We see a huge potential for Ubi to assist those who have visual, hearing, or mobility impairments. With its indicator lights and talk-to-activate functionality, Ubi is super simple to setup and use. We want the Ubi to make it easier for our parents and loved ones to stay connected with us and the world.
Your Kickstarter Contribution
We're serious about making this technology of our dreams easily available to everyone. Your Kickstarter contribution is going to allow us to get safety approval (FCC and CE) so that it can be used everywhere. It will also help driving down the cost of making the Ubi so that everyone can afford at least one. In a nutshell, your contribution will:
Allow us to meet regulatory approval
Get our first production run in place (tooling, machining, and chip printing - we are working with local firms who are very talented)
Allow us to complete the initial apps we're going to deliver with the Ubi
Rewards
Each Ubi reward above the $1 level comes with one Ubi unit that has wifi and a slew of other functionality. It also includes the Ubi web, iPhone and Android app and access to the Ubi portal to monitor and setup the Ubi remotely. You'll also get early access to the API.
Ubi is available in arctic white or midnight black. Please specify which is your preference when you back the project.
$1 - Access to backer-only updates and early offers when product is launched
$149 - Early backers - 1 Ubi (limited to first 100)
$189 - Single Ubi
$349 - Two Ubis
$479 - Three-pack of Ubis
$649 - Five-pack of Ubis
NEW! - $1249 - The Inner Sanctum - The Ubi 10-pack and much more
How it works
Ubi plugs into a wall outlet and accesses the Internet through a wifi connection. It has a microphone and speakers and listens for commands. Saying "Ubi" wakes up the Ubi for receiving verbal commands. You can then instruct the Ubi to do your bidding. Ubi will receive plain language commands. Ubi communicates back to you through speech or by using lights.
Onboard the Ubi are sensors to monitor your environment:
Temperature
Humidity
Air pressure
Ambient light
This data can be stored online or used to trigger alerts to your mobile device or email.
The Ubi runs Android with a powerful processor to perform voice recognition and also has the ability to connect to other devices. You can plug in speakers, USB drives, or connect through Bluetooth directly to your iPhone or Android device. For developers, you can also communicate with potentially thousands of devices (through RF, wifi, or Bluetooth) and we're making the device open so peripherals and other applications can be used with it.
Technical Specs on the Ubi
Android 4.1 Jelly Bean
800 MHz ARM Cortex-A8 Processor
1 GB RAM
802.11 b/n/g Wifi Enabled (WEP, WPA, and WPA2 encryption)
Hi fidelity speakers and omni-directional microphone
USB 2.0 with 5 V power supply
Bluetooth 4.0
Temperature, humidity, and air pressure sensors
Ambient light sensor
RF Transceiver Module
Plugs into a standard NA 110 V, 15A power outlet (world versions to be forth-coming) UPDATE: NA two-prong plug, but 100-240 VAC, 50-60 Hz support! UPDATE #2: worldwide plug support available!
Dimensions (approximate): 4.0" x 4.0" x 1.1"
Where We're At
We've been working for over ten months on the concept, design, and prototyping of the Ubi and we're now at a point where we're ready to bring it to the Kickstarter community. We've also built an early working prototype and have been refining the design to make it more compact and easier to use. We've been in contact with manufacturers and suppliers and have spec'ed out the costs for bringing the Ubi to market. Ubi is currently slated to work in English with North American voltages. We hope to expand this to other regions and languages.
We're working with local and overseas suppliers for sourcing the Ubi's electronics and plan on completing the machining, assembly, testing and certification of the Ubi in the Toronto area. We've also worked with certification authorities for pricing of safety and communication testing of the Ubi and inspection will be completed at the final point of assembly.
Our team is experienced in working on dozens of projects for engineering research institutions. All of our team members have engineering backgrounds and have delivered on real engineering projects in the past.
Sample Data from the Ubi
Below is an example of some of the data that will be available from the Ubi as it monitors different aspects of its surrounding environment. The Ubi can constantly monitor your home to provide triggers or feedback.
Special Thanks and More |
United States Supreme Court case
Michael M. v. Superior Court of Sonoma County, 450 U.S. 464 (1981), was a United States Supreme Court case over the issue of gender bias in statutory rape laws. The petitioner argued that the statutory rape law discriminated based on gender and was unconstitutional. The court ruled that this differentiation passes intermediate scrutiny under the Equal Protection clause because it serves an important state goal, stating that sexual intercourse entails a higher risk for women than men. Thus, the court found the law justified.[1]
Background [ edit ]
In June 1978, Sharon, a sixteen-year-old female, was spending time with her sister and three other males.
A charge of statutory rape was filed against Michael in the Court of Sonoma County, CA. In California, statutory rape was, at the time, described as "an act of sexual intercourse accomplished with a female not the wife of the perpetrator, where the female is under the age of 18 years."[2] The language of the statute made it so that only the male involved in the act could be found criminally liable, even if the act was consensual. Michael M. challenged the constitutionality of the law on the basis of Equal Protection Clause of the Fourteenth Amendment. The Equal Protection Clause prevents the state from denying "any person within its jurisdiction the equal protection of the laws." He claimed that the law discriminated based on gender, denied him protection of the law, and therefore violated the Constitution. The case was brought before the Supreme Court in 1980.[3]
There was no charge of forcible rape (see controversies below).
Opinion of the court [ edit ]
Through intermediate scrutiny, the Court upheld the statute and its gender-based distinction because it helped to further an important state goal. It was a 5-4 vote upholding California's statute. However, the Justices who voted in the majority could not decide on a reason for their ruling, so the decision is considered a plurality.[4]
Justices Rehnquist, Burger, Stewart, and Powell voted with the plurality and Justice Blackmun concurred in the judgment only. The main reasons behind their decision were that young females already faced a significant deterrent from engaging in sexual intercourse and that the statute furthered the state goal of preventing teenage pregnancy. In his written opinion, Justice Rehnquist said, "[t]he statute protects women from sexual intercourse and pregnancy at an age when the physical, emotional, and psychological consequences are particularly severe. Because virtually all of the significant harmful and identifiable consequences of teenage pregnancy fall on the female, a legislature acts well within its authority when it elects to punish only the participant who, by nature, suffers few of the consequences of his conduct."[5] Because young women are faced with the risk of unwanted pregnancy when they engage in sexual intercourse, they already face a substantial deterrent and therefore don't necessarily need to be included in the law. The risks and consequences associated with teenage pregnancy are, according to the Court, enough of a discouragement to females. However, because males don't face the same physical, mental, and emotional risks associated with sex and teen pregnancy, "imposing criminal penalties on men was necessary to 'roughly equalize' the deterrents on the sexes."[6] The Court also said that because the statute helped to further a major goal of the state, it was constitutional and should be upheld. This form of judicial review is known as intermediate scrutiny.
In order to pass an intermediate scrutiny test, "the challenged law must further an important government interest by means that are substantially related to that interest".[7] According to Justice Rehnquist, the law aided in the prevention of teen pregnancy, which was a major goal of the state of California.[8] In the trial, the state of California argued that, "the language of [the statutory rape law] and the policy and intent of the California legislature evinced in other legislation demonstrate that the prevention of pregnancy and the prevention of physical harm to female minors are the primary purposes underlying [the law],"[9] This argument emphasizes how the gender bias found in California's statutory rape law aids in furthering the state's goal of preventing teenage pregnancy. The natural discouragements that females have in regards to sexual intercourse coupled with this statute and its singling out of males as the sole perpetrators together form a significant deterrent keeping teens from engaging in sexual intercourse. The statute takes steps to avoid teenage pregnancy and therefore helps to advance the state's goals. It was because of these reasons that the Court upheld the law.
Dissenting opinion [ edit ]
Justices Brennan, White, Marshall, and Stevens dissented. The minority stated that the majority placed "too much emphasis on the desirability of achieving the State's asserted statutory goal - prevention of teenage pregnancy - and not enough emphasis on the fundamental question of whether the sex-based discrimination in the California statute is substantially related to the achievement of that goal." The dissenters felt that Rehnquist's opinion emphasized the goal of the state without any regard to the means or to the actual question at hand. They questioned whether gender-neutral statutory rape law would actually be harmful to California's goal of lowering teen pregnancy rates, since no evidence that a gender-biased law would be beneficial was provided.
Justice Brennan wrote, "[t]he burden is on the government to prove both the importance of its asserted objective and the substantial relationship between the classification and that objective. And the State cannot meet that burden without showing that a gender-neutral statute would be a less effective means of achieving that goal."[10] Without any factual evidence or comparison, according to the dissenters it is difficult to tell whether a gender-biased statute actually lowers teen pregnancy rates.
Brennan also cited that at the time of the trial, thirty-seven other states had adopted gender-neutral statutory rape laws. He hypothesized that gender-neutral laws might be a greater deterrent than non-gender neutral laws because there would be "twice as many potential violators." Justice Stevens added that he thought there was no reason to not include a woman in the law because women are "capable of using [their] own judgment of whether or not to assume the risk of sexual intercourse".[11]
Significance [ edit ]
The Michael M. v. Superior Court of Sonoma County case upheld that gender biased statutory rape laws did not violate the Equal Protection Clause of the Fourteenth Amendment to the Constitution. It demonstrated that laws can be applied differently to men and women and remain constitutional as long as the state can justify doing so.
Controversy Surrounding the Case [ edit ]
There is some controversy surrounding not the case or the issue itself, but rather the original charge Michael M. was faced with. Some critics of the case question why the defendant was charged with statutory rape and not with forcible rape. Because Michael exerted force on Sharon until she submitted to sex, some believe that, "[t]his is a case of forcible rape. But neither the California courts nor the Supreme Court saw it that way, and this was exactly what some feminists feared. Even though Sharon said no and was punched, this case was immediately charged with statutory rape." Justice Blackmun addressed these concerns: "I think...that it is only fair to point out that [Michael's] partner, Sharon, appears not to have been an unwilling participant in at least the initial stages of the intimacies that took place. [Michael] and Sharon's non-acquaintance...; their drinking; their withdrawal from others of the group; their foreplay, in which she willingly participated and seems to have encouraged; and the closeness of their ages are factors that should make this case an unattractive one to prosecute at all...especially as a felony and rather than as a misdemeanor . . . But the state has chosen to prosecute in that manner, and the facts, I reluctantly conclude, may fit the crime." [12]
Later outcome [ edit ]
California changed its law to make unlawful sexual intercourse a gender-neutral crime, in that all forms of sexual conduct with a person under 18 are illegal. This means that if a 16-year-old boy and a 17-year-old girl have consensual sex, both can be charged with a crime. However, penalties have been reduced from a felony to a misdemeanor if the older participant is no more than 4 years older. |
A while back I went on the hunt for good, durable gloves that would withstand the ravages of midwest tow ropes. A few searches and forums suggested the insulated Kinco 901 gloves , so I ordered a pair and put them through the wringer. After riding them for nearly an entire season, here’s how they stack up.
Update January 27, 2015: Still rocking the same pair, probably close to 65 days on them so far and aside from a few spots starting to wear through on my left palm/fingers (from repetitive tow-rope abuse) they are no worse for wear than they are in the picture below.
Durability: Hands-down the most durable glove I’ve used, probably ever. The Kinco 901 gloves are made from breathable pigskin leather, with reinforced palms and double-stitched seams. While I have not ridden exclusively tow-ropes, I ride a lot of tow-rope laps, and where most snowboard gloves start to fall apart in as little as a single day on the rope, my Kinco 901’s are still holding up really well, with only minimal signs of wear. Currently they have about 65 days on them.
Comfort: These are pretty rugged leather and until they break in, like a baseball mitt they are kind of stiff. However, the fleece lining is super soft and I haven’t had any problems staying warm even in 0-degree temperatures. Overall these are a pretty comfortable glove.
Fit: I’d say they run a little larger than true-to-size. I normally wear between an L and an XL so I ordered the XL. They’re bigger than I’d prefer, but not intolerable once they break in a little bit. Remember that these are work gloves designed to fit oversized man hands. I don’t think you need to size down with these, but I wouldn’t recommend sizing up.
Styling: OK so the Kinco 901 is not exactly a fashionista. They’re work gloves and they look like work gloves. Plain and simple. They are also available as mittens and as insulated work gloves.
Waterproofing: You’re gonna have to DIY with Sno-seal or another waterproofing product because the Kinco 901 gloves are not waterproof. A Mountain Journey has an excellent tutorial for anyone looking to sno-seal or waterproof their Kinco gloves or mitts.
Pricing: Depending on the model, you can get them for $20-30 a pair on Amazon.
Overall: This is a strong buy recommendation unless you’re a beginner who’s still spending a lot of time on the ground/in the snow, where the lack of waterproofing might be an issue. You could pay 2 or 3 times as much for gloves that will most likely fall apart on you in less than a week. Comfortable, warm and super durable, the Kinco 901 is exactly what you need if you’re lapping tow rope terrain parks, but they’d be a perfectly fine pair of gloves for just about anywhere else on the mountain, too.
Smash that sign up button for our monthly newsletter People Skate & Snowboard | Michigan's premiere skate & snow shop
evo.com | Great price match policy & great selection of closeout gear |
Study on 340 GHz Wave Scintillation Characteristics Based on Experimental Data
The near-ground scintillation characteristics at 340 GHz are analyzed based on experimental data. The experiment was carried out outdoors rather than in a lab in order to study the real atmospheric influence on the propagation of a 340 GHz wave. Results show that near-ground scintillation is relatively weak at this frequency, and that the probability density function of the scintillation amplitude conforms to a Gaussian distribution. The correlation between meteorological parameters and the scintillation index S4 in June is also studied; it is concluded that water vapour density is the main factor affecting scintillation at 340 GHz.
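The scintillation index S4 mentioned in the abstract is conventionally defined from received-intensity samples as S4² = (⟨I²⟩ − ⟨I⟩²)/⟨I⟩². A minimal NumPy sketch of that definition (the variable names and the synthetic signal below are illustrative, not taken from the paper's data):

```python
import numpy as np

def s4_index(intensity: np.ndarray) -> float:
    """Scintillation index S4 from received-intensity samples:
    S4^2 = (<I^2> - <I>^2) / <I>^2."""
    m = intensity.mean()
    return float(np.sqrt((np.square(intensity).mean() - m ** 2) / m ** 2))

# Weakly scintillating signal: small fluctuations around a constant level.
rng = np.random.default_rng(0)
weak = 1.0 + 0.01 * rng.standard_normal(100_000)
print(round(s4_index(weak), 3))  # ≈ 0.01 — "relatively weak" scintillation
```

A larger S4 (approaching 1) would indicate strong scintillation; the weak values reported in the paper are consistent with near-ground propagation at this frequency.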
/*
* Copyright (C) 2017 Kaspar Schleiser <[email protected]>
* Copyright (C) 2021 Freie Universität Berlin
*
* This file is subject to the terms and conditions of the GNU Lesser
* General Public License v2.1. See the file LICENSE in the top level
* directory for more details.
*/
/**
* @ingroup net_sock_dodtls
* @{
* @file
* @brief sock DNS client implementation
* @author Kaspar Schleiser <[email protected]>
* @author Martine S. Lenders <[email protected]>
* @}
*/
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include "mutex.h"
#include "net/credman.h"
#include "net/dns.h"
#include "net/dns/cache.h"
#include "net/dns/msg.h"
#include "net/iana/portrange.h"
#include "net/sock/dtls.h"
#include "net/sock/udp.h"
#include "net/sock/util.h"
#include "net/sock/dodtls.h"
#include "random.h"
#include "ztimer.h"
#define ENABLE_DEBUG 0
#include "debug.h"
/* min domain name length is 1, so minimum record length is 7 */
#define SOCK_DODTLS_MIN_REPLY_LEN (unsigned)(sizeof(dns_hdr_t) + 7)
/* see https://datatracker.ietf.org/doc/html/rfc8094#section-3.1 */
#define SOCK_DODTLS_SESSION_TIMEOUT_MS (15U * MS_PER_SEC)
#define SOCK_DODTLS_SESSION_RECV_TIMEOUT_MS (1U * MS_PER_SEC)
/* Socks to the DNS over DTLS server */
static uint8_t _dns_buf[CONFIG_DNS_MSG_LEN];
static sock_udp_t _udp_sock;
static sock_dtls_t _dtls_sock;
static sock_dtls_session_t _server_session;
/* Mutex to access server sock */
static mutex_t _server_mutex = MUTEX_INIT;
/* Type of the server credentials, stored for eventual credential deletion */
static credman_type_t _cred_type = CREDMAN_TYPE_EMPTY;
/* Tag of the server credentials, stored for eventual credential deletion */
static credman_tag_t _cred_tag = CREDMAN_TAG_EMPTY;
static uint16_t _id = 0;
static inline bool _server_set(void);
static int _connect_server(const sock_udp_ep_t *server,
const credman_credential_t *creds);
static int _disconnect_server(void);
static uint32_t _now_ms(void);
static void _sleep_ms(uint32_t delay);
int sock_dodtls_query(const char *domain_name, void *addr_out, int family)
{
int res;
uint16_t id;
if (strlen(domain_name) > SOCK_DODTLS_MAX_NAME_LEN) {
return -ENOSPC;
}
res = dns_cache_query(domain_name, addr_out, family);
if (res) {
return res;
}
if (!_server_set()) {
return -ECONNREFUSED;
}
mutex_lock(&_server_mutex);
id = _id++;
for (int i = 0; i < CONFIG_SOCK_DODTLS_RETRIES; i++) {
uint32_t timeout = CONFIG_SOCK_DODTLS_TIMEOUT_MS * US_PER_MS;
uint32_t start, send_duration;
size_t buflen = dns_msg_compose_query(_dns_buf, domain_name, id,
family);
start = _now_ms();
res = sock_dtls_send(&_dtls_sock, &_server_session,
_dns_buf, buflen, timeout);
send_duration = _now_ms() - start;
        if (send_duration > CONFIG_SOCK_DODTLS_TIMEOUT_MS) {
            res = -ETIMEDOUT;
            goto out;   /* don't return with _server_mutex still held */
        }
        timeout -= send_duration * US_PER_MS;   /* timeout is in microseconds */
        if (res <= 0) {
            _sleep_ms(timeout / US_PER_MS);
continue;
}
res = sock_dtls_recv(&_dtls_sock, &_server_session,
_dns_buf, sizeof(_dns_buf), timeout);
if (res > 0) {
if (res > (int)SOCK_DODTLS_MIN_REPLY_LEN) {
uint32_t ttl = 0;
if ((res = dns_msg_parse_reply(_dns_buf, res, family,
addr_out, &ttl)) > 0) {
dns_cache_add(domain_name, addr_out, res, ttl);
goto out;
}
}
else {
res = -EBADMSG;
}
}
}
out:
memset(_dns_buf, 0, sizeof(_dns_buf)); /* flush-out unencrypted data */
mutex_unlock(&_server_mutex);
return res;
}
int sock_dodtls_get_server(sock_udp_ep_t *server)
{
int res = -ENOTCONN;
assert(server != NULL);
mutex_lock(&_server_mutex);
if (_server_set()) {
sock_udp_get_remote(&_udp_sock, server);
res = 0;
}
mutex_unlock(&_server_mutex);
return res;
}
sock_dtls_t *sock_dodtls_get_dtls_sock(void)
{
return &_dtls_sock;
}
sock_dtls_session_t *sock_dodtls_get_server_session(void)
{
return &_server_session;
}
int sock_dodtls_set_server(const sock_udp_ep_t *server,
const credman_credential_t *creds)
{
return (server == NULL)
? _disconnect_server()
: _connect_server(server, creds);
}
static inline bool _server_set(void)
{
return _cred_type != CREDMAN_TYPE_EMPTY;
}
static void _close_session(credman_tag_t creds_tag, credman_type_t creds_type)
{
sock_dtls_session_destroy(&_dtls_sock, &_server_session);
sock_dtls_close(&_dtls_sock);
credman_delete(creds_tag, creds_type);
sock_udp_close(&_udp_sock);
}
static int _connect_server(const sock_udp_ep_t *server,
const credman_credential_t *creds)
{
int res;
sock_udp_ep_t local = SOCK_IPV6_EP_ANY;
/* server != NULL is checked in sock_dodtls_set_server() */
assert(creds != NULL);
mutex_lock(&_server_mutex);
res = credman_add(creds);
if (res < 0 && res != CREDMAN_EXIST) {
DEBUG("Unable to add credential to credman\n");
switch (res) {
case CREDMAN_NO_SPACE:
res = -ENOSPC;
break;
case CREDMAN_ERROR:
case CREDMAN_INVALID:
case CREDMAN_TYPE_UNKNOWN:
default:
res = -EINVAL;
break;
}
goto exit;
}
res = sock_dtls_establish_session(&_udp_sock, &_dtls_sock, &_server_session,
creds->tag, &local, server, _dns_buf,
sizeof(_dns_buf));
_cred_type = creds->type;
_cred_tag = creds->tag;
_id = (uint16_t)(random_uint32() & 0xffff);
exit:
mutex_unlock(&_server_mutex);
return (res > 0) ? 0 : res;
}
static int _disconnect_server(void)
{
int res = 0;
mutex_lock(&_server_mutex);
if (!_server_set()) {
goto exit;
}
_close_session(_cred_tag, _cred_type);
_cred_tag = CREDMAN_TAG_EMPTY;
_cred_type = CREDMAN_TYPE_EMPTY;
exit:
mutex_unlock(&_server_mutex);
return res;
}
static uint32_t _now_ms(void)
{
return ztimer_now(ZTIMER_MSEC);
}
static void _sleep_ms(uint32_t delay)
{
ztimer_sleep(ZTIMER_MSEC, delay);
}
|
When I contemplated how to make vegan marshmallows, my mind wandered toward daifuku, the Japanese rice-based confection that, not unlike marshmallows, has a springy and sticky quality. So I used the sweet sticky rice powder, mochiko, in this recipe, which results in a bit of a marshmallow/mochi hybrid. Looking for a substitute for the protein in the animal-derived gelatins, I initially used organic soy flour. But it contains some fat, which inhibits stiff peaks from forming when beaten with xanthan gum (a mucous-y substance that is an excellent stabilizer and binder), cream of tartar (which helps to create volume) and water.
Thanks to the vegan marshmallow recipe on www.meatandeggfree.com, I turned to fat-free soy isolate powder, which is available at many health food stores and makes a world of difference.
These marshmallows are tasty eaten plain, added to hot cocoa, or toasted (in the toaster oven if you don't have a campfire nearby) with graham crackers and vegan chocolate. |
# -*- coding: utf-8 -*-
import os
import sys
import random
import time
import numpy as np
import codecs
import cv2
import xml.etree.ElementTree as ET
from xml.etree.ElementTree import SubElement
def process_convert(name, DIRECTORY_ANNOTATIONS, img_path, save_xml_path):
# Read the XML annotation file.
filename = os.path.join(DIRECTORY_ANNOTATIONS, name)
    try:
        tree = ET.parse(filename)
    except (ET.ParseError, FileNotFoundError):
        print('error:', filename, 'missing or unparsable')
        return False
root = tree.getroot()
size = root.find('size')
if size is None:
img = cv2.imread(img_path)
print('jpg_path', img_path, img.shape)
shape = [int(img.shape[0]), int(img.shape[1]), int(img.shape[2])]
# size = SubElement(root, 'size')
elif size.find('height').text is None or size.find('width').text is None:
img = cv2.imread(img_path)
print('jpg_path height', img_path, img.shape)
shape = [int(img.shape[0]), int(img.shape[1]), int(img.shape[2])]
elif int(size.find('height').text) == 0 or int(
size.find('width').text) == 0:
img = cv2.imread(img_path)
print('jpg_path zero', img_path, img.shape)
shape = [int(img.shape[0]), int(img.shape[1]), int(img.shape[2])]
else:
shape = [
int(size.find('height').text),
int(size.find('width').text),
int(size.find('depth').text)
]
height = size.find('height')
height.text = str(shape[0])
width = size.find('width')
width.text = str(shape[1])
for obj in root.findall('object'):
difficult = int(obj.find('difficult').text)
content = obj.find('name').text
content = content.replace('\t', ' ')
#if int(difficult) == 1 and content == '&*@HUST_special':
        '''
        Here, HUST_vertical is treated as text.
        '''
if difficult == 0 and content != '&*@HUST_special' and content != '&*HUST_shelter':
label_name = 'text'
else:
label_name = 'none'
bbox = obj.find('bndbox')
if obj.find('content') is None:
content_sub = SubElement(obj, 'content')
content_sub.text = content
else:
obj.find('content').text = content
name_ele = obj.find('name')
name_ele.text = label_name
xmin = bbox.find('xmin').text
ymin = bbox.find('ymin').text
xmax = bbox.find('xmax').text
ymax = bbox.find('ymax').text
x1 = xmin
x2 = xmax
x3 = xmax
x4 = xmin
y1 = ymin
y2 = ymin
y3 = ymax
y4 = ymax
if bbox.find('x1') is None:
x1_sub = SubElement(bbox, 'x1')
x1_sub.text = x1
x2_sub = SubElement(bbox, 'x2')
x2_sub.text = x2
x3_sub = SubElement(bbox, 'x3')
x3_sub.text = x3
x4_sub = SubElement(bbox, 'x4')
x4_sub.text = x4
y1_sub = SubElement(bbox, 'y1')
y1_sub.text = y1
y2_sub = SubElement(bbox, 'y2')
y2_sub.text = y2
y3_sub = SubElement(bbox, 'y3')
y3_sub.text = y3
y4_sub = SubElement(bbox, 'y4')
y4_sub.text = y4
else:
bbox.find('y1').text = ymin
bbox.find('y2').text = ymin
bbox.find('y3').text = ymax
bbox.find('y4').text = ymax
#print(save_xml_path)
tree.write(save_xml_path)
return True
def process_convert_txt(name, DIRECTORY_ANNOTATIONS):
# Read the XML annotation file.
filename = os.path.join(DIRECTORY_ANNOTATIONS, name)
    try:
        tree = ET.parse(filename)
    except (ET.ParseError, FileNotFoundError):
        print('error:', filename, 'missing or unparsable')
        return
root = tree.getroot()
all_txt_line = []
for obj in root.findall('object'):
bbox = obj.find('bndbox')
difficult = int(obj.find('difficult').text)
content = obj.find('content')
if content is not None:
content = content.text
else:
content = 0
if difficult == 1 and content == '&*@HUST_special':
continue
xmin = bbox.find('xmin').text
ymin = bbox.find('ymin').text
xmax = bbox.find('xmax').text
ymax = bbox.find('ymax').text
x1 = xmin
x2 = xmax
x3 = xmax
x4 = xmin
y1 = ymin
y2 = ymin
y3 = ymax
y4 = ymax
all_txt_line.append('{} {} {} {} {} {} {} {}\n'.format(
x1, y1, x2, y2, x3, y3, x4, y4))
txt_name = os.path.join(DIRECTORY_ANNOTATIONS, name[:-4] + '.txt')
with codecs.open(txt_name, 'w', encoding='utf-8') as f:
f.writelines(all_txt_line)
def get_all_img(directory, split_flag, logs_dir, output_dir):
count = 0
ano_path_list = []
img_path_list = []
if output_dir is not None and not os.path.exists(output_dir):
os.makedirs(output_dir)
start_time = time.time()
for root, dirs, files in os.walk(directory):
for each in files:
if each.split('.')[-1] == 'xml':
xml_path = os.path.join(root, each[:-4] + '.xml')
img_path = os.path.join(root[:-3], each[3:-4] + '.jpg')
                if not os.path.exists(img_path):
                    img_path = os.path.join(root, each[:-4] + '.png')
                test_png = cv2.imread(img_path)
                if test_png is None or not os.path.exists(xml_path):
continue
if output_dir is not None:
sub_path = root[len(directory)+1:]
sub_path = os.path.join(output_dir, sub_path)
if not os.path.exists(sub_path):
os.makedirs(sub_path)
save_xml_path = os.path.join(sub_path, each[3:-4] + '.xml')
else:
save_xml_path = xml_path
if process_convert(each, root, img_path, save_xml_path):
ano_path_list.append('{},{}\n'.format(img_path, save_xml_path))
img_path_list.append('{}\n'.format(img_path))
count += 1
print(count, img_path)
if count % 1000 == 0:
print(count, time.time() - start_time)
save_to_text(img_path_list, ano_path_list, count, split_flag, logs_dir)
print('all over:', count)
print('time:', time.time() - start_time)
def save_to_text(img_path_list, ano_path_list, count, split_flag, logs_dir):
if split_flag == 'yes':
train_num = int(count / 10. * 9.)
else:
train_num = count
print('train img count {0}'.format(train_num))
if not os.path.exists(logs_dir):
os.makedirs(logs_dir)
with codecs.open(
os.path.join(logs_dir, 'train_xml.txt'), 'w',
encoding='utf-8') as f_xml, codecs.open(
os.path.join(logs_dir, 'train.txt'), 'w',
encoding='utf-8') as f_txt:
f_xml.writelines(ano_path_list[:train_num])
f_txt.writelines(img_path_list[:train_num])
with codecs.open(
os.path.join(logs_dir, 'test_xml.txt'), 'w',
encoding='utf-8') as f_xml, codecs.open(
os.path.join(logs_dir, 'test.txt'), 'w',
encoding='utf-8') as f_txt:
f_xml.writelines(ano_path_list[train_num:])
f_txt.writelines(img_path_list[train_num:])
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(description='icdar15 generate xml tools for standard format')
parser.add_argument('--in_dir', '-i',
default='./datasets/ICDAR_15/textLocalization/train', type=str,
help='the absolute directory which contains the pic and xml')
parser.add_argument('--split_flag', '-s', default='no', type=str,
help='whether or not to split the datasets')
parser.add_argument('--save_logs', '-l', default='logs', type=str,
help='whether to save train_xml.txt')
parser.add_argument('--output_dir', '-o',
default=None, type=str,
help='where to save xmls')
args = parser.parse_args()
directory = args.in_dir
split_flag = args.split_flag
logs_dir = args.save_logs
output_dir = args.output_dir
get_all_img(directory, split_flag, logs_dir, output_dir)
|
/*
* Change heap to the stats shared memory segment
*/
void *
vlib_stats_push_heap (void *old)
{
stat_segment_main_t *sm = &stat_segment_main;
sm->last = old;
ASSERT (sm && sm->shared_header);
return clib_mem_set_heap (sm->heap);
} |
def put(self, asset_name):
try:
asset_details = remove_nulls(asset_details_parser.parse_args(strict=True))
except ValueError:
abort(400, message='asset_details must be a json object.')
asset = self._get_asset(asset_name)
try:
asset.update_details(asset_details)
except ValidationError as err:
abort(400, message='{}'.format(err))
return asset.asset_details, 201 |
// Code generated by golex. DO NOT EDIT.
package parser
import (
"fmt"
)
func (l *lexer) Lex(lval *yySymType) int {
const (
S_INIT = iota
S_COMMENTS
)
c := l.current
currentState := 0
if l.empty {
c, l.empty = l.getc(), false
}
yystate0:
l.buf.Reset()
switch yyt := currentState; yyt {
default:
panic(fmt.Errorf(`invalid start condition %d`, yyt))
case 0: // start condition: INITIAL
goto yystart1
case 1: // start condition: S_COMMENTS
goto yystart24
}
goto yystate0 // silence unused label error
goto yystate1 // silence unused label error
yystate1:
c = l.getc()
yystart1:
switch {
default:
goto yyabort
case c == '"':
goto yystate5
case c == '#':
goto yystate8
case c == ',':
goto yystate9
case c == '-':
goto yystate10
case c == '/':
goto yystate13
case c == ':' || c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z':
goto yystate17
case c == '<':
goto yystate18
case c == '=':
goto yystate20
case c == '\n' || c == '\r':
goto yystate4
case c == '\t' || c == ' ':
goto yystate3
case c == '\x00':
goto yystate2
case c == '`':
goto yystate21
case c == '|':
goto yystate23
}
yystate2:
c = l.getc()
goto yyrule14
yystate3:
c = l.getc()
goto yyrule5
yystate4:
c = l.getc()
goto yyrule13
yystate5:
c = l.getc()
switch {
default:
goto yyabort
case c == ':' || c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z':
goto yystate6
}
yystate6:
c = l.getc()
switch {
default:
goto yyabort
case c == '"':
goto yystate7
case c == '\t' || c == ' ' || c >= '0' && c <= ':' || c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z':
goto yystate6
}
yystate7:
c = l.getc()
goto yyrule16
yystate8:
c = l.getc()
goto yyrule12
yystate9:
c = l.getc()
goto yyrule10
yystate10:
c = l.getc()
switch {
default:
goto yyabort
case c == '-':
goto yystate11
case c == '>':
goto yystate12
}
yystate11:
c = l.getc()
goto yyrule9
yystate12:
c = l.getc()
goto yyrule7
yystate13:
c = l.getc()
switch {
default:
goto yyabort
case c == '*':
goto yystate14
case c == '/':
goto yystate15
}
yystate14:
c = l.getc()
goto yyrule2
yystate15:
c = l.getc()
switch {
default:
goto yyabort
case c == '\n':
goto yystate16
case c >= '\x01' && c <= '\t' || c == '\v' || c == '\f' || c >= '\x0e' && c <= 'ÿ':
goto yystate15
}
yystate16:
c = l.getc()
goto yyrule1
yystate17:
c = l.getc()
switch {
default:
goto yyrule15
case c >= '0' && c <= ':' || c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z':
goto yystate17
}
yystate18:
c = l.getc()
switch {
default:
goto yyabort
case c == '-':
goto yystate19
}
yystate19:
c = l.getc()
goto yyrule8
yystate20:
c = l.getc()
goto yyrule11
yystate21:
c = l.getc()
switch {
default:
goto yyabort
case c == '`':
goto yystate22
case c >= '\x01' && c <= '_' || c >= 'a' && c <= 'ÿ':
goto yystate21
}
yystate22:
c = l.getc()
goto yyrule17
yystate23:
c = l.getc()
switch {
default:
goto yyrule6
case c == '\t' || c == '\n' || c == ' ':
goto yystate23
}
goto yystate24 // silence unused label error
yystate24:
c = l.getc()
yystart24:
switch {
default:
goto yyabort
case c == '*':
goto yystate26
case c >= '\x01' && c <= ')' || c >= '+' && c <= 'ÿ':
goto yystate25
}
yystate25:
c = l.getc()
goto yyrule4
yystate26:
c = l.getc()
switch {
default:
goto yyrule4
case c == '/':
goto yystate27
}
yystate27:
c = l.getc()
goto yyrule3
yyrule1: // \/\/[^\r\n]*\n
{
/* single-line comments */
goto yystate0
}
yyrule2: // "/*"
{
currentState = S_COMMENTS
goto yystate0
}
yyrule3: // "*/"
{
currentState = S_INIT
goto yystate0
}
yyrule4: // .|\n
{
/* ignore chars within multi-line comments */
goto yystate0
}
yyrule5: // [\t ]
{
/* whitespace */
goto yystate0
}
yyrule6: // \|[\t \n]*
goto yystate0
yyrule7: // \->
{
lval.str = l.token()
return RE_OP
goto yystate0
}
yyrule8: // \<\-
{
lval.str = l.token()
return LE_OP
goto yystate0
}
yyrule9: // \-\-
{
lval.str = l.token()
return UE_OP
goto yystate0
}
yyrule10: // ,
{
lval.str = l.token()
return COMMA
goto yystate0
}
yyrule11: // =
{
lval.str = l.token()
return EQ
goto yystate0
}
yyrule12: // #
{
lval.str = l.token()
return HASH
goto yystate0
}
yyrule13: // [\n\r]
{
return NEWLINE
}
yyrule14: // \0
{
return EOF
}
yyrule15: // {L}({L}|{D})*
{
lval.str = l.token()
return STRING
goto yystate0
}
yyrule16: // \"{L}({L}|{D}|[\t ])*\"
{
lval.str = l.token()[1 : len(l.token())-1]
return QSTRING
goto yystate0
}
yyrule17: // `[^`]*`
{
lval.str = l.token()[1 : len(l.token())-1]
return BTICKSTR
goto yystate0
}
panic("unreachable")
goto yyabort // silence unused label error
yyabort: // no lexem recognized
l.empty = true
return int(c)
}
|
/** Field of an Entry class, with marshalling information */
static class EntryField {
/** Field for the field */
public final Field field;
/**
* True if instances of the field need to be converted
* to MarshalledWrapper. False if the type of the field
* is String, Integer, Boolean, Character, Long, Float,
* Double, Byte, or Short.
*/
public final boolean marshal;
/**
* Basic constructor.
*/
public EntryField(Field field) {
this.field = field;
Class c = field.getType();
marshal = !(c == String.class ||
c == Integer.class ||
c == Boolean.class ||
c == Character.class ||
c == Long.class ||
c == Float.class ||
c == Double.class ||
c == Byte.class ||
c == Short.class);
}
} |
package models;
import javax.persistence.Entity;
import javax.persistence.Id;
@Entity
public class Textile
{
@Id private int textileId;
private String textileName;
public int getTextileId()
{
return textileId;
}
public void setTextileId(int textileId)
{
this.textileId = textileId;
}
public String getTextileName()
{
return textileName;
}
public void setTextileName(String textileName)
{
this.textileName = textileName;
}
}
|
//
// PRPChangeNameTextField.cpp
// PrimaryParticle
//
// Created by stefan on 7/30/14.
//
//
#include "PRPChangeNameTextField.h"
#include "PRPRegexHelper.h"
#include "PRPScoreServer.h"
#include "PRPAppDelegate.h"
#include "PRPGameManager.h"
USING_NS_CC;
USING_NS_PRP;
static const int kUserNameMaxLength = 20;
ChangeNameTextField* ChangeNameTextField::create()
{
ChangeNameTextField* widget = new ChangeNameTextField();
if (widget && widget->init())
{
widget->autorelease();
return widget;
}
CC_SAFE_DELETE(widget);
return nullptr;
}
ChangeNameTextField* ChangeNameTextField::create(const std::string& placeholder,
const std::string& fontName,
const int fontSize,
const TextInputCompleteCallback& callback)
{
const ChangeNameTextField::TextInputErrorCallback errorCallb = [](const std::string errorString){
// auto appDel = APPDELEGATE();
// const Size visibleSize = appDel->getVisibleSize();
// const Point visibleOrigin = appDel->getVisibleOrigin();
// const Rect frame = Rect(visibleOrigin.x + visibleSize.width*0.05f,
// visibleOrigin.y + visibleSize.height*0.5f,
// visibleSize.width*0.9f,
// visibleSize.height*0.4f);
GAMEMANAGER()->displayMessage(errorString);
};
ChangeNameTextField* textfield = new ChangeNameTextField();
if (textfield && textfield->initWithCallbacks(callback, errorCallb))
{
textfield->setPlaceHolder(placeholder);
textfield->setFontName(fontName);
textfield->setFontSize(fontSize);
textfield->autorelease();
return textfield;
}
CC_SAFE_DELETE(textfield);
return nullptr;
}
void ChangeNameTextField::invokeOnTextInputCompleteCallback()
{
if (getDetachWithIME() == false) {
setDetachWithIME(true);
}
auto userName = UserDefault::getInstance()->getStringForKey(kUserDefaultUserNameKey);
auto name = this->getString();
/*
* Refuse upper or lower case first letter 's' in 'Scientist'
*/
if (match(name.c_str(), "^[sS]cientist")) {
_onTextInputErrorCallback("Name is already taken.");
}
else if (name.length() > kUserNameMaxLength) {
_onTextInputErrorCallback("Name is too long.");
}
else if (!stringCompare(name, userName)) {
_onTextInputCompleteCallback(name);
}
else {
// ignore it and do nothing if the names are equal
}
}
|
// projects-lab/akka-http/src/main/java/io/github/kavahub/learnjava/user/User.java
package io.github.kavahub.learnjava.user;
import lombok.Data;
/**
* 用户
*
* @author <NAME>
* @since 1.0.2
*/
@Data
public class User {
private final Long id;
private final String name;
public User(Long id, String name) {
this.id = id;
this.name = name;
}
}
|
import functools

from django.conf import settings

def turn_emails_off(view_func):
    EMAIL_BACKEND_DUMMY = 'django.core.mail.backends.dummy.EmailBackend'

    @functools.wraps(view_func)
    def decorated(request, *args, **kwargs):
        orig_email_backend = settings.EMAIL_BACKEND
        settings.EMAIL_BACKEND = EMAIL_BACKEND_DUMMY
        try:
            return view_func(request, *args, **kwargs)
        finally:
            # restore the original backend even if the view raises
            settings.EMAIL_BACKEND = orig_email_backend
    return decorated
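The decorator above temporarily swaps the email backend for the duration of one view call. A self-contained sketch of the same swap-and-restore pattern — the `Settings` stand-in and the view below are illustrative, not Django's real objects:

```python
import functools

class Settings:
    """Stand-in for django.conf.settings (illustrative only)."""
    EMAIL_BACKEND = 'smtp'

settings = Settings()

def turn_emails_off(view_func):
    @functools.wraps(view_func)
    def decorated(request, *args, **kwargs):
        orig = settings.EMAIL_BACKEND
        settings.EMAIL_BACKEND = 'dummy'
        try:
            return view_func(request, *args, **kwargs)
        finally:
            # restore even if the view raises
            settings.EMAIL_BACKEND = orig
    return decorated

@turn_emails_off
def view(request):
    # whatever the view does, emails sent here hit the dummy backend
    return settings.EMAIL_BACKEND

print(view(None))              # prints "dummy" — swapped while the view runs
print(settings.EMAIL_BACKEND)  # prints "smtp" — restored afterwards
```

In a real Django test suite, `django.test.override_settings(EMAIL_BACKEND=...)` achieves the same effect without hand-rolling the swap.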
package cn.jeeweb.modules.codegen.service;
import cn.jeeweb.core.common.service.ICommonService;
import cn.jeeweb.modules.codegen.entity.Column;
import java.util.List;
public interface IColumnService extends ICommonService<Column> {
List<Column> selectListByTableId(String tableId);
}
|
// src/al_codec/units/AlUVideos.cpp
/*
* Copyright (c) 2018-present, <EMAIL>.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
#include "AlUVideos.h"
#include "StringUtils.h"
#include "AsynVideoDecoder.h"
#include "AlSize.h"
#include "AlBuffer.h"
#define TAG "AlUVideos"
AlUVideos::AlUVideos(const std::string alias)
: Unit(alias) {
al_reg_msg(MSG_VIDEOS_TRACK_ADD, AlUVideos::_onAddTrack);
al_reg_msg(MSG_VIDEOS_TRACK_REMOVE, AlUVideos::_onRemoveTrack);
al_reg_msg(MSG_SEQUENCE_BEAT_VIDEO, AlUVideos::_onBeat);
al_reg_msg(MSG_VIDEOS_END, AlUVideos::_onEnd);
al_reg_msg(EVENT_LAYER_QUERY_ID_NOTIFY, AlUVideos::_onLayerDone);
}
AlUVideos::~AlUVideos() {
}
bool AlUVideos::onCreate(AlMessage *msg) {
return true;
}
bool AlUVideos::onDestroy(AlMessage *msg) {
for_each(map.begin(), map.end(),
[](std::map<AlID, std::unique_ptr<AbsVideoDecoder>>::reference &it) {
it.second->stop();
});
map.clear();
mLayerMap.clear();
mLastFrameMap.clear();
return true;
}
bool AlUVideos::_onAddTrack(AlMessage *msg) {
auto clip = std::static_pointer_cast<AlMediaClip>(msg->sp);
int64_t duration = 0;
int64_t frameDuration = 0;
_create(clip.get(), duration, frameDuration);
if (duration > 0) {
clip->setDuration(duration);
clip->setFrameDuration(frameDuration);
auto msg1 = AlMessage::obtain(MSG_SEQUENCE_TRACK_SET_DURATION);
msg1->sp = clip;
postMessage(msg1);
}
return true;
}
bool AlUVideos::_onRemoveTrack(AlMessage *msg) {
auto clips = std::static_pointer_cast<AlVector<std::shared_ptr<AlMediaClip>>>(msg->sp);
for (auto itr = clips->begin(); clips->end() != itr; ++itr) {
auto it = map.find((*itr)->id());
if (map.end() != it) {
map.erase(it);
}
}
return true;
}
bool AlUVideos::_onBeat(AlMessage *msg) {
mCurTimeInUS = msg->arg2;
std::vector<AlID> ignoreClips;
auto clips = std::static_pointer_cast<AlVector<std::shared_ptr<AlMediaClip>>>(msg->sp);
for (auto itr = clips->begin(); clips->end() != itr; ++itr) {
auto *clip = itr->get();
ignoreClips.emplace_back(clip->id());
auto decoder = _findDecoder(clip);
if (nullptr == decoder) {
continue;
}
auto seekRet = _correct(clip, decoder);
while (decoder) {
HwAbsMediaFrame *frame = nullptr;
HwResult ret = _grab(clip, decoder, &frame, mCurTimeInUS);
if (Hw::MEDIA_EOF == ret) {
AlLogI(TAG, "EOF");
break;
}
if (Hw::MEDIA_WAIT == ret) {
AlLogW(TAG, "Grab retry. cur(%lld)", mCurTimeInUS);
continue;
}
if (Hw::OK != ret) {
// AlLogW(TAG, "Grab failed.");
break;
}
if (nullptr == frame && Hw::MEDIA_EOF != ret) {
continue;
}
if (frame->isVideo()) {
int32_t layer = _findLayer(clip);
if (AlIdentityCreator::NONE_ID != layer) {
_updateLayer(clip, dynamic_cast<HwVideoFrame *>(frame));
}
_setCurTimestamp(clip, frame->getPts());
}
break;
}
if (Hw::SUCCESS == seekRet && mCurTimeMap.end() == mCurTimeMap.find(clip->id())) {
AlLogW(TAG, "Grab frame failed after seek.");
}
}
if (!mLayerMap.empty()) {
_clearLayers(ignoreClips);
postEvent(AlMessage::obtain(EVENT_COMMON_INVALIDATE, AlMessage::QUEUE_MODE_UNIQUE));
}
return true;
}
bool AlUVideos::_onEnd(AlMessage *msg) {
mCurTimeInUS = 0;
auto clips = std::static_pointer_cast<AlVector<std::shared_ptr<AlMediaClip>>>(msg->sp);
for (auto itr = clips->begin(); clips->end() != itr; ++itr) {
_seek(_findDecoder(itr->get()), 0);
}
return false;
}
bool AlUVideos::_onLayerDone(AlMessage *msg) {
mLayerMap[msg->action] = msg->arg1;
return true;
}
void AlUVideos::_create(AlMediaClip *clip, int64_t &duration, int64_t &frameDuration) {
if (nullptr == clip || AlIdentityCreator::NONE_ID == clip->id()) {
AlLogE(TAG, "failed. Invalid clip.");
return;
}
if (AlAbsInputDescriptor::kType::FILE != clip->getInputDescriptor()->type()) {
AlLogE(TAG, "failed. Not support input type.");
return;
}
std::string path = clip->getInputDescriptor()->path();
if (StringUtils::isEmpty(&path)) {
AlLogE(TAG, "failed. Invalid path(%s).", path.c_str());
return;
}
std::unique_ptr<AsynVideoDecoder> decoder = std::make_unique<AsynVideoDecoder>(true);
if (!decoder->prepare(path)) {
AlLogE(TAG, "failed. Decoder prepare failed.");
return;
}
_addLayer(clip, decoder->width(), decoder->height());
duration = decoder->getDuration();
auto frameSize = decoder->getSamplesPerBuffer();
frameDuration = 1e6 * frameSize / decoder->getSampleHz();
decoder->start();
auto timeInUS = std::min<int64_t>(mCurTimeInUS, duration);
timeInUS = std::max<int64_t>(0, timeInUS);
decoder->seek(timeInUS);
AlLogI(TAG, "%" PRId64 ", %d, %d, %d, %s",
decoder->getDuration(),
decoder->getChannels(),
decoder->getSampleHz(),
decoder->getSampleFormat(), path.c_str());
map.insert(make_pair(clip->id(), std::move(decoder)));
}
void AlUVideos::_seek(AbsVideoDecoder *decoder, int64_t timeInUS) {
if (decoder) {
AlLogI(TAG, "seek to %" PRId64, timeInUS);
decoder->seek(timeInUS, AbsDecoder::kSeekMode::EXACT);
decoder->start();
}
}
AbsVideoDecoder *AlUVideos::_findDecoder(AlMediaClip *clip) {
if (nullptr == clip) {
return nullptr;
}
auto itr = map.find(clip->id());
if (map.end() == itr) {
return nullptr;
}
return itr->second.get();
}
int32_t AlUVideos::_findLayer(AlMediaClip *clip) {
if (nullptr == clip) {
return AlIdentityCreator::NONE_ID;
}
auto itr = mLayerMap.find(clip->id());
if (mLayerMap.end() == itr) {
return AlIdentityCreator::NONE_ID;
}
return itr->second;
}
void AlUVideos::_addLayer(AlMediaClip *clip, int32_t width, int32_t height) {
auto *msg = AlMessage::obtain(MSG_LAYER_ADD_EMPTY);
msg->action = clip->id();
msg->obj = new AlSize(width, height);
postMessage(msg);
}
void AlUVideos::_updateLayer(AlMediaClip *clip, HwVideoFrame *frame) {
if (nullptr == clip || nullptr == frame) {
AlLogE(TAG, "failed.");
return;
}
auto itr = mLayerMap.find(clip->id());
if (mLayerMap.end() == itr) {
AlLogE(TAG, "failed.");
return;
}
auto *msg = AlMessage::obtain(MSG_LAYER_UPDATE_WITH_BUF);
if (HwFrameFormat::HW_IMAGE_RGBA == frame->getFormat()) {
msg->arg2 = static_cast<int64_t>(AlColorFormat::RGBA);
} else if (HwFrameFormat::HW_IMAGE_NV12 == frame->getFormat()) {
msg->arg2 = static_cast<int64_t>(AlColorFormat::NV12);
} else if (HwFrameFormat::HW_IMAGE_YV12 == frame->getFormat()) {
msg->arg2 = static_cast<int64_t>(AlColorFormat::YV12);
} else {
msg->arg2 = static_cast<int64_t>(AlColorFormat::NONE);
}
msg->arg1 = itr->second;
msg->obj = AlBuffer::wrap(frame->data(), frame->size());
msg->sp = std::make_shared<AlSize>(frame->getWidth(), frame->getHeight());
postMessage(msg);
}
HwResult AlUVideos::_grab(AlMediaClip *clip, AbsVideoDecoder *decoder,
HwAbsMediaFrame **frame, int64_t timeInUS) {
auto itr = mLastFrameMap.find(clip->id());
if (mLastFrameMap.end() != itr) {
if (timeInUS < clip->getSeqIn() + itr->second->getPts()) {
            /// Fixes overly long waits when seeking backwards quickly: a cached
            /// frame with a timestamp far ahead of the current time would
            /// otherwise stall playback.
// if (clip->getSeqIn() + itr->second->getPts() - timeInUS >= 1e6) {
// _setCurTimestamp(clip, itr->second->getPts());
// mLastFrameMap.erase(itr);
// }
// AlLogW(TAG, "Skip frame(%d), cur(%d), %s, %d",
// (int) itr->second->getPts(),
// (int) timeInUS,
// ((AsynVideoDecoder *) decoder)->dump().c_str(),
// mCurTimeMap.end() != mCurTimeMap.find(clip->id()));
return Hw::FAILED;
} else {
*frame = itr->second;
mLastFrameMap.erase(itr);
return Hw::OK;
}
}
HwResult ret = decoder->grab(frame);
while (nullptr != *frame) {
if ((*frame)->flags() & AlMediaDef::FLAG_EOF) {
_seek(decoder, 0);
AlLogI(TAG, "FLAG_EOF");
} else if ((*frame)->flags() & AlMediaDef::FLAG_SEEK_DONE) {
AlLogI(TAG, "FLAG_SEEK_DONE");
} else {
break;
}
ret = decoder->grab(frame);
}
if (Hw::OK != ret || nullptr == *frame) {
return ret;
}
if ((*frame)->isVideo()) {
if (timeInUS < clip->getSeqIn() + (*frame)->getPts()) {
mLastFrameMap.insert(std::make_pair(clip->id(), *frame));
return Hw::FAILED;
}
return Hw::OK;
}
return Hw::FAILED;
}
HwResult AlUVideos::_correct(AlMediaClip *clip, AbsVideoDecoder *decoder) {
int64_t curTime = _getCurTimestamp(clip);
if (curTime != INT64_MIN) {
curTime = curTime < decoder->getDuration() ? curTime : 0;
} else {
return Hw::FAILED;
}
int64_t delta = curTime + clip->getSeqIn() - mCurTimeInUS;
float scale = std::abs(delta) / 33333;
if (scale >= 3) {
auto cache = mLastFrameMap.find(clip->id());
if (mLastFrameMap.end() != cache) {
mLastFrameMap.erase(cache);
}
auto timeInUS = mCurTimeInUS - clip->getSeqIn();
AlLogD(TAG, "Seek clip(%d) scale(%f) from %" PRId64 "US to %" PRId64 "US",
clip->id(), scale, curTime, timeInUS);
_seek(decoder, timeInUS);
_setCurTimestamp(clip, timeInUS);
return Hw::OK;
}
return Hw::FAILED;
}
void AlUVideos::_setCurTimestamp(AlMediaClip *clip, int64_t timeInUS) {
auto itr = mCurTimeMap.find(clip->id());
if (mCurTimeMap.end() != itr) {
mCurTimeMap.erase(itr);
}
if (timeInUS != INT64_MIN) {
mCurTimeMap.insert(std::make_pair(clip->id(), timeInUS));
}
}
int64_t AlUVideos::_getCurTimestamp(AlMediaClip *clip) {
auto itr = mCurTimeMap.find(clip->id());
if (mCurTimeMap.end() != itr) {
return itr->second;
}
return INT64_MIN;
}
void AlUVideos::_clearLayers(std::vector<AlID> &ignoreClips) {
std::for_each(mLayerMap.begin(), mLayerMap.end(),
[this, ignoreClips](std::map<AlID, int32_t>::reference &it) {
if (ignoreClips.end() ==
std::find(ignoreClips.begin(), ignoreClips.end(), it.first)) {
this->_clearLayer(it.second);
}
});
}
void AlUVideos::_clearLayer(int32_t layerID) {
auto *msg = AlMessage::obtain(MSG_LAYER_UPDATE_CLEAR);
msg->arg1 = layerID;
postMessage(msg);
}
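The `_correct` method above implements a simple drift policy: the clip's local clock plus its sequence offset is compared with the master clock, the difference is expressed in units of one ~30 fps frame (33333 µs), and a seek is forced once the drift reaches three frames. A plain-Python sketch of that policy (the constant and threshold are taken from the code above; the helper name is illustrative):

```python
FRAME_US = 33333  # duration of one frame at ~30 fps, in microseconds

def correction_seek_target(clip_time_us, seq_in_us, master_time_us,
                           threshold_frames=3):
    """Return the clip-local seek target if drift >= threshold, else None.

    Mirrors AlUVideos::_correct: delta = curTime + seqIn - mCurTimeInUS,
    scale = |delta| / 33333, seek when scale >= 3.
    """
    delta = clip_time_us + seq_in_us - master_time_us
    if abs(delta) / FRAME_US >= threshold_frames:
        # Seek the decoder to the master time expressed in clip-local time.
        return master_time_us - seq_in_us
    return None
```

A clip roughly six frames behind the master clock is reseeked, while a drift under two frames is tolerated.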
|
import java.util.Collections;
import java.util.Set;

/**
* This is the base class for a Validator that supports
* the Suppressible interface.
*
* @author Keith W. Boone
*
*/
public class SuppressibleValidator implements Suppressible {
/** The initial set of validation errors to be suppressed.
* Initialized to none.
*/
private Set<String> suppressed = Collections.emptySet();
/**
* The version of the CVRS Format to validate against.
*/
private String version = Validator.DEFAULT_VERSION;
@Override
public void setSuppressed(Set<String> suppressed) {
this.suppressed = suppressed;
}
@Override
public Set<String> getSuppressed() {
return suppressed;
}
/**
* Get the version of CVRS to validate against.
* @return the version of CVRS to validate against.
*/
public String getVersion() {
return version;
}
/**
* Set the version of CVRS to validate against.
* @param version the version to set.
*/
public void setVersion(String version) {
this.version = version;
}
} |
#!/usr/bin/env python3
import numpy as np
import pandas as pd
from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
def vectorize(train, test):
v = DictVectorizer(sparse=False)
    d = pd.concat([train, test]).to_dict('records')
x = v.fit_transform(d)
return x[:len(train)], x[len(train):]
def evaluate(models, train, test):
x_train, y_train, x_test, y_test = train[:, :-1], train[:, -1], test[:, :-1], test[:, -1]
for model, name in models:
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
mse = mean_squared_error(y_test, y_pred)
print(name)
print('MSE: {:.3f}'.format(mse))
print()
np.random.seed(100)
models = [
[LinearRegression(), 'linear regressor'],
[KNeighborsRegressor(), 'k-nearest neighbors regressor'],
[SVR(), 'support vector regressor'],
[DecisionTreeRegressor(), 'decision tree regressor']
]
train = np.loadtxt('artificial_2x_train.tsv', delimiter='\t')
test = np.loadtxt('artificial_2x_test.tsv', delimiter='\t')
print('artificial_2x')
evaluate(models, train, test)
cols = [i for i in range(9)]
train = pd.read_csv('pragueestateprices_train.tsv', delimiter='\t', usecols=cols, header=None)
test = pd.read_csv('pragueestateprices_test.tsv', delimiter='\t', usecols=cols, header=None)
train.columns = test.columns = [str(i) for i in range(9)]
train, test = vectorize(train, test)
print('pragueestateprices')
evaluate(models, train, test)
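The `vectorize` helper above fits one `DictVectorizer` on train and test together, so both splits end up with the same column layout even when a categorical value appears in only one split. A hand-rolled sketch of the same idea, with no sklearn dependency (the helper names and toy records are illustrative):

```python
def fit_vocabulary(dicts):
    """Union of feature names over all records; categoricals become key=value."""
    names = sorted({f"{k}={v}" if isinstance(v, str) else k
                    for d in dicts for k, v in d.items()})
    return {name: i for i, name in enumerate(names)}

def transform(dicts, vocab):
    """One row per record: 1.0 for a present categorical, raw value otherwise."""
    rows = []
    for d in dicts:
        row = [0.0] * len(vocab)
        for k, v in d.items():
            name = f"{k}={v}" if isinstance(v, str) else k
            row[vocab[name]] = 1.0 if isinstance(v, str) else float(v)
        rows.append(row)
    return rows

train = [{"size": 50.0, "district": "A"}, {"size": 70.0, "district": "B"}]
test = [{"size": 60.0, "district": "C"}]
vocab = fit_vocabulary(train + test)  # fit on both splits, as vectorize() does
x_train, x_test = transform(train, vocab), transform(test, vocab)
```

Fitting on `train + test` is what guarantees `x_train` and `x_test` share a column for `district=C` even though that value never occurs in the training split.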
|
/**
* @param sourcesMappedOnNull
* true|false to indicate whether the source properties of this
* class map's fields should be set to null (when mapping in the
* reverse direction) if the destination property's value is null
*
 * @return this ClassMapBuilder
*/
public ClassMapBuilder<A, B> mapNullsInReverse(boolean sourcesMappedOnNull) {
this.sourcesMappedOnNull = sourcesMappedOnNull;
return this;
} |
# -*- coding: utf-8 -*-
"""
Created on Thu Sep 9 01:46:42 2021
@author: Mahfuz_Shazol
"""
import torch as th
x_th=th.tensor([ [1,0],
[5,4],
[8,9],
[1,2],
                 ])
print(x_th.shape)
print('size',x_th.size())
# get the first column (one element per row), not the first row
print(x_th[:,0])
#get 3rd row items
print(x_th[2,:])
# slicing rows by index
print(x_th[1:3]) |
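Note that `x_th[:, 0]` selects the first column (one element per row), not the first row. A plain-Python analogue of the tensor above makes the row/column distinction explicit without needing PyTorch installed:

```python
# List-of-lists stand-in for the 4x2 tensor x_th above
x = [[1, 0], [5, 4], [8, 9], [1, 2]]

first_column = [row[0] for row in x]  # analogue of x_th[:, 0]
third_row = x[2]                      # analogue of x_th[2, :]
row_slice = x[1:3]                    # analogue of x_th[1:3]
```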
A semi-classical trace formula at a non-degenerate critical level
We study the semi-classical trace formula at a critical energy level for an $h$-pseudo-differential operator whose principal symbol has a unique non-degenerate critical point for that energy. This leads to the study of Hamiltonian systems near equilibrium and near the non-zero periods of the linearized flow. The contributions of these periods to the trace formula are expressed in terms of degenerate oscillatory integrals. The new results obtained are formulated in terms of the geometry of the energy surface and the classical dynamics on this surface.
Introduction
Let P_h be an h-pseudodifferential, or more generally h-admissible (see ), self-adjoint operator on R^n. The semi-classical trace formula studies the asymptotic behavior, as h tends to 0, of the sums (1), where the λ_j(h) are the eigenvalues of P_h. Here we suppose that the spectrum is discrete in the relevant interval; some sufficient conditions for this will be given below.
Let p be the principal symbol of P_h and Φ_t the Hamilton flow of p. The semi-classical trace formula establishes a link between the asymptotic behavior of (1), as h → 0, and the closed trajectories of Φ_t of energy E. An energy E is said to be regular when ∇p(x, ξ) ≠ 0 on Σ_E, where Σ_E = {(x, ξ) : p(x, ξ) = E} is the surface of energy level E, and critical otherwise. The case of a regular energy has been intensively studied, and explicit expressions in terms of Φ_t are known for the leading term of (1), under suitable conditions on the flow and when the Fourier transform φ̂ of φ is supported near a period; see, e.g., Gutzwiller , Balian and Bloch for the physical literature, and, from a mathematical point of view, Brummelhuis and Uribe , Petkov and Popov , Charbonnel and Popov , Paul and Uribe .
Here we are interested in the case of a critical energy E_c of p. Brummelhuis, Paul and Uribe in have studied the semi-classical trace formula at a critical energy for quite general operators, but limited to "small times", that is, for supp(φ̂) contained in such a small neighborhood of the origin that the only period of the linearized flow in supp(φ̂) is 0. Khuat-Duy, in , , has obtained the contributions of the non-zero periods of the linearized flow for arbitrary φ with compactly supported φ̂, for Schrödinger operators −∆ + V(x) with V(x) a non-degenerate potential. In this case the main contribution of such a period was obtained as a regularization of the Duistermaat-Guillemin density ρ_t(x, ξ) = |det(dΦ_t(x, ξ) − Id)|^{−1/2}. Generalizing Khuat-Duy's result to more general operators was an open problem and is the purpose of this article. For a critical energy level E_c of an arbitrary h-admissible operator, and (x_0, ξ_0) a critical point of p, the closed trajectories of the linearized flow, φ_t(u) = d_{x,ξ}Φ_t(x_0, ξ_0)u = u, u ∈ T_{(x_0,ξ_0)}(T*R^n), t ≠ 0, (2) are not necessarily generated by a positive quadratic form, contrary to the case of a Schrödinger operator, and will give rise to new contributions to the semi-classical trace formula, other than those obtained in , and . More precisely, viewing d_{x,ξ}Φ_t(x_0, ξ_0) as the Hamiltonian flow of the Hessian d²p(x_0, ξ_0), here interpreted as an intrinsic quadratic form, these new contributions to the trace formula arise from the non-trivial closed trajectories of d_{x,ξ}Φ_t(x_0, ξ_0) of zero energy (3). The reader can easily verify that the set of such u is empty in the case of a Schrödinger operator: this explains why the new contributions obtained here do not appear for these operators.
We will show that the new contributions are supported in (E_c, x_0, ξ_0, T, u), with (T, u) satisfying (3), and can be expressed in terms of d²p(x_0, ξ_0) and higher order derivatives of the flow at (x_0, ξ_0).
General hypotheses and main results
Let P_h be in the class of h-admissible operators on R^n with a real symbol. We refer to for the principal notions of semi-classical analysis which we will use. We write p for the principal symbol of P_h, and p_1 for the sub-principal symbol. For E_c a critical energy of p we will study the asymptotic behavior of the spectral function γ(E_c, h), defined below, under the following hypotheses, which are classical in this context: p(x, ξ) = E_c + Σ_j w_j((x_j − x_{0,j})² + σ_j(ξ_j − ξ_{0,j})²) + O(||(x − x_0, ξ − ξ_0)||³), (5) with σ_j = ±1 and w_j ∈ R\{0}.
By a classical result, see e.g. , the hypothesis (H_1) ensures that the spectrum of P_h is discrete in I_ε for ε < ε_0 and h small enough; this will be assumed in the following. We denote by Exp(tH_f), with H_f = ∂_ξf·∂_x − ∂_xf·∂_ξ, the Hamilton flow of a function f ∈ C^∞(T*R^n). For Φ_t = Exp(tH_p), taking the derivative with respect to the initial conditions gives a symplectomorphism d_{x,ξ}Φ_t(x, ξ) : T_{(x,ξ)}(T*R^n) → T_{Φ_t(x,ξ)}(T*R^n), and for z_0 = (x_0, ξ_0) a critical point of p we have the fundamental automorphism dΦ_t(z_0). Near the critical point of p we can write p as a sum of homogeneous terms, where the functions p_j are homogeneous of degree j in (x − x_0, ξ − ξ_0). In particular, p_2 is the Hessian at z_0 and can be interpreted as an invariantly defined quadratic form on T_{z_0}(T*R^n).
To this linear subspace we associate its dimension l_T = 2d_T = dim(F_T), and also the following three objects. The next condition is inspired by Proposition 2.1 of D. Khuat-Duy. Proposition 7. Let Λ be the Lagrangian manifold associated to the flow of p. We recall the compactly supported cut-off function ψ = ψ(x, ξ), on whose support (H_5) holds. If ψ_1 is such that ψ_1ψ = ψ, with supp(ψ_1) small enough, then by cyclicity of the trace, after perhaps a local change of variable in y, the operator ψ_h^w Θ(P_h) Exp(itP_h/h) ψ_{1,h}^w can be approximated, modulo an error O(h^N), by an h-FIO with kernel (17). For a detailed construction we refer to or .
Remark 8. Because of the presence of Θ(P_h) and φ̂, the amplitudes a_N are of compact support. Since we are interested in the main contribution to the trace formula, we write a for the amplitude of (17), i.e. a depends on (h, N).
Study of the phase function and of the classical dynamics
First, we study the nature of the critical points of (17). Afterwards we establish some results on the classical dynamics related to p(x, ξ) and we compute the Taylor expansion of the phase function. Resonance-type conditions will naturally occur in the study of this question.
Singularity of the phase
By Theorem 5.3 of , we can, after perhaps a local change of variables in y, suppose that the flow Φ_t, near (x_0, ξ_0) and for t ∈ supp(φ̂) sufficiently small, has a generating function S(t, x, η), which therefore, by a classical result, satisfies the Hamilton-Jacobi equation ∂_tS(t, x, η) + p(x, ∂_xS(t, x, η)) = 0. Hence, near (T, x_0, ξ_0), the Lagrangian manifold Λ of the flow is parameterized by the phase function S(t, x, η) − ⟨y, η⟩. This choice for the phase is only valid near (x_0, ξ_0) when ξ_0 ≠ 0, but if ξ_0 = 0 we can replace the operator P_h by e^{i⟨x,ξ_1⟩/h} P_h e^{−i⟨x,ξ_1⟩/h} with ξ_1 ≠ 0. This does not affect the spectrum, and the new operator obtained has symbol p(x, ξ − ξ_1) and critical point (x_0, ξ_1). With (19), a critical point of (17) satisfies the stationarity equations. The following lemma is classical and can, for example, be found in . Recall that we will often denote points (x, ξ) of phase space by a single letter z.
If we use Lemma 9, we obtain the critical equations for our phase function. The next result is also well known from classical mechanics. Let T ≠ 0 be a period of dΦ_t(z_0). Corollary 10 shows that we must introduce a function g; this function g is C^∞. To simplify the notations we write g(t, z) = g(t, x, ξ). Lemma 12. In a neighborhood of z_0, and near T, the only critical point, on the energy surface Σ_{E_c}, of the functions S(T, x, ξ) − ⟨x, ξ⟩ and g(t, x, ξ) is z_0.
Here I_n is the identity matrix of order n. But Φ_t(∂_ξS(t, x, ξ), ξ) = (x, ∂_xS(t, x, ξ)), and this leads to the stated identity. Since dΦ_t is an isomorphism, Eq. (22) imposes a constraint which, in a suitable neighborhood of z_0, pins down the critical point. We then have, by the group law, that for t ≠ T and |t − T| small the point (∂_ξS(t, x, ξ), ξ) would be periodic, with period (t − T). For (x, ξ) ∈ Σ_{E_c}, and near z_0, (H_5) implies that (x, ξ) = z_0.
The Hessian matrix of g with respect to z = (x, ξ) at (T, z_0) satisfies the relation above, with B non-singular, as seen before. In the case of a total period T of dΦ_t(z_0) we obtain: Proposition 13. If dΦ_t(z_0) is totally periodic, with period T, then the function g(t, x, ξ) satisfies Hess_z(g)(T, z_0) = Hess(p)(z_0).
The linearized flow in z 0
Up to a permutation of coordinates we can assume that (25) holds. The flow of p_2, viewed as an element of End(T_{z_0}(T*R^n)), is block diagonal, where "diag" means diagonal matrix. In the following we work on the subspace {x'' = ξ'' = 0} obtained by projection on the periodic variables. Let I be a subset of {1, ..., n} with l elements, l > 1. The existence of a non-trivial closed trajectory of dimension l imposes that there exists c ∈ R* such that (26) holds. Remark 14. Let M(w) = {k ∈ Z^n : ⟨k, w⟩ = 0} be the Z-module of resonances of the vector w = (w_1, ..., w_n). The relations (26), for l > 1, lead to resonances, since n_iw_i − n_jw_j = 0, but a resonant system can have no periodic trajectories of dimension greater than 1. Since we are interested in the periods of dΦ_t(z_0) we define, assuming (25), the quadratic forms Q_+ and Q_−. Let P_1 and P_2 be the linear subspaces obtained by projecting orthogonally on the effective variables of Q_+, Q_−.
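The block structure of the linearized flow can be written out explicitly. For a single block, normalize p_2 so that the elliptic frequencies are exactly w_j, i.e. take p_2^{(j)} = (w_j/2)(x_j² + σ_jξ_j²) (with the normalization of (5), without the factor 1/2, the frequencies would be 2w_j); Hamilton's equations then give:

```latex
% Elliptic block (\sigma_j = 1): p_2^{(j)} = \tfrac{w_j}{2}(x_j^2 + \xi_j^2)
\dot x_j = w_j \xi_j,\quad \dot \xi_j = -w_j x_j
\;\Longrightarrow\;
\begin{pmatrix} x_j(t)\\ \xi_j(t)\end{pmatrix}
= \begin{pmatrix} \cos w_j t & \sin w_j t\\ -\sin w_j t & \cos w_j t\end{pmatrix}
  \begin{pmatrix} x_j(0)\\ \xi_j(0)\end{pmatrix},
\qquad \text{period } \tfrac{2\pi}{w_j}.

% Hyperbolic block (\sigma_j = -1): p_2^{(j)} = \tfrac{w_j}{2}(x_j^2 - \xi_j^2)
\dot x_j = -w_j \xi_j,\quad \dot \xi_j = -w_j x_j
\;\Longrightarrow\;
\begin{pmatrix} x_j(t)\\ \xi_j(t)\end{pmatrix}
= \begin{pmatrix} \cosh w_j t & -\sinh w_j t\\ -\sinh w_j t & \cosh w_j t\end{pmatrix}
  \begin{pmatrix} x_j(0)\\ \xi_j(0)\end{pmatrix}.
```

Only the elliptic blocks are periodic, which is why the analysis restricts to the subspace {x'' = ξ'' = 0} of periodic variables.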
Proof. Choosing coordinates as in (25), the assertion follows. With dim(F_T) = 2d_T we can choose our coordinates adapted to F_T. Elementary considerations show the block structure of the Hessian, where * denotes matrix blocks which are irrelevant for the present discussion. Hence, by restriction to F_T: (Hess(g)(T, z_0))|_{F_T} = (Hess(p)(z_0))|_{F_T}.
Taylor series of the flow near z 0
We start with the general case of an autonomous system near an equilibrium. Let Φ_t be the flow of a C^∞ vector field X on R^n with coordinates z = (z_1, ..., z_n), and let z_0 be a fixed point of Φ_t. We denote by A(z_0) the matrix of the linearization of X at z_0, and we recall that dΦ_t(z_0) = Exp(tA(z_0)). Here, and in the following, the derivatives d are taken with respect to z; we denote by d^kf the k-th derivative of f, regarded as a multi-linear form on the k-fold product R^n × ... × R^n, and by d^kf(z_0) or d^k_{z_0}f this derivative evaluated at z_0. As an example, we compute the second derivative of the flow at z_0, obtaining (29). Let us write Hess(X)(z_0) for the vector-valued Hessian of X evaluated at z_0. Interpreting (29) as an inhomogeneous system of equations leads to (30). Now, assuming that z_0 is the origin, we generalize as follows. Proof. The first result is trivial. At order k we have, for |α| = k, an expression which simplifies for z = 0; by integration, with the initial condition d^kΦ_0(0) = 0, we obtain the stated integral formula, and the result holds by linearity.
A more general result is
Theorem 17 Let be z 0 an equilibrium point of X and Φ t the flow of X. For all m ∈ N * , there exists a polynomial map P m , of degree at most m, such that (32) In addition P m is uniquely determined by the m-jet of X in z 0 .
Proof. For m = 1, dΦ_t(z_0) is determined by the operator A(z_0), i.e. by the 1-jet of X. We write x^l ∈ (R^n)^l for the image of x under the diagonal mapping, with the same convention for any vector. If f and g are smooth, we obtain the chain-rule identity; this leads to the operator-valued differential equation (33). With the initial condition d^mΦ_0(z_0) = 0, we obtain that the solution is given by (32). Moreover, Eq. (33) shows that P_m is completely determined by the derivatives of order less than or equal to m of X.
Application to Hamiltonian systems
Proposition 16 applied to the flow of H p shows that We now consider more closely d 2 Φ t (z 0 ) for t = T , a period of dΦ t (z 0 ). We introduce the following terminology Remark 19 This notion is weaker than the usual resonance condition since for l even there always exists a pseudo-resonance. For example, for l = 4 we have (w i − w i ) ± (w j − w j ) = 0, although w can be non-resonant at the order 4. We also observe that all resonances of order 3 are pseudo-resonances.
In term of the frequencies w i , we then have Theorem 20 If the frequencies w satisfy no pseudo-resonance relation of order 3 and if T is a total period of dΦ t (z 0 ), we have d 2 Φ T (z 0 ) = 0.
Proof. Under the condition (H_3), d²Φ_t(z_0) can be expressed as a linear combination of integrals of the elementary functions s → exp(±is(w_i + ε_1w_j + ε_2w_k)) with ε_j = ±1. Hence, to determine d²Φ_T(z_0) we must compute ∫_0^T exp(±is(w_i + ε_1w_j + ε_2w_k)) ds, but under the assumptions of the theorem all these integrals are 0.
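The vanishing asserted in this proof can be made explicit. Since T is a total period of dΦ_t(z_0), Tw_m ∈ 2πZ for every frequency w_m (frequencies normalized as in the proof above); hence for any combination ω = w_i + ε_1w_j + ε_2w_k one also has Tω ∈ 2πZ, and:

```latex
\int_0^T e^{\pm i s \omega}\,ds \;=\;
\begin{cases}
T, & \omega = 0 \quad (\text{a pseudo-resonance of order } 3),\\[4pt]
\dfrac{e^{\pm i T\omega} - 1}{\pm i\,\omega} = 0, & \omega \neq 0,
\quad \text{since } T\omega \in 2\pi\mathbb{Z}.
\end{cases}
```

In the absence of pseudo-resonances of order 3, every such ω is non-zero, so all the integrals vanish and d²Φ_T(z_0) = 0.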
For all l ∈ N*, let M_l(W) = {k ∈ Z^{2n} : ⟨k, (w, w)⟩ = 0, |k| = l} be the Z-module of resonances of order l. In the presence of resonances we can say the following: The proof is trivial when going back to the proof of Theorem 20.
Let f * be the pullback by a map f . Then by proposition 16 we have : Corollary 22 For a Hamiltonian system with the equilibrium point z 0 and such that d j z 0 p = 0, ∀j ∈ {3, ..., k − 1}, we obtain And Theorem 20 generalizes trivially to the order k under the conditions of Corollary 22. More precisely, if k is odd and if there is no pseudo-resonance of order k then, under the assumptions of Corollary 22, we have d k−1 z 0 Φ t = 0.
Relation between the phase and the flow
Like in the preceding section we consider a Hamiltonian function p with total period T for the linearized flow, satisfying, near 0 We recall that z = (x, ξ) and z k = (z, ..., z) ∈ R 2nk . By Taylor, we have Under these conditions we can write the generating function at time T as where R k is homogeneous of degree k and R k+1 is the remainder of the Taylor expansion. Let J be the matrix of the standard symplectic form on T * R n . The relation between the phase function and the flow of p is given by In addition we have R j (x, ξ) = 0, 3 ≤ j < k, and Proof. With equation (19) we obtain by identification of homogeneous terms we have successively where the last result holds by homogeneity. Corollary 22 then implies (38).
Normal forms of the phase function
In this section we derive suitable normal forms for Ψ(t, x, ξ) = S(t, x, ξ) − ⟨x, ξ⟩ in our oscillatory integral representation of γ_2(E_c, h). We recall the decomposition, cf. formulas (20) and (21). In the micro-local neighborhood of z_0 = (x_0, ξ_0) we are interested in, the only critical point of R and of g(t, ·) is z_0 for t close to T and, moreover, z_0 is non-degenerate for the latter.
A further very important simplifying assumption we will make for the moment is that, until further notice, T is a total period of dΦ_t(z_0). We will show in section 6.3 below how to relax this assumption. If T is such a total period, then clearly R(z) = O(||z||³). This can be made more precise. Lemma 24. If near z_0 the function p satisfies (H_3) and condition (35), then for t near T there exists a non-degenerate quadratic form Q_t(x, ξ) such that Q_T(x, ξ) = p_2(x, ξ). Proof. On replacing t − T by t we can write Ψ = R(z) + tG(t, z) with G(t, z) = g(t + T, z). By a second order Taylor expansion around z_0 and Proposition 13 we have that G(t, z) = Q_t(z) + h(t, z), with Q_0(z) = p_2(z) and h(t, z) = O(||z||³). Now by Proposition 23, R(z) = O(||z||^k), given that p satisfies (35), and since Φ_T(z) = z + O(||z||^{k−1}), we have that S(T, x, ξ) = ⟨x, ξ⟩ + O(||(x, ξ)||^k). Therefore h(t, z) = O(||z||^k) uniformly in t and the lemma follows.
Reduction of the phase with respect to C Q T .
Let us suppose S(T, x, ξ) contains effectively some terms of order k. We write, as before, R(z) = R k (z) + R k+1 (z), where R k is the homogeneous component of degree k of R and R k+1 is the remainder of the Taylor series. The following lemma is useful for any perturbation by a function of odd degree.
Lemma 25. Let Q be a non-degenerate quadratic form on R^n, n > 3, with inertia indices greater than 2. For every odd continuous function R, Q and R have a common zero on S^{n−1}.
Proof. Up to a linear change of coordinates we can assume that Q(x) = ||x 1 || 2 − ||x 2 || 2 , with x 1 ∈ R p , x 2 ∈ R q , p, q ≥ 2, p + q = n. Now the cone of the zeros of Q is invariant under isometries of the subspaces (x 1 , 0) and (0, x 2 ). By rotating around the origins there exist a continuous curve γ 1 inside the cone of Q mapping (x 1 , x 2 ) to (−x 1 , x 2 ) and a curve γ 2 mapping (−x 1 , x 2 ) to (−x 1 , −x 2 ), inside the cone. If γ = γ 1 .γ 2 is the union of the two previous curves, the function R(γ) gives the result by continuity since R is odd.
Remark 26 A consequence of Lemma 25 is that the set C Q T ∩ C R k ∩ S n−1 is not empty when the function R k is non-zero and odd and Q T is non-definite.
We choose polar coordinates z = (x, ξ) = rθ, θ ∈ S^{2n−1}(R). These coordinates will perform a "blow-up" of R × (T*R^n \ {0}). In general one uses the projective space P^{2n−1}(R), but here, since the singularities are carried by the conic set of the zeros of Q_T, it is convenient to use the sphere S^{2n−1}. For any function f, positively homogeneous on R^n, we write C_f = {x ∈ R^n : f(x) = 0} for the conic set of the zeros of f. Finally, g ≃ h means that the maps g and h are conjugated by a local diffeomorphism.
Remark 30
Coordinates χ form a system of admissible charts near (T, z_0), and these are singular at z = z_0 as coordinates on T*R^n. In the three systems of coordinates the measures are r^{2n−1} |Dχ/D(t, r, θ)| dt dr dθ; this factor r^{2n−1} plays a major role since the critical sets of our normal forms are {r = 0}.
Combining Lemmas 27, 28 and 29 gives
Theorem 31. If (H_3) and the conditions of Lemma 29 are satisfied, and if T is a total period of dΦ_t(z_0), the phase function S(t, x, ξ) − ⟨x, ξ⟩ + tE_c has one of the following normal forms on the blow-up of (T, x_0, ξ_0): second normal forms : third normal forms : We end this section with two lemmas on asymptotics of oscillatory integrals.
Lemma 32. There is a sequence such that: Proof. We define ĝ(τ, r) = F_t(a(t, r))(τ), where F_t is the partial Fourier transform with respect to t. Then we obtain the expansion. Taking the Taylor series in r of ĝ(τ, r) at the origin gives the coefficients.
Straightforward computations show that
with: Proof. We use the Bernstein-Sato polynomial, see e.g. . We write ∫_0^∞ e^{iλr^k} a(r) dr = (1/(2iπ)) times a contour integral. Since for all positive r we have the required bound, we can compute the asymptotic by the residue method. All poles are simple and, by pushing the complex path of integration to the right, we obtain the expansion. Straightforward computations then show that µ_l = (−1)^k Γ(l/k) exp(iπ l/(2k)).
Proofs of the main theorems
We start with the simpler case of T being a total period of dΦ_t(z_0). Afterwards we study the contributions of non-total periods T, distinguishing the particular case of a function p_2 whose restriction to the linear subspace F_T has constant sign. In the following we suppose, without loss of generality, that the support of the amplitude contains only one non-zero period of the linearized flow.
Blow-up and partition of the sphere
We will apply the results of sections 4 and 5, and we recall that By a time translation and with the polar coordinates (x, ξ) = rθ, θ ∈ S 2n−1 (R), we obtain for the top order part of (2πh) n γ 2 (E c , h) : where dθ is the standard surface measure on the sphere.
Partition of unity on the sphere.
Since C_{Q_T} ∩ S^{2n−1}(R) is compact, we can introduce a finite partition of unity, indexed by i ∈ I, with the property that Q_T(θ) ≠ 0 on supp(Ω¹_i), R_k(θ) ≠ 0 on supp(Ω²_j) and C_{Q_T} ∩ C_{R_k} ⊂ supp(Ω³_l). We split up the integral I(T, h) according to this partition of unity and use the normal forms of Theorem 31. On any chart let Jχ be the Jacobian of the relevant diffeomorphism of blow-up from Theorem 31. For j in I, J and L respectively, we define the localized integrals. Then, from Theorem 31, we obtain the local contributions for the first normal forms, likewise for the second normal forms and for the third normal forms, where the amplitudes are respectively given below. It is convenient, for the calculations below, to introduce A_{j,i} = χ₁^{2n−1} Ã_{j,i}, for j ∈ {1, 2, 3}, cf. the remark above. By construction the functions A_{j,i} are of compact support in their system of coordinates.
Remark 34
We obtain for each of the new phases the following critical sets, where C(f) denotes the critical set of a function f.
Analysis in the case of a total period
First normal forms. We denote by F and F_t the total and the partial Fourier transform with respect to t.
For an amplitude of the form a(r, t) = O(r 2n−1 ) and k = 2, where r = χ 1 , Lemma 32 shows that the first non-zero coefficient is obtained for l 0 = 2n − 1.
Observe that the oscillatory integrals J_j(λ) can be treated individually using Lemma 33, but we first analyze the remainder term r_{n−2}(λ), which equals (55). We split the integral with respect to dr, where A = A(λ) will be chosen below. We accordingly split r_{n−2}(λ) as r^{1,A}_{n−2}(λ) + r^{2,A}_{n−2}(λ), where r^{1,A}_{n−2}(λ) is given by Eq. (55) with the integral restricted accordingly. Easy estimates then show that r^{1,A}_{n−2}(λ) ≤ C A² λ^{−(n−1)} with C independent of λ.
Next, for r^{2,A}_{n−2}(λ) we do an integration by parts with respect to t; this leads to (56). We then observe that the first integral in Eq. (56) is equal to a term controlled by similar estimates as for r^{1,A}_{n−2}(λ). Finally, the last integral in Eq. (56) can be estimated by remembering that â has compact support in v. In conclusion, we find the stated bound, where we have chosen A = λ^{−1/2}.
The K j (λ) can be treated individually via Lemma 32. For j = 0, we obtain The leading term, obtained for l = 2n − 3, is a 0 (0, s) .
Hence for our amplitude we have the main contribution Like for the second normal forms, the other terms K j (λ), with j > 0, and the remainder r n−2 (λ) give contributions of strictly lower orders.
Finally, on each local chart the main contributions are given by the preceding formulas. Remark 35. The proofs above show in fact much more than just an asymptotic equivalent for I(T, h), and therefore for γ_2(E_c, h). They show the existence of a limited asymptotic expansion in the case of indefinite Q_T, where the sum is over all ν such that (2n + k − 2 + ν)/k < n, i.e. ν < (k − 2)(n − 1), and of a complete asymptotic expansion if Q_T is definite. A similar remark applies to the case of a non-total period, which we examine in the next section.
We now compute the leading term of the expansion in the case of a non-definite Q_T and for T a total period of the linearized flow.
Since the charts associated to second normal forms cover the trace of the cone, by summation over the partition of unity we obtain the formula in which θ are now local coordinates on the surface C_{p_2} ∩ S^{2n−1}. The top-order contribution to the trace formula follows from γ_2(E_c, T, h) = (2π)^{−n−1} h^{−n} I(T, h).
Case of a non-empty intersection of cones.
Here the main contribution is given by the normal forms 2 and 3. The local contribution of any chart associated to second normal forms can be computed as in the previous section. The contribution of the third normal forms is given by equation (59) with the amplitude Ã_{3,l}(0, 0, 0, χ_3). Let z = (z_1, z_2, z_3); we use again an oscillatory representation of the Dirac delta distribution. Since (χ_0, χ_1) = (t, r), integration w.r.t. (z_1, z_3, χ_0, χ_1) gives the integral of Ω³_l(θ) a(T, 0) dθ dz_2. A classical result, see volume 1 page 167, applies. By construction (χ_2, χ_3)(0, 0, θ) = (p_2(θ), R_k(θ)), and with the notations of Theorem 4 we obtain, using again p_2 as a local coordinate, the term µ_k a(T, 0) on a chart associated to the second normal form, and the total contribution arising from normal forms 2 and 3 follows. Since a(T, 0) = φ̂(T) exp(iT p_1(z_0)), see formula (66) below, this proves Theorem 4 for a total period.
Case of a non-total period
Let T be a non-total period of the linearized flow. We can assume, up to a permutation of coordinates, that the splitting is adapted to F_T. We can apply the Morse lemma with parameter, since the phase function at time T is only degenerate in z along F_T (cf. Corollary 10). The quadratic part S_2(t, x, ξ) of the function S is given by the formula above. Using Theorem 5.3 of , we can assume that det ≠ 0, and the function S_2 is well determined locally. Then we have the following facts. By the Morse lemma, after a change of variable z → z̃ and calling z̃ again z, we have Ψ(t, z) = q(z_2) + Ψ(t, z_1, z_2(t, z_1)) = q(z_2) + Ψ̃(t, z_1), and by Corollary 10 again, q = (1/2)q_T = p_2|_{F_T^⊥}. In the following, we write R(t, z_1) ≡ R(z_1, z_2(t, z_1)), g(t, z_1) ≡ g(t, z_1, z_2(t, z_1)), Ψ(t, z_1) ≡ Ψ(t, z_1, z_2(t, z_1)) = R(z_1, z_2(t, z_1)) + (t − T)g(t, z_1, z_2(t, z_1)).
With these conventions we can write the integral, where ã is the new amplitude after the change of variables due to the Morse lemma. The stationary phase method applied to the z_2-integral gives the expansion, and in particular the leading term. We now distinguish two different cases: the quadratic form Q_T, that is, p_2 restricted to F_T, is definite or non-definite. In the first case only the normal forms ±χ_0χ_1² will occur, i.e. the knowledge of dΦ_t(z_0) is sufficient. In the second case the main contribution involves R_k, and hence the operator d^{k−1}Φ_t(z_0).
But by construction χ_1 = r, and since we have localized the amplitude near T, (65) holds. We can now use a result of , also used in and . Since the propagator Exp(itP_h/h) is an FIO associated to the Lagrangian manifold of the flow, its principal symbol in the coordinates (t, y, η) is given by a half-density. The representation of the propagator with the kernel (1/(2πh)^n) ∫_{R^n} e^{(i/h)(S(t,x,η)−⟨y,η⟩)} (α(t, x, η) + hα_1(t, x, η, h)) dη leads to the half-density α(t, x, η)|dtdxdη|^{1/2}. For our unique critical point z_0 we obtain the corresponding value. If there is no period of dΦ_t(z_0) on supp(φ̂), Theorem 5.6 of gives the result for a certain m_0. The denominator has a zero of order d_T at t = T; hence for all t ≠ T in a sufficiently small neighborhood of T we have the stated estimate. Finally, since the contribution is smooth in t, we obtain the conclusion, and this completes the proof of Theorem 3.
Remark 36
If there are no rational relations between the eigenvalues of Q + and Q − all contributions of the non-zero periods of dΦ t (z 0 ) are given by Theorem 3, as is shown by Proposition 15. This gives a total contribution where the summation is over the non-zero periods of dΦ t (z 0 ).
The case Q T indefinite.
Copying the construction for a total period, we obtain normal forms for the phase Ψ(t, z_1) = R(t, z_1) + (t − T)g(t, z_1), with the decomposition w.r.t. C_{Q_T}. Now the dimension is l_T = 2d_T, and the results obtained for total periods show again that the contributions of normal forms 2 and 3 dominate those of normal forms 1. Combining this with the leading term of the stationary phase method gives I(T, h) = (2πh)^{n−d_T} e^{iπ sgn(q_T)/4} |det q_T|^{−1/2} ( ∫_{R×R^{l_T}} a(t, z_1, 0) e^{(i/h)(R(t,z_1)+(t−T)g(t,z_1))} dt dz_1 + O(h) ), and the contribution of a non-total period is computed by restriction of all objects to F_T. This proves Theorems 3 and 4 in their general forms.
The intersection C_Q ∩ C_{R_3} is obtained by solving the system; this leads to the surfaces (S_1), .... By symmetry we just examine gradients on the first surface (S_1). Since the minor determinant extracted from ∇Q|_{S_1} and ∇R_3|_{S_1}, D(x_1, ξ_1) = x_1ξ_1(x_1² + ξ_1²), is non-zero for (x_1, ξ_1) ≠ 0, the gradients ∇Q and ∇R_3 are linearly independent on C_Q ∩ C_{R_3} ∩ S³.
/*
* Copyright 2021 Data and Service Center for the Humanities - DaSCH.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package project
import (
"context"
"encoding/json"
"fmt"
"github.com/EventStore/EventStore-Client-Go/position"
"log"
"time"
"github.com/EventStore/EventStore-Client-Go/client"
"github.com/EventStore/EventStore-Client-Go/direction"
"github.com/EventStore/EventStore-Client-Go/messages"
"github.com/EventStore/EventStore-Client-Go/streamrevision"
"github.com/dasch-swiss/dasch-service-platform/services/admin/backend/entity/project"
"github.com/dasch-swiss/dasch-service-platform/services/admin/backend/event"
"github.com/dasch-swiss/dasch-service-platform/shared/go/pkg/valueobject"
"github.com/gofrs/uuid"
)
// projectRepository contains a pointer to the client.
type projectRepository struct {
c *client.Client
}
// NewProjectRepository creates a new repository to store project events in.
func NewProjectRepository(client *client.Client) *projectRepository {
return &projectRepository{
c: client,
}
}
// Save appends the aggregate's uncommitted project events to the event store.
func (r *projectRepository) Save(ctv context.Context, p *project.Aggregate) (valueobject.Identifier, error) {
var proposedEvents []messages.ProposedEvent
streamRevision := streamrevision.StreamRevisionStreamExists
for _, ev := range p.Events() {
switch e := ev.(type) {
case *event.ProjectCreated:
j, err := json.Marshal(e)
if err != nil {
return e.ID, fmt.Errorf("problem serializing '%T' event to json", e)
}
eventID, _ := uuid.NewV4()
pe := messages.ProposedEvent{
EventID: eventID,
EventType: "ProjectCreated",
ContentType: "application/json",
Data: j,
}
proposedEvents = append(proposedEvents, pe)
streamRevision = streamrevision.StreamRevisionNoStream
case *event.ProjectChanged:
j, err := json.Marshal(e)
if err != nil {
return e.ID, fmt.Errorf("problem serializing '%T' event to json", e)
}
eventID, _ := uuid.NewV4()
pe := messages.ProposedEvent{
EventID: eventID,
EventType: "ProjectChanged",
ContentType: "application/json",
Data: j,
}
proposedEvents = append(proposedEvents, pe)
case *event.ProjectDeleted:
j, err := json.Marshal(e)
if err != nil {
return e.ID, fmt.Errorf("problem serializing '%T' event to json", e)
}
eventID, _ := uuid.NewV4()
pe := messages.ProposedEvent{
EventID: eventID,
EventType: "ProjectDeleted",
ContentType: "application/json",
Data: j,
}
proposedEvents = append(proposedEvents, pe)
}
}
streamID := "Project-" + p.ID().String()
ctx, cancel := context.WithTimeout(ctv, 5*time.Second)
defer cancel()
_, err := r.c.AppendToStream(ctx, streamID, streamRevision, proposedEvents)
if err != nil {
return p.ID(), fmt.Errorf("problem appending events to stream %s: %w", streamID, err)
}
return p.ID(), nil
}
// Load reads the events from the event store and recreates a project aggregate.
func (r *projectRepository) Load(ctx context.Context, id valueobject.Identifier) (*project.Aggregate, error) {
streamID := "Project-" + id.String()
// TODO: figure out the correct way to replay all the events, currently hardcoded to replay the last 1000 events
recordedEvents, err := r.c.ReadStreamEvents(ctx, direction.Forwards, streamID, streamrevision.StreamRevisionStart, 1000, false)
if err != nil {
log.Printf("Unexpected failure %+v", err)
return &project.Aggregate{}, project.ErrProjectNotFound
}
var events []event.Event
for _, record := range recordedEvents {
switch eventType := record.EventType; eventType {
case "ProjectCreated":
var e event.ProjectCreated
err := json.Unmarshal(record.Data, &e)
if err != nil {
return &project.Aggregate{}, fmt.Errorf("problem deserializing '%s' event from json", record.EventType)
}
events = append(events, &e)
case "ProjectChanged":
var e event.ProjectChanged
err := json.Unmarshal(record.Data, &e)
if err != nil {
return &project.Aggregate{}, fmt.Errorf("problem deserializing '%s' event from json", record.EventType)
}
events = append(events, &e)
case "ProjectDeleted":
var e event.ProjectDeleted
err := json.Unmarshal(record.Data, &e)
if err != nil {
return &project.Aggregate{}, fmt.Errorf("problem deserializing '%s' event from json", record.EventType)
}
events = append(events, &e)
default:
log.Printf("unexpected event type: %s", eventType)
}
}
return project.NewAggregateFromEvents(events), nil
}
// GetProjectIds returns a list of all active project ids.
// returnDeletedProjects can be used to also return projects marked as deleted in the list.
func (r *projectRepository) GetProjectIds(ctx context.Context, returnDeletedProjects bool) ([]valueobject.Identifier, error) {
numberOfEventsToRead := 1000
numberOfEvents := uint64(numberOfEventsToRead)
recordedEvents, err := r.c.ReadAllEvents(ctx, direction.Forwards, position.StartPosition, numberOfEvents, true)
if err != nil {
log.Printf("Unexpected failure %+v", err)
return nil, err
}
var projectIds []valueobject.Identifier
// collect ids from ProjectCreated events and drop ids of deleted projects
for _, record := range recordedEvents {
switch eventType := record.EventType; eventType {
case "ProjectCreated":
var e event.ProjectCreated
err := json.Unmarshal(record.Data, &e)
if err != nil {
return []valueobject.Identifier{}, fmt.Errorf("problem deserializing '%s' event from json", record.EventType)
}
projectIds = append(projectIds, e.ID)
case "ProjectDeleted":
var e event.ProjectDeleted
err := json.Unmarshal(record.Data, &e)
if err != nil {
return []valueobject.Identifier{}, fmt.Errorf("problem deserializing '%s' event from json", record.EventType)
}
if !returnDeletedProjects { // if deleted projects should not be returned
for i := range projectIds { // loop through the project ids
if projectIds[i] == e.ID { // if the deleted project is found among the project ids
projectIds = append(projectIds[:i], projectIds[i+1:]...) // remove it
break // stop: indices after the removal have shifted
}
}
}
}
}
return projectIds, nil
}
|
James Blunt is reportedly saying "Goodbye My Lover" to the music industry.
"I just want to take some time out for myself," said Blunt in an interview with The Daily Mail. "I haven't got any plans to do more songwriting. I have been chilling out since I finished my world tour and I've spent a lot of time in Ibiza."
Blunt rose to fame with his 2005 album, "Back to Bedlam," which featured hit tracks "You're Beautiful" and the aforementioned "Goodbye My Lover." The British singer began his music career after serving as an officer in the British Army's Life Guards, including completing a stint under NATO in Kosovo in 1999.
Blunt released two other albums after "Back to Bedlam": 2007's "All the Lost Souls" and 2010's "Some Kind of Trouble." While both were generally well-received, "Back to Bedlam" was far and away the most successful, selling 11 million copies and earning him five Grammy nominations, including Best New Artist, Song of the Year, and Best Pop Vocal Album.
/**
* evaluates an expression and adds containing vars to the sets.
*/
private void handleExpression(
CFAEdge edge,
CExpression exp,
String varName,
final VariableOrField lhs) {
handleExpression(edge, exp, varName, 0, lhs);
} |
/** Run SYNC on the table, i.e., write out data from the cache to the
FTS auxiliary INDEX table and clear the cache at the end.
@param[in,out] table fts table
@param[in] wait whether wait for existing sync to finish
@return DB_SUCCESS on success, error code on failure. */
dberr_t fts_sync_table(dict_table_t* table, bool wait)
{
dberr_t err = DB_SUCCESS;
ut_ad(table->fts);
if (table->space && table->fts->cache
&& !dict_table_is_corrupted(table)) {
err = fts_sync(table->fts->cache->sync, !wait, wait);
}
return(err);
} |
/* xpdatetime.h */
/* Cross-platform (and eXtra Precision) date/time functions */
/* $Id: xpdatetime.h,v 1.4 2014/02/10 09:20:44 deuce Exp $ */
/****************************************************************************
* @format.tab-size 4 (Plain Text/Source Code File Header) *
* @format.use-tabs true (see http://www.synchro.net/ptsc_hdr.html) *
* *
* Copyright 2008 <NAME> - http://www.synchro.net/copyright.html *
* *
* This library is free software; you can redistribute it and/or *
* modify it under the terms of the GNU Lesser General Public License *
* as published by the Free Software Foundation; either version 2 *
* of the License, or (at your option) any later version. *
* See the GNU Lesser General Public License for more details: lgpl.txt or *
* http://www.fsf.org/copyleft/lesser.html *
* *
* Anonymous FTP access to the most recent released source is available at *
* ftp://vert.synchro.net, ftp://cvs.synchro.net and ftp://ftp.synchro.net *
* *
* Anonymous CVS access to the development source and modification history *
* is available at cvs.synchro.net:/cvsroot/sbbs, example: *
* cvs -d :pserver:[email protected]:/cvsroot/sbbs login *
* (just hit return, no password is necessary) *
* cvs -d :pserver:[email protected]:/cvsroot/sbbs checkout src *
* *
* For Synchronet coding style and modification guidelines, see *
* http://www.synchro.net/source.html *
* *
* You are encouraged to submit any modifications (preferably in Unix diff *
* format) via e-mail to <EMAIL> *
* *
* Note: If this box doesn't appear square, then you need to fix your tabs. *
****************************************************************************/
#ifndef _XPDATETIME_H_
#define _XPDATETIME_H_
#include "gen_defs.h" /* uint32_t and time_t */
#include "wrapdll.h"
#if defined(__cplusplus)
extern "C" {
#endif
/**************************************/
/* Cross-platform date/time functions */
/**************************************/
#define INVALID_TIME (time_t)-1 /* time_t representation of an invalid date/time */
typedef struct {
unsigned year; /* 0-9999 */
unsigned month; /* 1-12 */
unsigned day; /* 1-31 */
} xpDate_t;
typedef struct {
unsigned hour; /* 0-23 */
unsigned minute; /* 0-59 */
float second; /* 0.0-59.999, supports fractional seconds */
} xpTime_t;
typedef int xpTimeZone_t;
#define xpTimeZone_UTC 0
#define xpTimeZone_LOCAL 1
typedef struct {
xpDate_t date;
xpTime_t time;
xpTimeZone_t zone; /* minutes +/- UTC */
} xpDateTime_t;
DLLEXPORT xpDateTime_t DLLCALL xpDateTime_create(unsigned year, unsigned month, unsigned day
,unsigned hour, unsigned minute, float second
,xpTimeZone_t);
DLLEXPORT xpDateTime_t DLLCALL xpDateTime_now(void);
DLLEXPORT time_t DLLCALL xpDateTime_to_time(xpDateTime_t);
DLLEXPORT xpDateTime_t DLLCALL time_to_xpDateTime(time_t, xpTimeZone_t);
DLLEXPORT xpDateTime_t DLLCALL gmtime_to_xpDateTime(time_t);
DLLEXPORT xpTimeZone_t DLLCALL xpTimeZone_local(void);
/**********************************************/
/* Decimal-coded ISO-8601 date/time functions */
/**********************************************/
typedef uint32_t isoDate_t; /* CCYYMMDD (decimal) */
typedef uint32_t isoTime_t; /* HHMMSS (decimal) */
#define isoDate_create(year,mon,day) (((year)*10000)+((mon)*100)+(day))
#define isoTime_create(hour,min,sec) (((hour)*10000)+((min)*100)+((unsigned)sec))
#define isoDate_year(date) ((date)/10000)
#define isoDate_month(date) (((date)/100)%100)
#define isoDate_day(date) ((date)%100)
#define isoTime_hour(time) ((time)/10000)
#define isoTime_minute(time) (((time)/100)%100)
#define isoTime_second(time) ((time)%100)
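/* Example (illustrative, not part of the API): the decimal coding above
 * packs and unpacks with plain arithmetic:
 *
 *	isoDate_t d = isoDate_create(2024, 2, 10);	-> d == 20240210
 *	isoDate_year(d) == 2024, isoDate_month(d) == 2, isoDate_day(d) == 10
 *
 *	isoTime_t t = isoTime_create(14, 2, 39);	-> t == 140239
 *	isoTime_hour(t) == 14, isoTime_minute(t) == 2, isoTime_second(t) == 39
 */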
DLLEXPORT BOOL DLLCALL isoTimeZoneStr_parse(const char* str, xpTimeZone_t*);
DLLEXPORT xpDateTime_t DLLCALL isoDateTimeStr_parse(const char* str);
/**************************************************************/
/* Conversion between time_t (local and GMT) and isoDate/Time */
/**************************************************************/
DLLEXPORT isoTime_t DLLCALL time_to_isoTime(time_t);
DLLEXPORT isoTime_t DLLCALL gmtime_to_isoTime(time_t);
DLLEXPORT isoDate_t DLLCALL time_to_isoDateTime(time_t, isoTime_t*);
DLLEXPORT isoDate_t DLLCALL gmtime_to_isoDateTime(time_t, isoTime_t*);
DLLEXPORT time_t DLLCALL isoDateTime_to_time(isoDate_t, isoTime_t);
#define time_to_isoDate(t) time_to_isoDateTime(t,NULL)
#define gmtime_to_isoDate(t) gmtime_to_isoDateTime(t,NULL)
/***************************************************/
/* Conversion between xpDate/Time and isoDate/Time */
/***************************************************/
#define xpDate_to_isoDate(date) isoDate_create((date).year,(date).month,(date).day)
#define xpTime_to_isoTime(time) isoTime_create((time).hour,(time).minute,(unsigned)((time).second))
DLLEXPORT xpDateTime_t DLLCALL isoDateTime_to_xpDateTime(isoDate_t, isoTime_t);
DLLEXPORT isoDate_t DLLCALL xpDateTime_to_isoDateTime(xpDateTime_t, isoTime_t*);
/*****************************************************************/
/* Conversion from xpDate/Time/Zone to isoDate/Time/Zone Strings */
/*****************************************************************/
/* NULL sep (separator) values are automatically replaced with ISO-standard separators */
/* precision example output
* -2 "14"
* -1 "14:02"
* 0 "14:02:39"
* 1 "14.02:39.8"
* 2 "14.02:39.82"
* 3 "14.02:39.829"
*/
DLLEXPORT char* DLLCALL xpDate_to_isoDateStr(xpDate_t
,const char* sep
,char* str, size_t maxlen);
DLLEXPORT char* DLLCALL xpTime_to_isoTimeStr(xpTime_t
,const char* sep
,int precision
,char* str, size_t maxlen);
DLLEXPORT char* DLLCALL xpTimeZone_to_isoTimeZoneStr(xpTimeZone_t
,const char* sep
,char *str, size_t maxlen);
DLLEXPORT char* DLLCALL xpDateTime_to_isoDateTimeStr(xpDateTime_t
,const char* date_sep, const char* datetime_sep, const char* time_sep
,int precision
,char* str, size_t maxlen);
#if defined(__cplusplus)
}
#endif
#endif /* Don't add anything after this line */
|
//
// Copyright 2007-2008 Christian Henning
//
// Distributed under the Boost Software License, Version 1.0
// See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt
//
#ifndef BOOST_GIL_IO_SCANLINE_READ_ITERATOR_HPP
#define BOOST_GIL_IO_SCANLINE_READ_ITERATOR_HPP
#include <boost/gil/io/error.hpp>
#include <boost/gil/io/typedefs.hpp>
#include <boost/iterator/iterator_facade.hpp>
#include <iterator>
#include <memory>
#include <vector>
namespace boost { namespace gil {
#if BOOST_WORKAROUND(BOOST_MSVC, >= 1400)
#pragma warning(push)
#pragma warning(disable:4512) //assignment operator could not be generated
#endif
/// Input iterator to read images.
template< typename Reader >
class scanline_read_iterator : public boost::iterator_facade< scanline_read_iterator< Reader >
, byte_t*
, std::input_iterator_tag
>
{
private:
typedef boost::iterator_facade< scanline_read_iterator< Reader >
, byte_t*
, std::input_iterator_tag
> base_t;
public:
scanline_read_iterator( Reader& reader
, int pos = 0
)
: _reader( reader )
, _pos( pos )
, _read_scanline( true )
, _skip_scanline( true )
{
_buffer = std::make_shared< buffer_t >( _reader._scanline_length );
_buffer_start = &_buffer->front();
}
private:
friend class boost::iterator_core_access;
void increment()
{
if( _skip_scanline == true )
{
_reader.skip( _buffer_start
, _pos
);
}
++_pos;
_skip_scanline = true;
_read_scanline = true;
}
bool equal( const scanline_read_iterator& rhs ) const
{
return _pos == rhs._pos;
}
typename base_t::reference dereference() const
{
if( _read_scanline == true )
{
_reader.read( _buffer_start
, _pos
);
}
_skip_scanline = false;
_read_scanline = false;
return _buffer_start;
}
private:
Reader& _reader;
mutable int _pos;
mutable bool _read_scanline;
mutable bool _skip_scanline;
using buffer_t = std::vector<byte_t>;
using buffer_ptr_t = std::shared_ptr<buffer_t>;
buffer_ptr_t _buffer;
mutable byte_t* _buffer_start;
};
#if BOOST_WORKAROUND(BOOST_MSVC, >= 1400)
#pragma warning(pop)
#endif
} // namespace gil
} // namespace boost
#endif
|
package com.hk.luatela.dialect.mysql;
import com.hk.luatela.dialect.Dialect;
import com.hk.luatela.dialect.Dialect.*;
import com.hk.str.HTMLText;
public class MySQLTableMeta implements TableMeta, MySQLDialect.MySQLDialectOwner
{
final String tableName;
public MySQLTableMeta(Owner owner, String name)
{
this.tableName = '`' + owner.getPrefix() + name + '`';
}
@Override
public FieldMeta field(String name)
{
return new MySQLFieldMeta(this, name);
}
@Override
public HTMLText print(HTMLText txt)
{
return txt.wr(tableName);
}
}
|
// New returns a pointer to an instance of the monitor
func New(config store.Config, status store.Status) *Monitor {
prober := NewProber(config, status)
prober.Start()
mon := &Monitor{
Status: status,
Config: config,
Prober: prober,
}
mon.setupConfigWatcher()
return mon
}
/**
* Modifies the current status of a Push source. This operation allows you
* to update the activity logs of a Push source (and consequently the
* activity indicators in the Coveo Cloud V2 administration console).
* Pushing an active source status (i.e., REBUILD, REFRESH, or INCREMENTAL)
* creates an activity. Pushing the IDLE status terminates the ongoing
* activity and marks it as completed.
*
* @param status the source status to push
* @return the Push API response
* @throws IOException if the request fails or the Push API returns an error response
*/
public CoveoResponse updateSourceStatus(PushAPIStatus status) throws IOException {
String uri = this.baseUrl + "/status?statusType=" + status.toString();
HttpPost request = new HttpPost(uri);
setDefaultHeaders(request);
log.debug("Setting Source Status: {}", status);
HttpResponse response = null;
try {
response = client.execute(request);
ResponseOrResponseException respOrEx = convertResponse(request.getRequestLine(), response);
if (respOrEx.responseException == null) {
return respOrEx.response;
} else {
throw respOrEx.responseException;
}
} catch (IOException ex) {
log.error("Error updating source status", ex);
throw ex;
} finally {
if (request != null) {
request.releaseConnection();
}
}
} |
// projects/angular-ngrx-material-starter/src/app/features/examples/authenticated/authenticated.component.ts
import { ChangeDetectionStrategy, Component, OnInit } from '@angular/core';
import { ROUTE_ANIMATIONS_ELEMENTS } from '../../../core/core.module';
@Component({
selector: 'mfework-authenticated',
templateUrl: './authenticated.component.html',
styleUrls: ['./authenticated.component.scss'],
changeDetection: ChangeDetectionStrategy.OnPush
})
export class AuthenticatedComponent implements OnInit {
routeAnimationsElements = ROUTE_ANIMATIONS_ELEMENTS;
constructor() {}
ngOnInit() {}
}
|
// errorOnUserKeyOverlap returns an error if the last two written sstables in
// this compaction have revisions of the same user key present in both sstables,
// when it shouldn't (eg. when splitting flushes).
func (c *compaction) errorOnUserKeyOverlap(ve *versionEdit) error {
if n := len(ve.NewFiles); n > 1 {
meta := ve.NewFiles[n-1].Meta
prevMeta := ve.NewFiles[n-2].Meta
if !prevMeta.Largest.IsExclusiveSentinel() &&
c.cmp(prevMeta.Largest.UserKey, meta.Smallest.UserKey) >= 0 {
return errors.Errorf("pebble: compaction split user key across two sstables: %s in %s and %s",
prevMeta.Largest.Pretty(c.formatKey),
prevMeta.FileNum,
meta.FileNum)
}
}
return nil
} |
// scripts/manual/0_test.ts
import "@nomiclabs/hardhat-ethers";
import "@openzeppelin/hardhat-upgrades";
import { upgrades, ethers } from "hardhat";
import { Contract } from "ethers";
async function main() {
const [owner] = await ethers.getSigners();
console.log("Deploying contracts with the account:", owner.address);
console.log("Account balance:", (await owner.getBalance()).toString());
let box: Contract;
const Box = await ethers.getContractFactory("Box");
box = Box.attach("0xDfB3cf8D499912Fc837bAc755D1f350491AB00ED");
const tx = await box.increment();
await tx.wait(); // wait for the transaction to be mined before reading the value
console.log(tx.hash);
console.log((await box.retrieve()).toString());
}
main()
.then(() => process.exit(0))
.catch(error => {
console.error(error);
process.exit(1);
});
|
/**
* Parses an optional <tt>resultMatcher</tt> element.
* If the element is absent, a default equality-based matcher is used.
*
* @param runElement
* @return the specified ResultMatcher instance, or the default
* equality result matcher if none was specified
* @throws TestParseException if a parsing error was encountered
*/
private ResultMatcher parseResultMatcher(final Element runElement) throws TestParseException {
final Element goElement = runElement.getChild("resultMatcher");
if (goElement == null) {
return EQUALITY_RESULT_MATCHER;
}
final String goClass = goElement.getTextTrim();
final ResultMatcher resultMatcher = (ResultMatcher)getInstance(goClass, ResultMatcher.class);
if (resultMatcher == null) {
throw new TestParseException(
"Could not create instance of ResultMatcher from class " + goClass);
}
return resultMatcher;
} |
/**
* Created by mnural on 8/5/15.
*/
@Configuration
@PropertySource("classpath:config/config.properties")
public class Config {
@Autowired
Environment env;
public Environment getEnv() {
return env;
}
public String getProperty(String key){
return env.getProperty(key);
}
public boolean getBooleanProperty(String key) {
return Boolean.parseBoolean(env.getProperty(key));
}
} |
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/* greatest common divisor via the Euclidean algorithm */
int gcd(int a,int b){
if(b==0)
return a;
return gcd(b,a%b);
}
int main(){
int c,i,tmp,N,X,x[101000],dx[101000];
scanf("%d %d",&N,&X);
for(i=0;i<N;i++){
scanf("%d",&x[i]);
dx[i]=abs(X-x[i]); /* distance from X to each point */
}
/* the answer is the gcd of all distances |X - x[i]| */
if(dx[0]<dx[1]){
tmp=dx[0];
dx[0]=dx[1];
dx[1]=tmp;
}
c=gcd(dx[0],dx[1]);
for(i=2;i<N;i++){
if(c<dx[i]){
tmp=c;
c=dx[i];
dx[i]=tmp;
}
c=gcd(c,dx[i]);
}
printf("%d",c);
return 0;
}
|
/*-----------------------------------------------------------------------------
* Vpacker.java - A wrapper of vpacker32/64 for JNI
*
* Coding-Style: google-styleguide
* https://code.google.com/p/google-styleguide/
*
* Copyright 2013 <NAME> <<EMAIL>>
*-----------------------------------------------------------------------------
*/
public class Vpacker {
/*-------------------------------------------------
* An interface for compression: an input 32-bit or
* 64-bit array in *src is compressed and output
* to *dst as a byte sequence. Notice that the
* compress functions are assumed to encode a
* sequence of 'positive' integers with high
* skewness. Therefore, negative integers could
* deteriorate compression ratios.
*
* src : input buffer
* dst : output buffer
* n : # of input integers
* return : # of written bytes, or 0 if it fails
*-------------------------------------------------
*/
public native static long
compress32(final int[] src, byte[] dst, long n);
public native static long
compress64(final long[] src, byte[] dst, long n);
/*-------------------------------------------------
* An interface for decompression: an input byte
* sequence compressed by vpacker32/64_compress()
* is decompressed, and output to *dst as a 32-bit
* or 64-bit array.
*
* src : input buffer
* dst : output buffer
* n : # of input bytes
* return : # of read bytes, or 0 if it fails
*-------------------------------------------------
*/
public native static long
uncompress32(final byte[] src, int[] dst, long n);
public native static long
uncompress64(final byte[] src, long[] dst, long n);
/*-------------------------------------------------
* The function provides the maximum size that
* vpacker32/64_compress() may output. It is useful
* to know the size in advance because of memory
* allocation for *dst during compression. These
* functions return 0 if the size is beyond
* supported ones for vpacker.
*
* n : # of input integers
* return : maximum size, or 0 if unsupported
*-------------------------------------------------
*/
public native static long
compress32_bound(long n);
public native static long
compress64_bound(long n);
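/*-------------------------------------------------
* Illustrative usage sketch (hypothetical values;
* the native library must be loaded first, e.g.
* via System.loadLibrary("vpacker")):
*
* int[] src = {1, 2, 3, 4};
* long bound = Vpacker.compress32_bound(src.length);
* byte[] dst = new byte[(int)bound];
* long written =
* Vpacker.compress32(src, dst, src.length);
*-------------------------------------------------
*/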
}
|
BIODIVERSITY AND FUNCTIONING OF SELECTED TERRESTRIAL ECOSYSTEMS: ALPINE AND ARCTIC ECOSYSTEMS
Ecosystem integrity on steep mountain slopes and in high elevation landscapes is in general a question of soil stability, which in turn depends on plant cover and rooting patterns. Terrestrial net primary production and decomposition rates in arctic and alpine ecosystems are low and revegetation after human disturbance can take centuries. Relatively few species regulate the annual input and loss of nitrogen from arctic and alpine ecosystems and changes in the abundance of these species could profoundly alter the resource base that governs rates of biogeochemical processes. |
import { AlunosRepository } from "../../repositories/AlunosRepository";
import { PedidosRepository } from "../../repositories/PedidosRepository";
import { ListPedidosByALunosAguardandoDoadorController } from "./ListPedidosByAlunosAguardandoDoadorController";
import { ListPedidosByAlunosAguardandoDoadorUseCase } from "./ListPedidosByAlunosAguardandoDoadorUseCase";
export default (): ListPedidosByALunosAguardandoDoadorController => {
const pedidosRepository = new PedidosRepository();
const alunosRepository = new AlunosRepository();
const listPedidosByAlunoAguardandoDoadorUseCase = new ListPedidosByAlunosAguardandoDoadorUseCase(
pedidosRepository,
alunosRepository
);
const listPedidosByAlunosAguardandoDoadorController = new ListPedidosByALunosAguardandoDoadorController(
listPedidosByAlunoAguardandoDoadorUseCase
);
return listPedidosByAlunosAguardandoDoadorController;
};
|
package org.cryptimeleon.predenc.abe.cp.large.distributed;
import org.cryptimeleon.math.serialization.Representation;
import org.cryptimeleon.math.serialization.StandaloneRepresentable;
import org.cryptimeleon.math.serialization.annotations.ReprUtil;
import org.cryptimeleon.math.serialization.annotations.Represented;
import org.cryptimeleon.predenc.abe.distributed.MasterKeyShare;
import java.math.BigInteger;
import java.util.Objects;
public class DistributedABECPWat11MasterKeyShare implements StandaloneRepresentable, MasterKeyShare {
@Represented
private Integer serverID;
@Represented
private BigInteger share;
public DistributedABECPWat11MasterKeyShare(int serverID, BigInteger share) {
super();
this.serverID = serverID;
this.share = share;
}
public DistributedABECPWat11MasterKeyShare(Representation repr) {
new ReprUtil(this).deserialize(repr);
}
public int getServerID() {
return serverID;
}
public BigInteger getShare() {
return share;
}
@Override
public Representation getRepresentation() {
return ReprUtil.serialize(this);
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + serverID;
result = prime * result + ((share == null) ? 0 : share.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
DistributedABECPWat11MasterKeyShare other = (DistributedABECPWat11MasterKeyShare) obj;
return Objects.equals(serverID, other.serverID)
&& Objects.equals(share, other.share);
}
}
|
/**
* An implementation of {@link EventDataConverter} for protocol version v0.22.
*
* The previous converter implementation, {@link EventDataConverterV21}, does
* not expose the proper blip hierarchy, for example, parent blip can be the
* blip that contains the container thread, or the previous sibling blip. This
* implementation, however, is purely based on the {@link ConversationThread}.
*
*/
public class EventDataConverterV22 extends EventDataConverterV21 {
@Override
public WaveletData toWaveletData(Wavelet wavelet, Conversation conversation,
EventMessageBundle eventMessageBundle) {
WaveletData waveletData = super.toWaveletData(wavelet, conversation,
eventMessageBundle);
List<String> blipIds = Lists.newLinkedList();
for (ConversationBlip conversationBlip : conversation.getRootThread().getBlips()) {
blipIds.add(conversationBlip.getId());
}
waveletData.setRootThread(new BlipThread("", -1 , blipIds, null));
return waveletData;
}
@Override
public BlipData toBlipData(ConversationBlip blip, Wavelet wavelet,
EventMessageBundle eventMessageBundle) {
BlipData blipData = super.toBlipData(blip, wavelet, eventMessageBundle);
String threadId = blip.getThread().getId();
blipData.setThreadId(threadId);
// Skip the root thread, which doesn't have a thread id.
if (!threadId.isEmpty()) {
ConversationThread thread = blip.getThread();
addThread(eventMessageBundle, thread, -1, wavelet);
}
// Add the inline reply threads.
List<String> threadIds = Lists.newLinkedList();
for (LocatedReplyThread<? extends ConversationThread> thread : blip.locateReplyThreads()) {
String replyThreadId = thread.getThread().getId();
threadIds.add(replyThreadId);
addThread(eventMessageBundle, thread.getThread(), thread.getLocation(), wavelet);
}
blipData.setReplyThreadIds(threadIds);
return blipData;
}
/**
* Finds the children of a blip, defined as all blips in all reply threads.
*
* @param blip the blip.
* @return the children of the given blip.
*/
@Override
public List<ConversationBlip> findBlipChildren(ConversationBlip blip) {
List<ConversationBlip> children = Lists.newArrayList();
// Add all children from the inline reply threads.
for (LocatedReplyThread<? extends ConversationThread> thread : blip.locateReplyThreads()) {
for (ConversationBlip child : thread.getThread().getBlips()) {
children.add(child);
}
}
return children;
}
/**
* Finds the parent of a blip.
*
* @param blip the blip.
* @return the blip's parent, or {@code null} if the blip is the first blip
* in a conversation.
*/
@Override
public ConversationBlip findBlipParent(ConversationBlip blip) {
return blip.getThread().getParentBlip();
}
/**
* Converts a {@link ConversationThread} into API {@link BlipThread}, then add it
* to the given {@link EventMessageBundle}.
*
* @param eventMessageBundle the event message bundle to add the thread to.
* @param thread the {@link ConversationThread} to convert.
* @param location the anchor location of the thread, or -1 if it's not an
* inline reply thread.
* @param wavelet the wavelet to which the given thread belongs.
*/
private static void addThread(EventMessageBundle eventMessageBundle, ConversationThread thread,
int location, Wavelet wavelet) {
String threadId = thread.getId();
if (eventMessageBundle.hasThreadId(threadId)) {
// The bundle already has the thread, so we don't need to do the
// conversion.
return;
}
// Convert the XML offset into the text offset.
ConversationBlip parent = thread.getParentBlip();
// Locate the thread, if necessary.
if (location == -1) {
for (LocatedReplyThread<? extends ConversationThread> inlineReplyThread :
parent.locateReplyThreads()) {
if (thread.getId().equals(inlineReplyThread.getThread().getId())) {
location = inlineReplyThread.getLocation();
break;
}
}
}
// Use ApiView to convert the offset.
if (location != -1) {
ApiView apiView = new ApiView(parent.getContent(), wavelet);
location = apiView.transformToTextOffset(location);
}
// Get the ids of the contained blips.
List<String> blipIds = Lists.newLinkedList();
for (ConversationBlip blip : thread.getBlips()) {
blipIds.add(blip.getId());
}
eventMessageBundle.addThread(threadId, new BlipThread(thread.getId(), location, blipIds, null));
}
} |
import { Component, Input, EventEmitter, Output, OnInit, OnDestroy } from '@angular/core';
import { UtilsService } from '@app/app/manage-learn/core';
@Component({
selector: 'app-page-questions',
templateUrl: './page-questions.component.html',
styleUrls: ['./page-questions.component.scss'],
})
export class PageQuestionsComponent implements OnInit,OnDestroy {
@Input() inputIndex;
@Input() data: any;
@Input() isLast: boolean;
@Input() isFirst: boolean;
@Output() nextCallBack = new EventEmitter();
@Output() updateLocalData = new EventEmitter();
@Output() previousCallBack = new EventEmitter()
@Input() evidenceId: string;
@Input() hideButton: boolean;
@Input() submissionId: any;
@Input() imageLocalCopyId: string;
@Input() generalQuestion: boolean;
@Input() schoolId;
@Input() enableQuestionReadOut: boolean;
notNumber: boolean;
questionValid: boolean;
text: string;
constructor(private utils: UtilsService) { }
ngOnDestroy() {
console.log(JSON.stringify(this.data))
for (const question of this.data.pageQuestions) {
// Do check only for questions without visibleif. For visibleIf questions isCompleted property is set in checkForVisibility()
if (!question.visibleIf) {
question.isCompleted = this.utils.isQuestionComplete(question);
}
}
}
ngOnInit() {
this.data.startTime = this.data.startTime ? this.data.startTime : Date.now();
}
updateLocalDataInPageQuestion(): void {
this.updateLocalData.emit();
}
// Evaluates the visibleIf conditions of the question at currentQuestionIndex
// against the other questions on this page. Returns false (and marks the
// question completed) as soon as one condition fails.
checkForVisibility(currentQuestionIndex) {
const currentQuestion = this.data.pageQuestions[currentQuestionIndex];
for (const question of this.data.pageQuestions) {
for (const condition of currentQuestion.visibleIf) {
// Only conditions that reference this question apply.
if (condition._id === question._id) {
const expression = [];
if (condition.operator !== '===') {
if (question.responseType === 'multiselect') {
for (const parentValue of question.value) {
for (const value of condition.value) {
expression.push('(', "'" + parentValue + "'", '===', "'" + value + "'", ')', condition.operator);
}
}
} else {
for (const value of condition.value) {
expression.push('(', "'" + question.value + "'", '===', "'" + value + "'", ')', condition.operator);
}
}
expression.pop(); // drop the trailing operator
} else {
if (question.responseType === 'multiselect') {
for (const value of question.value) {
expression.push('(', "'" + condition.value + "'", '===', "'" + value + "'", ')', '||');
}
expression.pop(); // drop the trailing '||'
} else {
expression.push('(', "'" + question.value + "'", condition.operator, "'" + condition.value + "'", ')');
}
}
// The assembled string is evaluated with eval(); the fragments quote
// every value, but this is fragile for answers containing quote characters.
if (!eval(expression.join(''))) {
currentQuestion.isCompleted = true;
return false;
}
currentQuestion.isCompleted = this.utils.isQuestionComplete(currentQuestion);
}
}
}
return true;
}
}
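The eval()-based check in checkForVisibility can be expressed without dynamic evaluation. A minimal sketch, assuming equality-style operators only; `conditionMet` and the `Condition` shape are illustrative names inferred from the component above, not part of it:

```typescript
// Hypothetical shape of a single visibleIf condition (inferred from usage above).
interface Condition {
  operator: '===' | '!==';
  value: string[];
}

// Returns true when the parent question's answer satisfies the condition,
// without building and eval()-ing a string expression. "answer" may be a
// single value or an array (multiselect questions).
function conditionMet(answer: string | string[], condition: Condition): boolean {
  const answers = Array.isArray(answer) ? answer : [answer];
  const matched = answers.some((a) => condition.value.includes(a));
  return condition.operator === '===' ? matched : !matched;
}
```

This simplifies the original's operator-joining logic to a membership test, which covers the common "show if the parent answer is one of these values" case while avoiding eval() entirely.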
|
// MarshalXML encodes the date as nested <year>, <month> and <day> elements.
func (dx DateYMD) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
d := dx.Val()
if d == nil {
return nil
}
err := e.EncodeToken(start)
if err == nil {
err = e.EncodeElement(d.Year(), xml.StartElement{Name: xml.Name{Local: "year"}})
}
if err == nil {
err = e.EncodeElement(d.Month(), xml.StartElement{Name: xml.Name{Local: "month"}})
}
if err == nil {
err = e.EncodeElement(d.Day(), xml.StartElement{Name: xml.Name{Local: "day"}})
}
if err == nil {
err = e.EncodeToken(xml.EndElement{Name: start.Name})
}
return err
}
// Update the database and memolist according to the memo (index "num") that Edit.class returns.
private void updateLitePalAndList(int requestCode, Intent it) {
int num = requestCode;
int tag = it.getIntExtra("tag", 0);
Calendar c = Calendar.getInstance();
String current_date = getCurrentDate(c);
String current_time = getCurrentTime(c);
String alarm = it.getStringExtra("alarm");
String mainText = it.getStringExtra("mainText");
// getStringExtra() may return null; anything longer than one character counts as an alarm.
boolean gotAlarm = alarm != null && alarm.length() > 1;
OneMemo new_memo = new OneMemo(tag, current_date, current_time, gotAlarm, mainText);
if((requestCode+1)>memolist.size()) {
addRecordToLitePal(num, tag, current_date, current_time, alarm, mainText);
memolist.add(new_memo);
}
else {
if(memolist.get(num).getAlarm()) {
cancelAlarm(num);
}
ContentValues temp = new ContentValues();
temp.put("tag", tag);
temp.put("textDate", current_date);
temp.put("textTime", current_time);
temp.put("alarm", alarm);
temp.put("mainText", mainText);
String where = String.valueOf(num);
DataSupport.updateAll(Memo.class, temp, "num = ?", where);
memolist.set(num, new_memo);
}
if(gotAlarm) {
loadAlarm(alarm, requestCode, 0);
}
adapter.notifyDataSetChanged();
} |
/**
* This unit test fires up a client and server and then tests that the client can request gzip content from the server.
* @author Tom Haggie
*/
public class AcceptEncodingGZipTest {
private static final String MESSAGE = "Hello world!";
private int port;
private HttpServer<ByteBuf, ByteBuf> server;
private HttpClient<ByteBuf, ByteBuf> client;
@Before
public void setupServer() {
server = createServer();
server.start();
port = server.getServerPort();
client = createClient("localhost", port);
}
@After
public void stopServer() throws InterruptedException {
server.shutdown();
client.shutdown();
}
/**
* Just here to show that things work without the compression
*/
@Test
public void getUnzippedContent() {
HttpClientRequest<ByteBuf> request = HttpClientRequest.create(HttpMethod.GET, "/test");
testRequest(client, request);
}
/**
* The actual test - fails with an IllegalReferenceCountException
*/
@Test
public void getZippedContent() {
HttpClientRequest<ByteBuf> request = HttpClientRequest.create(HttpMethod.GET, "/test");
request.withHeader("Accept-Encoding", "gzip, deflate");
testRequest(client, request);
}
/**
* Test a request by sending it to the server and then asserting the answer we get back is correct.
*/
private static void testRequest(HttpClient<ByteBuf, ByteBuf> client, HttpClientRequest<ByteBuf> request) {
String message = client.submit(request)
.flatMap(getContent)
.map(toString)
.toBlocking()
.single();
Assert.assertEquals(MESSAGE, message);
}
/**
* Ignore the headers etc. just get the response content.
*/
private static final Func1<HttpClientResponse<ByteBuf>, Observable<ByteBuf>> getContent = new Func1<HttpClientResponse<ByteBuf>, Observable<ByteBuf>>() {
@Override
public Observable<ByteBuf> call(HttpClientResponse<ByteBuf> response) {
return response.getContent();
}
};
/**
* Converts a ByteBuf to a string - assumes UTF-8
*/
private static final Func1<ByteBuf, String> toString = new Func1<ByteBuf, String>() {
@Override
public String call(ByteBuf byteBuf) {
return byteBuf.toString(StandardCharsets.UTF_8);
}
};
/**
* Create a dumb server that just responds to any request with the same "Hello world!" response.
* If there's an "Accept-Encoding" header containing gzip, the response is zipped before it's returned.
private static HttpServer<ByteBuf, ByteBuf> createServer() {
return RxNetty.newHttpServerBuilder(0, new RequestHandler<ByteBuf, ByteBuf>() {
@Override
public Observable<Void> handle(HttpServerRequest<ByteBuf> request, final HttpServerResponse<ByteBuf> response) {
String acceptEncoding = request.getHeaders().get("Accept-Encoding");
if (acceptEncoding != null && acceptEncoding.contains("gzip")) {
response.getHeaders().add("Content-Encoding", "gzip");
byte[] zMessage = zipMessage(MESSAGE);
return response.writeBytesAndFlush(zMessage);
} else {
return response.writeStringAndFlush(MESSAGE);
}
}
}).pipelineConfigurator(new HttpServerPipelineConfigurator<ByteBuf, ByteBuf>()).build();
}
/**
* Create a simple client with a content decompressor
*/
private static HttpClient<ByteBuf, ByteBuf> createClient(String host, int port) {
HttpClientBuilder<ByteBuf, ByteBuf> builder = RxNetty.newHttpClientBuilder(host, port);
builder.pipelineConfigurator(
new PipelineConfiguratorComposite<HttpClientResponse<ByteBuf>, HttpClientRequest<ByteBuf>>(
new HttpClientPipelineConfigurator<ByteBuf, ByteBuf>(),
gzipPipelineConfigurator)
);
return builder.build();
}
/**
* Configurator so that we can support setting the "Accept-Encoding: gzip, deflate" header.
*/
private static final PipelineConfigurator<HttpClientResponse<ByteBuf>, HttpClientRequest<ByteBuf>> gzipPipelineConfigurator = new PipelineConfigurator<HttpClientResponse<ByteBuf>, HttpClientRequest<ByteBuf>>() {
@Override
public void configureNewPipeline(ChannelPipeline pipeline) {
ChannelHandler handlers = new HttpContentDecompressor();
pipeline.addLast(handlers);
}
};
/**
* Returns a byte array with the message gzipped.
*/
private static byte[] zipMessage(String message) {
ByteArrayOutputStream out = new ByteArrayOutputStream();
try {
GZIPOutputStream gzos = new GZIPOutputStream(out);
try {
gzos.write(message.getBytes(StandardCharsets.UTF_8));
} finally {
gzos.close();
}
} catch (IOException e) {
throw new RuntimeException(e);
}
return out.toByteArray();
}
} |
/**
* Connected session for a memcached server
*
* @author dennis
*/
public class MemcachedTCPSession extends NioTCPSession implements
MemcachedSession, Serializable {
/**
* Commands which have already been sent
*/
protected BlockingQueue<Command> commandAlreadySent;
private final AtomicReference<Command> currentCommand = new LinkedTransferQueue.PaddedAtomicReference<Command>(null);
private SocketAddress remoteSocketAddress; // retained so the address survives after the channel is closed
private int sendBufferSize;
private final MemcachedOptimizer optimiezer;
private volatile boolean allowReconnect;
private volatile boolean authFailed;
private final CommandFactory commandFactory;
private InetSocketAddressWrapper inetSocketAddressWrapper;
public MemcachedTCPSession(NioSessionConfig sessionConfig,
int readRecvBufferSize, MemcachedOptimizer optimiezer,
int readThreadCount, CommandFactory commandFactory) {
super(sessionConfig, readRecvBufferSize);
this.optimiezer = optimiezer;
if (this.selectableChannel != null) {
this.remoteSocketAddress = ((SocketChannel) this.selectableChannel)
.socket().getRemoteSocketAddress();
this.allowReconnect = true;
try {
this.sendBufferSize = ((SocketChannel) this.selectableChannel)
.socket().getSendBufferSize();
} catch (SocketException e) {
this.sendBufferSize = 8 * 1024;
}
}
this.commandAlreadySent = (BlockingQueue<Command>) SystemUtils.createTransferQueue();
this.commandFactory = commandFactory;
}
public InetSocketAddressWrapper getInetSocketAddressWrapper() {
return this.inetSocketAddressWrapper;
}
public int getOrder() {
return this.getInetSocketAddressWrapper().getOrder();
}
public int getWeight() {
return this.getInetSocketAddressWrapper().getWeight();
}
public void setInetSocketAddressWrapper(
InetSocketAddressWrapper inetSocketAddressWrapper) {
this.inetSocketAddressWrapper = inetSocketAddressWrapper;
}
@Override
public String toString() {
return SystemUtils.getRawAddress(this.getRemoteSocketAddress()) + ":"
+ this.getRemoteSocketAddress().getPort();
}
public void destroy() {
Command command = this.currentCommand.get();
if (command != null) {
command.setException(new MemcachedException(
"Session has been closed"));
CountDownLatch latch = command.getLatch();
if (latch != null) {
latch.countDown();
}
}
while ((command = this.commandAlreadySent.poll()) != null) {
command.setException(new MemcachedException(
"Session has been closed"));
CountDownLatch latch = command.getLatch();
if (latch != null) {
latch.countDown();
}
}
}
@Override
public InetSocketAddress getRemoteSocketAddress() {
InetSocketAddress result = super.getRemoteSocketAddress();
if (result == null && this.remoteSocketAddress != null) {
result = (InetSocketAddress) this.remoteSocketAddress;
}
return result;
}
@Override
protected WriteMessage preprocessWriteMessage(WriteMessage writeMessage) {
Command currentCommand = (Command) writeMessage;
// Check if IoBuffer is null
if (currentCommand.getIoBuffer() == null) {
currentCommand.encode();
}
if (currentCommand.getStatus() == OperationStatus.SENDING) {
/**
* optimize commands
*/
currentCommand = this.optimiezer.optimize(currentCommand,
this.writeQueue, this.commandAlreadySent,
this.sendBufferSize);
}
currentCommand.setStatus(OperationStatus.WRITING);
return currentCommand;
}
public boolean isAuthFailed() {
return this.authFailed;
}
public void setAuthFailed(boolean authFailed) {
this.authFailed = authFailed;
}
private BufferAllocator bufferAllocator;
public final BufferAllocator getBufferAllocator() {
return this.bufferAllocator;
}
public final void setBufferAllocator(BufferAllocator bufferAllocator) {
this.bufferAllocator = bufferAllocator;
}
@Override
protected final WriteMessage wrapMessage(Object msg,
Future<Boolean> writeFuture) {
((Command) msg).encode();
((Command) msg).setWriteFuture((FutureImpl<Boolean>) writeFuture);
if (log.isDebugEnabled()) {
log.debug("After encoding: " + ((Command) msg).toString());
}
return (WriteMessage) msg;
}
/**
* Takes the next command from the already-sent queue, blocking until one is available.
*
* @return the next command, or null if the thread was interrupted
*/
private Command takeExecutingCommand() {
try {
return this.commandAlreadySent.take();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
return null;
}
/**
* Is auto-reconnect allowed when the session is closed?
*
* @return
*/
public boolean isAllowReconnect() {
return this.allowReconnect;
}
public void setAllowReconnect(boolean reconnected) {
this.allowReconnect = reconnected;
}
public void addCommand(Command command) {
this.commandAlreadySent.add(command);
}
public void setCurrentCommand(Command cmd) {
this.currentCommand.set(cmd);
}
public Command getCurrentCommand() {
return this.currentCommand.get();
}
public void takeCurrentCommand() {
this.setCurrentCommand(this.takeExecutingCommand());
}
public void quit() {
this.write(this.commandFactory.createQuitCommand());
}
} |
/*!
* \brief Sets the alpha map with a reference to a texture
* \return true If successful
*
* \param alphaMap Texture
*
* \remark Invalidates the pipeline
*/
inline void Material::SetAlphaMap(TextureRef alphaMap)
{
m_alphaMap = std::move(alphaMap);
m_pipelineInfo.hasAlphaMap = m_alphaMap.IsValid();
InvalidatePipeline();
} |